[cxx-abi-dev] Run-time array checking

Dennis Handly dhandly at cup.hp.com
Sat Sep 8 05:46:02 UTC 2012


From: John McCall <rjmccall at apple.com>
>> It seems strange that the code for signed is different from unsigned, but
>> the Standard says that signed can overflow and that is implementation-defined.

>This conversation is about how to handle various possible values that the
>first size expression in an array-new expression might take.  That expression
>must be of integer type, but it's permitted to have signed integer type, and
>so therefore can be negative.  In this case, C++11 demands that we throw
>an exception of a certain type, std::bad_array_new_length.

>This is unrelated to the semantics of overflow in signed arithmetic.

I may have been stretching it, but I was suggesting that the Standard says
signed and unsigned are different under overflow, so the bound in a
new-expression with a signed int could have negative values, while an
unsigned one could not.
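
For example, a small illustration of that difference (assuming the usual
32-bit int and unsigned):

    #include <cstdio>

    int main() {
        int s = -1;            // a signed bound can actually be negative
        unsigned u = 0u - 1u;  // an unsigned bound just wraps to UINT_MAX
        std::printf("%d %u\n", s, u);  // prints: -1 4294967295
        // So "new T[s]" has to detect a genuinely negative value, while
        // "new T[u]" can at worst ask for an enormous non-negative count.
    }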

>> But do we care?  For that architecture, the implementation-defined limit
>> can be set to < SIZE_MAX.

>On a platform with an intentionally constrained size_t, maybe not.

But if it is constrained, then wouldn't (size_t)-1 always be invalid?
(Assuming size_t is constrained too.)
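
E.g. a sketch of the kind of up-front rejection I mean; the cap name and
value are made up, and a 64-bit size_t is assumed:

    #include <cstddef>
    #include <new>

    // Hypothetical implementation-defined allocation limit, set below
    // SIZE_MAX on the constrained platform (name and value invented).
    constexpr std::size_t kMaxAllocation = std::size_t(1) << 40;

    void check_request(std::size_t bytes) {
        // (size_t)-1 == SIZE_MAX, so it always exceeds the limit and can
        // be rejected before the allocator is ever consulted.
        if (bytes > kMaxAllocation)
            throw std::bad_alloc();
    }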

>>> I guess you could make an abstract argument that an
>>> array allocation which could have succeeded with a different bound
>>> should always produce std::bad_array_new_length

>The point is that if the spec says "throw a std::bad_array_new_length",
>we can't just throw a normal std::bad_alloc, because that's not compliant.

Yes, I was saying that the abstract argument wouldn't be valid because
some bounds would give bad_array_new_length and other (smaller) ones could
give bad_alloc.

Basically I see these ranges, some of which overlap:

1) allocation succeeds
2) bad_alloc: fails now but could succeed at another time
3) bad_alloc: fails because of configuration limits or possibly competing
              processes
4) bad_array_new_length: the bound is just too big, overflows, or is negative

I.e. the Standard should not force an implementation to tell the difference
between 2) and 3).
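
As a concrete sketch of where range 4) gets decided (the helper name and
lowering are illustrative, not what any compiler actually emits; array
cookie overhead is ignored):

    #include <cstddef>
    #include <cstdint>
    #include <new>

    // Illustrative lowering of "new T[n]" with a signed bound n.
    template <typename T>
    void* checked_array_new(std::ptrdiff_t n) {
        // Range 4): a negative bound, or a size computation that would
        // overflow, must throw std::bad_array_new_length; operator new[]
        // is never called.
        if (n < 0 || static_cast<std::size_t>(n) > SIZE_MAX / sizeof(T))
            throw std::bad_array_new_length();
        // Ranges 1)-3): a representable size; the allocator decides
        // between success and a plain std::bad_alloc.
        return operator new[](static_cast<std::size_t>(n) * sizeof(T));
    }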

>as I read it, the standard implies that we shouldn't even be calling
>operator new[] if we have an invalid size, so we can't handle this by
>just having operator new[] always throw the more specific exception.

Except operator new[] takes a size_t which, being unsigned, you would
probably always assume was valid (since it doesn't overflow), and just let
the allocator check whether it is too large.
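
To see why: by the time operator new[] runs, a negative count has already
been folded into the byte size (a small demo, assuming LP64):

    #include <cstddef>
    #include <cstdio>

    int main() {
        int n = -1;
        // Converting the signed bound to size_t and scaling by the element
        // size wraps around; operator new[] would only ever see this huge
        // byte count, indistinguishable from an honest oversized request.
        std::size_t bytes = static_cast<std::size_t>(n) * sizeof(double);
        std::printf("%zu\n", bytes);  // 18446744073709551608 on LP64
    }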

>Possibly only constant after optimization, but yes, that's what I meant.
>John.

Ok, I was thinking of some type of inequality or range propagation that
could possibly bless it.  Or other advanced AI technology.
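
Something like the following is what I mean by the check getting blessed
(whether a given compiler actually does this is an assumption on my part):

    // If inequality/range propagation proves the bound is small, the
    // compiler could in principle drop the bad_array_new_length check.
    double* make(unsigned n) {
        if (n >= 100)
            return nullptr;   // past here, n is known to be in [0, 99]
        return new double[n]; // 99 * sizeof(double) cannot overflow, so
                              // no run-time size check would be needed
    }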

