[cxx-abi-dev] Run-time array checking

Dennis Handly dhandly at cup.hp.com
Thu Sep 6 23:35:41 UTC 2012


>From: Mike Herrick <mjh at edg.com>
>As part of the changes for C++11, there are new requirements on checking
>of the value of the expression in a new[] operation.  5.3.4p7 says:
>If the value of that expression is less than zero or such that the size
>of the allocated object would exceed the implementation-defined limit,

How does the runtime know the value is negative rather than a large unsigned
number?  Or is this moot: do we treat it as large and, if it is too big,
fail for that?

It almost seems that only the compiler knows whether the type is signed.

And of course the mentioned (size_t)-1 would always be too big.
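To make the signed-vs-unsigned point concrete, here is a hedged sketch (names and thresholds are my own, not anything from the ABI) of the test only the front end can perform, since only it sees the signed type of the extent expression:

```cpp
#include <cassert>
#include <cstddef>
#include <limits>

// Hypothetical sketch: once a signed count has been converted to
// size_t, a negative value is indistinguishable from a huge unsigned
// one.  Only code that still sees the signed type can reject it as
// negative; and (size_t)-1 itself can never be allocated anyway.
bool count_is_bad(long long count) {
    // Negative counts are erroneous per C++11 5.3.4p7.
    if (count < 0)
        return true;
    // A count equal to SIZE_MAX can never be allocated: adding even a
    // one-byte cookie or multiplying by any element size overflows.
    return static_cast<unsigned long long>(count) >=
           std::numeric_limits<std::size_t>::max();
}
```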

>1) Have the compiler generate inline code to do the bounds checking before
>calling the existing runtime routines.  The problem with this is that there
>is no IA-64 ABI standard way to throw a std::bad_array_new_length exception
>once a violation has been detected (so we'd need to add something like
>__cxa_throw_bad_array_new_length).

Sounds good, even if the runtime calls it directly.
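As a rough illustration of option 1, the inline check the compiler might emit could look like the following.  This is only a sketch: `throw_bad_array_new_length` here is a stand-in for the proposed `__cxa_throw_bad_array_new_length` entry point, and it throws `std::bad_alloc` as the stopgap suggested later in this thread.

```cpp
#include <cstddef>
#include <cstdint>
#include <new>

// Stand-in for the proposed __cxa_throw_bad_array_new_length.
[[noreturn]] void throw_bad_array_new_length() {
    throw std::bad_alloc();  // stopgap until bad_array_new_length exists
}

// What compiler-generated inline checking for `new T[count]` with a
// signed count expression might look like (element_size > 0 assumed,
// which always holds for a complete C++ type).
void *checked_array_new(long long count, std::size_t element_size) {
    if (count < 0 ||
        static_cast<std::size_t>(count) > SIZE_MAX / element_size)
        throw_bad_array_new_length();
    return ::operator new[](static_cast<std::size_t>(count) * element_size);
}
```

For constant extents a back end (or the front end) can fold these comparisons away entirely, which is the optimization hope expressed below.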

>2) Have the runtime libraries do the checking and throw
>std::bad_array_new_length as needed.  In order to do this (in a backwards
>compatible way) I think we'd need to add new versions of
>__cxa_vec_new2/__cxa_vec_new3 where the element_count is signed and the
>number of initializers in the array is passed as a new argument.

It can't be signed; i.e., we must allow for large unsigned values, at least
in 32-bit mode.

>3) A new routine, say __cxa_vec_new_check, that takes a signed element_count

>We're leaning towards the first option in the hopes that a back end can more
>easily optimize away some of the added checking
>Mike Herrick, Edison Design Group

For constant values?  It can do that, and so can the front end.

>From: Florian Weimer <fweimer at redhat.com>
>On 09/06/2012 02:46 PM, Mike Herrick wrote:
>> 3) A new routine, say __cxa_vec_new_check, that takes a signed element_count

>You need two separate element counts which are multiplied by 
>__cxa_vec_new_check with an overflow check

It seems like it.
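A minimal sketch of the overflow-checked multiply such a routine would need (the name and signature here are illustrative, not part of any proposal):

```cpp
#include <cstddef>
#include <cstdint>

// Multiply two element counts (e.g. a runtime outer extent and the
// constant product of inner array dimensions) with overflow detection,
// before any cookie is added on top.
bool checked_mul(std::size_t a, std::size_t b, std::size_t *out) {
    if (b != 0 && a > SIZE_MAX / b)
        return false;            // product would overflow size_t
    *out = a * b;
    return true;
}
```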

>Does anybody actually use the __cxa_vec_new* interfaces?
>Florian Weimer / Red Hat Product Security Team

I thought you just about had to use them, if you want compact code?


>From: Mike Herrick <mjh at edg.com>
>On Sep 6, 2012, at 1:52 PM, John McCall wrote:
>> For what it's worth, clang has always done this overflow checking
>>(counting negative counts as an overflow in the signed->unsigned
>>computation),

Do you handle large unsigned values?  Or do you not have 32-bit?  Or can
you not allocate 2 GB there?

>>> 2) Have the runtime libraries do the checking and throw
> 
>> Well, if we can use (size_t) -1 as a signal value, we don't need any
>>new entrypoints.  That would be safe on any platform where there are
>>values of size_t which cannot possibly be allocated

Right: in 32-bit mode you have to reserve some bytes for instructions.  ;-)
And in 64-bit mode the hardware may not support all address bits.
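The sentinel idea can be sketched as follows (a hypothetical helper, assuming, as John does, that `(size_t)-1` bytes can never actually be allocated on the platform):

```cpp
#include <cstddef>
#include <cstdint>

// On overflow, return (size_t)-1 as the byte count instead of a new
// entry point: the existing allocation path then fails naturally,
// since no platform can satisfy a SIZE_MAX-byte request.
std::size_t bytes_or_sentinel(std::size_t count, std::size_t elt_size) {
    if (elt_size != 0 && count > SIZE_MAX / elt_size)
        return SIZE_MAX;         // (size_t)-1: guaranteed to fail
    return count * elt_size;
}
```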

>> Don't get me wrong, adding new entrypoints is definitely cleaner.  The
>>main problem with adding and using new entrypoints is that it means that
>>old, C++98-compliant code being recompiled will suddenly require new
>>things from the runtime, which introduces deployment problems.

Don't you have that for the new Standard, anyway?

>One approach around the lack of std::bad_array_new_length could be to
>have __cxa_throw_bad_array_new_length throw std::bad_alloc as a stopgap
>solution.

Sure.

>>> 3) A new routine, say __cxa_vec_new_check, that takes a signed
>>> element_count
>> 
>> It would also need to know how much cookie to add.  The cookie causing
>>an overflow would certainly be an example of "the value of that
>>expression is ...  such that the size of the allocated object would
>>exceed the implementation-defined limit".

There is a problem with "implementation-defined limit".  For HP-UX there
are secret hardware limits that the compiler doesn't want to know about.
There are system config values that limit data allocation.  (Or is the latter
just the same as bad_alloc and not the new bad_array_new_length?)
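The cookie-induced overflow John describes can be made concrete with a sketch like this (names are mine; the cookie is the element count the ABI stores in front of the array elements):

```cpp
#include <cstddef>
#include <cstdint>

// Even when count * element_size itself fits in size_t, adding the
// array cookie can push the total past SIZE_MAX, which also counts as
// exceeding the implementation-defined limit.
bool total_size_ok(std::size_t count, std::size_t elt_size,
                   std::size_t cookie_size, std::size_t *total) {
    if (elt_size != 0 && count > SIZE_MAX / elt_size)
        return false;                       // multiply overflows
    std::size_t bytes = count * elt_size;
    if (bytes > SIZE_MAX - cookie_size)
        return false;                       // adding the cookie overflows
    *total = bytes + cookie_size;
    return true;
}
```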

Though I did have to do something tricky for the container member function
max_size(), where I assume the max is 2**48 bytes divided by
sizeof(value_type).
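That max_size() trick amounts to something like the following sketch (the 2**48 figure is the assumption stated above, not a documented limit):

```cpp
#include <cstddef>

// Assume at most 2**48 addressable bytes of data and divide by the
// element size to get a conservative container max_size().
template <class T>
std::size_t container_max_size() {
    const std::size_t kAddressableBytes = std::size_t(1) << 48;
    return kAddressableBytes / sizeof(T);
}
```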

