size_t

Tapani Tarvainen tarvaine at tukki.jyu.fi
Wed Aug 9 04:52:59 AEST 1989


In article <975 at cbnewsl.ATT.COM> dfp at cbnewsl.ATT.COM (david.f.prosser) writes:
...
> size_t must be big enough
>to hold the number of bytes of any validly created object.
...
>Of course, if there is a "hugealloc()" function provided which is the
>only access to objects that are bigger than what sizeof or size_t can
>describe, this is still a conforming implementation.  If a program
>makes use of such a function, then a larger than size_t integral type
>would be necessary.

It turns out this is exactly the case with TurboC: malloc(), calloc()
and realloc() won't allocate blocks bigger than 64K.  If you need
such, you must use farmalloc(), farcalloc(), farrealloc(), which
expect the block size as a long, so TC appears to be conforming
in this respect after all.  

Unfortunately, this apparently means there is no standard-conforming
way to create objects bigger than 64K in TC, or indeed to use the huge
model in any useful way.  I hope Borland addresses this in a future
version of TC, either by changing the behaviour of huge or by
providing a separate ANSI-huge model in which everything that needs to
be long is long, and pointer declarations and arithmetic work
correctly by default, so that I can take a conforming program that
needs big blocks and compile it without any changes, just by setting a
compiler option.

Something related, which I would call a bug, is the behaviour of
calloc(): e.g., calloc(1000,1000) won't give an error or NULL but
silently truncates the product to 16960 (== 1000000 & 0xFFFF) and
allocates that amount.  What does the pANS say about overflow handling
in this situation?
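Whatever the pANS requires, an allocator can at least refuse the
request instead of truncating it.  A minimal sketch in portable C
(checked_calloc is a hypothetical wrapper for illustration, not a TC
or library function):

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical wrapper: return NULL instead of silently truncating
   when nmemb * size would overflow size_t.  */
void *checked_calloc(size_t nmemb, size_t size)
{
    if (size != 0 && nmemb > (size_t)-1 / size)
        return NULL;            /* product would not fit in size_t */
    return calloc(nmemb, size);
}
```

With a 16-bit size_t the same test catches calloc(1000,1000), since
1000 > 65535/1000; the 16960 quoted above is just 1000000 & 0xFFFF.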
-- 
Tapani Tarvainen                 BitNet:    tarvainen at finjyu
Internet:  tarvainen at jylk.jyu.fi  -- OR --  tarvaine at tukki.jyu.fi


