sizeof (integral types)

Sho Kuwamoto sho at pur-phy
Thu Apr 27 07:08:17 AEST 1989


In article <5387 at xyzzy.UUCP> meissner at tiktok.UUCP (Michael Meissner) writes:
>* minimum value for an object of type int
>INT_MIN		-32767
>* maximum value for an object of type int
>INT_MAX		32767

The above was given only as an example of how an ANSI-compliant C
implementation could define these values, but why not make INT_MIN
-32768?  This is more than a knee-jerk reaction against Pascal.  I
remember some example program or something written for the Mac in
Pascal.  Some routine in the ROMs took a 16-bit value, and the worst
of it was that the program in question needed to pass it 0xf000.
Because of Pascal's strong typechecking, this value was not allowed,
and they had to put in some ugly hack.

Now I understand that you could always use an unsigned int for
something like this, but it seems un-C-like to make 0xf000 somehow an
illegal value.  This is a contrived example, but suppose I were
writing something that scanned characters and checked whether the
high bit was set.  Would it be inelegant to say something like:

signed char c;
[...]
   if (c < 0) {
      [...]

In either case, I want to be able to access all possible values with
my bits, regardless of whether or not my variable is signed.

-Sho



More information about the Comp.lang.c mailing list