64-bit architectures and C/C++

Mark Brader msb at sq.sq.com
Sat May 18 11:15:20 AEST 1991


> > I think "long long" is a preferable approach. ... A programmer wishing
> > to do arithmetic on integer values exceeding what can be stored in
> > 32 bits has three options:
> [...]
> >   (c) use an integral type known to provide the required number of bits,
> >       and never port the program to machines where no such type exists.
> [...]
> > Now, what would we like to happen if a program that assumed 64-bit
> > integers existed was ported to a machine where they didn't?  We would
> > like the compilation to fail, that's what!  Suppose that the implementation
> > defines long to be 64 bits; then, to force such a failure, the programmer
> > would have to take some explicit action, like
> >	assert (LONG_MAX >= 0777777777777777777777);

> If you want the compilation to fail, then what's wrong with the
> following ?
> 
>     #if LONG_MAX < 0xFFFFffffFFFFffff		/* wrong, actually */
>     ??=error Long type not big enough for use.
>     #endif

I would take that to be "something like" my assert() example, and don't
have a strong preference between one and the other.  In ANSI C the use
of an integer constant larger than ULONG_MAX is a constraint violation
(3.1.3) anyway, so it really suffices to say

	01777777777777777777777;

and this has a certain charm to it.  But the compiler might issue only
a warning, rather than aborting the compilation.
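
For concreteness, here is one way the whole guard might be packaged,
with the constant corrected to the signed maximum; the spelling of the
message and the choice of constant are mine, not anything the standard
mandates:

	#include <limits.h>
	#if LONG_MAX < 0x7FFFffffFFFFffff	/* 2^63 - 1 */
	#error long is narrower than 64 bits
	#endif

On an implementation where long has only 32 bits, the constant itself
exceeds ULONG_MAX and so requires a diagnostic anyway -- the same trick
as the bare constant above.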

> This causes the compilation to fail only when long is not big enough,
> does not require any new types in the implementation, and generates *no*
> messages on a 64-bit-long system.

But whichever of these the programmer uses, *it has to be coded explicitly*.
My feeling is that, once 64-bit machines [i.e. those worth $8! :-) :-)]
become more common, enough of the 64-bit people [i.e. those worth $8 :-)]
will carelessly omit any such check, creating portability problems.  It
will, I fear, be exactly the situation we've already seen, where there's
much too much code around that assumes 32-bit ints.

> Notes: the use of a hex, rather than octal, constant with mixed case
> makes it easier to count the number of digits ...

The octal was for fun, since it was a "bad example" anyway.  However, if
one *is* going to amend it, it would be as well if the amended version
retained the correct value of the constant.  (It was LONG_MAX, not
ULONG_MAX, in that example.)
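
An amended version that does retain the value would presumably read
something like

	assert (LONG_MAX >= 0x7FFFffffFFFFffff);	/* = 2^63 - 1 */

which is the same number as the original octal constant.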


} But the standard also does not guarantee (as far as i know) that there
} doesn't exist [a type with] >32 bits.
} 
} What is wrong with simply implementing the following in a compiler?
} 	char	=	 8 bits
} 	short	=	16 bits
} 	int	=	32 bits
} 	long	=	64 bits

Nothing -- unless, as I explained above, it leads to a community of users
who *expect* long to have 64 bits.  My own preference would in fact be
to have a long long type, but for *both* long and long long to be 64 bits.

(The long long type would also imply such things as LL suffixes on
constants, %lld printf formats, appropriate type conversion rules,
and so on.  I haven't examined the standard exhaustively to see whether
there's anything where the appropriate extension is non-obvious, but
certainly for most things it is obvious.)
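
To make the shape of the extension concrete, here is a small sketch of
code using it; the spellings just follow the pattern of the existing
long forms, which is my guess at how an implementation would do it:

	#include <stdio.h>

	int main(void)
	{
		long long big = 9223372036854775807LL;	/* LL suffix */
		printf("%lld\n", big);			/* %lld format */
		return 0;
	}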

I would like to see "long long" established enough in common practice
that, in the *next* C standard, the section that now itemizes among other
things the following minimum values:

	SHRT_MAX		+32767
	INT_MAX			+32767
	LONG_MAX		+2147483647

will, *if* 64-bit machines are sufficiently common by then, leave those
values unchanged and add:

	LLONG_MAX		+9223372036854775807
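
(presumably along with companion entries, which by analogy with LONG_MIN
and ULONG_MAX I'd expect to be

	LLONG_MIN		-9223372036854775807
	ULLONG_MAX		+18446744073709551615

though the exact spellings would be for the committee to decide).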

-- 
Mark Brader		    "'A matter of opinion'[?]  I have to say you are
SoftQuad Inc., Toronto	      right.  There['s] your opinion, which is wrong,
utzoo!sq!msb, msb at sq.com      and mine, which is right."  -- Gene Ward Smith

This article is in the public domain.


