64 bit architectures and C/C++

Buster Irby rli at buster.stafford.tx.us
Fri May 3 22:04:55 AEST 1991


cadsi at ccad.uiowa.edu (CADSI) writes:

>From article <1991May01.172042.5214 at buster.stafford.tx.us>, by rli at buster.stafford.tx.us (Buster Irby):
>> turk at Apple.COM (Ken "Turk" Turkowski) writes:
>> 
>>>It is necessary to have 8, 16, and 32-bit data types, in order to be able
>>>to read data from files.  I would suggest NOT specifying a size for the int
>> 
>> You assume a lot about the data in the file.  Is it stored in a specific
>> processor format (a la Intel vs. Motorola)?  My experience has been that
>> binary data is not portable anyway.

>Binary isn't in general portable.  However, using proper typedefs in
>a class one can move binary read/write classes from box to box.  I think
>the solution to the whole issue of sizeof(whatever) is to simply assume
>nothing.  Always typedef.  It isn't that difficult, and code I've done this
>way runs on things ranging from DOS machines to CRAY's COS (and UNICOS) without
>code (barring the typedef header files) changes.

What kind of typedef would you use to swap the high and low bytes
in a 16 bit value?  A LITTLE_ENDIAN machine like an Intel stores
the low order byte first, while a BIG_ENDIAN machine like a
Motorola stores the high order byte first.  There is no way to
fix this short of reading the file one byte at a time and
stuffing the bytes into the right places.  The point I was trying
to make is that reading and writing a data file has absolutely
nothing to do with data types.  As we have already seen, there
are a lot of different machine types that support C, and as far
as I know, all of them are capable of reading binary files,
independent of data type differences.
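The byte-at-a-time approach can be sketched like this, assuming the
file stores its 16-bit values low byte first (the function name
read_u16_le is mine, not from any standard library):

```c
#include <stdio.h>

/* Read a 16 bit value stored low byte first, one byte at a time.
 * The shifts operate on values, not on memory layout, so this
 * gives the same result on any host, whatever its byte order. */
unsigned int read_u16_le(FILE *fp)
{
    unsigned int lo = (unsigned int)getc(fp);
    unsigned int hi = (unsigned int)getc(fp);
    return lo | (hi << 8);
}
```

No typedef enters into it; the file format alone decides which byte
comes first, and the shifts put each byte where it belongs.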

The only sane way to deal with this issue is to never assume
anything about the SIZE or the ORDERING of data types, which is
basically what the C standard says.  It guarantees only that
long >= int >= short >= char.  It says nothing about actual size
or byte ordering within a data type.
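The same reasoning applies on output: if you pick a byte order for
the file and emit one byte at a time, the host's layout never
matters.  A minimal sketch, writing high byte first (write_u32_be
is my name for it):

```c
#include <stdio.h>

/* Write a 32 bit value high byte first, independent of how the
 * host lays out a long in memory.  Masking with 0xff keeps
 * exactly one byte per call to putc(). */
void write_u32_be(unsigned long v, FILE *fp)
{
    putc((int)((v >> 24) & 0xff), fp);
    putc((int)((v >> 16) & 0xff), fp);
    putc((int)((v >>  8) & 0xff), fp);
    putc((int)( v        & 0xff), fp);
}
```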

Another trap I ran across recently is the ordering of bit
fields.  On AT&T 3B2 machines the first bit defined is the high
order bit, but on Intel 386 machines the first bit defined is the
low order bit.  This means that anyone who attempts to write this
data to a file and transport it to another platform is in for a
surprise: the two layouts are not compatible.  Again, the C
standard says nothing about bit ordering, and in fact cautions
you against making such assumptions.
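Since bit field allocation order is left to the implementation, a
program that wants a fixed external layout has to pack the fields
itself with shifts and masks.  A sketch, with field names and
widths of my own invention:

```c
/* Instead of a bit field like
 *     struct { unsigned mode:2; unsigned dirty:1; } f;
 * whose bit order varies between a 3B2 and a 386, pack the
 * fields into an int explicitly, so the external layout is the
 * same on every machine. */
#define MODE_SHIFT  0
#define MODE_MASK   0x3
#define DIRTY_SHIFT 2

unsigned int pack_flags(unsigned int mode, unsigned int dirty)
{
    return ((mode & MODE_MASK) << MODE_SHIFT)
         | ((dirty & 1u) << DIRTY_SHIFT);
}

unsigned int unpack_mode(unsigned int flags)
{
    return (flags >> MODE_SHIFT) & MODE_MASK;
}

unsigned int unpack_dirty(unsigned int flags)
{
    return (flags >> DIRTY_SHIFT) & 1u;
}
```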
