incrementing after a cast

Wayne Throop throopw at dg_rtp.UUCP
Tue Dec 16 07:27:37 AEST 1986


> braner at batcomputer.tn.cornell.edu (Moshe Braner)

> - When you cast a pointer to a pointer of another type, you are telling
> the compiler to use it differently (e.g. to read 2 instead of 1 byte
> from memory when dereferencing).  Since the compiler is aware of the cast
> in that sense, it COULD increment it according to the new type's size!
> I see no TECHNICAL obstacle here, only "legal" morass...

You are wrong, you know.  First, when you cast a pointer to a pointer of
another type, you are telling the compiler to CONVERT the pointer to a
pointer to that other type.  The bitwise value might be completely
different.  This is most common on word-addressed machines.  You are NOT
telling the compiler to treat the bits of that pointer as another type,
NOT telling it to "use [the value] differently".  You are telling it to
CHANGE THE VALUE to a new representation.
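
To make the distinction concrete, here is a minimal sketch (the names
are mine, for illustration only, and it's written in the newer
prototype style).  The cast yields a NEW, converted value, and
arithmetic on that value is governed by the new type:

    #include <stdio.h>

    int main(void)
    {
        int arr[2];
        int *ip = &arr[0];

        /* The cast CONVERTS ip.  The result is a new (char *) value;
           on a byte-addressed machine the bits may happen to coincide,
           but on a word-addressed machine they need not.              */
        char *cp = (char *)ip;

        /* Arithmetic follows the type of the value in hand: */
        printf("%p\n", (void *)(ip + 1));   /* advances by sizeof(int) */
        printf("%p\n", (void *)(cp + 1));   /* advances by one byte    */
        return 0;
    }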

And thus, second, this forms a well-known technical obstacle to treating
casts of pointers (or casts of anything else, for that matter) as
lvalues.  This is no legal quibbling.  It just can't be made to work on
some machines upon which C compilers are desirable.  Further, casts have
always been, are now, and (I surely hope) always will be CONVERSIONS.
Conversions, by their very nature, produce NEW VALUES.  I wouldn't
expect ((-x) = 10) to work, I wouldn't expect ((x+1) = 10) to work, I
wouldn't expect (((float)i) = 1.0) to work, and I wouldn't expect
(((char *)intptr) = &charvar) to work either, and all for the same
reason.

Can someone explain why anybody who has thought about it for more
than a few minutes WOULD expect that last example, or things like
((sometype *)p)++, to work?
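
(For completeness: the effect people seem to be after can be had
legally, by converting into a pointer variable of the right type,
incrementing THAT, and converting back.  A sketch, with made-up names:

    #include <stdio.h>

    struct thing { int a, b; };

    int main(void)
    {
        struct thing pair[2] = { { 1, 2 }, { 3, 4 } };
        char *p = (char *)&pair[0];

        /* NOT ((struct thing *)p)++, which asks to assign to the
           result of a conversion.  Instead, convert into a real
           lvalue of the right type, increment that, convert back. */
        struct thing *tp = (struct thing *)p;
        tp++;
        p = (char *)tp;

        printf("%d\n", ((struct thing *)p)->a);   /* prints 3 */
        return 0;
    }

The increment happens to a genuine object, tp, so no machine has any
trouble with it.)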

--
Like punning, programming is a play on words.
                                --- Alan J. Perlis
-- 
Wayne Throop      <the-known-world>!mcnc!rti-sel!dg_rtp!throopw


