Incrementing after a cast

Ron Natalie <ron> ron at brl-sem.ARPA
Wed Dec 31 04:06:40 AEST 1986


In article <2029 at brl-adm.ARPA>, .csnet"@relay.cs.net> writes:
> If you insist on being strict, here's a definition of ((sometype *)p)++ 
> which will work on any machine:
> 
> 	"Think of the bits refered to as p as a (sometype *).
> 	 If don't have enough bits or you have too many bits 
> 	 do something reasonable.  Now increment the (sometype *)
> 	 you're thinking of.  Now put the results back into p.  
> 	 If you don't have enough bits or you have too many bits,
> 	 do something reasonable."

Actually, that is not how casts work, so your definition is far
from "strict."  Consider instead the following:

	"Ignore casts when applied to lvalues."

Since
	((type1 *) p)++

is defined to mean
	((type1 *) p) = ((type1 *) p) + 1

what gives the compiler heartburn is the cast applied to the lvalue;
if you just ignore it, then you get what I think you are looking for.
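
To make this concrete, here is a minimal sketch (compilable C; the
struct and buffer are mine, purely for illustration) of what the
increment would do under that rule, written as the cast-free
assignment that is actually legal:

	#include <stdio.h>

	struct type1 { int a, b; };	/* stand-in for "type1" */

	int main(void)
	{
		char buf[4 * sizeof(struct type1)];
		char *p = buf;

		/* Under the "ignore the cast on the lvalue" rule,
		 * ((struct type1 *)p)++ would mean exactly this:
		 * advance p by sizeof(struct type1) bytes.  */
		p = (char *)((struct type1 *)p + 1);

		printf("p advanced by %ld bytes\n", (long)(p - buf));
		return 0;
	}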

However, this leads to problem #2: what should the type of the expression
be, type1 * or the original type of p?  You can probably make arguments
either way; however, in a "strict" definition, you had better decide
which it is.
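
For example (using char * and long * purely as stand-ins for p's
original type and type1 *), the two choices disagree about what a
dereference of the expression's value would yield:

	#include <stdio.h>

	int main(void)
	{
		long storage[2] = {10, 20};
		char *p = (char *)storage;	/* p's original type: char * */

		/* If ((long *)p)++ has type (long *), dereferencing
		 * its value yields the whole long.  */
		long as_cast_type = *(long *)p;

		/* If it keeps p's original type (char *), dereferencing
		 * yields a single byte of that long instead.  */
		char as_original_type = *p;

		printf("%ld vs %d\n", as_cast_type, (int)as_original_type);
		return 0;
	}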

-Ron


