correct code for pointer subtraction

Tapani Tarvainen tarvaine at tukki.jyu.fi
Mon Jan 9 06:55:35 AEST 1989


In article <9878 at drutx.ATT.COM> mayer at drutx.ATT.COM (gary mayer) writes:
>In article <18123 at santra.UUCP>, tarvaine at tukki.jyu.fi (Tapani Tarvainen) writes:
>
>> The same error occurs in the following program 
>> (with Turbo C 2.0 as well as MSC 5.0):
>>
>> main()
>> {
>>         static int a[30000];
>>         printf("%d\n",&a[30000]-a);
>> }
>>
>> output:  -2768
>
>I grant that this is probably not the answer you would like, but it
>is the answer you should expect once pointer arithmetic is understood.
>
[deleted explanation (very good, btw) about why this happens]
>
>In summary, be careful with pointers on these machines, and try to
>learn about how things work "underneath".  The C language is very
>close to the machine, and there are many times that this can have
>an effect - understanding and avoiding these where possible is what
>writing portable code is all about.

I couldn't agree more with the last paragraph.  My point, however,
was that the result above is 
(1) Surprising: It occurs in the small memory model, where both ints
and pointers are 16 bits and the result fits in an int.
When I use a large data model I expect trouble with pointer
arithmetic and cast to huge when necessary, but that shouldn't
be necessary with the small model (or at least the manual
should clearly say it is).
(2) Unnecessary: Code that does the subtraction correctly has
been presented here (one sketch of the idea is below).
(3) WRONG according to K&R or dpANS -- or does either say that
pointer subtraction is valid only when the difference *in bytes*
fits in an int?  If not, I continue to consider it a bug.
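
The byte difference here is 30000*2 = 60000, which wraps to -5536 in a
signed 16-bit int, and -5536/2 is the -2768 above.  Since 60000 does fit
in an unsigned 16-bit int, a user-level workaround is possible -- the
following is only a sketch, assuming a small-model compiler (Turbo C or
MSC) where casting a pointer to unsigned yields its 16-bit offset:

#include <stdio.h>

main()
{
        static int a[30000];
        /* subtract the offsets as unsigned, then divide by the
           element size: 60000/2 = 30000 with no sign trouble */
        unsigned n = ((unsigned)&a[30000] - (unsigned)a) / sizeof(int);
        printf("%u\n", n);
}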

Another matter is that the above program isn't portable
anyway, because (as somebody else pointed out)
pointer difference isn't necessarily an int (according to dpANS).
Indeed, in Turbo C the difference of huge pointers is long,
and the program can be made to work as follows:

         printf("%ld\n", (int huge *)&a[30000] - (int huge *)a);

Actually, all the large data models handle this example correctly (in
Turbo C), so casting to (int far *) also works here; but as soon as the
difference exceeds 64K (or the pointers have different segment values)
they break too, and only huge is reliable then (this much the manual
_does_ explain).
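
Put together as a whole program, the two casts look like this -- again
only a sketch, assuming Turbo C 2.0's far and huge keywords in the small
data model; the (long) casts are there so %ld is right whichever type
each difference turns out to have:

#include <stdio.h>

main()
{
        static int a[30000];

        /* far cast: reported above to give 30000 for this case, but it
           can break once the difference passes 64K or the segments differ */
        printf("%ld\n", (long)((int far *)&a[30000] - (int far *)a));

        /* huge cast: the difference is a long, so it stays correct
           even beyond 64K */
        printf("%ld\n", (long)((int huge *)&a[30000] - (int huge *)a));
}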

To sum up: near pointers are reliable up to 32K,
far pointers up to 64K, and anything more needs huge.

With this I think enough (and more) has been said about the behaviour
of the 8086 and the compilers; however, I'd still like somebody
with the dpANS to confirm whether or not this is a bug --
does it say anything about when pointer arithmetic may
fail because of overflow?
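
For comparison, the dpANS spelling of the same subtraction would use
ptrdiff_t from <stddef.h> -- once more only a sketch, and it says nothing
about whether a 16-bit implementation computes the value correctly,
which is exactly the question above:

#include <stdio.h>
#include <stddef.h>

main()
{
        static int a[30000];
        ptrdiff_t d = &a[30000] - a;    /* pointer difference has type ptrdiff_t */
        printf("%ld\n", (long)d);
}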

------------------------------------------------------------------
Tapani Tarvainen                 BitNet:    tarvainen at finjyu
Internet:  tarvainen at jylk.jyu.fi  -- OR --  tarvaine at tukki.jyu.fi


