Re^2: Using small memory model functions on huge arrays (was See below...)

e89hse at rigel.efd.lth.se e89hse at rigel.efd.lth.se
Tue May 29 13:30:44 AEST 1990


In article <9136 at hubcap.clemson.edu>, grimlok at hubcap.clemson.edu (Mike Percy) writes:
>e89hse at rigel.efd.lth.se writes:
>>>>char huge *files;
>>>>int i;
>>>> 
>>>>files=(char huge *)farmalloc(2000L*82L);
>>>> 
>>>>for (i=0; i<2000; i++)
>>>>   strcpy(&files[i],"Some kind of character string");
>>>> 
>>>>for (i=0; i<2000; i++)
>>>>   puts(&files[i]);
>>>> 
>
>> Either you had a bad day programming this or you're really lost:
>Looks like you're the one who was really lost.
 Probably I should have used more humble language.
 
>>1) 	An int has the range -32768 <= n <= 32767, and 82*2000 is obviously not
>>	<= 32767 if you're using a 16-bit processor. (Ok we can argue a whole
>>	lot about that but I guess it's true for the case here.)
>1) farmalloc takes an unsigned long argument.  2000L*82L = 164000L, with
>a conversion to unsigned (which in this case does, effectively,
>nothing).

 Agreed, but:

1) for(i=0; i < 2000; i++)
    strcpy(&files[i],"gfdskjhfdsg");  (QUE???? -- &files[i] steps one byte
    at a time, so the copies overlap; surely &files[(long)i*82] was meant)

(and then someone suggested:

 for(i=0; i < 2000L*82L; i+=82)

 ...which will cause major trouble if i is an int, but that is a detail.)
  
>>2)   The idea with small memory model is that you use less than 64k of data, 
>>	otherwise you're probably better off using big model rather than try 
>>	to fix it, unless you're desperate for performance.
>2) The idea behind the small memory model is that you use less than 64k
>of static (i.e. pre-allocated) data.  You can use much more than that in
>dynamically allocated data space, although each hunk must be less than
>64K, unless you use far*alloc, which is what he did.

 I'm not sure if we are using the same definition of small memory model, but in
Turbo-C at least, the definitions of the memory models are basically:

tiny    : code + data < 64k
small   : code < 64k && data < 64k
medium  : code unlimited && data < 64k
compact : code < 64k && pre-init (that is, static) data < 64k && malloc unlimited
large   : code unlimited && pre-init < 64k && malloc unlimited
huge    : code unlimited && pre-init unlimited && malloc unlimited
(medium and compact are maybe the other way around)

 The advantage with small is that only 16 bits are required to store a ptr;
thus the standard functions only take 16-bit ptr arguments and therefore
cannot deal with far ptrs.

>>3)	If you really want to minimise the memory usage why do you have a fixed
>>	array? (Many lines are probably less than 81 characters long.) An 
>>	array with ptr to ptr or something would probably be more compact.
>Pointer to pointers would do the trick, from what I read into the
>article, he wants something like this:
> 
>char far * far *files;   /* want far ptr to far ptrs */
>                         /* avoid using huge ptrs if at all possible */ 
>files = (char far * far *) farcalloc(2000,sizeof(char far *)); 
>for(i = 0; i < 2000; i++) 
>  files[i] = (char far *) farmalloc(82); /* each of them points to 82 */
>                                         /*       chars               */

The char far * far * looks weird to me, but that might be right. Anyhow, what I
meant was that he probably wants to store 2000 strings of varied size in
an array. And most strings won't be 81 chars, but less. Thus it would be
more compact with an array of ptrs. Maybe something like this:

 char *area, **files;
 int i, off;

 area=malloc(BUFSIZE);
 files=malloc(2000*sizeof(char *));

 for(i=off=0; i < 2000; ++i) {
    strcpy(area+off,whatever (which is probably not a const string));
    files[i]=area+off;
    off+=strlen(area+off)+1;
 }

 for(i=0; i < 2000; ++i)
 	puts(files[i]);

 Well, and then some more checks (to see that each malloc succeeded and that
off doesn't exceed BUFSIZE).

 Henrik Sandell
