BIT Counting: C programming problem

Sean Malloy malloy at nprdc.arpa
Thu Jan 12 03:09:45 AEST 1989


In article <35329 at think.UUCP> barmar at kulla.think.com.UUCP (Barry Margolin) writes:
>I haven't seen anyone so far suggest the far end of the time/space
>trade-off: have a 64K table, with each element N containing the number
>of bits in N  . . .
		< some text deleted >
>       . . .                Someone else came close by suggesting a
>256-element table which would be consulted twice and then the results
>added.  But is an extra 64Kbytes of data much of a problem in these
>days of large and/or virtual memories?

>   . . .     But on a 1Mb PC
>it's only 6% of your memory capacity, and it might be worth it
>depending on the application.
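
(For reference, what Barry is describing is roughly the following -- a
rough sketch, and the table and function names here are my own, not his:)

/* One byte of bit count for every possible 16-bit value, so counting
 * the bits in a word is a single indexed load -- at the price of a
 * 64K data object that has to be filled in once at startup.
 */
static unsigned char bits_in_word[65536];

void init_bits_in_word(void)
{
    long i;
    for (i = 0; i < 65536L; i++) {
        long v = i;
        int n = 0;
        while (v) {             /* count the slow way, once, at init */
            n += (int)(v & 1);
            v >>= 1;
        }
        bits_in_word[i] = (unsigned char)n;
    }
}

int count_bits(unsigned w)      /* one lookup per 16-bit word */
{
    return bits_in_word[w & 0xFFFFu];
}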

However, it's 10% of the effective address space for DOS, and using
64K for a bleeping lookup table shoots down any hope of using the
memory models that don't allow more than 64K of data space.
Two or three TSR programs that each carried a 64K lookup table in
addition to whatever code and data space they already took up would
drive your free memory into the floor.

Taking 256 times as much memory just so that you can use one lookup
rather than two lookups, a shift, an AND, and an add is the kind of
programming I would expect from someone who's never had to deal with
limited memory space. It reminds me of the posting a while back about
the guy who was porting a program from a mainframe to a PC and found
that the first thing the program did was malloc 200K bytes four
times, and then actually _used_ about 60K of one of those blocks.
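
The 256-entry version costs exactly the operations listed above and
fits in a quarter of a kilobyte. Something like this is all it takes
(a rough sketch; again, the table and function names are mine):

/* 256-entry version: two table lookups, a shift, an AND, and an add
 * per 16-bit word, using one byte of table per 8-bit value.
 */
static unsigned char bits_in_byte[256];

void init_bits_in_byte(void)
{
    int i;
    for (i = 0; i < 256; i++) {
        int v = i, n = 0;
        while (v) {
            n += v & 1;
            v >>= 1;
        }
        bits_in_byte[i] = (unsigned char)n;
    }
}

int count_bits(unsigned w)
{
    return bits_in_byte[w & 0xFF] + bits_in_byte[(w >> 8) & 0xFF];
}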


	Sean Malloy
	Navy Personnel Research & Development Center
	San Diego, CA 92152-6800
	malloy at nprdc.arpa


