BIT Counting: C programming problem

Joe English jeenglis at nunki.usc.edu
Fri Jan 27 13:21:41 AEST 1989


djones at megatest.UUCP (Dave Jones) writes:
<From article <8398 at dasys1.UUCP>, by ejablow at dasys1.UUCP (Eric Robert Jablow):
<> In article <35329 at think.UUCP> barmar at kulla.think.com.UUCP (Barry Margolin) writes:
<>>I haven't seen anyone so far suggest the far end of the time/space
<>>trade-off: have a 64K table, with each element N containing the number
<>>of bits in N 

<> That is a ridiculous idea.  For microscopic savings in speed, you
<> waste 63K of space.

<As I type this letter, I am using what is, by today's standards, an
<excellent, but rather commonplace workstation: a Sun3/60.  It has 8Meg
<of RAM and a virtual memory of about 42Meg, if I remember correctly.
<So 64K is less than two tenths of one percent of the virtual memory.

<Always bear in mind that "ridiculous" depends on the point of view of
<the ridiculer.

Well, this here ridiculer also thinks the idea is
ridiculous: even though you've got 8 Meg of RAM, if
the program is running under Unix it's not going to
have it all to itself, and that 64K table is likely
to be swapped out.  So now the ultrafast
straight-lookup algorithm may take a _large_ disk
read and lose whatever speed advantage it might
have had.

Besides that, a 64K lookup table for bit-counting
strikes me as just plain inelegant and wasteful,
even if it's going to be run on a machine with 8
gigabytes of core.  Don't you know there are
starving Lisp processes out there? :-)
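
For comparison, the frugal end of the same trade-off might look
something like this: a 256-entry per-byte table, so 256 bytes of
data and four lookups per 32-bit word instead of two.  Again the
names and the 32-bit word size are my own assumptions:

static unsigned char bits_in_8[256];

/* Fill the small table once. */
void init_small_table(void)
{
    int i;

    for (i = 0; i < 256; i++) {
        int n = 0, v = i;

        while (v != 0) {
            n += v & 1;
            v >>= 1;
        }
        bits_in_8[i] = (unsigned char)n;
    }
}

/* Four lookups per 32-bit word, 256 bytes of table. */
int count_bits_small(unsigned long x)
{
    return bits_in_8[x & 0xFF]
         + bits_in_8[(x >>  8) & 0xFF]
         + bits_in_8[(x >> 16) & 0xFF]
         + bits_in_8[(x >> 24) & 0xFF];
}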


--Joe English

  jeenglis at nunki.usc.edu


