Interactive 2.2 problems

Ira Baxter baxter at zola.ics.uci.edu
Sun Jul 8 10:43:43 AEST 1990


In <8581 at cognos.UUCP> dbullis at cognos.UUCP (Dave Bullis) writes:

>In article <384 at denwa.uucp> jimmy at denwa.info.com (Jim Gottlieb) writes:
>>I know this isn't scientific but it gives an idea.  I started 10 yes(1)
>>programs in the background and immediately ran a 'ps -ef'.  The
>>following is how long each machine took to finish giving the ps(1)
>>results.
>>
>>6386E with 4 meg of RAM and 2.0.2:	6 seconds
>>6386E with 16 meg of RAM and 2.2:      31 seconds
>>
>>At Interactive Hollis's suggestion, I tried boosting NPROC and NBUF
>>with no noticeable difference.

>In a previous life I was working with Convergent Technologies
>Miniframes (68020, SysV.2).
>Normally we ran with 5-7 Megabytes.  We boosted that to 15
>one day and performance dropped thru the floor.
>Turns out the buffer cache increased so the kernel was spending
>all its time looking thru the cache!  We cut down NBUFS and all was fine.

Looking through the cache?  I thought it had hash tables to do this,
so it should take negligible time (O(1)). Only systems as stupid as
MSDOS have a single buffer chain :-{.  It appears that 2.0.2 has a
fixed number of hash buckets (NHBUF), so if you increase the memory
size a lot, the *length* of hash bucket chains can start to be
unreasonable.  I don't have any experience with this, but it seems
like raising NHBUF by a factor equal to your memory increase should
keep the loading on the hash table constant; then at least "looking
through the cache" would not be a problem.

Why doesn't UNIX set NHBUF dynamically?  Estimating the right value
is trivial:  NHBUF = RAMSIZE/BUFSIZE/AVGDESIREDCHAINLENGTH.
Desired average chain length should be 1 or 2.


--
Ira Baxter
