nfsd's swallow system????

John R. Deuel kink at uncle-bens.rice.edu
Mon Nov 13 20:13:21 AEST 1989


I ran into a strange situation this evening and I was wondering if anyone
had any clues.  All systems involved are running SunOS 4.0.3.

I had created a large (127MB) tar file on disk and was going to dump it to
tape using dd.  The tar file was NFS-mounted on my tapehost, so on the
tapehost I said "dd if=tarfile of=/dev/rmt8 obs=126b".  The fileserver with
the tarfile on it runs 8 nfsd's, so I figured the worst the load average
could get on it was 8.  Nope.  The server (a 3/280S-24) was at a load of
30 within 3 minutes.  The machine literally froze for every task except
the serving of the tarfile.  If I ^Z'd the dd, life would immediately
return to the server.  Upon resuming the dd, the machine would vanish again.
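For anyone wanting to reproduce the I/O pattern without a tape drive or an
NFS mount, here is a minimal sketch using scratch files in /tmp (both paths
are stand-ins, not the originals).  The point is what "obs=126b" means: dd
reblocks its output into 126 * 512 = 64512-byte writes, so the client issues
large sequential reads as fast as the server will serve them.

```shell
# Stand-in for the 127MB tarfile: a small scratch file (256 * 512 = 131072 bytes).
dd if=/dev/zero of=/tmp/tarfile bs=512 count=256 2>/dev/null

# Same invocation shape as the original, with /tmp/tape.out standing in
# for /dev/rmt8.  obs=126b reblocks output into 64512-byte writes.
dd if=/tmp/tarfile of=/tmp/tape.out obs=126b 2>/dev/null

# Every byte should arrive; only the write blocking changes.
wc -c < /tmp/tape.out
```

On the real tapehost the input side is the NFS read stream, which is where
the server load comes from; the obs= value only shapes the tape writes.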

1.  Why can an NFS client task drive a server with only 8 nfsd's up to a
load of 30?

2.  Why does another process totally freeze when I run this dd, even when
I nice it to -15?  The nfsd's are running at nice 0.  I understand the
priority boost for processes with data coming in from the disk, but
shouldn't my process get a chance to run before the nfsd's accept more
requests?

3.  I've had NFS clients go bonkers before, bombarding a server with
requests, but the load has never gone much above the number of nfsd's.  Is
it just that in this situation my requests are coming in much faster and
slipping inside some scheduling-bug window?

If anybody has info which might help explain this, I would love to hear
it.  Please mail me and I'll summarize to the net if there's interest.

Thanks in advance,

John R. Deuel  <kink at rice.edu>
Systems Programmer, Networking and Computing Systems
Rice University, Houston, Texas
(713) 527-4013



More information about the Comp.sys.sun mailing list