SCSI on Sun

Charles Hedrick hedrick at geneva.rutgers.edu
Fri Mar 2 15:44:28 AEST 1990


>The glossy on the 4/60 on the other hand talks about fast synchronous
>scsi. Does the "esp" in fact implement the draft SCSI-2 standard, and
>support the fast synchronous option? Do any of the Sun disks take
>advantage of this?  Is the fast SCSI option a driver issue, or is it all
>hardware?
>
>Is it possible that a 4/60 would make a better SCSI fileserver than a VME
>Sun 4 with the "si"?

It's very hard to give completely reliable answers comparing various
configurations without a rather large benchmarking lab.  Sun should really
be the one to tell us how the I/O performance on various configurations
compares.  But Rutgers has lots of Suns, and I've recently been trying
some simple tests on a bunch of them.  In general I find that on a test
designed to show up transfer rate differences (reading the raw disk with
dd), SMD is a factor of two faster than SCSI, and IPI (on a 390) is a
factor of two faster than SMD.  I don't see much difference between the
SS1 builtin SCSI, the builtin SCSI on a 4/370, and the Ciprico VME SCSI
controller.  However, to the extent that there is a difference, the SS1 is
slightly faster than the Ciprico controller.  Results for the 4/370 are
hard to evaluate, because the users seem to keep a continuous load on it.
There's one indication that it may be slightly faster than the SS1, but I
can't be sure.
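
In case anyone wants to reproduce this, the transfer-rate test is nothing
fancier than timing a sequential read of the raw device.  The device name
and sizes below are only illustrative -- substitute whatever disk and
partition you actually have:

    # time a sequential read of the raw (character) device; this bypasses
    # the buffer cache, so it mostly measures transfer rate
    # /dev/rsd0c and the sizes are examples only
    time dd if=/dev/rsd0c of=/dev/null bs=64k count=800    # about 50MB

Dividing the amount read by the elapsed time gives the transfer rate.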

There are at least two differences between this simple test and real use
as a file server: (1) in real situations, you're going to be moving the
head around, not reading the raw disk sequentially; and (2) for a file server,
the NFS protocol is going to take some time, and the Ethernet (and
Ethernet controller) is going to be a performance limit.  In an attempt to
evaluate the effects of this, I tried "mkfile 2m" over NFS, and I also
tried reading a large file over NFS using dd (being careful to purge it
from the caches on both machines first -- which you can do by cd'ing to
the file system and trying to umount it.  Umount will fail, but it will
purge the cache entries for that file system.)  Again, this was not an
attempt to do a realistic test, but an attempt to see whether I could at
least construct tests where I could see performance differences among the
disks over NFS.  Simple sequential reads and writes seemed the most likely
circumstance.  The mkfile showed essentially no difference.  Clearly the
well-known problems of writing large files on NFS washed out any disk
performance differences.  Reading a large file showed a factor of 1.5
between SS1 SCSI and IPI.  I'd expect this to be an upper bound on the
difference you see in practice.  Real usage would involve lots of seeks,
which would tend to emphasize the access times of the disks (not just the
access time of the disk itself, but also system configuration issues that would
affect the length of the disk queue, e.g. how many different disks there
are on the system).
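
To be concrete about the NFS read test, on the client it looked roughly
like the commands below.  The path names are made up, and the umount is
expected to fail -- its only purpose is to purge the cached pages for that
file system (the same trick works on the server side):

    # flush cached pages for the NFS-mounted file system
    cd /home/server1           # any directory on that file system
    umount /home/server1       # fails ("device busy"), but purges the cache
    # now time a sequential read of a large file over NFS
    time dd if=/home/server1/bigfile of=/dev/null bs=8k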

My current guess -- and without more systematic testing it is only a guess
-- is that the SS1 is about as good a file server as anything, at least
for Ethernet.  If you want an FDDI file server, I might suggest using a
system with IPI disks.  However you'd also have to make sure that the NFS
implementation and the FDDI controller were capable of showing up the
speed of the disk.  At any rate, with Ethernet, I'd expect any differences
in controller speed to be dominated by differences in access time (and the
quality of the Ethernet subsystem).  This means using disks whose seek
time is fast, and configuring the system with lots of spindles.  You may
be better off spending your money buying lots of disks for an SS1, rather
than getting a higher-end system and not being able to afford as many
disks.  You want to split swapping on all of your systems between two
disks, use separate spindles for commonly used things (swap, /tmp, and /usr
on our systems), and at least for servers used to supply /usr, use lots of
memory on the server (though I confess that we haven't tried using lots of
memory on file servers yet).
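
To make the spindle-splitting concrete, the /etc/fstab on a machine set up
that way might look something like the sketch below.  The device names are
placeholders, not a recommendation of any particular layout:

    # / and primary swap on the first disk
    /dev/sd0a    /       4.2   rw   1 1
    /dev/sd0b    swap    swap  rw   0 0
    # second swap area and /usr on another spindle
    /dev/sd1b    swap    swap  rw   0 0
    /dev/sd1g    /usr    4.2   rw   1 2
    # /tmp on yet another spindle
    /dev/sd2h    /tmp    4.2   rw   1 2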

Of course this doesn't answer your specific question, which is about Sun's
VME "si" controller.  It is certainly possible for controllers to be slow
enough to affect things, even with NFS.  The old Xylogics 450/451
certainly was.  My comments about SCSI apply only to the controllers I
tested, which are fairly new ones.

As for your other questions, the 4/60 uses a standard SCSI chip.  I don't
have any drives that implement SCSI-2, so I can't swear whether it
supports them, but I rather doubt it.  Even the current SCSI spec has a
synchronous option.  That's what they support.  However, by default the
driver disables synchronous operation.  The header file claims that this
is because there have been problems with noise when using sync SCSI in
installations with long or noisy cables.  It's easy enough to enable
(twiddle one bit using adb).  The synchronous transfer is implemented by
the SCSI controller chip, but it is enabled or disabled by the device
driver.  I've tried things both ways.  I've yet to see any advantage to
using sync.  Maybe the advantage only shows up with faster disks than
we've got, or with lots of disks running at the same time.  (I'm using the
HP 97548S disks, 760MB 5.25 inch disks with 16 msec seek.)
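
In case anyone wants to experiment, enabling it is the usual adb
kernel-patching exercise.  I'm deliberately using a placeholder name below
-- look in the esp driver header on your own release for the real flag and
the value it wants, rather than trusting my memory:

    # patch both the kernel image on disk and the running kernel
    adb -w /vmunix /dev/mem
    esp_sync_enable?W 1    # placeholder name; writes to /vmunix (survives reboot)
    esp_sync_enable/W 1    # writes to kernel memory (takes effect immediately)
    $q                     # quit adb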

If anybody has any better data about disk/controller performance, or
information about what configurations are the most cost-effective file
servers, I'd sure love to hear about it.  For the moment, I'm planning to
use Sparcstations as file servers, and try to split usage across as many
disks as I can afford.


