How to choose SCSI

Chris Torek chris@mimsy.umd.edu
Fri Nov 24 09:39:17 AEST 1989


In article <6900004@adaptex> neese@adaptex.UUCP writes:
>  Host adapters come in many flavors.  Some are well supported, others are
>not.  Some are intelligent, some are not.

... and not least, some work as advertised, and some are incredibly buggy.
There is a general tendency for `smart' to imply `buggy', but this is not
a hard-and-fast rule.  There is also a tendency for `first out' to imply
`buggy'.

This matters more to the person writing the code to deal with the adapter.

>  If you are going to be using MS-DOS and MS-DOS alone, then the low
>cost approach is not a bad one.  But if you intend to use UNIX/Novell/
>OS2, then the low cost approach will be a poor one.  These operating
>systems/environments are all multi-threaded.  That is they can issue
>more than one command at a time.  With an intelligent host adapter,
>this is easily done and managed by the host adapter.  With a low cost
>board, the software to do this work must be driven by the main CPU,
>which will incur considerable command overhead.

This analysis is a bit too simplistic (it leads to the UDA-50 mentality):
the work must be done; the work can be all in one place (the main CPU);
the workload can be distributed (shared between the CPU and the adapter);
using distributed computing means more total CPU power is being applied.
(So far, all is fine.)  The conclusion usually drawn is that the
distributed system will provide more total throughput.  Unfortunately,
this is often false.

Some of the `total power' being applied goes to overhead---
communication between the `smart' I/O board and the main CPU.  If the
protocol is fat, or if there are bugs in the `smart board', this
overhead can outweigh the savings from having moved some of the `hard
work' off the main CPU.  Then, too, there is this: the CPU in the
`smart' board may in fact be quite stupid.  If you give it any
significant work, it may take a significant amount of time to handle
it.  This will increase latency, and may even decrease total throughput
(give too much work to the slow processor, and the fast one will always
be waiting).
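To make that concrete, here is a back-of-envelope sketch; all the
instruction counts and MIPS ratings below are hypothetical, chosen
purely to illustrate how a slow adapter CPU plus a fat protocol can
cost more than just doing the work on the host:

```python
def per_command_us(host_mips, board_mips, host_work,
                   protocol_work, board_work):
    """Rough per-command latency in microseconds (1 MIPS = 1 instr/us).

    Dumb adapter: the host CPU does all the work itself.
    Smart adapter: the host pays a protocol overhead, then the
    board's (possibly slow) CPU does the rest; worst case, no
    overlap between the two is assumed.
    """
    dumb = host_work / host_mips                  # instr / (instr per us)
    smart = protocol_work / host_mips + board_work / board_mips
    return dumb, smart

# Hypothetical: 16-MIPS host, 2-MIPS adapter CPU.
dumb, smart = per_command_us(16, 2, host_work=5000,
                             protocol_work=2000, board_work=4000)
# The `smart' path comes out several times slower even though it
# offloaded most of the work from the host.
```

With these made-up numbers the dumb path takes about 312 us and the
smart path over 2000 us; the slow board plus the protocol overhead
dominates.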

>[various points about support deleted]

>[various points about speed deleted]

>  The size of the buffer has a lot to do with the overall performance of the
>SCSI device.  Buffer sizes range from 16K to 256K.

(or more)

>If the buffer is just a buffer to smooth the data transfer, then the size of
>the buffer can be small if you have a host adapter capable of moving the data
>at the full rate of the SCSI bus.

As long as the SCSI bus is not busy, and/or the bus the adapter is on is
not busy, that is.  A smoothing buffer *can* work (e.g., the Emulex
massbus adapter for SBI VAXen has only a smoothing buffer), but typically
it works well only if the rest of the system is overdesigned (e.g., the
SBI).
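For a sense of scale (device rate and stall time below are invented):
a smoothing buffer only has to cover the longest interval during which
the system bus is unavailable, so the size it needs grows with both
the device rate and the worst-case stall.

```python
def smoothing_buffer_bytes(device_rate_mb_s, worst_stall_ms):
    """Minimum smoothing-buffer size: the data arriving from the
    device during the longest bus stall must fit, or the transfer
    overruns.  Figures here are hypothetical."""
    return device_rate_mb_s * 1_000_000 * worst_stall_ms / 1000.0

# A 4 MB/s device riding out a 2 ms bus stall needs ~8 KB ...
small = smoothing_buffer_bytes(4, 2)
# ... but a 20 ms stall on a busy, underdesigned bus needs ~80 KB.
large = smoothing_buffer_bytes(4, 20)
```

On an overdesigned bus like the SBI the worst-case stall stays small,
which is exactly why a bare smoothing buffer can get away with it there.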

>  If the device has a read ahead buffer, then sequential accesses will be
>much quicker, although the more fragmented the file system, the worse
>the performance.

Unix boxes with the 4BSD file system will gain a great deal from a read-ahead
buffer (provided you are reading `large' files).  Your exact gain will vary,
of course.
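A crude model (all timings invented for illustration) of why read-ahead
pays off on sequential access:

```python
def sequential_read_ms(blocks, xfer_ms, rot_latency_ms, readahead):
    """Simplified: with read-ahead, consecutive requests hit the
    drive's buffer and pay only transfer time; without it, each
    request waits out rotational latency again (the drive has
    `just missed' the next sector).  Fragmentation breaks the
    sequential pattern and defeats the read-ahead case."""
    if readahead:
        return blocks * xfer_ms
    return blocks * (xfer_ms + rot_latency_ms)

# Hypothetical 100-block file, 1 ms transfer, 8 ms rotational latency:
with_ra = sequential_read_ms(100, 1.0, 8.0, readahead=True)
without = sequential_read_ms(100, 1.0, 8.0, readahead=False)
```

Under these assumptions the read-ahead case finishes in 100 ms against
900 ms without it; your exact gain, as above, will vary.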

>Today SCSI overhead, for the device, is down to 1 millisecond and less.

1 ms is still quite significant---e.g., on a MIPS box running at 16 MHz
(DECstation 3100), the host CPU could execute about 16 thousand instructions.
SCSI manufacturers really need to cut the time down by at least an order
of magnitude.
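The arithmetic behind that figure, assuming a MIPS-class machine
retires roughly one instruction per cycle:

```python
def instructions_lost(overhead_s, clock_hz, instr_per_cycle=1.0):
    """Host instructions forgone while waiting out per-command
    device overhead (instr_per_cycle ~ 1 for an R2000-class CPU)."""
    return overhead_s * clock_hz * instr_per_cycle

# 1 ms of SCSI command overhead on a 16 MHz DECstation 3100:
lost = instructions_lost(1e-3, 16e6)   # 16,000 instructions
```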

>BENCHMARKING MADE EASY??

>[text showing how benchmarking SCSI devices is NOT easy]

Agreed.

It is worth noting that random seek behaviour rarely occurs in
practice, so that track-to-track seek time is more important than
end-to-end seek time (or the so-called `average' seek time, which
is computed by assuming that any cylinder is equally likely to be
accessed next, i.e., access is completely random).  This is true
of any system that does sane sector layout, including
BSD-file-system-Unix-boxes.
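That uniform-random assumption behind the `average' figure is easy to
check with a quick simulation: the mean seek distance comes out near
one third of the full stroke, far longer than the track-to-track seeks
a sanely laid-out file system mostly issues.  (The cylinder count and
trial count below are arbitrary.)

```python
import random

def mean_seek_fraction(cylinders=1000, trials=200_000, seed=1):
    """Monte Carlo estimate of the mean seek distance, as a fraction
    of full stroke, under the completely-random-access assumption.
    Analytically this tends to 1/3 as the cylinder count grows."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += abs(rng.randrange(cylinders) - rng.randrange(cylinders))
    return total / trials / (cylinders - 1)

frac = mean_seek_fraction()   # close to 1/3
```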
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris@cs.umd.edu	Path:	uunet!mimsy!chris



More information about the Comp.unix.questions mailing list