If you have Xenix 386, run this for me

Bill Brothers billbr at xstor.UUCP
Sat Feb 2 10:03:13 AEST 1991


In article <10988 at uhccux.uhcc.Hawaii.Edu> bt455s39 at uhccux.UUCP (Carmen Hardina) writes:

 <blah blah benchmark, etc. etc.>

Something interesting has happened here. Massive insane irrelevant
testing. If you run the dd in question against the block device
(/dev/hd00), then the size of the disk buffer cache makes a radical
difference in the outcome, since the actual writes to the disk may
occur AFTER the timed run has completed, thanks to delayed writes.
If you run against the raw device, you get total misinformation,
since SCO XENIX uses a single buffer for raw transfers and copies
the data into user space. Voila! a bottleneck. Another problem is
whether you are running single-user (maintenance mode) or
multi-user; multi-user will change the numbers appreciably. Around
here, we actually do the dd time test under the following conditions.

	1. single user mode
	2. kernel tuned to 50 buffers (yes, that's right, 50!)
		This is so that we stress our drivers instead of the
		memory :-)
	3. time dd if=/dev/hd00 of=/dev/null bs=64k count=100
		This is so that the test will be consistent from
		UNIX to XENIX.
	
We use this as a _GROSS_ measurement of performance. There are
still many inaccuracies in doing this. 
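For the curious, the whole procedure boils down to a few lines of
sh. This is only a sketch of what's described above, not a polished
script: the sync calls are my own habit for clearing pending delayed
writes before timing, and /dev/hd00 is just whatever your first
disk's block device happens to be called.

	# clear any pending delayed writes so they don't compete
	# with the timed read
	sync; sync
	# 100 * 64k = 6400 KB transferred; divide by the real time
	# reported by time(1) to get KB/s
	time dd if=/dev/hd00 of=/dev/null bs=64k count=100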

The results we have found seem to reflect OS changes rather than
disk/controller technology. Sure, the hardware makes a difference,
but not by the same factor as the OS.

	XENIX -> UNIX 3.2.0 = About the same
	UNIX 3.2.0 -> 3.2.3 = 2X faster.

What happened? The Acer File System (AFS) seems to have been
fixed in 3.2.2. On a Systempro 386 with 1 processor we measured
75 KB/s; running 3.2.2 yielded 850 KB/s. Raising the disk buffers
to 650 pushed the number to around 1 Mbyte/s. Going to MPX with
two processors, a distributed driver, and 650 disk buffers yields
1034 KB/s. Certainly better performance. This is because the
driver gets 8-16K requests instead of 1K requests: the AFS
clusters sequential accesses. This was using SCSI host adapters
and SEAGATE WrenRunner II's.
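If you want to see the request-size effect for yourself, here is a
quick-and-dirty comparison (my own hack, not part of our standard
test). I'm assuming /dev/rhd00 is the raw device for the same disk;
the raw path hands your bs straight to the driver, so a 1K bs really
does mean 1K requests. On SCO XENIX remember the single-buffer
caveat above: the absolute numbers will be off, but the relative
difference still shows.

	# same 6400 KB read two ways; only the request size changes
	time dd if=/dev/rhd00 of=/dev/null bs=1k  count=6400
	time dd if=/dev/rhd00 of=/dev/null bs=64k count=100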

What we found is that the overhead in the UNIX caching scheme is
the bottleneck, as the system tends to become processor-bound.
So far, the maximum data rate through the UNIX file system
seems to be 1.5 Mbytes/s.
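A crude way to see that overhead (again my own comparison, and it
assumes you have a file of at least 6400 KB lying around, here
called /usr/tmp/bigfile) is to time the same-sized read through the
file system and straight off the block device:

	# through the file system: namei, inode/indirect blocks,
	# AFS clustering, plus the buffer cache
	time dd if=/usr/tmp/bigfile of=/dev/null bs=64k count=100
	# straight off the block device: buffer cache only
	time dd if=/dev/hd00 of=/dev/null bs=64k count=100
	# if the first run is noticeably slower and the CPU is pegged,
	# the file system path is where your cycles are going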

All tests here were done with Storage Dimensions drivers, but
the native drivers show similar performance.

One other note: just because one disk goes REAL FAST on a
sequential test, it cannot be concluded that it will perform
better when accessed randomly at high I/O rates. In UNIX,
the name of the game is fast access, not high data rate.
This is because most UNIX filesystems are helter-skelter.
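If random performance is what you care about, a very rough feel for
it (my own hack; the skip offsets are arbitrary, so pick ones that
fit on your disk, and the numbers only mean much on a cold cache) is
a handful of scattered single-block reads:

	# scattered 1k reads -- seek time dominates, not transfer rate
	time sh -c '
	for off in 1000 250000 9000 180000 42000 310000 7000
	do
		dd if=/dev/hd00 of=/dev/null bs=1k count=1 skip=$off 2>/dev/null
	done'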

Bill Brothers
Engineering Mgr.
Storage Dimensions, Inc.
uunet!xstor!billbr			| Information is power. -- Be empowered.


