_UNIX_Today!_

John Richardson jr at frog.UUCP
Sat Sep 9 05:09:00 AEST 1989


In article <971 at utoday.UUCP>, greenber at utoday.UUCP (Ross M. Greenberg) writes:
> 
>  Text about a _UNIX_TODAY!_ magazine benchmark, and a request for some
> input/discussion on benchmarks to use in the future.
>

   Well, for the past year I have been measuring the I/O performance of
various systems, disk drives, controllers, etc., in the UNIX/386 world.
This is for a product targeted at the OLTP database market, where I/O
performance is critical. I have written a short program that tries to model
the access pattern of a database running the infamous 'TP1' benchmark. I have
found that the pattern is mostly random reads, 2K bytes in size, spread across
the whole disk. Since multiple users access the disk at once, the benchmark
also needs to run multiple processes to check the overall throughput (or
degradation, in some cases).
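
  For concreteness, here is a minimal sketch in C of the sort of program
described. This is NOT the actual benchmark; the usage line, the 60-second
measurement interval, and the device names in the examples are my own
assumptions:

    /*
     * diskbench.c -- sketch of a TP1-style random-read benchmark.
     * Usage (argument names assumed):
     *
     *    diskbench rawdev size-in-MB read-size nprocs
     *
     * Each child seeks to a random, read-size-aligned offset within
     * the first size-in-MB megabytes of the raw device, reads, and
     * counts completed I/Os over a fixed interval.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/wait.h>
    #include <time.h>

    #define INTERVAL 60L                /* seconds each child runs */

    int main(int argc, char **argv)
    {
        long mbytes, rdsize, nblocks;
        int i, nprocs;

        if (argc != 5) {
            fprintf(stderr, "usage: %s rawdev size-MB read-size nprocs\n",
                    argv[0]);
            exit(1);
        }
        mbytes  = atol(argv[2]);
        rdsize  = atol(argv[3]);
        nprocs  = atoi(argv[4]);
        nblocks = mbytes * 1024L * 1024L / rdsize;

        for (i = 0; i < nprocs; i++) {
            if (fork() == 0) {                  /* child */
                char  *buf   = malloc(rdsize);
                int    fd    = open(argv[1], O_RDONLY);
                long   count = 0;
                time_t stop  = time(NULL) + INTERVAL;

                if (fd < 0 || buf == NULL)
                    exit(1);
                srand((unsigned)getpid());      /* distinct sequence per child */
                while (time(NULL) < stop) {
                    long blk = rand() % nblocks;  /* ok while nblocks <= RAND_MAX */
                    lseek(fd, blk * rdsize, SEEK_SET);
                    read(fd, buf, rdsize);
                    count++;
                }
                printf("pid %d: %ld I/Os, %.1f per second\n",
                       (int)getpid(), count, (double)count / INTERVAL);
                exit(0);
            }
        }
        while (wait(NULL) > 0)                  /* parent waits for children */
            ;
        return 0;
    }

To expose the trick described in point 1 below, such a program can be run
once over the advertised capacity and once over the first 32MB only (the
device name here is hypothetical):

    diskbench /dev/rdsk/0s0 80 2048 4
    diskbench /dev/rdsk/0s0 32 2048 4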

  This benchmark takes arguments for the size of the drive being accessed,
the size of each read, and the number of concurrent processes accessing the
drive. Running it has turned up some interesting observations:

1: In the DOS world, a lot of drives have good advertised access times that
   are based on accessing only 32MB of the drive. (E.g., an 80MB drive rated
   at 35ms shows an access time of 21ms when only 32MB of it is exercised.)
   The benchmark can be run across the whole drive to wring out this trickery.

2: Caching drives or controllers can slow down performance if the firmware
   writers do not do it right. The problem is that the cache search delays
   the point at which the hardware seek can be started, and this benchmark
   (like the application it models) hits the drive cache VERY infrequently.
   (The database has its own 2-4MB cache, so it WON'T ask the drive for a
   block it requested within the last 2MB of I/O.)
   Drives that avoid this penalty start the seek on the hardware while
   searching the cache in parallel; that overlap accounts for almost 4ms of
   the time difference.

3: Some operating systems and/or device drivers have their overall
   throughput GO DOWN as users are added. That is, if one process is run
   and a given drive/controller delivers 38 I/Os per second, running a
   second process knocks the total throughput down to 34 I/Os per second
   (17 per process), and adding more processes drives it lower still.
   My theory is contention for some shared resource, such as a buffer used
   for DMA that crosses a page boundary on controllers that do not support
   byte-level scatter/gather. (A sketch of one way to sidestep this follows
   this list.)
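
  If the theory in point 3 is right, one thing a driver or application can
do is keep each 2K transfer inside a single page, so the DMA never needs
more than one contiguous segment even without scatter/gather. Here is a
hedged sketch of such an allocation; the 4096-byte page size and the
function name are my assumptions:

    /*
     * aligned_buf() -- return a buffer of 'size' bytes that starts on
     * a page boundary, so a 2K read never straddles two pages.  The
     * original malloc pointer is not kept here; a real program would
     * save it so the buffer can be freed later.
     */
    #include <stdlib.h>

    #define PAGESIZE 4096L          /* assumed; use the system's page size */

    char *aligned_buf(long size)
    {
        char *raw = malloc(size + PAGESIZE);

        if (raw == NULL)
            return NULL;
        /* round the address up to the next page boundary */
        return (char *)(((long)raw + PAGESIZE - 1) & ~(PAGESIZE - 1));
    }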

  This is some information that I have found useful in evaluating systems,
controllers, and drives for OLTP applications.


					JR


John Richardson
Principal Engineer
Charles River Data Systems
983 Concord St. Framingham Mass 01701


