Sustained throughput to disk - DOS vs UNIX

Steve Ralston sralston at srwic.uucp
Thu Jul 26 15:52:59 AEST 1990


Hello all,

As part of my job, which is to certify large disk/tape subsystems for DOS-
and UNIX-based PCs, I wrote a C program to fill large disks with data
files as rapidly as possible.

My question is: why does the program achieve much better sustained disk
throughput running under DOS than running under UNIX/Xenix?

Without going into too much detail, the program allocates a large memory
buffer (the size is selectable; the default is 32 Kbytes), initializes it
with data bytes, and then repeatedly writes it to disk files using the
[unbuffered] write() function.
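
Roughly, the buffer setup amounts to the sketch below (simplified; the
helper name and the exact byte pattern are illustrative, not the real code):
    /* Simplified sketch of the buffer setup; the real program takes the
     * size from the command line, with 32 Kbytes as the default. */
    #include <stdlib.h>

    #define DEFAULT_BUFSIZE (32 * 1024)

    static char *make_buffer(unsigned bufrsize)    /* illustrative helper */
    {
        char *buf_ptr = malloc(bufrsize);
        unsigned i;

        if (buf_ptr != NULL)
            for (i = 0; i < bufrsize; i++)
                buf_ptr[i] = (char)(i & 0xff);     /* any repeating data */
        return buf_ptr;
    }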

Running the program in DOS, I can achieve sustained throughput to the disk
of about 600-800 Kbytes/sec (486/25 Microchannel w/ SCSI host adapter).
This allows me to fill a 640 Mbyte disk (MAXTOR 8760, formatted) in about
15 minutes.

The same program, running on UNIX, appears to be capable of "bursting" at
600-1000 Kbytes/sec, but does terribly in sustained throughput: less than
100 Kbytes/sec.  It takes HOURS to fill a 640 Mbyte disk under UNIX.
Am I doing something incredibly stupid (UNIX-wise) in the program?

In DOS the output files are opened with:
    open(filename, O_CREAT|O_TRUNC|O_RDWR|O_BINARY, S_IREAD|S_IWRITE)
UNIX is almost the same, but without O_BINARY:
    open(filename, O_CREAT|O_TRUNC|O_RDWR, S_IREAD|S_IWRITE)
In both DOS and UNIX the buffer (32 Kbytes) is repeatedly written, via the
descriptor returned by open(), with:
    write(fd, buf_ptr, bufrsize)
inside a fairly tight 'for' loop that only does error checking on the
return value from the write() function call.
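
Putting the pieces together, each file is filled by something like the
sketch below (again simplified; fill_one_file and nwrites are placeholder
names, and the real program decides for itself when to move on to the next
file):
    /* Simplified sketch of filling one file, UNIX version. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int fill_one_file(const char *filename, const char *buf_ptr,
                             unsigned bufrsize, long nwrites)
    {
        int  fd;
        long i;

        fd = open(filename, O_CREAT|O_TRUNC|O_RDWR, S_IREAD|S_IWRITE);
        if (fd < 0) {
            perror(filename);
            return -1;
        }
        for (i = 0; i < nwrites; i++) {
            /* the real loop only checks this return value */
            if (write(fd, buf_ptr, bufrsize) != (ssize_t)bufrsize) {
                perror("write");
                break;
            }
        }
        close(fd);
        return 0;
    }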

Any comments, suggestions, and/or "let me point out your stupidity" replies
would be greatly appreciated.  Thanks in advance.
--
Steve Ralston						sralston at srwic.UUCP
235 N Zelta						voice: 316-686-2019
Wichita, KS 67206			..!uunet!ncrlnk!ncrwic!srwic!sralston


