tape record size limit of 2 K bytes not that short.

utzoo!decvax!ucbvax!unix-wizards
Sun Nov 1 03:09:34 AEST 1981


From walton at LL-XN Sun Nov  1 02:58:55 1981
The wisdom from the 1960s is that as you increase your tape block size,
the probability of an error appearing in the block increases at a rate
much greater than proportional to the record size.  I have seen this
in black and white in IBM documentation many years ago.

We have had direct experience with this using 1600 BPI tapes and a
homebrew version of the V6 dump program a few years back.  That dump
wrote the table of contents as one record, of a length like 32K
(though I do not remember exactly), and restore suffered an unreasonable
number of failures trying to read back the long record.  We changed dump
and the problem went away.

It would be nice to have some hard data on this phenomenon.  In its
absence, and remembering that many small computers have limited
buffer sizes, I would choose a block size just big enough to give
acceptably efficient tape utilization and not be too awkward for
the programmer.  For 1600 BPI, 2K bytes gives 71% utilization and
is not all that bad.  4K gives 83% and would be my personal choice,
while 8K gives 91% and would be my personal upper limit.
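
For reference, those utilization figures follow from the ratio of data
length to data length plus inter-record gap.  A minimal C sketch of the
arithmetic, assuming a 0.5-inch gap (gap lengths vary by drive and
format, and "2K" may be read as 2000 or 2048 bytes, so the printed
values land within a point or so of the figures above):

    #include <stdio.h>

    int
    main(void)
    {
        double bpi = 1600.0;          /* recording density, bytes per inch */
        double gap = 0.5;             /* assumed inter-record gap, inches */
        int sizes[] = { 2048, 4096, 8192 };
        int i;

        for (i = 0; i < 3; i++) {
            double data = sizes[i] / bpi;         /* inches of tape holding data */
            double util = data / (data + gap);    /* fraction of tape that is data */
            printf("%5d bytes: %2.0f%%\n", sizes[i], 100.0 * util);
        }
        return 0;
    }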

I know one application programmer who becomes slightly unfriendly
above 4K.  He lives in a PDP-11, and it's his memory space you
are taking.


