Computational complexity of rm & ls

Leslie Mikesell les at chinet.chi.il.us
Thu Mar 16 02:56:14 AEST 1989


In article <16364 at mimsy.UUCP> chris at mimsy.UUCP (Chris Torek) writes:
>In article <7919 at chinet.chi.il.us> les at chinet.chi.il.us (Leslie Mikesell)
>writes:
>>The maximum optimal size probably varies with the OS version.  I've
>>been told that directory access becomes much less efficient when
>>the directory inode goes to triple indirect blocks (300 files?).
>
>300?!?  (Maybe 300! :-) [read `300 factorial'])


OK, OK... indirect blocks, not triple indirect blocks.  That shows how
much you miss when all you have to go by is the man pages.
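
For what it's worth, here is a back-of-the-envelope guess at where a
figure like 300 might come from.  I'm assuming the classic s5 layout of
16-byte directory entries and 10 direct block addresses per inode, so
check your own filesystem before trusting these numbers:

    # Entries a directory can hold before it needs its first indirect
    # block, assuming 16-byte entries and 10 direct blocks (s5 layout).
    expr 10 \* 512 / 16        # 512-byte blocks: 320 entries
    expr 10 \* 1024 / 16       # 1K blocks (S51K): 640 entries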

Back to the subject, though: a simple-minded test of catting 1000 tiny
files in a single directory to /dev/null, versus 100 files in each of
10 directories, showed that the large directory took about twice the
sys time.  Both runs used xargs and explicit filenames so that shell
expansion would not be a factor.  This confirms my more casual
observation that large directories hurt performance in general, at
least under SysV.  The worst case seems to be accessing a large
directory on a remote machine over RFS.
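
Something along these lines will reproduce the test (the directory and
file names here are made up, and this is a sketch rather than the exact
script I ran):

    # 1000 tiny files in a single directory, plus an explicit list
    # of their path names.
    mkdir bigdir
    i=0
    while [ $i -lt 1000 ]
    do
        echo hi > bigdir/f$i
        echo bigdir/f$i >> biglist
        i=`expr $i + 1`
    done

    # The same number of files spread 100 per directory over dir0..dir9.
    i=0
    while [ $i -lt 1000 ]
    do
        d=dir`expr $i % 10`
        [ -d $d ] || mkdir $d
        echo hi > $d/f$i
        echo $d/f$i >> littlelist
        i=`expr $i + 1`
    done

    # Feed the explicit path names through xargs so shell expansion is
    # not part of what gets timed; compare the sys figures of the runs.
    time xargs cat < biglist > /dev/null
    time xargs cat < littlelist > /dev/null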

Les Mikesell


