Computational complexity of rm & ls

Carl M. Fongheiser cmf at cisunx.UUCP
Tue Mar 14 04:52:15 AEST 1989


In article <7919 at chinet.chi.il.us> les at chinet.chi.il.us (Leslie Mikesell) writes:
>The maximum optimal size probably varies with the OS version.  I've
>been told that directory access becomes much less efficient when
>the directory inode goes to triple indirect blocks (300 files?).

Yikes.  I'm sure you mean double indirect, or even plain single indirect
blocks.  A file big enough to need triple indirect blocks is unthinkably
large; a directory that needs them is even more unthinkable!

If you do somehow manage to build a directory with triple indirect blocks,
then yes, your directory access will be *very* slow.
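
For concreteness, here's some back-of-the-envelope arithmetic on where the
indirect-block boundaries fall.  This little program assumes the classic
System V filesystem figures (1K logical blocks, 10 direct block pointers
in the inode, and 256 four-byte pointers per indirect block), so treat it
as a sketch and plug in your own filesystem's parameters:

/*
 * Where the s5fs indirect-block boundaries fall, assuming 1K blocks,
 * 10 direct pointers, and 256 pointers per indirect block.  These are
 * the usual figures, not guarantees; adjust for your filesystem.
 */
#include <stdio.h>

int main(void)
{
    long bsize = 1024;              /* logical block size in bytes */
    long nptr  = bsize / 4;         /* pointers per indirect block: 256 */

    long direct = 10 * bsize;           /* reachable via direct blocks */
    long single = nptr * bsize;         /* added by the single indirect */
    long dbl    = nptr * nptr * bsize;  /* added by the double indirect */

    printf("direct blocks only:  up to %ld bytes\n", direct);
    printf("+ single indirect:   up to %ld bytes\n", direct + single);
    printf("+ double indirect:   up to %ld bytes\n", direct + single + dbl);
    printf("at 16 bytes per entry, that's %ld directory slots\n",
           (direct + single + dbl) / 16);
    printf("before the triple indirect block is ever touched\n");
    return 0;
}

That works out to roughly 64 megabytes, or over four million 16-byte
directory entries, before a directory would grow a triple indirect block.
Nobody's directory looks like that.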

>Under SysV directories never shrink by themselves, so access efficiency
>depends on how many files have ever been in that directory at once
>instead of how many are currently there.  If you have a scheme to
>determine which of 100 directories a particular file will be stored
>under, that calculation is almost certain to be faster than a search
>of many hundreds of extra files in the same directory.

Nonsense.  System V directories don't shrink, but they don't grow unless
they need to, either.  When you unlink a file, its slot in the directory
just gets its inode number zeroed; System V can and will fill those holes
back in before it extends the directory.
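
If you want to see the holes for yourself, here's a quick sketch that
reads a System V directory raw and counts the free slots.  It assumes
the old 16-byte s5fs entry format from <sys/dir.h>, where a slot with
d_ino == 0 is a hole the kernel will reuse, so don't expect it to do
anything sensible on other filesystems:

/*
 * Count in-use entries and reusable holes in a System V directory.
 * Assumes the classic s5fs format: 16-byte entries, d_ino == 0 marking
 * a slot freed by unlink().  Reading a directory with read(2) works on
 * old System V; it won't on most modern systems.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>             /* for read(); declare it yourself on
                                   systems too old to have this header */
#include <sys/types.h>
#include <sys/dir.h>            /* struct direct: d_ino, d_name[DIRSIZ] */

int main(int argc, char **argv)
{
    struct direct d;
    int fd, used = 0, holes = 0;

    if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
        fprintf(stderr, "usage: %s directory\n", argv[0]);
        return 1;
    }
    while (read(fd, (char *)&d, sizeof d) == sizeof d) {
        if (d.d_ino == 0)
            holes++;            /* freed by a previous unlink() */
        else
            used++;
    }
    printf("%d entries in use, %d reusable holes\n", used, holes);
    return 0;
}

Delete a pile of files from a big directory, run it, and watch the hole
count climb; the next creat() in that directory grabs the first free slot
instead of extending the directory.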

				Carl Fongheiser
				University of Pittsburgh


