unix question: files per directory

Robert Cousins rec at dg.dg.com
Sat Apr 15 01:04:04 AEST 1989


In article <9195 at alice.UUCP> andrew at alice.UUCP (Andrew Hume) writes:
>
>
>in the fifth edition, directories that could no longer fit in the directly
>mapped blocks caused unix to crash.
>
>nowadays, the only reason not to have huge directories is that they
>make a lot of programs REAL slow; it takes time to scan all those dirents.

There is a harder limit on directory sizes in the System V file system:
there can be only 64K inodes per file system.  As I recall (and it has
been a while since I actually looked at it), the directory entry was
something like this:

	struct dirent {
		unsigned short inode;	/* or some special 16-bit type */
		char filename[14];	/* fixed-width file name */
	};

which yielded a 16 byte entry.  Since there is a maximum number of links
to a file (2^10 or 1024?), the absolute maximum size of a directory
would be:

	64K * 1024 * 16 = 2^16 * 2^10 * 2^4 = 2^30 bytes = 1 gigabyte
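
A quick sanity check of those figures (assuming the 16-bit inode number,
the 1024-link ceiling guessed at above, and no padding in the entry)
might look like this:

	/*
	 * Back-of-the-envelope check, not actual System V source: assumes
	 * a 16-bit inode number, a 1024-link ceiling and 16-byte entries.
	 */
	#include <stdio.h>

	struct sysv_dirent {		/* hypothetical name for the entry above */
		unsigned short d_ino;	/* 16-bit inode number */
		char d_name[14];	/* fixed-width file name */
	};

	int main(void)
	{
		unsigned long inodes = 1UL << 16;	/* 64K inodes per file system */
		unsigned long links  = 1UL << 10;	/* assumed link limit */
		unsigned long esize  = sizeof(struct sysv_dirent);

		printf("entry size: %lu bytes\n", esize);	/* 16 */
		printf("max directory size: %lu bytes\n",
			inodes * links * esize);		/* 2^30 = 1 gigabyte */
		return 0;
	}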

This brings up one of the major physical limitations of the System V
file system:  if you can have 2^24 blocks but only 2^16 discrete
files, then to use the entire file system space, each file will
(on average) have to be 2^8 blocks long, or 128K at 512 bytes per
block.  Since we know that about 85% of all files on most unix systems
are less than 8K and about half are under 1K, I personally feel that
the 16 bit inode number is a severe handicap.
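
The same arithmetic, spelled out (again assuming the traditional
512-byte block size):

	/*
	 * Rough arithmetic only: assumes 2^24 addressable blocks, 2^16
	 * inodes, and 512-byte blocks.
	 */
	#include <stdio.h>

	int main(void)
	{
		unsigned long blocks    = 1UL << 24;	/* max blocks per file system */
		unsigned long files     = 1UL << 16;	/* max inodes per file system */
		unsigned long blocksize = 512;		/* bytes per block (assumed) */

		unsigned long avg = blocks / files;	/* 2^8 = 256 blocks per file */
		printf("average file size to fill the fs: %lu K\n",
			avg * blocksize / 1024);	/* 128 K */
		return 0;
	}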

Comments?

Robert Cousins

Speaking for myself alone.


