Berkeley file system tuning

Larry Taborek larry at macom1.UUCP
Fri Jan 13 23:48:48 AEST 1989


From article <310 at sagpd1.UUCP>, by banderso at sagpd1.UUCP (Bruce Anderson):

>My second question is: how much effect does changing the block and
>fragment size have? The manual says that if you use an 8K block and
>fragment size it speeds up the file system but wastes space. Does
>anyone have a quantitative feel of how much the tradeoff is?

I kept some old copies of the 4.2BSD documentation from my old job, and
Volume 2 has a section on the 4.2BSD file system (A FAST FILE SYSTEM FOR
UNIX, by McKusick, Joy, Leffler, and Fabry).  From it I have selected
the following:

Space used	% Waste	Organization
775.2 MB	0.0	Data only, no separation
807.8 MB	4.2	Data only, 512 byte boundary
828.7 MB	6.9	512 byte block
866.5 MB	11.8	1024 byte block
948.5 MB	22.4	2048 byte block
1128.3 MB	45.6	4096 byte block

It also states:

"The space overhead in the 4096 (byte block) / 1024 (byte fragment) new
file system organization is empirically observed to be about the same 
as in the 1024 byte old file system organization."  ... "The net
result is about the same disk utilization when the new file
systems fragment size equals the old file systems block size."

Thus, once you know your fragment size, you can compare it to the table
above to estimate your amount of wasted space.  You can also choose
whether you have 2, 4, or 8 fragments per block; I believe that 4 is
about right.  With too high a fragment-to-block ratio (8), the data in
fragments may have to be copied up to 7 times before being rebuilt into
a block (this happens when a file grows beyond the size that 7 fragments
can hold, and the file system copies those fragments into a full block).
With too low a fragment-to-block ratio (2), the block/fragment concept
isn't helping very much.
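
For reference, both sizes are fixed when the file system is created.
On the BSD systems I've seen this is done with newfs; the device name
and disk type below are made up, and your flags may differ, so treat
this as a sketch:

	# 4096 byte blocks, 1024 byte fragments (4 fragments per block)
	newfs -b 4096 -f 1024 /dev/rxy0g eagle

	# 8192/1024 gives 8 fragments per block -- more fragment copying
	# as small files grow a fragment at a time
	newfs -b 8192 -f 1024 /dev/rxy0g eagle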

They also present a table which suggests to me that there is not all
that much difference in speed between a 4K block file system and an
8K block file system.  Instead, they state that the biggest factor in
keeping things fast is leaving at least 10% of the partition free.
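
That free space reserve is the minfree parameter.  If I remember right,
it can be changed on an existing file system with tunefs; the device
name below is made up:

	# keep 10% of the file system in reserve for the block allocator
	tunefs -m 10 /dev/rxy0g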

>When allocating inodes, what kind of ratio of disk space to inodes
>do people use? The default on the system is an inode for every 2KB
>of disk space in a file system but this seemed like an awfully high
>number of inodes. Is it?

It depends.  The default ratio of inodes to file system size is meant
as a good rule of thumb for most partitions.  If you plan on holding
usenet news on a partition (lots of small files), then you may wish to
lower this to one inode per 1KB of disk space.  On the other hand, if a
partition holds just a few very large files, then you may wish to raise
the parameter to one inode per 8KB of disk space.  When you look at
this sort of problem, you begin to understand why there are partitions,
and what use they satisfy.
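
The bytes-per-inode ratio is also set when the file system is made; on
the BSD systems I've seen it is the -i argument to newfs.  Again, the
device name and disk type here are made up:

	# news spool: one inode per 1024 bytes of data space
	newfs -i 1024 /dev/rxy0h eagle

	# a partition for a few large files: one inode per 8192 bytes
	newfs -i 8192 /dev/rxy0h eagle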

>This is probably HP specific but if you define multiple swap sections,
>does it fill up the first before starting on the secondary ones or
>does it use all in a balanced manner?  If the first then obviously
>the primary swap space should be on the fastest drive but otherwise
>it doesn't matter.

What I noticed on BSD systems I used to administer was that the
SECOND swap area was used exclusively until it filled, and then
the swap overflow went to the first.  To me this made sense, as the
second swap area was on our second physical disk, which generally
sees less i/o than the first physical disk is expected to.  (Any
comments on this are appreciated.)
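
I can't speak for HP, but on the 4.2/4.3BSD systems I'm familiar with
the extra swap areas are listed in /etc/fstab and turned on at boot;
the device names below are made up, and the fstab syntax for swap
entries varies, so check fstab(5) on your system:

	# add one particular partition to the swap pool
	swapon /dev/xy1b

	# or enable every swap device listed in /etc/fstab
	swapon -a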
-- 
Larry Taborek	..grebyn!macom1!larry		Centel Federal Systems
		larry at macom1.UUCP		11400 Commerce Park Drive
						Reston, VA 22091-1506
						703-758-7000


