File Fragmentation

Chris Torek chris at mimsy.UUCP
Wed Jan 11 14:58:31 AEST 1989


In article <18068 at adm.BRL.MIL> slouder at note.nsf.gov (Steve Loudermilk) writes:
>I am involved in a local discussion about the benefits of "compacting" the
>information on our disks regularly.  By compacting I mean dumping to a
>different device, running "newfs" and then restoring a file system.

>One school of thought says this is necessary and should be done fairly
>frequently to avoid excessive fragmentation and inefficient disk I/O.
>The other school of thought says it isn't necessary because of the way 
>the Berkeley "fast file system" (BSD 4.2) handles assignment of
>blocks and fragments when a file is stored.  

The second school is usually correct.

>... Ultrix 2.3. ... running with at least 12% free space.  

If that `12% free' means that `df' shows 12% free, then you have plenty
of room.  If it means that `df' shows 2% free (i.e., the 12% counts the
reserved blocks), you still have room.  Only when `df' shows the file
system as 110% full are you truly out of space.  This 10% `reserve' is
there to prevent fragmentation from becoming excessive.  It can be
adjusted if desired (see `man 8 tunefs').
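As a rough illustration of why df can show more than 100%, here is a
minimal sketch of the capacity arithmetic, assuming a 10% minfree
reserve and df's used/(used + avail) formula, where `avail' excludes
the reserve.  The function name and block counts are hypothetical,
not real df output.

```python
def percent_used(total_blocks, used_blocks, minfree=0.10):
    """Mimic the BSD df capacity figure: used / (used + avail),
    where avail excludes the minfree reserve.  Can exceed 100%
    once writing eats into the reserved blocks."""
    avail = total_blocks * (1.0 - minfree) - used_blocks
    return round(100.0 * used_blocks / (used_blocks + avail))

# 880 of 1000 blocks in use: df would report about 98% -- nearly
# "full" by df's reckoning, yet the 10% reserve is untouched.
print(percent_used(1000, 880))   # -> 98
# Every block in use, reserve included: about 111% -- truly full.
print(percent_used(1000, 1000))  # -> 111
```

The point is that the denominator is only 90% of the disk, so a file
system with every block allocated reports roughly 110% full.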

Recent Berkeley releases have `fsck' report the amount of fragmentation.
On our machines it is typically under 1%.

If you do have too much fragmentation, you may find that you
cannot write files even though some free space remains.  Dumping
and restoring should reduce the fragmentation.  We have never found
it necessary to do this, and I have never heard of anyone who has.
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris at mimsy.umd.edu	Path:	uunet!mimsy!chris



More information about the Comp.unix.questions mailing list