File Fragmentation

Paul De Bra debra at alice.UUCP
Wed Jan 11 10:44:32 AEST 1989


In article <18068 at adm.BRL.MIL> slouder at note.nsf.gov (Steve Loudermilk) writes:
}Hi,
}
}I am involved in a local discussion about the benefits of "compacting" the
}information on our disks regularly.  By compacting I mean dumping to a
}different device, running "newfs" and then restoring a file system.
}
}One school of thought says this is necessary and should be done fairly
}frequently to avoid excessive fragmentation and inefficient disk I/O.
}
}The other school of thought says it isn't necessary because of the way 
}the Berkeley "fast file system" (BSD 4.2) handles assignment of
}blocks and fragments when a file is stored.  
}

Disk fragmentation (or file fragmentation, as you call it) still occurs
in most versions of Unix, but the Berkeley "fast file system" keeps it
to a minimum.

On a BSD system I would think a dump/newfs/restore needs to be done
only every year or so. On other systems the file system can get messed
up in a matter of hours; one (painful) remedy is to unmount and fsck -S
all file systems once a day, which keeps the fragmentation down for a
long time.
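
For concreteness, both procedures look roughly like the sketch below;
the tape drive, disk devices and disktab entry are only examples, so
substitute whatever your site uses:

    # BSD: level-0 dump, then remake and reload one file system
    umount /dev/ra0g                # make sure nothing changes underneath
    dump 0f /dev/rmt0 /dev/rra0g    # level-0 dump to tape (or another disk)
    newfs /dev/rra0g ra81           # remake it; disk type comes from /etc/disktab
    mount /dev/ra0g /usr
    cd /usr; restore rf /dev/rmt0   # reload the whole dump into the fresh fs

    # old file system: the daily free-list rebuild
    umount /dev/dsk1
    fsck -S /dev/rdsk1              # rebuild the free list if the fs is otherwise clean
    mount /dev/dsk1 /usr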

The old file system does not try to use disk blocks in any sensible
way: it keeps a queue of blocks as they are freed and reuses them in
that order, so the free list soon gets scrambled (that is what the
daily fsck -S straightens out). The V9 "bitmap" file system keeps
fragmentation more local, although I believe it doesn't keep it down
quite as much as BSD does.

Paul.
-- 
------------------------------------------------------
|debra at research.att.com   | uunet!research!debra     |
------------------------------------------------------


