AFS

William Sommerfeld wesommer at athena.mit.edu
Fri Jun 9 06:46:05 AEST 1989


In article <881 at mtxinu.UUCP> shore at mtxinu.COM (Melinda Shore) writes:

   Allow me to add a few items to the bad things list:

Allow me to rebut a few of them.

   1) The [protection] semantics really are different from Unix
     filesystem semantics.

The use of access control lists is necessary in large-scale
environments.  It is quite common to want to give read access to one
group of users (members of a class), and write access to a non-unit
subset of that group (the TA's).  Try doing that with vanilla UNIX
protections.  This is DIFFERENT, not BAD.

What is wrong is that the ACLs are assigned on a per-directory basis,
rather than a per-object basis; I've beaten up Al Spector and Mike
Kazar about this on more than one occasion.
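For the class/TA case above, the usual AFS recipe is a pair of protection groups plus "fs setacl" on the course directory.  A minimal sketch (the group names and path are hypothetical; "read" is AFS shorthand for the rl rights, "write" for rlidwk):

```shell
# Create protection-server groups for the class and the TAs
# (names and path below are made up for illustration).
pts creategroup class:students
pts creategroup class:tas

# Grant the whole class read access, the TAs write access,
# on the directory's ACL -- per-directory, as noted above.
fs setacl /afs/athena.mit.edu/course/6.033 class:students read
fs setacl /afs/athena.mit.edu/course/6.033 class:tas write

# Inspect the result.
fs listacl /afs/athena.mit.edu/course/6.033
```
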

           links to files
	   in different directories are not allowed.

... because this would lead to indeterminate protection on the file.
Cross-directory *hard* links are not allowed, but symlinks exist just
as in normal BSD.

   2)  Directories, which you and I consider to be files, aren't treated as
	   files by AFS.  *No* caching, which means that you can ls until the
	   cows come home but the 80th time is not going to be any faster than
	   the first.

Please check your facts; last I looked, they're cached just like files.
A significant part of the hair in the AFS client is involved with
keeping the local copy of a directory in synch with the master copy
when directory operations are done.

   3)  Performance.  The whole file is copied over at access time, which
	   speeds up future file accesses but can turn "grep string *" into a
	   fairly unpleasant experience.

Yes, but the user process doing the "grep" sees the bits as soon as
they're available, and doesn't have to wait for them to be written to
the cache.  At least for the configuration I timed (RT PC APC, 70MB
drive on server and client), AFS and NFS fetched over the wire at
about the same speed; once the bits were local, AFS was just as fast
as local disk.

   4)  Disk usage.  Because entire files are copied over it can be something
	   of a disk burner.

True; you want a cache large enough that the "working set" of files
you normally touch over an hour or two fits in its entirety.  For
normal users, 10MB is probably enough, while for "power users" doing
kernel builds, 30MB+ is more like it.

If your model is that all the "interesting" files are on file servers,
and workstation disks are only used for paging and temp space, then
it's not unreasonable to split the free space on the disk between the
AFS cache and swap space.

In actuality, the files are copied over in 64KB "chunks" so that files
larger than the cache can be manipulated, albeit less efficiently.
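The cache location and size are set when the cache manager starts; in the AFS client this lives in the cacheinfo file read by afsd.  A sketch, with an assumed path and size (the format is mount-point:cache-directory:size in 1K blocks):

```shell
# /usr/vice/etc/cacheinfo -- one line, three colon-separated fields.
# A 30MB cache (30000 1K blocks) for the "power user" case above;
# the path and size here are illustrative, not a recommendation.
echo '/afs:/usr/vice/cache:30000' > /usr/vice/etc/cacheinfo
```
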

   5)  Administration is somewhat (!) complex.  

Agreed, but managing 10 AFS servers is only slightly harder than
managing one; the same is definitely not true of NFS.  The "design
center" for AFS is an installation of a dozen or two *servers*, each
with maybe a GB of disk, serving a few thousand users/workstations
simultaneously.

Have lots of space on one disk, while another is bursting at the
seams?  Do a "vos listvol <host> <partition>" to see which volumes are
hogging space, then do a "vos move <vol> <host1> <part1> <host2>
<part2>", wait for it to complete, and you're set.  The people using
the volumes probably won't even notice the change, even if they're
changing their files while the move is taking place!
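Concretely, the rebalancing described above looks something like this (host, partition, and volume names are made up; the argument order follows vos itself):

```shell
# See which volumes on the full partition are hogging space.
vos listvol fs1.example.edu /vicepa

# Move the offender to a server with room; clients keep working
# during the move and transparently follow the volume afterward.
vos move user.jdoe fs1.example.edu /vicepa fs2.example.edu /vicepb

# Confirm it landed on the new server/partition.
vos listvol fs2.example.edu /vicepb
```
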

					- Bill
--


