Setting up Home dirs...

Keith Moore moore at betelgeuse.cs.utk.edu
Thu Sep 20 15:35:41 AEST 1990


In article <2422 at dali> osyjm at caesar.cs.montana.edu (Jaye Mathisen) writes:
>
>How are other admin's setting up users home directories on a wide variety of
>machines?  Does each user have a home dir on each "logically" related
>set of machines?  Other ways?
>
>I've been playing with automount under Ultrix 4.0, but it doesn't seem to
>stand up to a lot of pounding...  How about using amd?

We use amd instead of Sun's automount, for several reasons -- but mainly
because it's more flexible, more robust, and it runs on all of our machines.
(It hasn't been entirely without problems, but most of these seem to 
be solved now.) All machines share a common amd.map file, which is
distributed by a shell script that rcp's it to each machine whenever
someone changes the master copy.
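A minimal sketch of that push, not the site's actual script -- the host
list, paths, and the overridable $RCP command are all invented for
illustration:

```shell
#!/bin/sh
# Sketch: copy the master amd.map to every machine after a change.
# Host names and paths below are assumptions, not the real ones.
MASTER=${MASTER-/usr/local/etc/amd.map}
HOSTS=${HOSTS-"red green blue"}
RCP=${RCP-rcp}              # set RCP=echo for a dry run

push_map() {
    for h in $HOSTS; do
        $RCP "$MASTER" "$h:/etc/amd.map" ||
            echo "push to $h failed" 1>&2
    done
}
```

Keeping one master copy and pushing it means every client sees the same
map, so a user's home directory resolves identically everywhere.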

(We avoid using YP because it has catastrophic failure modes -- several 
times it has eaten our entire campus net because so many machines were 
sending ether broadcast packets asking ``where's papa?'' that the YP 
servers (two SparcStations and a Sun 3/60 dedicated to nothing but YP 
service) could not keep up with the load...and of course every machine 
on the net was having to look at every broadcast packet to see what it
was for...which only made things worse.)

Our users' home directories (in the passwd file) are all of the form
/$color/homes/$user.  We don't embed the name of the machine that does
the file service, because we want the freedom to move users around
between machines to balance load and disk usage between groups of users.
We use colors as partition names precisely because they are arbitrary.
Each machine has a symlink for each color from /$color/homes -> /amd/$color,
and the amd map associates a machine and disk partition with the particular
color.
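A hypothetical amd.map fragment showing the idea -- the server and
partition names here are invented, not our real ones:

```
# map a color to a file server and exported partition
/defaults   type:=nfs;opts:=rw,hard,intr
red         rhost:=server1;rfs:=/export/red
blue        rhost:=server2;rfs:=/export/blue
```

With the per-machine symlink /red/homes -> /amd/red in place, a path
like /red/homes/jdoe resolves through amd to whichever server currently
holds the "red" partition; moving the partition means editing one map
entry, not every passwd file.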

(The ".../homes/..." part is an anachronism from the days when these were 
hard NFS mounts in /etc/fstab and the system would hang if you typed `pwd'
and any ancestor of your current directory happened to be an NFS mount
point on an unreachable file server....Yuk!...anyway, mounting
the disk on /$color/homes rather than on, say, /homes/$color solved
that problem...and we haven't changed over yet.  Once we do, we will be
able to get rid of the symlinks and change the user's directories to
something simpler like /homes/$color.)

This scheme actually works remarkably well, but there are lots of little
things we've had to learn about.  The biggest problems we have found 
have been with mail -- sendmail isn't prepared to deal with the kinds of 
failure modes you run into in a distributed file system.   (e.g. What if 
a user's .forward file is missing because the file server that contains 
his home area is down?)  I've managed to solve these problems without 
patching sendmail by replacing the "local", "prog", and "file" mailers 
with small programs or shell scripts that do some error checking before
actually delivering the mail.  
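A minimal sketch of that error checking, assuming the conventional
sysexits.h codes -- this is not the site's actual wrapper, and /bin/mail
stands in for whatever the real local delivery agent is:

```shell
#!/bin/sh
# Sketch: before delivering, verify the recipient's home directory is
# reachable; if not, exit EX_TEMPFAIL (75) so sendmail re-queues the
# message instead of bouncing it or losing the .forward processing.
EX_TEMPFAIL=75

check_home() {
    # fail temporarily if the home directory can't be reached
    [ -d "$1" ] || return $EX_TEMPFAIL
}

deliver_local() {
    user=$1 home=$2
    check_home "$home" || exit $EX_TEMPFAIL
    exec /bin/mail -d "$user"   # hand off to the real local mailer
}
```

The key design point is distinguishing "the .forward file doesn't
exist" from "the file server holding it is down": the first means
deliver normally, the second means defer.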

Other problems have been due to NFS mapping root->nobody on remote mounts.
Most recent NFS server implementations provide a way around this, but a
few of our machines still don't.  We therefore have
a special version of "calendar" that does an "su" to the owner of the
calendar file in order to read it, in case it's not readable by "nobody".
This version of calendar also does "ypcat passwd" instead of reading
the /etc/passwd file, so it scans directories for every user in the entire
passwd map...we have to make sure that only one system in the entire 
"cluster" runs calendar, else things slow down to a crawl.  We run it
on our mail server, since the mail that calendar generates will end up
there anyway.
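The calendar run might look something like this sketch -- the commands
are parameterized because the real script's details are assumptions
here:

```shell
#!/bin/sh
# Sketch: enumerate users from the passwd map and read each user's
# calendar file as that user, so NFS's root->nobody mapping can't
# block the read.  Real command names may differ.
PASSWD_CMD=${PASSWD_CMD-"ypcat passwd"}
SU=${SU-su}

run_calendars() {
    $PASSWD_CMD | awk -F: '{ print $1, $6 }' |
    while read user home; do
        [ -f "$home/calendar" ] || continue
        # read the file in the owner's own context
        $SU "$user" -c /usr/bin/calendar
    done
}
```

Since this walks every home directory in the passwd map, running it on
more than one machine multiplies the NFS traffic -- hence the rule that
only the mail server runs it.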

Since users' home directories occasionally migrate, we discourage hard-coding
the /$color/homes/$user path in shell scripts, etc.  ~$user works in
csh scripts, of course, but not in Bourne shell scripts, so we have a
directory named /home with an entry for every user that is a symbolic
link to that user's home directory.  This is maintained by the account
installation and deletion scripts, and checked nightly by a script run
from cron.
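The nightly check could be sketched like this -- /home, the passwd
source, and the use of readlink are stand-ins for whatever the real
script does:

```shell
#!/bin/sh
# Sketch: make sure /home/$user is a symlink to the home directory
# recorded in the passwd map, repairing any that are stale or missing.
HOMEDIR=${HOMEDIR-/home}
PASSWD_CMD=${PASSWD_CMD-"ypcat passwd"}

check_links() {
    $PASSWD_CMD | awk -F: '{ print $1, $6 }' |
    while read user home; do
        link="$HOMEDIR/$user"
        target=`readlink "$link" 2>/dev/null`
        if [ "$target" != "$home" ]; then
            echo "fixing $link -> $home"
            rm -f "$link" && ln -s "$home" "$link"
        fi
    done
}
```

Because the links are rebuilt from the passwd map, migrating a user to
another color only requires changing the passwd entry; the next nightly
run repairs /home/$user automatically.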

As I said above, this all works pretty well, and it's easy to administer.
We started using amd for this full-scale in mid-August or so, and the
bugs are pretty well worked out by now.  (The latest version of amd helps 
a lot -- versions before 5.2 or so did too many NFS NULLPROC requests, 
eating the net and the servers when we installed it on all of our systems.)

Keith Moore			Internet: moore at cs.utk.edu
University of Tenn. CS Dept.	BITNET: moore at utkvx
107 Ayres Hall, UT Campus	Telephone: +1 615 974 0822
Knoxville Tennessee 37996-1301	``Friends don't let friends use YP (or NIS)''


