Managing a network of UNIX workstations

Blair P. Houghton bph at buengc.BU.EDU
Sun Jan 14 10:12:56 AEST 1990


In article <3949 at jhunix.HCF.JHU.EDU> barrett at jhunix.HCF.JHU.EDU (Dan Barrett) writes:
>
>	I may be managing a network of DECstation 3100's running Ultrix in
>the near future.  I have been managing VAXen for a long time, but never a
>network of workstations.  So, I have some questions:

Welcome to it.  I have 10 GPXes and 6 vs2000's (waiting for some
3100's on the horizon...) all clustered together, and it's easier
than it looks but harder than it should be...

>(1)	How do you handle inter-machine superuser privileges?
>	I do NOT want to put "root" in /.rhosts -- this is a big security
>	risk, right?

Huge.  Dangerous.  So call uid 0 something bizarre (passwordlike),
remove the word "root" from as many places as you can find, and be happy.
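
A sketch (mine, with a made-up login name) of what the
renamed uid-0 entry in /etc/passwd might look like; the
second field would hold the real encrypted password:

```
qzop:EncryptedPwGoesHere:0:1:Operator:/:/bin/csh
```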

>(2)	How do you do transparent backups?  I want to pop a tape in ONE
>	tape drive and say "Back up ALL files from ALL workstations onto
>	this tape."

rdump never worked for us, either.  We've got things NFS'ed
all over the place (see below) and do dumps on each machine
separately;  the con is that it's not convenient to be
running all over the building to swap tapes, especially
TK50's :-), but the pros are that dumps happen in parallel
and restores are much quicker.  It also encourages a mixed
full/incremental dump schedule, where a set of filesystems
that have had heavy alterations can be dumped at level 0
while all the others get a level 1 or 2.
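
That schedule can be sketched as a little per-machine
script; the device names and filesystem lists here are
hypothetical, and the echo makes it a dry run (drop the
echo to really dump):

```shell
#!/bin/sh
# Mixed full/incremental dump night, sketched as a dry run.
# Flags are the classic BSD dump ones: leading digit = level,
# u = record the dump in /etc/dumpdates, f = write to the
# named file/tape.  All names below are made up.
TAPE=/dev/rmt0h                 # hypothetical local tape drive

HEAVY="/dev/ra0g"               # heavily-altered filesystems: level 0
LIGHT="/dev/ra1g /dev/ra2g"     # everything else: level 1

for fs in $HEAVY; do
    echo dump 0uf $TAPE $fs     # echo = dry run; remove to dump for real
done
for fs in $LIGHT; do
    echo dump 1uf $TAPE $fs
done
```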

>	Suppose I dedicate one workstation as the "main node", mount all
>	other workstation disks on the main node using NFS, and then back it
>	up.  This should work...?  But don't I have to worry about
>	inter-machine superuser privileges?  After all, we want to back up
>	EVERY file from EVERY machine.

It would be hideously slow to do it over NFS, but there is
a keyword you can put in the /etc/exports file that allows
uid 0 to have access to NFS'ed filesystems.  Note that it
grants access to uid 0 itself, whatever that account is
called, not just to "root", so it's also rotten security.
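
For the record, a sketch of the sort of exports line I
mean; the hostname is made up, and the option spelling
varies by vendor (I remember Ultrix using -r=0 to map a
remote root to uid 0, and SunOS spelling it
-root=hostname), so check your own exports(5) before
trusting this:

```
/usr/users   -r=0 backuphost
```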

>(3)	We'd like all users to have accounts on all workstations.  What's
>	the best way to maintain an inter-machine password file?  I've
>	heard vaguely of "yellow pages" but have never used it.

Get it; use it.  It's not hard to start.  Then you maintain
one big /etc/passwd file on the server, a few lines in
/etc/passwd on each of the clients, and save yourself
several minutes of work per change to the password file.
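
The "few lines" on each client end with the YP escape
entry, which tells the library routines to go ask the
server's maps for anything not listed locally.  From
memory (check passwd(5) on a YP system), the tail of a
client's /etc/passwd looks something like:

```
daemon:*:1:1::/:
+::0:0:::
```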

>(4)	We'd like a system where the entire network appears to each user as
>	if it were one huge "machine".  A user would log onto this "machine"
>	and not care which workstation s/he were actually using.  (Maybe the
>	"machine" would automatically log the user onto the workstation with
>	the lightest system load.  I've seen this done with VMS systems at
>	other schools.)  Can this entire scheme be done?  Transparently?

Absolutely.  NFS and YP give you this.  Simply have
user-partitions exported to all machines in the cluster.
Then the user is logged onto the workstation he's sitting
at, and occasionally accessing a file on the server.  If
you do it the other way, then _all_ computation is done on
a remote host.  It still appears transparent.  I still get
users at the _end_ of a semester asking me "where's the big
computer that runs all these graphics terminals," usually
just as I'm asking them to take their books off it...:-)

>(5)	Should we put disks on every workstation, or have one fileserver and
>	many diskless workstations?  Which is better?  Easier to maintain?

Having one fileserver means that that one machine takes a lot of load,
so it should be significantly more powerful than the rest of the
machines.

We have fully-exported partitions scattered all over the
cluster, so that, for instance, when programs rummage
through the CAD libraries on one machine they aren't
causing collisions with the main user partition, which is
on another machine.

This also avoids duplication and frees up some disk space,
since each (non-diskless-system) station requires / and
/usr locally, but /usr/local and /usr/spool/mail and
whatnot can be mounted remotely.
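
E.g., a couple of NFS lines for a client's fstab; "server"
is a made-up hostname, and these are in the common SunOS
style -- Ultrix's fstab(5) uses its own colon-separated
format, so translate accordingly:

```
server:/usr/local       /usr/local      nfs     rw,bg  0 0
server:/usr/spool/mail  /usr/spool/mail nfs     rw,bg  0 0
```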

>	My idea is to have one or two fileservers, make the other
>	workstations use NFS, but put a small disk on each workstation for
>	swapping only.  Good?  Bad?  What's better?

Depends on how much space you need for the stuff you're
putting on those servers; and don't expect any sort of
usable performance when you log into one of the serving
stations if it has more than half your stuff on it...

Looking again, I've got perhaps four stations that have any
major serving to do, and at least 6 (the vs2000's) that
serve nothing, having only the necessities and some swap
space on them.  This works rather well.

>(6)	Does anybody make a removable media drive, like the SyQuest
>	44-megabyte cartridge drive, for the DS3100?

I have no idea.

All in all, I'd say that small VAXen work well as local
clusters without any dedicated server to support them,
though there are one or two things we've been unable to
do that would have been much better suited to a large
VAX as a central server; that's probably more a factor
of the sort of work we do, which occasionally requires
installing 100-200Mb CAD tool packages.

				--Blair
				  "Good luck.  You don't need as
				   much of it as you might think."


