load control system (1 of 8) (repost)

Keith Muller muller at sdcc3.UUCP
Thu Feb 21 18:12:21 AEST 1985


This is part 1 of the load control system. This part MUST be unpacked
BEFORE any other part.


# This is a shell archive.  Remove anything before this line,
# then unpack it by saving it in a file and typing "sh file".
#
# Wrapped by sdcc3!muller on Sat Feb  9 13:40:15 PST 1985
# Contents:  client/ control/ h/ scripts/ server/ man/ README NOTICE Makefile
#	man/Makefile man/ldc.8 man/ldd.8 man/ldq.1 man/ldrm.1
 
echo x - README
sed 's/^@//' > "README" <<'@//E*O*F README//'
TO INSTALL: (you MUST be root) (January 24, 1985 version)

1) Select a group id for the load control system to use. No user should be in
   this group. Add this group to /etc/group and call it lddgrp.
   ** By default the group id 25 is used. **

2) Look at the file h/common.h. Make sure that LDDGID is defined to be the
   same group id as you selected in step 1.
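
   For example, with the default group id of 25 the /etc/group entry would
   look something like this (the exact format may vary slightly between
   systems):

	lddgrp:*:25:

   and h/common.h should contain a define of the form:

	#define LDDGID	25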

3) cd to the scripts directory. Inspect the paths used in the file makedirs.
   The script makedirs creates the required directories with the proper modes,
   groups, and owners. The .code directories are where the real executable
   files are hidden, protected by group access (the directory is protected
   from all "other" access). Each directory that contains programs you
   want load controlled must have a .code subdirectory.
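
   For instance, the .code subdirectory for /bin ends up looking roughly
   like this (makedirs creates these for you; the commands are shown only
   to illustrate the intended group and the lack of "other" access):

	mkdir /bin/.code
	chgrp lddgrp /bin/.code
	chmod o-rwx /bin/.code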

   NOTE: You really do not have to change makedirs at all except to ADD
   any additional directories you want controlled. It is perfectly safe to
   just run this system on any 4.2 system without ANY path changes (this
   includes sun, vax and pyramid versions).

4) If you alter or add any pathnames in makedirs, you might have to adjust
   the makefiles. For each subdirectory (client, server, control) adjust
   or add the paths in the Makefiles. 

5) If you alter any pathname in makedirs you will have to check all the .h
   files in the h directory. Change any paths as required.

6) Run makedirs. (If you have an older release of ldd, you should shut down
   the ldd server and remove the old status and errlog files, then run
   makedirs.) Makedirs can be run any number of times without harm. It will
   reset the owners and groups of all directories to the correct state.

7) In the top level directory (the same directory this README file is in),
   run make, then make install. All the binaries are now in place.

8) Start the ldd server:
	/etc/ldd [-T cycle] [-L load]

   The server will detach itself and wait for requests. You should get no
   messages from the server. The two flags are optional. The -T flag
   specifies the number of seconds between each load average check. The
   -L flag specifies the load average at which queueing starts. If neither
   is specified the defaults are used (see the manual page for ldd). You
   can change the defaults by editing h/server.h. ALRMTIME is the cycle
   time, and MAXLOAD is the load average.

   The following are good values to start with:

   machine		cycle 			load
   ----------------------------------------------------------
   pyramid 90x		25			10.0
   pyramid 90mx		15			15.0
   vax 780		50			9.0
   vax 750		60			7.5
   vax 730		60			6.0
   sun 2		60			6.5
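
   For example, on a vax 780 the table above suggests starting the server
   with:

	/etc/ldd -T 50 -L 9.0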

9) Add the following lines to /etc/rc.local (change the path and add any ldd
   arguments as selected from the above table). See the man page on ldd
   for more info.

if [ -f /etc/ldd ]; then
	/bin/rm -f /usr/spool/ldd/sr/errors
	/etc/ldd & echo -n ' ldd'			>/dev/console
fi

10) For each directory to be controlled, select those programs you want under
    the load control system. The programs you select should be jobs that
    usually do not require user interaction, though nasty systems like macsyma
    might be load controlled anyway. Never load control things that have time
    response requirements. The jobs you select will determine the overall
    usefulness of the load control system. For the load control system to
    be completely effective, all the programs that cause any significant load
    on the system should be placed under load control. For example, the cc
    command is typical of a program that should be load controlled.
    When run, cc uses a large amount of resources, which increases as the size
    of the program being compiled increases. When there are many cc's running
    simultaneously the machine gets quite overloaded and your system thrashes.
    A poor choice would be a command like cat. Sure, cat can do a lot of i/o,
    but even ten cat's reading very large files do not impact the system
    very much. Troff is a very good command to load control. It is not very
    interactive, and a lot of them running at once would slow even a cray.
    Watching your system with ps au when it is overloaded should tell you
    which programs on your system need to be load controlled.

    The following is a list of programs I have under load control:

    /bin/cc /bin/make /bin/passwd /usr/bin/pc /usr/bin/pix /usr/bin/liszt
    /usr/bin/lisp /usr/bin/vgrind /usr/ucb/f77 /usr/ucb/lint /usr/ucb/nroff
    /usr/ucb/spell /usr/ucb/troff /usr/ucb/yacc

    The following is the list of places to look for other candidates for load
    control:
	a) /bin
	b) /usr/bin
	c) /usr/ucb
	d) /usr/new
	e) /usr/local
	f) /usr/games

    i)  some programs use argv[0] to pass data (so far only the ucb pi
	does this when called by pix). These programs must be treated
	differently (since they mangle argv[0], it cannot be used to
	determine which binary to execute). A special client called
	.NAMEclient where NAME is the actual name of the program must be
	created. These special programs must be specified in the 
	client/Makefile.  See the sample for $(SPEC1) which is for a program
	called test in /tmp. Run the script onetime/saddldd for these programs.

    ii) run the script scripts/addldd with each program to be load controlled
	that requires a STATUS MESSAGE ("Queued waiting to run.") as an
	argument (e.g. addldd /bin cc make)

    iii) run the script scripts/qaddldd with each program to be load controlled
	that DOES NOT require a STATUS MESSAGE as an argument
	(e.g. qaddldd /usr/bin nroff)

    addldd/qaddldd/saddldd moves the real binary into the .code directory and
    replaces it with a symbolic link to either .client (for addldd and
    qaddldd) or a .NAMEclient (for saddldd). So the command:
	addldd /bin cc
    moves cc to /bin/.code/cc and creates the symbolic link /bin/cc
    to /bin/.client.
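
    If you want to see what the scripts do, the effect of the command above
    is roughly equivalent to (a sketch only; always use the scripts
    themselves for real installs):

	mv /bin/cc /bin/.code/cc
	ln -s /bin/.client /bin/cc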

11) Any changes to any file in the load control system from now on
    will be correctly handled by a make install from the top level directory.

12) The script scripts/rmldd can be used to remove programs from the ldd system.

13) Compilers like cc and pc should have all the intermediate passes protected.
    Each pass must be in group lddgrp and have all "other" access turned off.
    For example:
	chmod 0750 /lib/c2
	chgrp lddgrp /lib/c2
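
    On a stock 4.2 vax the cc passes live in /lib (the exact set of passes
    varies from machine to machine, so check yours), so something along
    these lines should cover them:

	for pass in /lib/cpp /lib/ccom /lib/c2
	do
		chgrp lddgrp $pass
		chmod 0750 $pass
	done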

14) When the system is running you might have to adjust the operating
    parameters of ldd for the job mix and the capacity of your machine.
    Use ldc to adjust these parameters while the load control system is
    running and watch what happens. The .h files as supplied use values that
    will safely work on any machine, but might not be the best values for your
    specific needs. In the vast majority of cases, only the load point
    and cycle time need to be changed, and these can be set with arguments to
    ldd when it is first invoked.  Be careful, as radical changes to
    the defaults might defeat the purpose of ldd. If things ever get
    really screwed up, you can just kill -9 the server (or from ldc: abort
    server) and things will run just as if the load control system did not
    exist. (Note the pid of the currently running ldd is always stored in
    the lock file /usr/spool/ldd/sr/lock). (See the man page on ldd for more).

15) If load control does not hold the system load to no more than the load
    limit + 2.5, then there are programs loading down the machine
    which are not under load control. Find out what they are and load control
    them.

16) To increase the response of the system you can lower the load threshold.
    Of course, if the threshold gets too low the system can end up with long
    wait times for running jobs. Long wait times are usually around 3000
    seconds for super loaded vaxes. On the very fast pyramids, 500 seconds
    (48 users and as many large cc's as the students can get running) seems
    the longest delay I have seen. You can also play with the time between
    checks. This has some effect on vaxes, but 50 - 60 seconds seems optimal.
    On pyramids it is quite different. Since the throughput is so very much
    greater than on vaxes (four times greater at the very least), the load
    needs to be checked at least every 25 seconds. If this check time is too
    long you risk having the machine go idle for a number of seconds. Since
    the whole point is to squeeze every last cpu cycle out of the machine,
    idle time must be avoided. Watching the machine with vmstat or the mon
    program is useful for this. Try to keep the user percentage of the cpu
    as high as possible. Try to have enough jobs runnable so the machine
    doesn't go idle due to a lack of jobs (yes, this can happen with lots
    of disk io).
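
    For example, running

	vmstat 5

    prints a line of statistics every 5 seconds; the cpu columns (user,
    system, idle) give a quick picture of whether the machine is staying
    busy with user work or drifting idle.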

17) If you want/need more info on the inner workings of the ldd system, you
    can read the comments in the .h files and the source files. If you have
    problems drop me a line. I will be happy to answer any questions.

    Keith Muller
    University of California, San Diego
    Mail Code C-010
    La Jolla, CA  92093
    ucbvax!sdcsvax!muller
    (619) 452-6090
@//E*O*F README//
chmod u=r,g=r,o=r README
 
echo x - NOTICE
sed 's/^@//' > "NOTICE" <<'@//E*O*F NOTICE//'
DISCLAIMER
  "Although each program has been tested by its author, no warranty,
  express or implied, is made by the author as to the accuracy and
  functioning of the program and related program material, nor shall
  the fact of distribution constitute any such warranty, and no
  responsibility is assumed by the author in connection herewith."
  
  This program cannot be sold, distributed, or copied for profit without
  prior permission from the author. You are free to use it as long as the
  author is properly credited with its design and implementation.

  Keith Muller
  January 15, 1985 
  San Diego, CA
@//E*O*F NOTICE//
chmod u=r,g=r,o=r NOTICE
 
echo x - Makefile
sed 's/^@//' > "Makefile" <<'@//E*O*F Makefile//'
#
#	Makefile for ldd server and client 
#
#

all:
	cd server; make ${MFLAGS}
	cd client;  make ${MFLAGS}
	cd control;  make ${MFLAGS}

lint: 
	cd server; make ${MFLAGS} lint
	cd client;  make ${MFLAGS} lint
	cd control;  make ${MFLAGS} lint

install: 
	cd server; make ${MFLAGS} install
	cd client;  make ${MFLAGS} install
	cd control;  make ${MFLAGS} install
	cd man; make ${MFLAGS} install

clean:
	cd server; make ${MFLAGS} clean
	cd client;  make ${MFLAGS} clean
	cd control;  make ${MFLAGS} clean
@//E*O*F Makefile//
chmod u=r,g=r,o=r Makefile
 
echo mkdir - client
mkdir client
chmod u=rwx,g=rx,o=rx client
 
echo mkdir - control
mkdir control
chmod u=rwx,g=rx,o=rx control
 
echo mkdir - h
mkdir h
chmod u=rwx,g=rx,o=rx h
 
echo mkdir - scripts
mkdir scripts
chmod u=rwx,g=rx,o=rx scripts
 
echo mkdir - server
mkdir server
chmod u=rwx,g=rx,o=rx server
 
echo mkdir - man
mkdir man
chmod u=rwx,g=rx,o=rx man
 
echo x - man/Makefile
sed 's/^@//' > "man/Makefile" <<'@//E*O*F man/Makefile//'

#
# Makefile for ldd manual pages
#

DEST=	/usr/man

TARG=	$(DEST)/man8/ldd.8 $(DEST)/man8/ldc.8 $(DEST)/man1/ldrm.1 \
	$(DEST)/man1/ldq.1

all:

install: $(TARG)

$(DEST)/man8/ldd.8: ldd.8
	install -c -o root ldd.8 $(DEST)/man8

$(DEST)/man8/ldc.8: ldc.8
	install -c -o root ldc.8 $(DEST)/man8

$(DEST)/man1/ldrm.1: ldrm.1
	install -c -o root ldrm.1 $(DEST)/man1

$(DEST)/man1/ldq.1: ldq.1
	install -c -o root ldq.1 $(DEST)/man1

clean:
@//E*O*F man/Makefile//
chmod u=r,g=r,o=r man/Makefile
 
echo x - man/ldc.8
sed 's/^@//' > "man/ldc.8" <<'@//E*O*F man/ldc.8//'
@.TH LDC 8 "24 January 1985"
@.UC 4
@.ad
@.SH NAME
ldc \- load system control program
@.SH SYNOPSIS
@.B /etc/ldc
[ command [ argument ... ] ]
@.SH DESCRIPTION
@.I Ldc
is used by the system administrator to control the
operation of the load control system, by sending commands to
@.I ldd
(the load control server daemon).
@.I Ldc
may be used to:
@.IP \(bu
list all the queued jobs owned by a single user,
@.IP \(bu
list all the jobs in the queue,
@.IP \(bu
list the current settings of changeable load control server parameters,
@.IP \(bu
abort the load control server,
@.IP \(bu
delete a job from the queue (specified by pid or by user name),
@.IP \(bu
purge the queue of all jobs,
@.IP \(bu
rearrange the order of queued jobs,
@.IP \(bu
run a job regardless of the system load (specified by pid or user name),
@.IP \(bu
change the load average at which jobs will be queued,
@.IP \(bu
change the limit on the number of jobs in queue,
@.IP \(bu
change the number of seconds between each check on the load average,
@.IP \(bu
print the contents of the server's error logging file,
@.IP \(bu
change the maximum time limit that a job can be queued.
@.PP
Without any arguments,
@.I ldc
will prompt for commands from the standard input.
If arguments are supplied,
@.IR ldc
interprets the first argument as a command and the remaining
arguments as parameters to the command.  The standard input
may be redirected causing
@.I ldc
to read commands from a file.
Commands may be abbreviated, as any unique prefix of a command will be
accepted.
The following is the list of recognized commands.
@.TP
? [ command ... ]
@.TP
help [ command ... ]
@.br
Print a short description of each command specified in the argument list,
or, if no arguments are given, a list of the recognized commands.
@.TP
abort server
@.br
Terminate the load control server.
This does 
@.I not
terminate currently queued jobs, which will run when they
next poll the server (usually every 10 minutes).
If the server is restarted these jobs will be inserted into the queue ordered
by the time at which the job was started.
Jobs will 
@.I not
be lost by aborting the server.
Both words "abort server" (or a unique prefix) must be typed as a safety
measure.
Only root can execute this command.
@.TP
delete [\f2pids\f1] [-u \f2users\f1]
@.br
This command has two modes. It will delete jobs listed by pid, or with the
@.B \-u
option delete all the jobs owned by the listed users.
Jobs that are removed from the queue will exit returning status 1 (they
do not run).
Users can only delete jobs they own from the queue, while root can delete any
job.
@.TP
errors
@.br
Print the contents of the load control server error logging file.
@.TP
list [\f2user\f1]
@.br
This will list the contents of the queue for the specified user, showing each
job's rank, pid, owner, time in queue, and an abbreviated line of the command
to be executed. If no user is specified, it defaults to the user running
the command. (Same as the ldq command).
@.TP
loadlimit \f2value\f1
@.br
Changes the load average at which the load control system begins
to queue jobs to \f2value\f1.
Only root can execute this command.
@.TP
longlist
@.br
Same as list except prints ALL the jobs in the queue. This is expensive to
execute. (Same as the ldq -a command).
@.TP
move \f2pid rank\f1
@.br
Moves the process specified by process id 
@.I pid
to position 
@.I rank
in the queue.
Only root can execute this command.
@.TP
purge all
@.br
Removes ALL the jobs from the queue. Removed jobs terminate returning a
status of 1.
As a safety measure both words "purge all" (or a unique prefix) must be typed.
Only root can execute this command.
@.TP
quit
@.br
Exit from ldc.
@.TP
run [\f2pids\f1] [-u \f2users\f1]
@.br
Forces the jobs with the listed 
@.I pids
to be run 
@.I regardless 
of the system load.
The
@.B \-u
option forces all jobs owned by the listed users to be run regardless
of the system load.
Only root can execute this command.
@.TP
sizeset \f2size\f1
@.br
Sets the limit on the number of jobs that can be in the queue to be
@.I size.
This prevents the unix system process table from running out of slots if
the system is extremely overloaded. All job requests made while the queue
is at the limit are rejected and the client is told to try again later.
The default value is 150 jobs.
Only root can execute this command.
@.TP
status
@.br
Prints the current settings of internal load control server variables.
This includes the number of jobs in queue, the load average above which
jobs are queued, the limit on the size of the queue, the time in seconds between
load average checks by the server, the maximum time in seconds a job can be
queued, and the number of recoverable errors detected by the server.
@.TP
timerset \f2time\f1
@.br
Sets the number of seconds that the server waits between system load average
checks to
@.I time.
(Every 
@.I time
seconds the server reads the current load average, and if it is below the load
average limit (see 
@.I loadlimit
) the jobs are removed from the front of the queue and told to run).
Only root can execute this command.
@.TP
waitset \f2time\f1
@.br
Sets the maximum number of seconds that a job can be queued regardless
of the system load to 
@.I time
seconds.
This will prevent the load control system from backing up with jobs that never
run due to some kind of degenerate condition.
@.SH EXAMPLES
To list the jobs owned by user joe:
@.sp
list joe
@.sp
To move process 45 to position 6 in the queue:
@.sp
move 45 6
@.sp
To delete all the jobs owned by users sam and joe:
@.sp
delete -u sam joe
@.sp
To run jobs with pids 1121, 1177, and 43:
@.sp
run 1121 1177 43
@.SH FILES
@.nf
/usr/spool/ldd/*	spool directory where sockets are bound
@.fi
@.SH "SEE ALSO"
ldd(8),
ldrm(1),
ldq(1)
@.SH DIAGNOSTICS
@.nf
@.ta \w'?Ambiguous command      'u
?Ambiguous command	abbreviation matches more than one command
?Invalid command	no match was found
?Privileged command	command can be executed only by root
@.fi
@//E*O*F man/ldc.8//
chmod u=r,g=r,o=r man/ldc.8
 
echo x - man/ldd.8
sed 's/^@//' > "man/ldd.8" <<'@//E*O*F man/ldd.8//'
@.TH LDD 8 "24 January 1985"
@.UC 4
@.ad
@.SH NAME
ldd \- load system server (daemon)
@.SH SYNOPSIS
@.B /etc/ldd
[ 
@.B \-L 
@.I load
] [ 
@.B \-T
@.I alarm 
]
@.SH DESCRIPTION
@.TP
@.B \-L
changes the load average threshold to
@.I load
instead of the default (usually 10).
@.TP
@.B \-T
changes the time (in seconds) 
between load average checks to 
@.I alarm
seconds instead of the default (usually 60 seconds).
@.PP
@.I Ldd
is the load control server (daemon) and is normally invoked
at boot time from the
@.IR rc.local (8)
file.
The
@.I ldd
server attempts to maintain the system load average
below a preset value so interactive programs like
@.IR vi (1)
remain responsive.
@.I Ldd
works by limiting the number of runnable processes in the system at a given
moment, which keeps the system from thrashing (i.e. excessive paging and
high rates of context switching) and losing throughput.
When the system load average 
is above the threshold,
@.I ldd
will block specific cpu intensive processes from running and place
them in a queue.
These blocked jobs are not runnable and therefore do not 
contribute to the system load. When the load average drops below the threshold,
@.I ldd
will remove jobs from the queue and allow them to continue execution.
The system administrator determines which programs are 
considered cpu intensive and places control of their execution under the
@.I ldd
server.
The system load average is the number of runnable processes,
and is measured by the 1 minute 
@.IR uptime (1)
statistics.
@.PP
A front end client program replaces each program controlled by the
@.I ldd
server.
Each time a user requests execution of a controlled program, the
client enters the request state,
sends a "request to run" datagram to the server and waits for a response. The
waiting client is blocked, waiting for the response from the
@.I ldd
server.
If the client does not receive an answer to a request after a certain
period of time has elapsed (usually 90 seconds), the request is resent.
If the request is resent a number of times (usually 3) 
without response from the server, the requested program is executed. 
This prevents the process from being blocked forever if the
@.I ldd
server fails.
@.PP
The
@.I ldd
server can send one of five different messages to the client.
A "queued message" indicates that the client has
been entered into the queue and should wait.
A "poll message" indicates that the server did not receive a message,
so the client should resend the message.
A "terminate message" indicates that the request cannot be honored
and the client should exit abnormally.
A "run message" indicates the requested program should be run.
A "full message" indicates that the ldd queue is full and this request cannot
be accepted. This limit is to prevent the Unix kernel process table from
running out of slots, since queued processes 
still use system process slots.
@.PP
When the server receives a "request to run",
it determines whether the job should run immediately, be rejected, 
or be queued.
If the queue is full, the job is rejected and the client exits.
If the queue is not empty, the request is added to the queue,
and the client is sent a "queued message".
The client then enters the queued state
and waits for another command from the server.
If no further commands are received from the server after a preset time 
has elapsed (usually 10 minutes),
the client re-enters the request state and resends the request
to the server to ensure that the server has not terminated or
failed since the time the client was queued.
@.PP
If the queue is empty, the server checks the current load average, and
if it is below the threshold, the client is sent a "run message".
Otherwise the server queues the request, sends the client a "queued message",
and starts the interval timer.
The interval timer is bound to a handler that checks the system load every
few seconds (usually 60 seconds). 
If the handler finds the current load average is below the threshold,
jobs are removed from the head of the queue and sent a "run message".
The number of jobs sent "run messages" depends on how much the current 
load average has dropped below the limit.
If the load average is above the threshold, the handler checks
how long the oldest process has been waiting to run.
If that time is greater than a preset limit (usually 4 hours), the job is 
removed from the queue and allowed to run regardless of the load.
This prevents jobs from being blocked forever due to load averages that
remain above the threshold for long periods of time.
If the queue becomes empty, the handler will shut off the interval timer. 
@.PP
The
@.I ldd
server logs all recoverable and unrecoverable errors in a logfile. Advisory
locks are used to prevent more than one server from executing at a time.
When the
@.I ldd
server first begins execution, it scans the spool directory for clients that
might have been queued from a previous
@.I ldd
server and sends them a "poll request". 
Waiting clients will resend their "request to run" message to the new
server, and re-enter the request state.
The
@.I ldd
server will rebuild the queue of waiting tasks 
ordered by the time each client began execution.
This allows the
@.I ldd
server to be terminated and be re-started without
loss or blockage of any waiting clients.
@.PP
The environment variable LOAD can be set to "quiet", which will
suppress the output to stderr of the status strings "queued" 
and "running" for commands which have been set up to display status.
@.PP
Commands can be sent to the server with the
@.IR ldc (8)
control program. These commands can manipulate the queue and change the
values of the various preset limits used by the server.
@.SH FILES
@.nf
@.ta \w'/usr/spool/ldd/sr/msgsock           'u
/usr/spool/ldd	ldd spool directory
/usr/spool/ldd/sr/msgsock	name of server datagram socket
/usr/spool/ldd/sr/cnsock	name of server socket for control messages
/usr/spool/ldd/sr/list		list of queued jobs (not always up to date)
/usr/spool/ldd/sr/lock	lock file (contains pid of server)
/usr/spool/ldd/sr/errors	log file of server errors
@.fi
@.SH "SEE ALSO"
ldc(8),
ldq(1),
ldrm(1).
@//E*O*F man/ldd.8//
chmod u=r,g=r,o=r man/ldd.8
 
echo x - man/ldq.1
sed 's/^@//' > "man/ldq.1" <<'@//E*O*F man/ldq.1//'
@.TH LDQ 1 "24 January 1985"
@.UC 4
@.SH NAME
ldq \- load system queue listing program
@.SH SYNOPSIS
@.B ldq
[
@.I user
] [
@.B \-a
]
@.SH DESCRIPTION
@.I Ldq
is used to print the contents of the queue maintained by the
@.IR ldd (8)
server.
For each job selected by
@.I ldq
to be printed, the rank (position) in the queue, the process id, the owner of
the job, the number of seconds the job has been waiting to run, and the
command line of the job (truncated in length to the first 16 characters)
are printed.
@.PP
With no arguments,
@.I ldq
will print out the status of the jobs in the queue owned by the user running
@.I ldq.
Another user's jobs can be printed if that user is specified as an argument
to
@.I ldq.
The
@.B \-a
option will print all the jobs in the queue.
Of course the
@.B \-a
option is much more expensive to run.
@.PP
Users can delete any job they own by using either the
@.IR ldrm (1)
or
@.IR ldc (8)
commands.
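@.SH EXAMPLES
To list all the jobs currently in the queue:
@.sp
ldq -a
@.sp
To list the jobs owned by the user joe:
@.sp
ldq joe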
@.SH FILES
@.nf
@.ta \w'/usr/spool/ldd/cl/*            'u
/usr/spool/ldd/cl/*	the spool area where sockets are bound
@.fi
@.SH "SEE ALSO"
ldrm(1),
ldc(8),
ldd(8)
@.SH DIAGNOSTICS
This command will fail if the
@.I ldd
server is not executing.
@//E*O*F man/ldq.1//
chmod u=r,g=r,o=r man/ldq.1
 
echo x - man/ldrm.1
sed 's/^@//' > "man/ldrm.1" <<'@//E*O*F man/ldrm.1//'
@.TH LDRM 1 "24 January 1985"
@.UC 4
@.SH NAME
ldrm \- remove jobs from the load system queue
@.SH SYNOPSIS
@.B ldrm
[
@.I pids
] [
@.B \-u
@.I users
]
@.SH DESCRIPTION
@.I Ldrm
will remove a job, or jobs, from the load control queue.
Since the server is protected, this and
@.IR ldc (8)
are the only ways users can remove jobs from the load control spool (other
than killing the waiting process directly).
When a job is removed, it will terminate returning status 1.
This method is preferred over sending a kill -KILL to the process as the
job will be removed from the queue, and will no longer appear in
lists produced by
@.IR ldq (1)
or
@.IR ldc (8).
@.PP
@.I Ldrm
can remove jobs specified either by pid or by user name.
With the
@.B \-u
flag,
@.I ldrm
expects a list of users who will have all their jobs removed from the
load control queue.
When given a list of pid's,
@.I ldrm
will remove those jobs from the queue.
A user can only remove jobs they own, while root can remove any job.
@.SH EXAMPLES
To remove the two jobs with pids 8144 and 47:
@.sp
ldrm 8144 47
@.sp
To remove all the jobs owned by the users joe and sam:
@.sp
ldrm -u joe sam
@.SH FILES
@.nf
@.ta \w'/usr/spool/ldd/cl/*   'u
/usr/spool/ldd/cl/*	directory where sockets are bound
@.fi
@.SH "SEE ALSO"
ldq(1),
ldc(8),
ldd(8)
@.SH DIAGNOSTICS
``Permission denied'' if the user tries to remove jobs other than his
own.
@//E*O*F man/ldrm.1//
chmod u=r,g=r,o=r man/ldrm.1
 
echo Inspecting for damage in transit...
temp=/tmp/shar$$; dtemp=/tmp/.shar$$
trap "rm -f $temp $dtemp; exit" 0 1 2 3 15
cat > $temp <<\!!!
     182    1518    9101 README
      14      96     613 NOTICE
      25      76     502 Makefile
      27      52     439 Makefile
     215    1075    5877 ldc.8
     168    1045    6106 ldd.8
      55     221    1145 ldq.1
      59     261    1362 ldrm.1
     745    4344   25145 total
!!!
wc  README NOTICE Makefile man/Makefile man/ldc.8 man/ldd.8 man/ldq.1 man/ldrm.1 | sed 's=[^ ]*/==' | diff -b $temp - >$dtemp
if [ -s $dtemp ]
then echo "Ouch [diff of wc output]:" ; cat $dtemp
else echo "No problems found."
fi
exit 0


