Load control and intelligence in schedulers

Geoff Kuenning geoff at desint.UUCP
Thu Oct 11 14:05:09 AEST 1984


(Discussing the UCSD load-control mechanism)

>The real advantage to this approach is
>that kernel based approaches can not easily distinguish between a vi and a
>compile, causing interactive jobs to become unuseable.

>	Keith Muller


Gee, when I was in college (early 70's) our big CDC 6500 ran a "kernel"
scheduler that did a really good job of exactly that, on a dynamic basis (i.e., a big
vi operation like a huge global substitution ran at "background" priority).
The scheduler had multiple layers:  an input queue for batch jobs, a "pool"
of 40 potentially runnable jobs, and 7 "control points" (read partitions) for
jobs actually in memory and available for CPU usage.   (The limit of 7
processes in memory would have been far too small had we not had slowish
core for swapping, especially since 3 were permanently occupied by system
processes).  Borrowing an idea that worked really well at Purdue,
most processes ran under a fairly standard priority-adjustment scheme, where
I/O improved priority and CPU usage decreased it.  However, any job that
blocked for *terminal* I/O got a short-lived but large boost in priority when
that I/O completed.  (How long the priority boost lasted depended on
CPU speed--I think we used a few CPU seconds.  The idea is to pick a number
larger than what an editor usually needs before it reads more from the terminal,
but smaller than the amount of time taken by your typical compile.)  Once this
limit expired, the process's priority dropped drastically and it became subject to
the standard scheduling algorithms.
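A minimal sketch of that kind of boost scheme might look like the following.
All the names and constants here are my own invention for illustration, not
anything from the actual CDC code:

```c
#include <assert.h>

/* Hypothetical constants: priority range and the terminal-I/O boost
 * budget (a few CPU seconds, expressed here in scheduler ticks). */
#define PRIO_MIN         0      /* best (runs first) */
#define PRIO_MAX         100    /* worst */
#define TERM_BOOST_TICKS 300    /* ~3 CPU seconds at 100 ticks/sec */

struct proc {
    int prio;           /* current priority; lower is better */
    int boost_left;     /* remaining ticks of terminal-I/O boost */
};

static int clamp(int p)
{
    if (p < PRIO_MIN) return PRIO_MIN;
    if (p > PRIO_MAX) return PRIO_MAX;
    return p;
}

/* Ordinary (disk) I/O completion: modest priority improvement. */
void io_complete(struct proc *p)
{
    p->prio = clamp(p->prio - 5);
}

/* Terminal I/O completion: jump to best priority, start boost timer. */
void terminal_io_complete(struct proc *p)
{
    p->prio = PRIO_MIN;
    p->boost_left = TERM_BOOST_TICKS;
}

/* Charged once per tick of CPU the process consumes. */
void cpu_tick(struct proc *p)
{
    if (p->boost_left > 0) {
        if (--p->boost_left == 0)
            p->prio = clamp(p->prio + 50);  /* boost expired: drop hard */
    } else {
        p->prio = clamp(p->prio + 1);       /* normal CPU-usage penalty */
    }
}
```

An editor that reads the terminal every second or so keeps renewing its boost
and always looks interactive; a compile that blocks for terminal input once
and then computes for minutes burns through its budget and falls back into
the ordinary CPU-penalty regime.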

The other trick was to have a scheduler that was smart about picking the 40
potentially-runnable jobs and about bringing the 7 into memory.  The biggest
improvement in a Unix system (where it is hard to control the number of
potentially-runnable jobs without something like the UCSD load-control system)
would come from tuning the swapping scheduler better.  A swap takes a large
amount of time;  you want to make that time pay off by picking a process that
will stay out for a long time, so that the amount of time spent swapping is
small by comparison.  In addition, you would like to pick a process that is
consuming a lot of the resource you need--memory, I/O, or CPU--which requires
better per-process statistics (especially on I/O rates) than most Unixes
keep.
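The victim-selection idea above could be sketched as a scoring function.
The statistics fields and weights below are purely hypothetical (they are
exactly the per-process numbers I'm claiming most Unixes don't keep), but
they show the shape of the decision:

```c
/* Hypothetical per-process statistics for swap-victim scoring. */
struct pstats {
    long resident_kb;       /* memory footprint */
    long io_rate;           /* recent I/O operations per second */
    long cpu_share;         /* recent CPU usage, in percent */
    long expected_sleep;    /* ticks until it will likely run again */
};

enum resource { RES_MEMORY, RES_IO, RES_CPU };

/* Higher score = better victim: a long expected sleep amortizes the
 * (expensive) swap, and heavy use of the contended resource means
 * evicting this process frees the most of what we're short of. */
long swap_score(const struct pstats *p, enum resource contended)
{
    long hog = 0;
    switch (contended) {
    case RES_MEMORY: hog = p->resident_kb; break;
    case RES_IO:     hog = p->io_rate;     break;
    case RES_CPU:    hog = p->cpu_share;   break;
    }
    return p->expected_sleep * 2 + hog;   /* weights are arbitrary */
}

/* Scan the in-memory processes and pick the one to swap out. */
int pick_victim(const struct pstats procs[], int n, enum resource contended)
{
    int best = -1;
    long best_score = -1;
    for (int i = 0; i < n; i++) {
        long s = swap_score(&procs[i], contended);
        if (s > best_score) {
            best_score = s;
            best = i;
        }
    }
    return best;
}
```

The point of the structure is that the same scan gives different victims
depending on which resource is scarce--a big sleeper goes out when memory
is tight, a CPU hog goes out when the processor is saturated.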

Even the best scheduler cannot be perfect.  Ours had operator commands to
change process priorities and lock them into or out of memory.  Many is the
time I have seen a good operator clear up a thrashing system by either
forcing an offending process to completion or by swapping it out until the
load level had dropped.  Now if we could only package good old Toshio and
ship him with each 4.2 system...:-)
-- 
	Geoff Kuenning
	First Systems Corporation
	...!ihnp4!trwrb!desint!geoff



More information about the Comp.unix.wizards mailing list