What is a QUEDEFS file (cron)?

guy at gorodish.UUCP
Thu Feb 12 15:37:45 AEST 1987


>Well, I never said it was coded well! :->  But the *potential* in the
>design is impressive.

Other such systems have been done, with fewer funny little
non-orthogonalities (Why does only one queue provide ASAP job
execution?  Would it not make more sense to support "execute ASAP"
and "execute ASAP after this time" jobs in all queues?) and more
generality (Might it not be useful to have queues that run some
specified program, such as "troff", with the job file used as input?
Might it not also be useful to support submission of jobs to other
machines' queues - e.g., a bunch of workstations hanging off some
supercomputer or mini-supercomputer?)

In short, It's Been Done Before (although not always under UNIX), and
arguably better.

>It's in there.  Take a look at the function get_batch (assuming you have
>source).  It is about the 3rd or 4th line of code; just look for nuser.

I have source.  Unfortunately, it's only the source to the standard
S5R2 and S5R3 "cron", not the version you're talking about; it has no
such function in it, and doesn't support the "u" parameter in
"queuedefs" files.

>:#> Unfortunately, there is no way to use these other queues without
>:#> rewriting a portion of cron, but that's a story for another day.

This is wrong.  I just ran a job in queue "d", after setting up a
"queuedefs" entry for it.

>Um, sorry but no cigar.  Using the -q specifier with at causes it to act just
>like batch.

Um, sorry, but no cigar.  *Only* jobs in queue "b" are executed ASAP,
at least in the standard version.  It waited until the specified time
before executing the job I ran in queue "d".
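
To make the difference concrete (illustrative commands only; the time
syntax is whatever your at(1) accepts):

    $ echo "make all" | batch          # queue "b": runs ASAP
    $ echo "make all" | at -q d 0800   # queue "d": waits until 8:00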

>Frankly, I have never found a good reason for using batch.  The job still runs
>with my uid, so it is still counted as one of my MAXPROC processes (unless
>someone has hacked the kernel to limit processes by process group rather than
>by user id).  So I ask, what do I gain by using batch?

It's less a question of what you gain than of what the system gains.
If you have a reasonably cooperative user base, and a machine that
could be swamped if everybody who wants to do a "make" or "troff" or
whatever does it, you can reduce thrashing of various flavors by
running those jobs in a batch queue.  I know of one Data General AOS
site that did that, and was told by an ex-DG person that this was
done at DG as well.  Multics also had variants of the standard
compiler commands that ran the compilation as a batch job.
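
As a sketch of what I have in mind (the one-job limit and nice value
below are made-up numbers, not anything a vendor ships; the queuedefs
fields are max jobs 'j', nice 'n', retry wait 'w'):

    # queuedefs: run at most one batch job at a time, nice it down,
    # and retry every 60 seconds while the slot is occupied
    b.1j10n60w

    $ echo "troff -ms paper.ms > paper.out 2>paper.err" | batch
    $ echo "make" | batch    # queued behind the troff job, not competing with it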

It may also be useful for machines like the aforementioned
supercomputers, where you may not want a high level of
multiprogramming.

>:> As a closing note, cronjobs (as opposed to atjobs or batchjobs)
>:> bypass the queuedefs.  So setting a queue definition for queue 'c'
>:> will have no effect.

If you're looking for some amusement, try submitting an "at" job to
queue "c" and watch the sparks.  "cron" thinks that everything in
that queue is a "cron" event, so it assumes that the data structures
are set up properly for a "cron" event - even if the event was
entered by "at".

They should not have introduced this "queue 'c' is for 'c'ron jobs"
nonsense; the at/batch queues' parameters and the parameters used for
"cron" jobs should have gone in separate files.  (They should also
have had all the at/batch queues support both "execute ASAP" batch
jobs and "execute ASAP after a specified time" at jobs, and permitted
the queues to have multi-character names.)


