Arete vs Pyramid?

Lars Hammarstrand lasse at daab.UUCP
Sun Dec 28 16:54:41 AEST 1986


In fact, we are selling Arete here in Sweden, so if you don't get any answers
from your part of the world, you can always check with us.

In general:

This machine is made for administrative work, i.e. programs that work with
databases, transaction processing and things like that.

Up to 88 lines in a single machine, and with an additional I/O processor, up to
256 lines.

Four separate buses to speed up memory, CPU, slave, disk/tape and terminal I/O
access.

And (believe me) very, very fast disk-to-user-memory transfer (what I call
multi-burst transfer).

If anybody wants more information, I can always arrange a Usenet address directly
to Arete, but that will have to wait for a week or so, because I have one week
off now.

Also, you can have the test programs that we ran on the machines, but first I
have to check with my boss before releasing them to the net.

Here is a short explanation of what we call the diskstone program!
(More information will be supplied when (if) I put it on the net.)

/*
**
**			M A I N
**
** Decode arguments and fork away all processes necessary for this particular
** test.
**
** It works like this:
**
** 1. Decode arguments and check if they are ok. (with standard getopt(3))
**
** 2. Create a message queue.
**    (used to check whether the children are ready to go and, when they are
**     done, to read each child's return status and times.)
**
** 3. Set up signals so that if you type a SIGINT (Ctrl-C) the main process will
**    kill all its children in a clean way. In addition, if the process is in the
**    background, it installs a dummy routine to jump to, so the pause call can
**    be broken cleanly without aborting the process. (This is how the main
**    process triggers all its children to start at the same time.)
**
** 4. Fork off the requested number of children.
**
** 5. Execute a lie-in-wait loop to let all the children start up before they
**    go on and read from the data banks. The main reason is that if you let
**    them start at once with a large number of processes, children may die
**    before all the forks are done.
**
** 6. Now trigger the children to start by sending a SIGALRM to each of them.
**    (work, work, work.......)
**
** 7. Loop to wait for all the children to terminate. If a child was killed by
**    a signal, keep the signal in the child status table and skip the rest.
**    If a child exited normally, read its return status and execution times
**    from the message queue and put them in the child status table.
**
** 8. Print out time!: Print all the status from the child status table in a
**    nice format. (one column for each child)
**    a) If the child was killed, print just which signal it was.
**    b) If the return code wasn't zero, print just the return code.
**    c) In normal cases, print the real/user/sys time and the start/stop time.
**    Finally, print the MIN, MAX and average statistics.
** 
** 9. If a return code from a child wasn't zero, and there was something to
**    read from the message queue, print the extended error message from that
**    particular child (kept in the child status table).
**
** 10. Remove the message queue id. (it won't be removed by exit(2))
**    --
**
**		Lars Hammarstrand. (860410)
*/
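
To make the flow above concrete, here is a minimal sketch of steps 2-7 and 10
(System V message queue, SIGALRM "start gun", cleanup on SIGINT). This is my own
illustration, not the diskstone source: NCHILD, run_child, struct result and the
two-second settle delay are all assumptions.

/*
** Minimal sketch (assumed, not the original diskstone code) of the
** fork / message-queue / SIGALRM scheme described in the comment above.
*/
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <time.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/wait.h>

#define NCHILD 5                       /* assumed number of test children      */

struct result {                        /* one message per child on the queue   */
    long   mtype;                      /* child number + 1 (mtype must be > 0) */
    int    retcode;                    /* child's return code                  */
    time_t start, stop;                /* wall-clock start and stop times      */
};

static int   msqid = -1;
static pid_t pid[NCHILD];

static void go(int sig) { (void)sig; } /* dummy handler: only breaks pause()   */

static void cleanup(int sig)           /* SIGINT: kill children, drop queue    */
{
    (void)sig;
    for (int i = 0; i < NCHILD; i++)
        if (pid[i] > 0) kill(pid[i], SIGTERM);
    if (msqid != -1) msgctl(msqid, IPC_RMID, NULL);
    _exit(1);
}

static void run_child(int nr)          /* placeholder for the real test work   */
{
    struct result r;
    signal(SIGALRM, go);
    pause();                           /* step 6: wait for the start signal    */
    r.mtype = nr + 1;
    r.start = time(NULL);
    /* ... read/write the data banks here ... */
    r.stop = time(NULL);
    r.retcode = 0;
    msgsnd(msqid, &r, sizeof r - sizeof r.mtype, 0);
    exit(0);
}

int main(void)
{
    struct result r;

    msqid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);  /* step 2: message queue   */
    if (msqid == -1) { perror("msgget"); exit(1); }
    signal(SIGINT, cleanup);                        /* step 3: clean shutdown  */

    for (int i = 0; i < NCHILD; i++)                /* step 4: fork children   */
        if ((pid[i] = fork()) == 0)
            run_child(i);

    sleep(2);                                       /* step 5: let forks finish */

    for (int i = 0; i < NCHILD; i++)                /* step 6: fire the start gun */
        kill(pid[i], SIGALRM);

    for (int i = 0; i < NCHILD; i++)                /* step 7: collect results  */
        wait(NULL);
    while (msgrcv(msqid, &r, sizeof r - sizeof r.mtype, 0, IPC_NOWAIT) != -1)
        printf("child %ld: ret %d, ran %ld s\n",
               r.mtype - 1, r.retcode, (long)(r.stop - r.start));

    msgctl(msqid, IPC_RMID, NULL);                  /* step 10: remove the queue */
    return 0;
}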




And here is what the result looks like:

-------------------------------------------------------------------------------
Apr 22 17:54 1986  Title: D10t10w0b400/log5-10-10			Page 1



		Tue Apr 22 17:54:53 1986 

Nr PID	  Read-time  User-time  Sys-time   Start     End      Trmsig Retcd Errno
================================================================================
0  428    0:00:00.0  0:00:00.0  0:00:00.0  12:55:34  12:55:44
1  429    0:00:00.0  0:00:00.0  0:00:00.0  12:55:34  12:55:44
2  430    0:00:00.0  0:00:00.0  0:00:00.0  12:55:34  12:55:44
3  431    0:00:00.0  0:00:00.0  0:00:00.0  12:55:34  12:55:44
4  432    0:00:00.0  0:00:00.0  0:00:00.0  12:55:34  12:55:44

	  --------  ---------  ---------
AVERAGE   0:00:00.0  0:00:00.0  0:00:00.0
MINTIME   0:00:00.0  0:00:00.0  0:00:00.0
MAXTIME   0:00:00.0  0:00:00.0  0:00:00.0


DEV COEF  0:00:00.0  0:00:00.0  0:00:00.0	(standard deviation)
95% CONF  0:00:00.0  0:00:00.0  0:00:00.0	(95 percent confidence)
================================================================================
Processes 5, databases 10, transactions 10, max record 8000, kernel bufs 800


-------------------------------------------------------------------------------
We ran the tests with the kernel compiled with 200, 400 and 800 disk buffers
(and so on), and with from 5 up to 150 processes (users).

One process simulates one user doing 10 transactions, where one transaction is
one read and one write from each data base (with some variations); a rough
sketch follows below. Some of the tests also wrote to the screen for each
transaction, so as to disturb the disk I/O channels as much as possible.
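
Here is a rough sketch of what one such simulated user process might do. This is
my own illustration under stated assumptions, not the actual diskstone child
code: NDB, NTRANS, RECSIZE and the "bankNN" file names are guesses, and the
record limit of 8000 is taken from the output footer above.

/*
** Assumed per-child work load: NTRANS transactions, each one read and
** one write against every data base file.
*/
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

#define NDB     10            /* number of data base files (assumed)     */
#define NTRANS  10            /* transactions per simulated user         */
#define RECSIZE 512           /* record size per access (assumed)        */

int main(void)
{
    char rec[RECSIZE];
    char name[64];
    int  fd[NDB];

    for (int db = 0; db < NDB; db++) {          /* open all the data banks */
        snprintf(name, sizeof name, "bank%02d", db);
        if ((fd[db] = open(name, O_RDWR)) == -1) { perror(name); exit(1); }
    }

    for (int t = 0; t < NTRANS; t++) {          /* one user's work load    */
        for (int db = 0; db < NDB; db++) {
            long recno = rand() % 8000;         /* "max record 8000"       */
            lseek(fd[db], recno * RECSIZE, SEEK_SET);
            read(fd[db], rec, RECSIZE);         /* one read ...            */
            lseek(fd[db], recno * RECSIZE, SEEK_SET);
            write(fd[db], rec, RECSIZE);        /* ... and one write       */
        }
        /* some test variants also print a progress line to the screen here */
    }
    return 0;
}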

This is a pure "C" program, not a pile of sort-and-sort-and-sort scripts.
(flames > /dev/null)

It is very important to test and simulate a system when it is heavily loaded,
and not just run the test with a single process doing a sort with only one user
logged in, if you want to measure the TRUE(!) throughput of bigger systems.

When I say true, I mean how it will work in a real database transaction
environment.

The main question is always: how well does the dispatcher work when the system
is brought to its knees?

--

Lars Hammarstrand.
Datorisering AB - Stockholm, Sweden.


