AIM Technology Software

Peter Marvit marvit at hplabsb.UUCP
Wed Jul 30 11:54:00 AEST 1986


>I just read in a brochure from AIM Technology that 
>"AIM Benchmarks* are industry standards for UNIX* system measurement."
>Are their software for system measurement good? Are there satisfied
>users out there? How did they get to be "industry standard"?
> 
>*UNIX is a Registered Trademark of AT&T
>*AIM Benchmarks is a Trademark of AIM Technology

I was quite closely involved with the AIM Suite II benchmark and know its
author reasonably well.  If you would like further info about the software
or the company, please e-mail me directly. I will try to keep this posting
as factual as possible and will offer personal opinions through the mail
only. In any case, a summary follows.

Benchmarking, like sex, politics, and religion, is a subject on which
everyone has an opinion and everyone seems to believe his/her own is the
*TRUTH*.  How many people have actually researched the topic of performance
measurement, however, and attempted to understand the subtleties?  Far
fewer, I'm afraid.  In fact, the scientific literature is very scanty; most
of the tomes deal with theoretical queuing theory or IBM mainframe capacity
planning.  Precious little literature exists which is suitable for a
general audience or for UNIX in particular.  Please see the April and
August 84 (?) BYTE Magazines for examples.  However, I will save my
benchmarking tutorial for another posting.

The AIM Suite I (and Suite II) might claim to be "industry standard" due
primarily to lack of commercial competition. Certainly a reasonable number of
licenses have been sold (exact number is confidential, obviously). AIM also
has the virtue of being generally first and quite innovative in packaging a
relatively easy-to-use and comprehensible suite.  Let me concentrate my
remaining comments on Suite II; I have not seen Suite III, if it exists,
and Suite I is old news.

Suite II, described in a paper by Gene Dronek at the Utah USENIX conference
(85?), consists of two parts: system testing/data generation and data
analysis/presentation.  The first part runs a series of "elemental"
single-thread tasks which purport to measure items like RAM copy, floating
point add, TTY character write, and so on.  Each task is run for a certain
amount of time until a rate is established.  The results are in terms of
bytes/second (or other relevant measures) rather than elapsed time. Running
all 36 tasks takes about 20 minutes on *any* machine from PC/AT to Amdahl.
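The fixed-interval, rate-based approach described above can be sketched as
follows (a minimal illustration in modern Python, not AIM's actual code;
the 4 KB buffer size and the run interval are arbitrary assumptions):

```python
import time

def measure_rate(task, unit_size, seconds=0.5):
    """Run `task` repeatedly for roughly `seconds`; return units/second."""
    start = time.monotonic()
    iterations = 0
    while time.monotonic() - start < seconds:
        task()
        iterations += 1
    elapsed = time.monotonic() - start
    return (iterations * unit_size) / elapsed

# Illustrative "elemental" task: copy a 4 KB buffer (roughly analogous
# to a RAM copy test).
BUF = bytearray(4096)

def ram_copy():
    bytes(BUF)  # forces a copy of the 4 KB buffer

rate = measure_rate(ram_copy, unit_size=len(BUF), seconds=0.25)
print(f"RAM copy: {rate:.0f} bytes/second")
```

Reporting a rate rather than an elapsed time is what lets very different
machines (a PC/AT and an Amdahl) run the same tests in roughly the same
wall-clock time.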

The results are put into a data base which is then run through a program
which employs linear analysis using predetermined (and user-modifiable)
weights to produce figures of merit.  For example, if you thought your job
mix was heavily memory and disk laden, but did little math and made few
kernel calls, you could set up a "filter" which would interpret the performance
data as it applies to your application.
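The weighted linear analysis might look something like this sketch (the
test names, rates, and weight values here are purely illustrative, not
AIM's own):

```python
# Rates from the elemental tests (illustrative numbers).
results = {
    "ram_copy":   2.0e6,   # bytes/second
    "disk_write": 5.0e5,
    "fp_add":     1.0e5,
    "syscall":    3.0e4,
}

# A "filter" for a memory- and disk-heavy job mix: the user-modifiable
# weights emphasize those tests and de-emphasize math and kernel calls.
weights = {
    "ram_copy":   0.5,
    "disk_write": 0.4,
    "fp_add":     0.05,
    "syscall":    0.05,
}

def figure_of_merit(results, weights):
    """Weighted linear combination of the elemental rates."""
    return sum(results[name] * weights[name] for name in results)

fom = figure_of_merit(results, weights)
print(f"figure of merit: {fom:.0f}")
```

Changing the weight vector is all it takes to re-interpret the same raw
data for a different application mix.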

The presentation can be either in graphical or numerical form.  Marketing
folks love this, and many of the marketing departments ordered the software.

AIM Suite II gained a great deal from the deficiencies of its predecessor.
It was designed so that non-UNIX gurus could set it up, run it, and
understand the results.  Unfortunately, like many pioneers, it could now be
considered old technology with some significant frailties.  This is not to
say that the software is invalid -- only that its shortcomings must be
understood before blindly invoking its name.

First, do the individual tests actually measure what they purport to?  Do
they take into account the overhead involved with the function calls?  How
prone is the software to compiler optimization?  How accurate/free from
variation are successive runs?  Are the individual tests themselves valid?
What aspects of performance are left untouched?  What control does the
benchmark impose for system configuration?
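One way to probe the run-to-run variation question above is simply to
repeat a timed test and report the spread; a rough sketch (this is not
part of the AIM suite, and the workload is just a stand-in):

```python
import statistics
import time

def timed_run():
    """Time one pass of a stand-in elemental test."""
    start = time.monotonic()
    sum(range(100_000))  # arbitrary CPU-bound workload
    return time.monotonic() - start

# Repeat the test and summarize the variation between runs.
times = [timed_run() for _ in range(10)]
mean = statistics.mean(times)
cv = statistics.stdev(times) / mean if mean > 0 else 0.0
print(f"mean {mean:.6f}s, coefficient of variation {cv:.1%}")
```

A benchmark that does not report (or at least bound) this kind of spread
leaves the reader unable to tell a real difference from noise.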

With the reporting of data, how are the individual numbers weighted?  If
presenting a single figure of merit, what is the model and supporting
detail? Do the application mixes have *any* relation to reality?

I sponsored a Benchmark Symposium in June 1985 which was attended by 20
people representing companies from all over the world.  Unfortunately,
because AIM is so well known and has been around for quite some time, it
received a large number of complaints (most of which, however, contained
valid points).  Few alternatives existed and few appeared forthcoming.  The
consensus was that AIM had somewhat popularized benchmarking and must now
be superseded by a product or program which corrects its failings.

Interested readers are referred to David Hinnant's BYTE article in which he
presents a suite of public domain benchmarks (AIM charges up to 4 figures).
Stephen Mills of NCR presented his benchmarks at last summer's USENIX and
also posted the source to net.sources about a month ago.  Whetstone and
Dhrystone, as well as a myriad of other benchmarks, populate (clog :-) the
UNIX world. AIM still sells and probably should not be completely written
off, given that there exists a range of data from the past few years for
comparison's sake.

The ultimate answer to your original question depends a great deal on what
your purpose is.  Your affiliate at AT&T Labs at Lisle probably still has
the unreleased QUARTZ benchmarking system, which I found very impressive
(it is not available to outside interests, however).  Its operations were
described at the Dallas UNIFORUM last year. If you get "such a deal" on
large amounts of historical data and you *really* feel it's useful, you
might consider the AIM suite.  If you are looking for tools for engineering
support during design and implementation phases of building a machine, you
are better off with some home-grown programs.  If you don't care about
scientific accuracy, the BYTE sieve is a cute diversion which signifies
nothing in the real world.  AIM's "industry standard" proclamation, in any
case, is marketing hyperbole with some historical truth behind it.

Disclaimer: I was employed at Yates Ventures (when it existed) as
Laboratory Manager.  I was responsible for hardware benchmarking and in
fact used and sold the AIM Suite II.  My opinions are personal and do not
necessarily reflect any corporate entity.  I derive no money from any
benchmarking product or activity. I do have *very* strong opinions on the
general subject and specific programs and companies which are not
appropriate to a public forum.

Peter Marvit
HP Labs
ARPA: marvit at hplabs.hp.com
uucp:{decvax,ihnp4,seismo,ucbvax}!hplabs!marvit

P.S. Apologies in advance for typos.  Factual corrections graciously
accepted.


