FAST?

Thad P Floryan thad at cup.portal.com
Sun Jan 6 22:13:50 AEST 1991


res at cbnews.att.com (Robert E. Stampfli) in <1991Jan5.052841.3618 at cbnews.att.com>
writes:

	Actually, I would put the 3B1 and the 8Mhz 8086 (which would have a
	16-bit bus) about on par -- the 3B1 turns just over 1000 dhrystones,
	while an AT&T 6300 turns about 875 (MSC 4.0).  In today's world,
	neither of these numbers is impressive.

B-b-b-but, we ALL know there are:

	1. lies,
	2. damned lies, and
	3. benchmarks!

True about the dhrystone numbers not being impressive.  I have one 68020/68881
Amiga system here that's fairly impressive (I cannot find the dhry results;
they must be misfiled), and I've played with a bunch of 68030 machines and
have seen several 68040 systems.
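
(For anyone who hasn't run it: "turns just over 1000 dhrystones" means roughly
that many passes of the Dhrystone benchmark loop per second.  The sketch below
is NOT the Dhrystone source, just the shape of the measurement, with a dummy
work() routine standing in for the real loop body.)

	#include <stdio.h>
	#include <time.h>

	#define LOOPS 50000L	/* arbitrary iteration count for this sketch */

	static volatile long sink;

	static void work(void)	/* stand-in for the real Dhrystone loop body */
	{
		sink++;
	}

	int main(void)
	{
		long i;
		time_t start, elapsed;

		start = time(NULL);
		for (i = 0; i < LOOPS; i++)
			work();
		elapsed = time(NULL) - start;

		if (elapsed > 0)
			printf("%ld loops per second\n", LOOPS / elapsed);
		else
			printf("too fast to time with time(); raise LOOPS\n");
		return 0;
	}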

But, in ALL my own tests of doing real work (compiles, I/O, multi-tasking, etc.)
the 3B1 outperforms a Mac II (68020/68881/68xxx (MMU)) running A/UX 1.* or 2.*,
and is no slouch even compared to my office VAX 11/780 systems.

	However, I remember running Unix for a whole department of perhaps 20
	people on an 11/45, and the 3B1 would have run rings around that.  So,
	it all is relative: if you need a real smoker, go buy a 386 box and
	pay for the extra horsepower.  If you just want a solid Unix-in-a-box
	machine at the right price, I can't think of a better choice than the
	3B1.

In the general sense I would agree with you.  As an overall system the 3B1 is
quite decent even by contemporary standards.  And it literally outperformed
the WGS6386/25MHz running SVR3.2 in our Users' Group booth during last year's
West Coast Computer Faire ... I had actually believed the 6386 had crashed,
but it was still crawling along.  You wouldn't BELIEVE the number of things
that were running on the 3B1 there ... some clowns had started up a LOT of
games, video demos, and other stuff while I stepped away for a while to view
the show, and when I returned I was wondering why it "felt" slow running GNU
EMACS and gcc ... 'til I saw all the processes and windows that had been
obscured.  Sheesh!  Good ol' "kill -9 nn" cleared up that problem real fast!

And the Goodguys gave a "LAN Manager" demo at AT&T, San Francisco, during the
same meeting at which I spoke back in Sept. 1990.  They also had a 6386 running
UNIX acting as a StarLAN server for a single 6300; man, that system was s-l-o-w.
I brought up my 3B1 running "essentially" the same software and, again, the
3B1 beat the pants off the 6386.  I didn't have to say a single word; it was
obvious from people's expressions as I was mousing and keyboarding along.
Even the "UA" on the 3B1 was MUCH faster than "FACE" on the 6386.

ALL the other experiences I've had with other vendors' '386 machines bring me
to the same conclusion: the 386 s*cks.  And I'm not overly keen on the '486
either; gimme a 68030 or 68040 ANY day.

I can afford *ANY* computer that I choose to buy, and I see NOTHING based on
the Intel CPU chips that even remotely interests me.

Now, I really, honestly, and truly do NOT want to start any "computer wars"
pissing contests in this newsgroup; I just had to endure a bunch of that crap
when some NeXT bozos cluttered up the comp.sys.amiga newsgroups recently.

All I'm going to say is that the Intel CPU architecture, as typified by the
8086, 80286, 80386, and 80486, does NOT lend itself to good and efficient
systems; Intel apparently DOES have a winner with the i860, but that's
something new and completely different and not likely to be in "home" or
"small-office" systems for a while.

And I'm NOT going to bring up the "math" problems in early (still shipping?)
'486 chips.  Instead, to forestall the massive flames which I'm SURE will
result from this posting, I'm going to list some of the facts for your studied
consideration.  Many of the following technical points were posted to another
newsgroup by Dave Haynie, whose technical expertise I respect (he designs
computers for a living).

1.	Let's first address the "8086" and "80286" in every '386 and '486 chip.

	Intel had to make the '386 and '486 as compatible as they could; they
	had no other choice.  Because of what amount to design flaws, the
	8088/8086 architecture wasn't cleanly extensible.  There is no user
	mode, and there is no real OS to hide any differences.  That's a bad
	thing, and the Intel lines will be stuck with it for a long time; even
	the '486 had to make compromises to let MS-DOS stuff work, and all
	those folks running UNIX on the '486 will pay the price.

	Contrast that with the Motorola architecture, which lets the user
	carry along the software investment AND take advantage of performance
	increases beyond simply a faster clock.

2.	As someone else has said before, it is impossible to both understand
	and appreciate the Intel architecture; Motorola uses memory-mapped I/O
	while Intel uses isolated (port-mapped) I/O.

	Dedicated I/O instructions are an extremely archaic concept.  The only
	reason they existed on the 8088 in the first place was that the 8088
	was designed to be relatively close to assembly-source-level
	compatibility with 8080/8085 machines.  The use of this technique in
	the 8080 was mainly to get around address-space limits (you could have
	64K of memory AND I/O devices at once), but it's a horrible waste of
	instruction decoding, and the rest of the world does fine without it.

	You will find that this architectural foolishness is just about
	nonexistent outside of the 80x86 family.  Even other Intel chips, like
	the i860, do things the modern way, by memory mapping.  I don't know
	of any modern microprocessor that supports I/O-only instructions.  (A
	short sketch contrasting the two styles appears after this list.)

	The generic objections to the Intel architecture, however, have
	absolutely nothing to do with I/O mapping.  They have to do with
	segmentation.  Segmentation is one of the more truly evil concepts in
	the microprocessor industry.  Again, this was something Intel adopted
	to make the transition from 8080 to 8088 less painful.  It worked to
	that end, but has been causing endless pain ever since.  Motorola,
	which was in a similar position in the '70s, chose instead to scrap
	any notion of pseudo-compatibility with their 8-bit line and to do a
	16-bit microprocessor correctly.  Their solution was to make the
	programmer's model a full 32-bit model, rather than kludging around
	with a 16-bit model and some banking scheme (e.g., everyone then knew
	that 64K of addressing wasn't enough).  (A small example of real-mode
	segment:offset arithmetic appears after this list.)

	The end result has been that every subsequent generation of the
	Motorola 680x0 uses the same programmer's model.  Every generation of
	Intel 80x86, except for the 80386->80486 jump, has had a new
	programmer's model and special hardware modes to support the old
	models.  The reason the 80486 has the same model as the 80386?  The
	80386 was the first 80x86 CPU to support a true 32-bit programmer's
	model, which made segments unnecessary, so there was no reason to
	change anything.

	The Intel 80x86 architecture isn't appreciated anywhere near the high
	end of any market.  It's used by folks who find 80486 machines good
	bang for the buck, or by folks who find that the installed base of
	30-40 million MS-DOS machines, growing by another 10 million or so a
	year, tends to make rather esoteric programs available on the market.
	Or by people who don't know any better.  But there are few, if any,
	people who choose 80x86 machines because they admire their
	architecture.  And I'm willing to bet just as many people buy Ford
	Escorts for their styling.
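
As promised above, here is a minimal C sketch contrasting the two I/O styles.
It is an illustration, not a description of any particular machine: the x86
half is shown as GCC-style inline assembly (since C itself cannot express the
dedicated IN/OUT instructions) and is meant for kernel or bare-metal code.

	/*
	 * Isolated (port) I/O, x86 style: devices live in a separate I/O
	 * address space reachable only through the special IN/OUT opcodes.
	 * GCC inline assembly, x86 only.
	 */
	static void port_write(unsigned short port, unsigned char val)
	{
		__asm__ __volatile__ ("outb %0, %1" : : "a"(val), "Nd"(port));
	}

	/*
	 * Memory-mapped I/O, 680x0 (and nearly everyone else) style: a device
	 * register is just an address, so an ordinary store does the job and
	 * no extra opcodes need to be decoded.  On the 8080 the separate I/O
	 * space kept devices out of a cramped 64K memory map; with a 32-bit
	 * address space there is no such pressure.
	 */
	static void mmio_write(volatile unsigned char *reg, unsigned char val)
	{
		*reg = val;
	}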
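
And to make the segmentation complaint concrete, here is a small,
self-contained C example of real-mode 8086 segment:offset arithmetic; the
segment and offset values are arbitrary, chosen only to show the aliasing.

	#include <stdio.h>

	/*
	 * Real-mode 8086 addressing: a 16-bit segment and a 16-bit offset
	 * combine into a 20-bit physical address as (segment * 16) + offset.
	 * No single segment spans more than 64K, and many different
	 * segment:offset pairs name the same physical byte.  A flat 32-bit
	 * model (680x0 style) needs none of this: the pointer value IS the
	 * address.
	 */
	static unsigned long real_mode_phys(unsigned short seg, unsigned short off)
	{
		return ((unsigned long)seg << 4) + off;
	}

	int main(void)
	{
		/* two different segment:offset pairs, one physical address */
		printf("1234:5678 -> %05lX\n", real_mode_phys(0x1234, 0x5678));
		printf("1000:79B8 -> %05lX\n", real_mode_phys(0x1000, 0x79B8));
		return 0;
	}

Both lines print 179B8, which is exactly the sort of aliasing an OS or compiler
on the 80x86 has to worry about and a flat-model machine never sees.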


Thad Floryan [ thad at cup.portal.com (OR) ..!sun!portal!cup.portal.com!thad ]


