*nix performance

Brian Cuthie brian at umbc3.UMD.EDU
Fri Oct 21 04:58:59 AEST 1988


In article <168 at ernie.NECAM.COM> koll at ernie.NECAM.COM (Michael Goldman) writes:
>
>As I was saying (I'm just getting used to posting) Any manufacturer
>trying to run the DMA chip above 5 MHz risks frying the chip, and
>part of the motherboard.  With all these cpus going along at 25 MHz
                                              ^^^^^^^^^^^^^^^ what !?

>it is faster to use the cpu.  Before dumping on IBM for using such
>a dumb chip, recall that the original PC came with a cassette port
>and only 64K on the mother board.  Who needs DMA in that environment ?
>
>This is one more reason to go to the new Microchannel architecture which
>has good DMA support and very nice chips.  There are some other problems
>with DMA on the PC. One is that DOS is not re-entrant and so you have
>to VERRRY Carefully save the state with any program that uses interrupts
>which is implicit in any reasonable application with DMA.  With all the

WHAT !?  DMA and interrupts are COMPLETELY UNRELATED.  DMA places the 
processor in a HOLD state while the transfer takes place.  This locks
out even interrupts.  There is ABSOLUTELY no necessity to save any context
while doing DMA.  Besides, I know what re-entrant instructions are (and
strictly speaking they're "restartable instructions", but that's a different
point), but what the !%^%@ is a re-entrant operating system?  Can you name
one?  I bet not.

>yo-yos trying to be the next Mitch Kapor, IBM wisely left out helping
>anyone write DMA programs, for fear of having every one try to save a
>few usecs and crashing DOS.  The string transfer assembly instructions
>on the 80x86 are as fast as DMA anyway at comparable clock speeds.  IN
>a no wait-state system there's no real advantage to DMA for single
>threaded OS's like DOS, which is probably why IBM waited to have the
>386 in a new bus with a new multi-threaded OS and new DMA chips.

The problem with DMA on the PC is simple.  DMA channel 0 is programmed to
periodically paw through RAM to effect a refresh.  Since NOTHING can
interrupt a DMA in progress (including another higher priority DMA request)
burst mode DMA transfers, which would be significantly faster than CPU
transfers could EVER be, would lock out the channel 0 refresh for too long.
Thus DMAs are limited to single-byte transfers.  Since each byte transferred
then has to place the processor into a HOLD state, and this handshake takes
some time, it turns out to be faster to do processor string moves.

Keep in mind that DMA is significantly faster than CPU transfers, even with
caching, because the DMA chip places the memory address on the bus and then 
asserts the READ or WRITE line while simultaneously asserting the DMA ACK line.
Since the peripheral requesting DMA is well aware of who he/she is and knows
that if the memory WRITE line is asserted it must be a peripheral READ (and
vice versa), the transfer takes place in exactly ONE memory cycle.  Observe
that this would be twice as fast as the CPU, since the CPU requires, at best,
one cycle to read the byte from the peripheral and one cycle to write it
to memory.  Of course the above argument holds for 16 or 32 bit words also,
so long as the memory, peripheral and DMA controller are all willing
to participate.

>So now one process can wait for a file transfer using DMA while another
>process can execute.  This implies that the developers can intelligently

Well, this sounds better than it often is, since the CPU must sit by and
wait for the DMA to complete anyway.

>use the DMA chips (don't hold your breath - the operant philosophy
>seems to be " If the PC is cheap then I don't have to pay the
>programmers much either. " and we get what they pay for (I'm not
>bitter, not ME !)).  Finally, recall that the 8088 was still
>trying to maintain some compatibility with 8080s and a lot of the
>support chips out there at the time hadn't caught up.  The 80386
>is what Intel should have designed long ago if they had seen the
>future, and now it has good support chips. (Not dumping on Intel,

Intel would have been more than happy to have designed the 80386 years
ago (and in fact that's when they started the design) had the technology
been affordable.  What do you think has kept the 80486 so long?  It hasn't
been a lack of market demand.

>hindsight is 20-20, and densities didn't allow much earlier.)

Bingo

>
>Regards,
>Michael Goldman


Brian Cuthie
Consultant
Columbia, MD 21046
(301) 381 - 1718

Internet:	brian at umbc3.umd.edu
Usenet:		...uunet!umbc3!cbw1!brian



More information about the Comp.unix.questions mailing list