From coff at tuhs.org  Sat Dec  2 05:25:31 2023
From: coff at tuhs.org (segaloco via COFF)
Date: Fri, 01 Dec 2023 19:25:31 +0000
Subject: [COFF] Western Electric 321 Development System Schematics?
Message-ID: 

Hello everyone, I was wondering if anyone is aware of any surviving technical diagrams/schematics for the WECo 321EB or WECo 321DS WE32x00 development systems? Bitsavers has an AT&T Data Book from 1987 detailing pin maps, registers, etc. of 32xxx family ICs, and another, earlier manual from 1985 that seems to be more focused on a technical overview of the CPU specifically. Both have photographs and surface-level block diagrams, but nothing showing individual connections, which bus leads went where, etc. While the descriptions should be enough, diagrams are always helpful.

In any case, I've recently ordered a 32100 CPU and 32101 MMU I saw sitting on eBay to see what I can do with some breadboarding and some DRAM/DMA controllers from other vendors, and was thinking of referring to any available design schematics of the 321 development stuff for pointers on integration. Either way, I'm glad the data books on the hardware have been preserved; that gives me a leg up.

Thanks for any insights!

- Matt G.

From jnc at mercury.lcs.mit.edu  Fri Dec 15 07:48:05 2023
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Thu, 14 Dec 2023 16:48:05 -0500 (EST)
Subject: [COFF] Terminology query - 'system process'?
Message-ID: <20231214214805.81B2618C08F@mercury.lcs.mit.edu>

So Lars Brinkhoff and I were chatting about daemons:

  https://gunkies.org/wiki/Talk:Daemon

and I pointed out that in addition to 'standard' daemons (e.g. the printer spooler daemon, email daemon, etc, etc) there are some other things that are daemon-like, but are fundamentally different in major ways (explained below). I dubbed them 'system processes', but I'm wondering if anyone knows if there is a standard term for them? (Or, failing that, if they have a suggestion for a better name?)

Early UNIX is one of the first systems to have one (process 0, the "scheduling (swapping) process"), but the CACM "The UNIX Time-Sharing System" paper:

  https://people.eecs.berkeley.edu/~brewer/cs262/unix.pdf

doesn't even mention it, so no guidance there. Berkeley UNIX also has one, mentioned in "Design and Implementation of the Berkeley Virtual Memory Extensions to the UNIX Operating System":

  http://roguelife.org/~fujita/COOKIES/HISTORY/3BSD/design.pdf

where it is called the "pageout daemon". ("During system initialization, just before the init process is created, the bootstrapping code creates process 2 which is known as the pageout daemon. It is this process that .. writ[es] back modified pages. The process leaves its normal dormant state upon being waken up due to the memory free list size dropping below an upper threshold.") However, I think there are good reasons to dis-favour the term 'daemon' for them.

For one thing, typical daemons look (to the kernel) just like 'normal' processes: their object code is kept in a file, and is loaded into the daemon's process when it starts, using the same mechanism that 'normal' processes use for loading their code; daemons are often started long after the kernel itself is started, and there is usually not a special mechanism in the kernel to start daemons (on early UNIXes, /etc/rc is run by the 'init' process, not the kernel); daemons interact with the kernel through system calls, just like 'ordinary' processes; the daemon's process runs in 'user' CPU mode (using the same standard memory mapping mechanisms, just like blah-blah).

'System processes' do none of these things: their object code is linked into the monolithic kernel, and is thus loaded by the bootstrap; the kernel contains special provision for starting the system processes, which start as the kernel is starting; they don't do system calls, just call kernel routines directly; they run in kernel mode, using the same memory mapping as the kernel itself; etc, etc.

Another important point is that system processes are highly intertwined with the operation of the kernel; without the system process(es) operating correctly, the operation of the system will quickly grind to a halt. The loss of 'ordinary' daemons is usually not fatal; if the email daemon dies, the system will keep running indefinitely. Not so, for the swapping process, or the pageout daemon.

Anyway, is there a standard term for these things? If not, a better name than 'system process'?

	Noel

From bakul at iitbombay.org  Fri Dec 15 08:06:23 2023
From: bakul at iitbombay.org (Bakul Shah)
Date: Thu, 14 Dec 2023 14:06:23 -0800
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: <20231214214805.81B2618C08F@mercury.lcs.mit.edu>
References: <20231214214805.81B2618C08F@mercury.lcs.mit.edu>
Message-ID: 

I remember calling them kernel processes as they had no code running in user mode. Not sure now of the year but sometime in the ’80s. Now I’d probably call them kernel threads as they don’t have a separate address space.

> On Dec 14, 2023, at 1:48 PM, jnc at mercury.lcs.mit.edu wrote:
> 
> So Lars Brinkhoff and I were chatting about daemons:
> [...]

From clemc at ccc.com  Fri Dec 15 08:09:20 2023
From: clemc at ccc.com (Clem Cole)
Date: Thu, 14 Dec 2023 17:09:20 -0500
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: <20231214214805.81B2618C08F@mercury.lcs.mit.edu>
References: <20231214214805.81B2618C08F@mercury.lcs.mit.edu>
Message-ID: 

I don't know of a standard name. We used to call them kernel processes or kernel threads also. For instance, in the original Masscomp EFS code, we had a handful of processes that got forked after the pager using kernel code. Since the basic UNIX read/write from user space scheme is synchronous, the premade pool of kernel processes was dispatched as needed when we listened for asynchronous remote requests for I/O. This is similar to the fact that asynchronous devices such as serial or network interfaces need a pool of memory to stuff things into, since you never know ahead of time when it will come.

On Thu, Dec 14, 2023 at 4:48 PM Noel Chiappa wrote:
> So Lars Brinkhoff and I were chatting about daemons:
> [...]

From imp at bsdimp.com  Fri Dec 15 08:12:15 2023
From: imp at bsdimp.com (Warner Losh)
Date: Thu, 14 Dec 2023 15:12:15 -0700
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: 
References: <20231214214805.81B2618C08F@mercury.lcs.mit.edu>
Message-ID: 

When shutting down, FreeBSD refers to them as:

  kern/kern_shutdown.c:    printf("Waiting (max %d seconds) for system process `%s' to stop... ",
  kern/kern_shutdown.c:    printf("Waiting (max %d seconds) for system thread `%s' to stop... ",

However, a number of places, including the swap daemon, still refer to things as this daemon or that daemon (the page daemon being top of the list, but there's the buf daemon, the vmdaemon that handles swapping (as opposed to paging), the update daemon, etc.).
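For concreteness, this is roughly what creating such a kernel-only process looks like on FreeBSD. A hedged sketch from memory, not code from the FreeBSD sources: kproc_create(9), tsleep(9), and SYSINIT(9) are the real interfaces, while mydaemon_fn and its housekeeping loop are invented for illustration.

  #include <sys/param.h>
  #include <sys/systm.h>
  #include <sys/kernel.h>     /* SYSINIT */
  #include <sys/proc.h>
  #include <sys/kthread.h>

  static struct proc *mydaemon_proc;

  static void
  mydaemon_fn(void *arg)
  {
          for (;;) {
                  /* ... do periodic housekeeping here ... */
                  /* sleep ~1 second, or until someone wakeup()s us */
                  tsleep(&mydaemon_proc, PPAUSE, "mydmn", hz);
          }
          /* never reached; a kernel process dies via kproc_exit(0) */
  }

  static void
  mydaemon_init(void *arg)
  {
          /* no image on disk: mydaemon_fn is already linked into the kernel */
          kproc_create(mydaemon_fn, NULL, &mydaemon_proc, 0, 0, "mydaemon");
  }
  SYSINIT(mydaemon, SI_SUB_KTHREAD_IDLE, SI_ORDER_ANY, mydaemon_init, NULL);

Note how this matches Noel's criteria: the body is linked into the kernel, is started by kernel initialization rather than by init, and calls kernel routines directly instead of making system calls.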
Warner

On Thu, Dec 14, 2023 at 3:06 PM Bakul Shah wrote:
> I remember calling them kernel processes as they had no code running in user mode. Not sure now of the year but sometime in the ’80s. Now I’d probably call them kernel threads as they don’t have a separate address space.
> 
> > On Dec 14, 2023, at 1:48 PM, jnc at mercury.lcs.mit.edu wrote:
> > [...]

From jnc at mercury.lcs.mit.edu  Fri Dec 15 09:29:35 2023
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Thu, 14 Dec 2023 18:29:35 -0500 (EST)
Subject: [COFF] Terminology query - 'system process'?
Message-ID: <20231214232935.802BB18C08F@mercury.lcs.mit.edu>

> From: Bakul Shah

> Now I'd probably call them kernel threads as they don't have a separate address space.

Makes sense. One query about stacks, and blocking, there. Do kernel threads, in general, have per-thread stacks; so that they can block (and later resume exactly where they were when they blocked)?

That was the thing that, I think, made kernel processes really attractive as a kernel structuring tool; you get code like this (from V6):

  swap(rp->p_addr, a, rp->p_size, B_READ);
  mfree(swapmap, (rp->p_size+7)/8, rp->p_addr);

The call to swap() blocks until the I/O operation is complete, whereupon that call returns, and away one goes. Very clean and simple code.

Use of a kernel process probably makes the BSD pageout daemon code fairly straightforward, too (well, as straightforward as anything done by Berzerkly was :-).

Interestingly, other early systems don't seem to have thought of this structuring technique. I assumed that Multics used a similar technique to write 'dirty' pages out, to maintain a free list. However, when I looked in the Multics Storage System Program Logic Manual:

  http://www.bitsavers.org/pdf/honeywell/large_systems/multics/AN61A_storageSysPLM_Sep78.pdf

Multics just writes dirty pages as part of the page fault code: "This starting of writes is performed by the subroutine claim_mod_core in page_fault. This subroutine is invoked at the end of every page fault." (pg. 8-36, pg. 166 of the PDF.) (Which also increases the real-time delay to complete dealing with a page fault.)

It makes sense to have a kernel process do this; having the page fault code do it just makes that code more complicated. (The code in V6 to swap processes in and out is beautifully simple.) But it's apparently only obvious in retrospect (like many brilliant ideas :-).

	Noel

From lm at mcvoy.com  Fri Dec 15 09:54:43 2023
From: lm at mcvoy.com (Larry McVoy)
Date: Thu, 14 Dec 2023 15:54:43 -0800
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: <20231214232935.802BB18C08F@mercury.lcs.mit.edu>
References: <20231214232935.802BB18C08F@mercury.lcs.mit.edu>
Message-ID: <20231214235443.GG6732@mcvoy.com>

On Thu, Dec 14, 2023 at 06:29:35PM -0500, Noel Chiappa wrote:
> > From: Bakul Shah
> 
> > Now I'd probably call them kernel threads as they don't have a separate address space.
> 
> Makes sense. One query about stacks, and blocking, there. Do kernel threads, in general, have per-thread stacks; so that they can block (and later resume exactly where they were when they blocked)?

Yep, threads have stacks, not sure how they could work without them.

Which reminds me of some Solaris insanity. Back when I was at Sun, they were threading the VM and I/O system. The kernel was pretty bloated and a stack was two 8K pages. The I/O people wanted to allocate a kernel thread *per page* that was being sent to disk/network. I pointed out that this means if all of memory wants to head to disk, your dirty page cache is 1/3 of memory because the other 2/3s were thread stacks.
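As a back-of-the-envelope check of that arithmetic (a sketch assuming exactly one thread, with a two-page stack, per dirty 8K page), a trivial C program confirms the 1/3-2/3 split:

  #include <stdio.h>

  int
  main(void)
  {
          const double page  = 8 * 1024;    /* one dirty 8K page */
          const double stack = 2 * page;    /* kernel stack: two 8K pages */
          /* one kernel thread (and its stack) per page queued for I/O */
          printf("stacks: %.0f%%, dirty pages: %.0f%%\n",
              100 * stack / (stack + page),     /* 67% -> "the other 2/3s" */
              100 * page  / (stack + page));    /* 33% -> "1/3 of memory" */
          return 0;
  }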
They ignored me, implemented it, and it was a miserable failure and they had to start over. They just didn't believe in basic math.

> Use of a kernel process probably makes the BSD pageout daemon code fairly straightforward, too (well, as straightforward as anything done by Berzerkly was :-).

I have immense respect for the BSD pageout daemon code. When I did UFS clustering to make I/O go at the platter speed (rather than 1/2 the platter speed), it caused a big problem because the pageout daemon could not keep up with UFS; UFS used up page cache much faster than the pageout daemon could free pages. I wrote somewhere around 13 different pageout daemons in an attempt to do better than the BSD one. And, in certain cases, I did do better. All of them did at least a little better. But none of them did better in all cases. That BSD code was subtly awesome; I was at the top of my game and couldn't beat it.

I ended up implementing a "free behind" in UFS: I'd watch the variable that controlled kicking the pageout code into running, and I'd start freeing behind when it was getting close to running (but enough ahead that the pageout daemon wouldn't wake up). It's a really gross hack but it was the best that I could come up with.

--lm

From bakul at iitbombay.org  Fri Dec 15 11:15:39 2023
From: bakul at iitbombay.org (Bakul Shah)
Date: Thu, 14 Dec 2023 17:15:39 -0800
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: <20231214232935.802BB18C08F@mercury.lcs.mit.edu>
References: <20231214232935.802BB18C08F@mercury.lcs.mit.edu>
Message-ID: <4416CB1B-CDE2-42DA-92F2-33284DB6093F@iitbombay.org>

On Dec 14, 2023, at 3:29 PM, Noel Chiappa wrote:
> 
>> From: Bakul Shah
> 
>> Now I'd probably call them kernel threads as they don't have a separate address space.
> 
> Makes sense. One query about stacks, and blocking, there. Do kernel threads, in general, have per-thread stacks; so that they can block (and later resume exactly where they were when they blocked)?

Exactly! If blocking was not required, you can do the work in an interrupt handler. If blocking is required, you can't just use the stack of a random process (while in supervisor mode) unless you are doing some work specifically on its behalf.

> Interestingly, other early systems don't seem to have thought of this structuring technique.

I suspect IBM operating systems probably did use them. At least TSO must have. Once you start *accounting* (and charging) for cpu time, this idea must fall out naturally. You don't want to charge a process for kernel time used for unrelated work!

*Accounting* is an interesting thing to think about. In a microkernel where most of the work is done by user mode services, how do you keep track of time and resources used by a process (or user)? This can matter for things like latency, which may be part of your service level agreement (SLA). This should also have a bearing on modern network-based services.

> It makes sense to have a kernel process do this; having the page fault code do it just makes that code more complicated. (The code in V6 to swap processes in and out is beautifully simple.) But it's apparently only obvious in retrospect (like many brilliant ideas :-).

There was a race condition in V7 swapping code. Once a colleague and I spent two weeks of 16-hour debugging days! We were encrypting before swapping out and decrypting before swapping in. This changed timing enough that the bug manifested itself (in the end about 2-3 times a day when the system was running 5 or 6 kernel builds in parallel!). This is when I really understood "You are not expected to understand this." :-)

From lars at nocrew.org  Fri Dec 15 16:24:33 2023
From: lars at nocrew.org (Lars Brinkhoff)
Date: Fri, 15 Dec 2023 06:24:33 +0000
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: <20231214214805.81B2618C08F@mercury.lcs.mit.edu> (Noel Chiappa's message of "Thu, 14 Dec 2023 16:48:05 -0500 (EST)")
References: <20231214214805.81B2618C08F@mercury.lcs.mit.edu>
Message-ID: <7wv8903rvi.fsf@junk.nocrew.org>

For the record, daring to stray outside Unix, ITS originally had a "system job", and later split off a "core job". Both run in monitor mode. Job means the same thing as process here, and monitor same as kernel.

--- >8 --- cut and stop reading here --- >8 --- off topic --- >8 ---

In ITS terminology, "demon" means a (user space) process that is started on demand by the system. This could be due to an external event (say, a network connection), or another process indicating a need for a service. The demon may go away right after it has done its job, or linger around for a while. There's a separate term "dragon" which means a continuously running background process.

From lars at nocrew.org  Fri Dec 15 16:34:50 2023
From: lars at nocrew.org (Lars Brinkhoff)
Date: Fri, 15 Dec 2023 06:34:50 +0000
Subject: [COFF] ITS 138 listing from 1967
Message-ID: <7wo7es3red.fsf@junk.nocrew.org>

For the benefit of Old Farts around here, I'd like to share the good word that an ITS 138 listing from 1967 has been discovered. A group of volunteers is busy transcribing the photographed pages to text. Information and link to the data:

  https://gunkies.org/wiki/ITS_138

This version is basically what ITS first looked like when it went into operation at the MIT AI lab. It's deliciously arcane and primitive. Mass storage is on four DECtape drives, no disk here. User stations consist of five teletypes and four GE Datanet 760 CRT consoles (46 columns, 26 lines). The number of system calls is a tiny subset of what would be available later.

There are more listings from 1967-1969 for DDT, TECO, LISP, etc. Since they are fan-fold listings, scanning is a bit tricky, so a more labor-intensive photographing method is used.

From crossd at gmail.com  Fri Dec 15 23:43:09 2023
From: crossd at gmail.com (Dan Cross)
Date: Fri, 15 Dec 2023 08:43:09 -0500
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: <20231214232935.802BB18C08F@mercury.lcs.mit.edu>
References: <20231214232935.802BB18C08F@mercury.lcs.mit.edu>
Message-ID: 

On Thu, Dec 14, 2023 at 7:07 PM Noel Chiappa wrote:
> > Now I'd probably call them kernel threads as they don't have a separate address space.
> 
> Makes sense. One query about stacks, and blocking, there. Do kernel threads, in general, have per-thread stacks; so that they can block (and later resume exactly where they were when they blocked)?
> 
> That was the thing that, I think, made kernel processes really attractive as a kernel structuring tool; you get code like this (from V6):
> 
>   swap(rp->p_addr, a, rp->p_size, B_READ);
>   mfree(swapmap, (rp->p_size+7)/8, rp->p_addr);
> 
> The call to swap() blocks until the I/O operation is complete, whereupon that call returns, and away one goes. Very clean and simple code.

Assuming we're talking about Unix, yes, each process has two stacks: one for userspace, one in the kernel. The way I've always thought about it, every process has two parts: the userspace part, and a matching thread in the kernel. When Unix is running, it is always running in the context of _some_ process (modulo early boot, before any processes have been created, of course). Furthermore, when the process is running in user mode, the kernel stack is empty. When a process traps into the kernel, it's running on the kernel stack for the corresponding kthread.

Processes may enter the kernel in one of two ways: directly, by invoking a system call, or indirectly, by taking an interrupt. In the latter case, the kernel simply runs the interrupt handler within the context of whatever process happened to be running when the interrupt occurred. In both cases, one usually says that the process is either "running in userspace" (ie, normal execution of whatever program is running in the process) or "running in the kernel" (that is, the kernel is executing in the context of that process).

Note that this affects behavior around blocking operations. Traditionally, Unix device drivers had a notion of an "upper half" and a "lower half." The upper half is the code that is invoked on behalf of a process requesting services from the kernel via some system call; the lower half is the code that runs in response to an interrupt for the corresponding device. Since it's impossible in general to know what process is running when an interrupt fires, it was important not to perform operations that would cause the current process to be unscheduled in an interrupt handler; hence the old adage, "don't sleep in the bottom half of a device driver" (where sleep here means sleep as in "sleep and wakeup", a la a condition variable, not "sleep for some amount of time"): you would block some random process, which may never be woken up again!

An interesting aside here is signals. We think of them as an asynchronous mechanism for interrupting a process, but their delivery must be coordinated by the kernel; in particular, if I send a signal to a process that is running in userspace, it (typically) won't be delivered right away; rather, it will be delivered the next time the process is scheduled to run, as the process must enter the kernel before delivery can be effected. Signal delivery is a synthetic event, unlike the delivery of a hardware interrupt, and the upcall happens in userspace.

> Use of a kernel process probably makes the BSD pageout daemon code fairly straightforward, too (well, as straightforward as anything done by Berzerkly was :-).
> 
> Interestingly, other early systems don't seem to have thought of this structuring technique. I assumed that Multics used a similar technique to write 'dirty' pages out, to maintain a free list. However, when I looked in the Multics Storage System Program Logic Manual:
> 
>   http://www.bitsavers.org/pdf/honeywell/large_systems/multics/AN61A_storageSysPLM_Sep78.pdf
> 
> Multics just writes dirty pages as part of the page fault code: "This starting of writes is performed by the subroutine claim_mod_core in page_fault. This subroutine is invoked at the end of every page fault." (pg. 8-36, pg. 166 of the PDF.) (Which also increases the real-time delay to complete dealing with a page fault.)

Note that this says, "starting of writes." Presumably, the writes themselves were asynchronous; this just initiates the operations. It certainly adds latency to the page fault handler, but not as much as waiting for the operations to complete!
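The sleep-and-wakeup pairing Dan describes is easy to demonstrate outside the kernel. Below is a minimal user-space analogue, a sketch in which a pthread condition variable stands in for the kernel's sleep/wakeup channel and all names (b_done, top_half, bottom_half) are invented for illustration: the "top half" blocks exactly where it went to sleep and resumes there after the simulated "bottom half" calls the wakeup. Compile with cc -pthread.

  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  static pthread_mutex_t lk = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
  static int b_done;                      /* stand-in for bp->b_done */

  static void *
  top_half(void *arg)                     /* process context: may sleep */
  {
          pthread_mutex_lock(&lk);
          while (!b_done)                 /* sleep(bp, PRIBIO) in V6 terms */
                  pthread_cond_wait(&cv, &lk);
          pthread_mutex_unlock(&lk);
          printf("top half: I/O done, resuming exactly where we blocked\n");
          return NULL;
  }

  static void *
  bottom_half(void *arg)                  /* "interrupt": must never sleep */
  {
          sleep(1);                       /* pretend the device is busy */
          pthread_mutex_lock(&lk);
          b_done = 1;
          pthread_cond_broadcast(&cv);    /* wakeup(bp): rouse all sleepers */
          pthread_mutex_unlock(&lk);
          return NULL;
  }

  int
  main(void)
  {
          pthread_t t1, t2;
          pthread_create(&t1, NULL, top_half, NULL);
          pthread_create(&t2, NULL, bottom_half, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          return 0;
  }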
> It makes sense to have a kernel process do this; having the page fault code do it just makes that code more complicated. (The code in V6 to swap processes in and out is beautifully simple.) But it's apparently only obvious in retrospect (like many brilliant ideas :-).

I can kinda sorta see a method in the madness of the Multics approach. If you think that page faults are relatively rare, and initiating IO is relatively cheap but still more expensive than executing "normal" instructions, then it makes some sense that you might want to amortize the cost of that by piggybacking one on the other. Of course, that's just speculation and I don't really have a sense for how well that worked out in Multics (which I have played around with and read about, but still seems largely mysterious to me).

In the Unix model, you've got scheduling latency to deal with to run the pageout daemon; of course, that all happened as part of a context switch, and in early Unix there was no demand paging (and so I suppose page faults were considered fatal).

That said, using threads as an organizational metaphor for structured concurrency in the kernel is wonderful compared to many of the alternatives (hand-coded state machines, for example).

        - Dan C.

From crossd at gmail.com  Sat Dec 16 00:20:51 2023
From: crossd at gmail.com (Dan Cross)
Date: Fri, 15 Dec 2023 09:20:51 -0500
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: 
References: <20231214214805.81B2618C08F@mercury.lcs.mit.edu>
Message-ID: 

On Thu, Dec 14, 2023 at 5:17 PM Clem Cole wrote:
> I don't know of a standard name. We used to call them kernel processes or kernel threads also.

I've heard all combinations of (system|kernel) (thread|task|process), all of which mean more or less the same thing: something the kernel can schedule and run that doesn't have a userspace component, reusing the basic concurrency primitives in the kernel for its own internal purposes. I'm not sure I've heard "kernel daemon" before, but intuitively I'd lump it into the same category unless I was told otherwise (as Noel mentioned, of course Berkeley had the "pageout daemon" which ran only in the kernel).

> For instance, in the original Masscomp EFS code, we had a handful of processes that got forked after the pager using kernel code. Since the basic UNIX read/write from user space scheme is synchronous, the premade pool of kernel processes was dispatched as needed when we listened for asynchronous remote requests for I/O. This is similar to the fact that asynchronous devices such as serial or network interfaces need a pool of memory to stuff things into, since you never know ahead of time when it will come.

I remember reading a paper on the design of NFS (it may have been the BSD paper) and there was a note about how the NFS server process ran mostly in the kernel; user code created it, but pretty much all it did was invoke a system call that implemented the server. That was kind of neat.

        - Dan C.

> On Thu, Dec 14, 2023 at 4:48 PM Noel Chiappa wrote:
>> So Lars Brinkhoff and I were chatting about daemons:
>> [...]

From imp at bsdimp.com  Sat Dec 16 02:25:04 2023
From: imp at bsdimp.com (Warner Losh)
Date: Fri, 15 Dec 2023 09:25:04 -0700
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: 
References: <20231214214805.81B2618C08F@mercury.lcs.mit.edu>
Message-ID: 

On Fri, Dec 15, 2023 at 7:21 AM Dan Cross wrote:
> On Thu, Dec 14, 2023 at 5:17 PM Clem Cole wrote:
> > I don't know of a standard name. We used to call them kernel processes or kernel threads also.
> 
> I've heard all combinations of (system|kernel) (thread|task|process), all of which mean more or less the same thing: something the kernel can schedule and run that doesn't have a userspace component, reusing the basic concurrency primitives in the kernel for its own internal purposes. I'm not sure I've heard "kernel daemon" before, but intuitively I'd lump it into the same category unless I was told otherwise (as Noel mentioned, of course Berkeley had the "pageout daemon" which ran only in the kernel).

FreeBSD (and likely others) have extended this to allow kernel threads that we loosely call daemons as well. FreeBSD has APIs for creating kernel-only threads and processes...

> > For instance, in the original Masscomp EFS code, we had a handful of processes that got forked after the pager using kernel code. Since the basic UNIX read/write from user space scheme is synchronous, the premade pool of kernel processes was dispatched as needed when we listened for asynchronous remote requests for I/O. This is similar to the fact that asynchronous devices such as serial or network interfaces need a pool of memory to stuff things into, since you never know ahead of time when it will come.
> 
> I remember reading a paper on the design of NFS (it may have been the BSD paper) and there was a note about how the NFS server process ran mostly in the kernel; user code created it, but pretty much all it did was invoke a system call that implemented the server. That was kind of neat.

I recall discussions with the kernel people at Solbourne who were bringing up SunOS 4.0 on Solbourne hardware about this. nfsd was little more than an N-way fork followed by the system call. It provided a process context to sleep in, which couldn't be created in the kernel at the time. I've not gone to the available SunOS sources to confirm this is what's going on.

I'd thought, though, that this was the second nfsd implementation. The first one would decode the requests off the wire and schedule the I/O. It was only when there were issues with this approach that it moved into the kernel. This was mostly a context switch thing, but as more security measures were added to the system that root couldn't bypass, NFS needed to move into the kernel so it could bypass them. See getfh and similar system calls. I'm not sure how much of this was done in SunOS, I'm only familiar with the post-4.4BSD work...

Warner

> - Dan C.
> 
> > On Thu, Dec 14, 2023 at 4:48 PM Noel Chiappa wrote:
> >> So Lars Brinkhoff and I were chatting about daemons:
> >> [...]

From bakul at iitbombay.org  Sat Dec 16 03:13:54 2023
From: bakul at iitbombay.org (Bakul Shah)
Date: Fri, 15 Dec 2023 09:13:54 -0800
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: 
References: 
Message-ID: <34D60B0B-4538-4E77-AB65-FA48FA7CF110@iitbombay.org>

On Dec 15, 2023, at 6:21 AM, Dan Cross wrote:
> 
> I remember reading a paper on the design of NFS (it may have been the BSD paper) and there was a note about how the NFS server process ran mostly in the kernel; user code created it, but pretty much all it did was invoke a system call that implemented the server. That was kind of neat.

At Valid Logic Systems I prototyped a relatively simple network filesystem. Here there was no user code. There was one “agent” kernel thread per remote system accessing the local filesystem, plus a few more. The agent thread acted on behalf of a remote system and maintained a session as long as at least one local file/dir was referenced from that system. There were complications as it was not a stateless design. I had to add code to detect when the remote server/client died or rebooted and return ENXIO / clear out old state.

From paul.winalski at gmail.com  Sat Dec 16 03:51:47 2023
From: paul.winalski at gmail.com (Paul Winalski)
Date: Fri, 15 Dec 2023 12:51:47 -0500
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: <4416CB1B-CDE2-42DA-92F2-33284DB6093F@iitbombay.org>
References: <20231214232935.802BB18C08F@mercury.lcs.mit.edu> <4416CB1B-CDE2-42DA-92F2-33284DB6093F@iitbombay.org>
Message-ID: 

For me, the term "system process" means either:

o A conventional, but perhaps privileged user-mode process that performs a system function. An example would be the output side of a spooling system, or an operator communications process.

o A process, or at least an address space + execution thread, that runs in privileged mode on the hardware and whose address space is in the resident kernel.

Do Unix system processes participate in time-sliced scheduling the way that user processes do?

On 12/14/23, Bakul Shah wrote:
> 
> Exactly! If blocking was not required, you can do the work in an interrupt handler. If blocking is required, you can't just use the stack of a random process (while in supervisor mode) unless you are doing some work specifically on its behalf.
> 
>> Interestingly, other early systems don't seem to have thought of this structuring technique.
> 
> I suspect IBM operating systems probably did use them. At least TSO must have. Once you start *accounting* (and charging) for cpu time, this idea must fall out naturally. You don't want to charge a process for kernel time used for unrelated work!

The usual programming convention for IBM S/360/370 operating systems (OS/360, OS/VS, TOS and DOS/360, DOS/VS) did not involve use of a stack at all, unless one was writing a routine involving recursive calls, and that was rare. Addressing for both program and data was done using a base register + offset. PL/I is the only IBM HLL I know that explicitly supported recursion. I don't know how they implemented automatic variables assigned to memory in recursive routines. It might have been a linked list rather than a stack.

I remember when I first went from the IBM world and started programming VAX/VMS, I thought it was really weird to burn an entire register just for a process stack.

> There was a race condition in V7 swapping code. Once a colleague and I spent two weeks of 16-hour debugging days!

I had a race condition in some multithread code I wrote. I couldn't find the bug. I even resorted to getting machine code listings of the whole program and marking the critical and non-critical sections with green and red markers. I eventually threw all of the code out and rewrote it from scratch. The second version didn't have the race condition.

-Paul W.

From imp at bsdimp.com  Sat Dec 16 04:08:06 2023
From: imp at bsdimp.com (Warner Losh)
Date: Fri, 15 Dec 2023 11:08:06 -0700
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: 
References: <20231214232935.802BB18C08F@mercury.lcs.mit.edu> <4416CB1B-CDE2-42DA-92F2-33284DB6093F@iitbombay.org>
Message-ID: 

On Fri, Dec 15, 2023 at 10:51 AM Paul Winalski wrote:
> For me, the term "system process" means either:
> 
> o A conventional, but perhaps privileged user-mode process that performs a system function. An example would be the output side of a spooling system, or an operator communications process.
> 
> o A process, or at least an address space + execution thread, that runs in privileged mode on the hardware and whose address space is in the resident kernel.
> 
> Do Unix system processes participate in time-sliced scheduling the way that user processes do?

Yes. At least on FreeBSD they do. They are just processes that get scheduled. They may have different priorities, etc, but all that factors in, and those priorities allow them to compete and/or preempt already running processes depending on a number of things. The only thing special about kernel-only threads/processes is that they are optimized knowing they never have a userland associated with them...

> [...]
> 
> I had a race condition in some multithread code I wrote. I couldn't find the bug. I even resorted to getting machine code listings of the whole program and marking the critical and non-critical sections with green and red markers. I eventually threw all of the code out and rewrote it from scratch. The second version didn't have the race condition.

The award for my 'longest bug chased' is at around 3-4 years. We had a product, based on an arm9 CPU (so armv4), that would sometimes hang. Well, individual threads in it would hang waiting for a lock, and so weird aspects of the program stopped working in unusual ways. But the root cause was a stuck lock, or missed wakeup. It took months to recreate this problem. I tried all manner of debugging, from trying to accelerate its recurrence (no luck) to auditing all locks/unlocks/wakeups to make sure there were no leaks or subtle mismatches (there weren't, despite a 100MB log file). It went on and on. I rewrote all the locking / sleeping / etc code, but also no dice.

Then one day, by chance, I was talking to someone who asked me about atomic operations. I blew them off at first, but then realized the atomic ops weren't implemented in hardware, but in software with the support of the kernel (there were no CPU-level atomic ops). Within an hour of realizing this and auditing the code path, I had a fix to a race that was trivial to discover once you looked at the code closely. My friend also found the same race that I had about the same time I was finishing up my fix (which he found another race in, go pair programming). With the corrected fix, the weird hanging went away, only to be reported once again... in a unit that hadn't been updated with the patch!

tl;dr: you never know what the root cause might be in weird, racy situations.

Warner

From stuff at riddermarkfarm.ca  Sat Dec 16 04:30:08 2023
From: stuff at riddermarkfarm.ca (Stuff Received)
Date: Fri, 15 Dec 2023 13:30:08 -0500
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: <7wv8903rvi.fsf@junk.nocrew.org>
References: <20231214214805.81B2618C08F@mercury.lcs.mit.edu> <7wv8903rvi.fsf@junk.nocrew.org>
Message-ID: 

On 2023-12-15 01:24, Lars Brinkhoff wrote:
> For the record, daring to stray outside Unix, ITS originally had a "system job", and later split off a "core job". Both run in monitor mode. Job means the same thing as process here, and monitor same as kernel.
> 
> --- >8 --- cut and stop reading here --- >8 --- off topic --- >8 ---

From https://www.tuhs.org/cgi-bin/mailman/listinfo/coff "The Computer Old Farts Forum provides a place for people to discuss the history of computers and their future."

It seems that you are well within bounds.

S.

From grog at lemis.com  Sat Dec 16 12:04:08 2023
From: grog at lemis.com (Greg 'groggy' Lehey)
Date: Sat, 16 Dec 2023 13:04:08 +1100
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: 
References: <20231214232935.802BB18C08F@mercury.lcs.mit.edu> <4416CB1B-CDE2-42DA-92F2-33284DB6093F@iitbombay.org>
Message-ID: 

On Friday, 15 December 2023 at 12:51:47 -0500, Paul Winalski wrote:
> The usual programming convention for IBM S/360/370 operating systems (OS/360, OS/VS, TOS and DOS/360, DOS/VS) did not involve use of a stack at all, unless one was writing a routine involving recursive calls, and that was rare. Addressing for both program and data was done using a base register + offset. PL/I is the only IBM HLL I know that explicitly supported recursion. I don't know how they implemented automatic variables assigned to memory in recursive routines. It might have been a linked list rather than a stack.

Yes, the 360 architecture doesn't have a hardware stack. Subroutine calls worked with something like a linked list. Registers were saved in a "save area", and the save areas were linked. At least in assembler (I never programmed HLLs under MVS), by convention R13 pointed to the save area. From memory, subroutine calls worked like:

    LA    15,SUBR       load address of subroutine
    BALR  14,15         call subroutine, storing address in R14

The subroutine then starts with

    STM   14,12,12(13)  save registers 14 to 12 (wraparound) in old save area
    LA    14,SAVE       load address of our save area
    ST    14,8(13)      save in linkage of old save area
    LR    13,14         and point to our save area

Returning from the subroutine was then

    L     13,4(13)      restore old save area
    LM    14,12,12(13)  restore the other registers
    BR    14            and return to the caller

Clearly this example isn't recursive, since it uses a static save area. But with dynamic allocation it could be recursive.

> I remember when I first went from the IBM world and started programming VAX/VMS, I thought it was really weird to burn an entire register just for a process stack.

Heh. Only one register? /370 was an experience for me, one I never wanted to repeat.

Greg
--
Sent from my desktop computer.
Finger grog at lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA.php

From paul.winalski at gmail.com  Sun Dec 17 05:21:48 2023
From: paul.winalski at gmail.com (Paul Winalski)
Date: Sat, 16 Dec 2023 14:21:48 -0500
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: 
References: <20231214232935.802BB18C08F@mercury.lcs.mit.edu> <4416CB1B-CDE2-42DA-92F2-33284DB6093F@iitbombay.org>
Message-ID: 

On 12/15/23, Greg 'groggy' Lehey wrote:
> 
> At least in assembler (I never programmed HLLs under MVS), by convention R13 pointed to the save area. From memory, subroutine calls worked like:
> 
>     LA    15,SUBR       load address of subroutine
>     BALR  14,15         call subroutine, storing address in R14
> 
> The subroutine then starts with
> 
>     STM   14,12,12(13)  save registers 14 to 12 (wraparound) in old save area
>     LA    14,SAVE       load address of our save area
>     ST    14,8(13)      save in linkage of old save area
>     LR    13,14         and point to our save area
> 
> Returning from the subroutine was then
> 
>     L     13,4(13)      restore old save area
>     LM    14,12,12(13)  restore the other registers
>     BR    14            and return to the caller
> 
> Clearly this example isn't recursive, since it uses a static save area. But with dynamic allocation it could be recursive.

Yes, that was the most common calling convention in S/360/370, and the one that was used if you were implementing a subroutine package for general use. It has the advantage that the (caller-allocated) register save area has room for all of the registers and so there is no need to change the caller code if the callee is changed to use an additional register. It also makes it very convenient to implement unwinding from an exception handler. But it does burn 60 bytes for the register save area, and if you're programming for a S/360 model 25 with only 32K of user-available memory that can be significant. Those writing their own assembly code typically cut corners on this convention in order to reduce the memory footprint and the execution time spent saving/restoring registers.

There's been a long debate by ABI and compiler designers over the relative merits of assigning the duties of allocating the register save area (RSA) and saving/restoring registers to either the caller or the callee. The IBM convention has the caller allocate the RSA and the callee save and restore the register contents. One can also have a convention where the caller allocates an RSA and saves/restores the registers it is actively using. Or a convention where the callee allocates the RSA and saves/restores the registers it has modified. Each convention has its merits and demerits.

The IBM PL/I compiler for OS and OS/VS (but not DOS and DOS/VS) had three routine declaration attributes to assist in optimization of routine calls. Absent any other information, the compiler must assume the worst--that the subroutine call may modify any of the global variables, and that it may be recursive. IBM PL/I had a RECURSIVE attribute to flag routines that are recursive. It also had two attributes--USES and SETS--to describe the global variables that are either used by (USES) or changed by (SETS) the routine. Global variables not in the USES list did not have to be spilled before the call. Similarly, global variables not in the SETS list did not have to be re-loaded after the call.
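Something quite close to USES/SETS survives today as compiler function attributes. A compile-only sketch follows (GCC/Clang's const and pure attributes are real; the functions here are invented for illustration, and f_unknown is deliberately left undefined as an opaque call): telling the compiler that a callee reads or writes no globals lets it keep a global cached in a register across the call, which is exactly the optimization SETS was meant to enable.

  extern int g;                       /* a global the caller is using */

  /* "USES/SETS nothing": reads no globals, writes no globals */
  __attribute__((const)) static int f_const(int x) { return x * x; }

  /* "USES g, SETS nothing": may read globals, changes none */
  __attribute__((pure)) static int f_pure(int x) { return x + g; }

  int f_unknown(int x);               /* no information: assume the worst */

  int
  caller(int x)
  {
          int a = f_const(x) + g;     /* g may stay cached in a register */
          int b = f_pure(x) + g;      /* callee read g but cannot have changed it */
          int c = f_unknown(x) + g;   /* must assume g changed: re-load it */
          return a + b + c;
  }

Unlike USES/SETS, though, nothing checks these annotations against the function bodies, so they carry the same stale-declaration risk Paul describes next.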
IBM dropped USES and SETS from the PL/I language with the S/370
compilers.  USES and SETS were something of a maintenance nightmare
for application programmers.  They were very error-prone.  If you
didn't keep the USES and SETS declarations up-to-date when you
modified a routine you could introduce all manner of subtle stale-data
bugs.  On the compiler writers' side, data flow analysis wasn't yet
advanced enough to make good use of the USES and SETS information
anyway.  Modern compilers perform interprocedural analysis when they
can and derive accurate global variable data flow information on
their own.

-Paul W.

From paul.winalski at gmail.com Sun Dec 17 05:44:44 2023
From: paul.winalski at gmail.com (Paul Winalski)
Date: Sat, 16 Dec 2023 14:44:44 -0500
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: 
References: <20231214232935.802BB18C08F@mercury.lcs.mit.edu>
 <4416CB1B-CDE2-42DA-92F2-33284DB6093F@iitbombay.org>
Message-ID: 

IBM's OS/360 did not have the modern "process" concept using virtual
memory to implement a thread of control with its own, separate address
space.  Instead they had the concept of threads of control associated
with contiguous segments of physical memory called "partitions".  You
had either a pre-defined set of partitions (OS/MFT, multiprogramming
with a fixed number of tasks) or the OS allocated partitions
on-the-fly as needed for the current mix of jobs (OS/MVT,
multiprogramming with a variable number of tasks).  OS/VS1, OS/VS2
SVS, and DOS/VS for System/370 operated in the same way, except there
was a single virtual address space, usually much larger than physical
memory, that was partitioned up.

DOS/360 ran one job at a time.  DOS/VS had up to 5 partitions: BG
(background), and P1-P4.  Scheduling in DOS/VS was strictly
preemptive, in the order P1, P2, P3, P4, BG.  P1 got control whenever
it was ready to run.  If P1 was stalled, P2 was scheduled, then P3
through BG, which only got to run whenever the higher-priority jobs
were stalled.  (A minimal sketch of this dispatch order appears after
this message.)

The most sophisticated (and resource-hogging) version of OS/VS was
OS/VS MVS (multiple virtual storages), which implemented the modern
concept of each partition (process) getting its own, separate 0-based
address space.  Some of these partitions might be allocated to
privileged tasks, most notably the spooling system such as HASP
(Houston Automatic Spooling Priority), which was developed by IBM
contractors working at NASA's Houston space facility in the mid-1960s.
HASP provided both spooling and remote job entry services and ran at
least partly in partition (i.e., user process) context.  DOS/360 had a
kernel (supervisor, in IBM-speak) enhancement called POWER that
provided spooling capability.  DOS/VS had POWER/VS, which ran in a
separate partition (typically P1, the highest priority).

VAX/VMS (and its successor OpenVMS) had a few privileged user-mode
processes to perform system tasks.  Two of these processes were OPCOM
(provides communication with the operator at the operator's console)
and JOB CONTROL (provides spooling and batch job services).  I think
OPCOM runs entirely in user mode.  JOB CONTROL may have some routines
that execute in kernel mode.  These system processes are similar to
daemons in Unix.

-Paul W.
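A minimal sketch in C of the strict priority dispatch order described
above (the enum, the ready flags, and dispatch() are hypothetical
illustrations, not DOS/VS internals):

    enum partition { P1, P2, P3, P4, BG, NPART };

    int ready[NPART];          /* nonzero when that partition can run  */

    /* Always dispatch the highest-priority ready partition; BG runs
       only when P1 through P4 are all stalled.                        */
    int dispatch(void)
    {
        for (int p = P1; p < NPART; p++)
            if (ready[p])
                return p;       /* run this partition                  */
        return -1;              /* everything stalled: CPU sits idle   */
    }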
From coff at tuhs.org Tue Dec 19 23:54:49 2023
From: coff at tuhs.org (Derek Fawcus via COFF)
Date: Tue, 19 Dec 2023 13:54:49 +0000
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: <20231214232935.802BB18C08F@mercury.lcs.mit.edu>
References: <20231214232935.802BB18C08F@mercury.lcs.mit.edu>
Message-ID: 

On Thu, Dec 14, 2023 at 06:29:35PM -0500, Noel Chiappa wrote:
> Interestingly, other early systems don't seem to have thought of this structuring technique.

How early does that have to be?

MP/M-1.0 (1979 spec) mentions this, as "Resident System Processes":

http://www.bitsavers.org/pdf/digitalResearch/mpm_I/MPM_1.0_Specification_Aug79.pdf

It was a bank-switching, multiuser, multitasking system for a
Z80/8080.  It mentions 5 such processes.

Later versions, and the 8086 version, still had them.  The MP/M-86
docs mention 'Terminal Message', Clock, Echo and 'System Status'
processes.  I believe the first was spawned one per console.

(Some of the internal structures suggest it was intended to support
swapping, but I don't know if that was implemented in terms of disk
swapping.)

DF

From jnc at mercury.lcs.mit.edu Thu Dec 21 05:31:41 2023
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Wed, 20 Dec 2023 14:31:41 -0500 (EST)
Subject: [COFF] Terminology query - 'system process'?
Message-ID: <20231220193141.DC1C918C092@mercury.lcs.mit.edu>

> From: Derek Fawcus

> How early does that have to be?  MP/M-1.0 (1979 spec) mentions this,
> as "Resident System Processes" ... It was a bank-switching,
> multiuser, multitasking system for a Z80/8080.

Anything with a microprocessor is, by definition, late! :-)

I'm impressed, in retrospect, with how quickly the world went from
processors built with transistors, through processors built out of
discrete ICs, to microprocessors.  To give an example; the first DEC
machine with an IC processor was the -11/20, in 1970 (the KI10 was
1972); starting with the LSI-11, in 1975, DEC started using
microprocessors; the last PDP-11 with a CPU made out of discrete ICs
was the -11/44, in 1979.  All -11's produced after that used
microprocessors.

So just 10 years... Wow.

Noel

From paul.winalski at gmail.com Thu Dec 21 06:29:17 2023
From: paul.winalski at gmail.com (Paul Winalski)
Date: Wed, 20 Dec 2023 15:29:17 -0500
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: <20231220193141.DC1C918C092@mercury.lcs.mit.edu>
References: <20231220193141.DC1C918C092@mercury.lcs.mit.edu>
Message-ID: 

On 12/20/23, Noel Chiappa wrote:
>
> To give an example; the first DEC machine with an IC
> processor was the -11/20, in 1970 (the KI10 was 1972); starting with the
> LSI-11, in 1975, DEC started using microprocessors; the last PDP-11 with a
> CPU made out of discrete ICs was the -11/44, in 1979. All -11's produced
> after that used microprocessors.

The VAX-11/780, 11/750, and 11/730 were all implemented using
7400-series discrete, gate-level TTL integrated circuits.  The planned
follow-on series was Venus (ECL gate level; originally planned to be
the 11/790 but after many delays released as the VAX 8600), Gemini (a
two-board VAX implementation; cancelled), and Scorpio (a chip set
eventually released as the VAX 8200).  Superstar, released as the
VAX-11/785, is a re-implementation of the 11/780 using faster TTL.
The floating point accelerator board for the 11/785 was implemented
using Fairchild FAST TTL.

The first microprocessor implementation of the VAX architecture was
the MicroVAX-I.  It, and all later VAX processors, implemented only
the MicroVAX subset of the VAX architecture in hardware and firmware.
The instructions left out were the character string instructions
(except for MOVC), decimal arithmetic, H-floating point, octaword, and
a few obscure, little-used instructions such as EDITPC and CRC.  The
missing instructions were simulated by the OS.  These instructions
were originally dropped from the architecture because there wasn't
enough real estate on a chip to hold the microcode for them.  It's
interesting that they continued to be simulated in macrocode even
after several process shrink cycles made it feasible to move them to
microcode.

I wrote a distributed (computation could be done in parallel over a
DECnet LAN or WAN) Mandelbrot Set computation and display program for
VAX/VMS.  It was implemented in PL/I.  The actual display was
incredibly sluggish, and I went in search of the performance problem.
It turned out to be in the Cartesian-to-screen coordinate translation
subroutine.

The program did its computations in VAX D-float double precision
floating point.  The window was 800x800 pixels and this was divvied up
into 32x32-pixel cells for distributed, parallel computation.  The
expression "800/32" occurred in the coordinate conversion program.  In
PL/I language data type semantics, this expression is "fixed
decimal(3,0) divided by fixed decimal(2,0)".  This expression was
being multiplied by a VAX D-float (float decimal in PL/I) and this
mixture of fixed and float triggered one of the more bizarre of PL/I's
baroque implicit data type conversions.  First, the D-float values
were converted to full-precision (15-digit) packed decimal by calling
one of the Fortran RTL's subroutines.  All the arithmetic in the
routine was done in full-precision packed decimal.  The result was
then converted back to D-float (again by a Fortran RTL call).  There
were effectively only two instructions in the whole subroutine that
weren't simulated in macrocode, and one of those was the RET
instruction at the end of the routine!  I changed the offending
expression to "800E0/32E0" and got a 100X speedup--everything was now
being done in (hardware) D-float.

-Paul W.

From jnc at mercury.lcs.mit.edu Thu Dec 21 06:35:54 2023
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Wed, 20 Dec 2023 15:35:54 -0500 (EST)
Subject: [COFF] Terminology query - 'system process'?
Message-ID: <20231220203554.8A2A818C09B@mercury.lcs.mit.edu>

> the first DEC machine with an IC processor was the -11/20, in 1970

Clem has reminded me that the first was the PDP-8/I and -8/L (the
latter was a cost-reduced version of the -I), from 1968.  The later,
and much more common, PDP-8/E, /F and /M were contemporaneous with the
-11/20.

Oh well, only two years; doesn't really affect my main point.  Just
about 'blink and you'll miss them'!

Noel

From paul.winalski at gmail.com Fri Dec 22 03:21:26 2023
From: paul.winalski at gmail.com (Paul Winalski)
Date: Thu, 21 Dec 2023 12:21:26 -0500
Subject: [COFF] IBM 1403 line printer on DEC computers?
Message-ID: 

There's been a discussion recently on TUHS about the famous IBM 1403
line printer.  It's strayed pretty far off-topic for TUHS so I'm
continuing the topic here in COFF.

DEC marketed its PDP-10 computer systems as their solution for
traditional raised-floor commercial data centers, competing directly
with IBM System 360/370.  DEC OEMed a lot of data center peripherals
such as card readers/punches, line printers, 9-track magtape drives,
and disk drives for their computers, but their main focus was low cost
vs. heavy duty.  Not really suitable for the data center world.
So DEC OEMed several high-end data center peripherals for use on big,
commercial PDP-10 computer systems.  For example, the gold standard
for 9-track tape drives in the IBM world was tape drives from Storage
Technology Corporation (STC).  DEC designed an IBM selector
channel-to-MASSBUS adapter that allowed one to attach STC tape drives
to a PDP-10.  AFAIK this was never offered on the PDP-11, VAX, or any
other of DEC's computer lines.  They had similar arrangements for
lookalikes of IBM high-performance disk drives.

Someone on TUHS recalled seeing an IBM 1403 or similar line printer on
a PDP-10 system.  The IBM 1403 was certainly the gold standard for
line printers in the IBM world and was arguably the best impact line
printer ever made.  It was still highly sought after in the 1970s,
long after the demise of the 1950s-era IBM 1400 computer system it was
designed to be a part of.  Anyone considering a PDP-10 data center
solution would ask about line printers and, if they were from the IBM
world, would prefer a 1403.

The 1403 attached to S/360/370 via a byte multiplexer channel, so one
would need an adapter that looked like a byte multiplexer channel on
one end and could attach to one of DEC's controllers at the other end
(something UNIBUS-based, most likely).  We know DEC did this sort of
thing for disks and tapes.  The question is, did they have a way to
attach the 1403 to any of their computer systems?

-Paul W.

From jnc at mercury.lcs.mit.edu Fri Dec 22 04:56:09 2023
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Thu, 21 Dec 2023 13:56:09 -0500 (EST)
Subject: [COFF] IBM 1403 line printer on DEC computers?
Message-ID: <20231221185609.A29FA18C09C@mercury.lcs.mit.edu>

> From: Paul Winalski

> The 1403 attached to S/360/370 via a byte multiplexer channel ...
> The question is, did they have a way to attach the 1403 to any of their
> computer systems?

There's a thing called a DX11:

https://gunkies.org/wiki/DX11-B_System_360/370_Channel_to_PDP-11_Unibus_Interface

which attaches a "selector, multiplexer or block multiplexer channel"
to a UNIBUS machine, which sounds like it could support the "byte
multiplexer channel"?

The DX11 brochure only mentions that it can be "programmed to emulate
a 2848, 2703 or 3705 control unit" - i.e. look like a peripheral to an
IBM CPU; whether it could look like an IBM CPU to an IBM peripheral, I
don't know.  (I'm too lazy to look at the documentation; it seems to
be all there, though.)

Getting from the UNIBUS to the -10, there were off-the-shelf boxes:
the DL10 for the KA10 and KI10 CPUs, and a DTE20 on a KL10.  It all
probably needed some coding, though.

Noel

From clemc at ccc.com Fri Dec 22 08:03:50 2023
From: clemc at ccc.com (Clem Cole)
Date: Thu, 21 Dec 2023 17:03:50 -0500
Subject: [COFF] Fwd: Fwd: IBM 1403 line printer on DEC computers?
In-Reply-To: 
References: 
Message-ID: 

FYI: Tim was Mr. 36-bit kernel and I/O system until he moved to the
VAX and later Alpha (and Intel).  The CMU device he refers to was the
XGP, which was a Xerox long-distance fax (LDX).  Stanford and MIT
would get them too, shortly thereafter.

---------- Forwarded message ---------
From: Timothe Litt
Date: Thu, Dec 21, 2023 at 1:52 PM
Subject: Re: Fwd: [COFF] IBM 1403 line printer on DEC computers?
To: Clem Cole

I don't recall ever seeing a 1403 on a DECsystem-10 or DECSYSTEM-20.
I suppose someone could have connected one to a Systems Concepts
channel... or the DX20 Massbus -> IBM MUX/SEL channel used for the STC
tape (TU70/1/2) and disk (RP20 = STC 8650) drives.  (A KMC11-based
device.)
Not sure why anyone would.  Most of the DEC printers on the -10/20
were Dataproducts buy-outs, and were quite competent.  1,000 - 1,250
LPM.  Earlier, we also bought from MDS and Anelex; good performance
(1,000 LPM), but needed more TLC from FS.  The majority were drum
printers; the LP25 was a band printer, and lighter duty (~300 LPM).
Traditionally, we had long-line interfaces to allow all the dust and
mess to be located outside the machine room.  Despite filters, dust
doesn't go well with removable disk packs.  ANF-10 (and eventually
DECnet) remote stations provided distributed printing.

CMU had a custom interface to some Xerox printer - that begat Scribe.

The LN01 brought laser printing - light duty, but was nice for those
endless status reports and presentations.  I think the guts were
Canon - but in any case a Japanese buyout.  PostScript.  Networked.

For high volume printing internally, we used Xerox laser printers when
they became available.  Not what you'd think of today - these were
huge, high-volume devices.  Bigger than the commercial copiers you'd
see in print shops.  (Perhaps interestingly, internally they used
PDP-11s running 11M.)  Networked, not direct attach.  They also were
popular in IBM shops.  We eventually released the software to drive
them (DQS) as part of GALAXY.

The TU7x were solid drives - enough so that the SDC used them for
making distribution tapes.  The copy software managed to keep 8 drives
spinning at 125/200 ips - which was non-trivial on TOPS-20.

The DX20/TX0{2,3}/TU7x *was* eventually made available for VAX - IIRC
as part of the "Migration" strategy to keep customers when the -10/20
were killed.  I think CSS did the work on that for the LCG PL.  Tapes
only - I don't think anyone wanted the disks by then - we had cheaper
dual-porting via the HSC/CI, and larger disks.

The biggest issue for printers on VAX was the omission of VFU
(vertical format unit) support.  Kinda hard to print paychecks and
custom forms without it - especially if you're porting COBOL from the
other 3-letter company.  Technically, the (Unibus) LP20 could have
been used, but wasn't.  CSS eventually solved that with some prodding
from Aquarius - I pushed that among other high-end I/O requirements.

On 21-Dec-23 12:29, Clem Cole wrote:

Tim - care to take a stab at this?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From coff at tuhs.org Sun Dec 24 10:53:22 2023
From: coff at tuhs.org (segaloco via COFF)
Date: Sun, 24 Dec 2023 00:53:22 +0000
Subject: [COFF] Dataphone 300 Info/Docs?
Message-ID: 

I've got an exciting piece of hardware to pair with the VT100 I
recently got, a Western Electric Dataphone 300.  The various status
lights and tests seem to work, and the necessary cabling is in place
as far as the unit is concerned.  However, it did not come with the
accompanying telephone.  I believe but can't verify yet that the
expected telephone is or resembles a *565HK(M) series telephone, the
ones with one red and five clear buttons along the bottom, otherwise
resembling a standard WECo telephone.  Pictured:

http://www.classicrotaryphones.com/forum/index.php?action=dlattach;attach=439223;image

Thus far I've found myself confused on the wiring expectations.  There
is a power line going into a small DC brick, one DB-25 port on the
back terminating in a female 25-pair amphenol cable, and another DB-25
port with a ribbon extension plugged in.
My assumptions thus far have been that the amphenol plugs into a
*565HK(M) or similar series telephone and the DB-25 then plugs into
the serial interface of whichever end of the connection it represents.
However, while this is all fine and dandy, it's missing one important
part...a connection to the outside world.  I've found no documentation
describing this yet, although a few pictures from auctions that
included a telephone seemed to have a standard telephone cable also
coming out of the back of the telephone, terminating in either a 6 or
8-conductor modular plug.  The pictures were too low-res to tell
which.

Would anyone happen to know anything concrete about the wiring
situation with these, or some documentation hints?  I've tried some
general web searches for documentation concerning the Dataphone 300
and the 103J Data Set configuration and haven't turned up
wiring-specific information.  If nothing else I might just tap
different places on the network block of the 2565HKM I've got plugged
into it and see if anything resembling a telephone signal pops up when
running some serial noise in at an appropriate baud.  My fear is that
the wiring differences extend beyond the tap to the CO/PBX line and
that there are different wiring expectations in the 25-pair as well;
this and my other appropriate telephone are both 1A2-wired, I believe,
still working on that KSU...

Any help is much appreciated, lotsa little details in these sorts of
things, but once I get it working I intend to do some documentation
and teardown photos.  I don't want to take it apart yet and run the
risk of doing something irreversible.  I want to make sure it gets a
chance to serve up some serial chit chat as weird telephone noises.

- Matt G.

From clemc at ccc.com Sun Dec 31 12:27:18 2023
From: clemc at ccc.com (Clem Cole)
Date: Sat, 30 Dec 2023 21:27:18 -0500
Subject: [COFF] [simh] Old VAX/VMS Tapes
In-Reply-To: <75e8f333-98fc-45da-b109-fedaa9d78fdb@ieee.org>
References: <656c72ae-2b6e-487c-a7bc-6e3a3896b49f@ieee.org>
 <53587999-897f-4b69-b476-b1c83dfaf816@ieee.org>
 <2cafc131-3e5d-4bf1-b0ee-537e3ed0f4cd@ieee.org>
 <75e8f333-98fc-45da-b109-fedaa9d78fdb@ieee.org>
Message-ID: 

We should move to COFF (cc'ed) for any further discussion.  This is
way off topic for simh.  Below.  Sent from a handheld; expect more
typos than usual.

On Sat, Dec 30, 2023 at 7:59 PM Nigel Johnson MIEEE via groups.io
wrote:

> First of all, 7-track vs 9-track - when you are streaming in
> serpentine mode, it is whatever you can fit into the tape width
> having regard to the limitations of the stepper motor accuracy.

Agreed.  It's the physical size of head and encoding magnetics.  With
parallel recording you have n heads together, all reading or writing
together into n analog circuits.  A rake across the ground, if you
will.  Serial is of course like a single pencil line, with the head on
a servo starting in the center of the tape; when you hit the physical
EOT, you move it up or down as appropriate.  It has nothing to do with
the number of bits per data unit.

>
A ¼” DC-6150 tape using QIC-150 only one forth the length and half as wide gets the same capacity and they both use the same core scheme to encode the bits. QIC writes smaller bits and wastes less tape with IRCs. That all said, Looking at the TK25 specs besides being 11 tracks it is also supports a small number different block sizes (LRECL) - unlike QIC. Nothing like 9-track which can handle a large range of LRECLs. What I don’t see in the TK25 is if you can mix them on a tape or if that is coded once for each tape as opposed in each record. Btw while I don’t think ansi condones it, some 9-track units like the Storage Tek ones could not only write different LRECLs but could write using different encoding (densities) on the same medium. This sad trick confused many drives when you moved the tape to a drive that could not. I have some interesting customer stories living those issues. But I digress … FWIW As I said before do have a lot of experience with what it takes to support this stuff and what you have to do decode it, the drivers for same et al. I never considered myself a tape expert- there are many the know way more than I - but I have lived, experienced and had to support a number of these systems and have learned the hard way about how these schemes can go south when trying to recover data. Back in the beginning of my career, we had Uniservo VIC drives which were > actually 7-bit parallel! (256, 556, and 800 bpi! NRZI > Yep same here. ½” was 5, 7 and 9 bits in parallel originally. GE-635 has in the late 1960s then and a IBM shop in the early 70s. And of course saw my favorite tapes of all - original DEC tape. I’ve also watched things change with serial and the use of serpentine encoding. You might find it amusing — any early 1980s Masscomp machines had a special ½” drive that had a huge number serpentine tracks I’ve forgotten the exact amount. They used traditional 1/2” spools from 3M and the like but r/w was custom to the drive. I’ve forgotten the capacity but at the time it was huge. What I remember it was much higher capacity and reliability than exabyte which at the time was the capacity leader. The USAF AWACS planes had 2 plus a spare talking to the /700 systems doing the I/O - they were suckling up everything in the air and recording it as digital signals. The tape units were Used to record all that data. An airman spends his/whole time loading and unloading tapes. Very cool system. > Some things about the 92192 drive: it was 8" cabinet format in a 5.25 > inch world so needed an external box. It also had an annoying habit, given > Control Data's proclivity for perfection, that when you put a cartridge in, > it ran it back and forth for five minutes before coming ready to ensure > even tension on the tape! > > The formatter-host adapter bus was not QIC36, so Emulex had to make a > special controller, the TC05, to handle the CDC Proprietary format. The > standard was QIC-36, although I think that Tandberg had a standard of their > own. > Very likely. When thoses all came on the scene there were a number of interfaces and encoding schemes. I was not involved in any of the politics but QIC ended up as the encoding standard and SCSI the interface IIRC the first QIC both Masscomp and Apollo used was QIC-36 via a SCSI converter board SCS made for both of us. I don’t think Sun used it. Later Archive and I think Wangtek made SCSI interface standard on the drives. > I was wrong about the 9-track versus 7, the TC05/sentinel combination > writes 11 tracks! 
> The standard 1/4" cartridge media use QIC-24, which specifies 9
> tracks.  I just knew it was not 9!

It also means it was not a QIC standard, as I don't believe they had
one between QIC-24-DC and QIC-120-DC.  Which I would think means that
if this tape came from a TK25, I doubt either Steve's or my drives
will read it - he'll need to find someone with a TK25 - which I have
never seen personally.

> That's all I know!

fair enough

Clem
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sjenkin at canb.auug.org.au Sun Dec 31 13:51:02 2023
From: sjenkin at canb.auug.org.au (steve jenkin)
Date: Sun, 31 Dec 2023 14:51:02 +1100
Subject: [COFF] Terminology query - 'system process'?
In-Reply-To: <20231220193141.DC1C918C092@mercury.lcs.mit.edu>
References: <20231220193141.DC1C918C092@mercury.lcs.mit.edu>
Message-ID: <45D745E8-8CAA-4236-ACDA-CD785B519749@canb.auug.org.au>

Noel,

Adding a little to your observation on how fast CMOS microprocessors
took over.  It wasn't just DEC and IBM who were in financial trouble
in 1992 - the whole US minicomputer industry had been hit.  DEC did
well to just survive, albeit only for another six years before being
bought by Compaq.

steve j

> On 21 Dec 2023, at 06:31, Noel Chiappa wrote:
>
> I'm impressed, in retrospect, with how quickly the world went from
> processors built with transistors, through processors built out of
> discrete ICs, to microprocessors.  To give an example; the first DEC
> machine with an IC processor was the -11/20, in 1970 (the KI10 was
> 1972); starting with the LSI-11, in 1975, DEC started using
> microprocessors; the last PDP-11 with a CPU made out of discrete ICs
> was the -11/44, in 1979.  All -11's produced after that used
> microprocessors.
>
> So just 10 years... Wow.
>
> Noel

============

[1,070 pp., PowerPoint]
The Birth and Passing of Minicomputers: From A Digital Equipment Corp.
(DEC) Perspective
Gordon Bell, 11 October 2006

(intro slide) (DEC) 1957-1998.  41 yrs., 4 generations: transistor,
IC, VLSI, clusters - winner take all.  How computer classes form...
and die.  Not dealing with technology = change = disruption.

pg 21: 91 minicomputer companies in 1984; by 1990 only 4 survived (DG,
DEC, HP, IBM).

============

--
Steve Jenkin, IT Systems and Design
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA

mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin