The 8008, F8 and SC/MP, while arguably important from a historical standpoint, never gained widespread commercial adoption in the way the 6502, the Z80 or other 8080 derivatives did. The 6800 gained some popularity in embedded applications, but was quickly superseded by the 6809, which managed to stay relevant all the way up to the late 90s.
A glaring omission in this list is the Intel MCS-51 architecture, which - despite being horrible to program for in this day and age - still powers billions of devices worldwide, with 8051 derivatives often being embedded as a housekeeping core into all sorts of specialized ASICs and 8051-compatible microcontrollers being the staple of cheap electronics. It was never meant to compete with general purpose processors such as the Z80, and its awkwardly limited instruction set reflects that, but it still managed to become the de facto CPU architecture for monitors, power tools, RGB lighting, SD cards and whatnot.
Hi, Author here. Apologies, I should have made it clear that this was intended to be a short list of microprocessors only, mostly to leave room for a follow up on microcontrollers like the 8051. Completely agree that the 8051 deserves a place on any list of important 1970s architectures.
If anyone is interested in the results of the poll which looks like a dead heat between the 6800/9 and Z80, then the winner was Motorola. So I'm now deep in research on the 6800, hopefully to result in a post in the near future.
On the 6800/6809: don't forget to look at the Fairlight CMI and the Qasar M8 developed by Tony Furse: the sampling synthesizer that was the sound of the 80s.
This past summer, Tony has been retrieving data from his 8-inch Motorola Exorciser floppies. He encouraged me to work on my Motorola Exorciser emulator; it now emulates a 6809 version of the Exorciser:
Ha. I have a love-hate relationship with the MC6805. On the one hand, I built a lot of devices based on the 68HC05 series. On the other, having to implement a 16-bit division routine on one was what made me swear off assembly language for good.
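For anyone curious why that routine is so painful: with no divide instruction, you implement restoring long division by shifting and subtracting, one quotient bit per iteration, juggling multi-byte values through 8-bit registers the whole way. A minimal Python sketch of the algorithm such assembly has to implement (names are mine, not from any Motorola listing):

```python
def div16(dividend, divisor):
    """Restoring long division for 16-bit unsigned values, one
    quotient bit per iteration, the way an 8-bit CPU with no
    divide instruction has to do it."""
    if divisor == 0:
        raise ZeroDivisionError
    remainder = 0
    quotient = 0
    for i in range(15, -1, -1):
        # Shift the next dividend bit into the remainder.
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        quotient <<= 1
        if remainder >= divisor:
            remainder -= divisor
            quotient |= 1  # this quotient bit is 1
    return quotient, remainder
```

Sixteen iterations of this, with every 16-bit value split across two 8-bit registers, is exactly the kind of thing that sours people on assembly.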
it would be reductive to chalk up the sound of the 80s to any one synth. Especially when the Fairlight was responsible for a number of iconic songs:
http://ghservices.com/gregh/fairligh/topsong/
The Z80 was still being used in embedded designs in the 90s. One company I worked at was still supporting Z80 systems and s/w at least until around '94. Possibly later, as I didn't work on those.
In the UK, I never saw a 6809 used in embedded designs (late 80s - mid 90s), I did see plenty of Z80s and 8051 derivatives.
However, by that period (late 80s on) a lot of those embedded designs were using the NEC V20/V30, as they made for easier systems to program than plain 8-bit systems; plus the price range of the systems I had experience with allowed it.
Motorola's embedded CPUs had strange part numbers that made it difficult to recognize which CPU family a chip belonged to. The 68705 was 6800-derived; the 68302 was 68000-derived.
As for the 6809, it looks like there were no embedded derivatives of it. The 68HC16 seems to be a 16-bit extended 6800, using a similar technique to the way the 65816 extends the 6502.
The NEC V20/V30 not so much, but the 80186 and all its specialized embedded variants from Intel (80186EA/EB/EC) and AMD (Am186EM) were extremely appreciated, as they allowed the use of normal MS-DOS compilers and software.
The Am186EM: we loved that one. A 100-pin PQFP with an unmultiplexed bus, CMOS up to 40 MHz, including a UART, SPI, etc.
Yes, the Am186EM was doubleplus good. I used it in an early interface between a Kodak DC-20 camera (an early digital camera) and IrDA at high speed (1 Mb/s in those days).
Also liked the V20/V30; built a PC-card comm controller with those. You are correct, it was normal to use MS-DOS compilers, although I did have to get a special-purpose debugger for the '186 (loaded as code; the interface was a UART).
With those kinds of products, it's 'annoying' that Intel sort of gave away the embedded space.
(agree: 8051 is high up there in the 'microcontroller' space.)
My first commercial embedded design used a 6809. That was also the last time I ever saw one used in the embedded space! As fun as the 68HC09 was, the 68HC05 and 68HC11 were better choices.
The 6809 was a relative bit player; a nifty chip, but too late in the 8-bit market cycle to get nearly the traction, or ship anywhere near the numbers, its competitors did. By the time it hit the market, if you needed the uptick in performance/features, you usually went to one of the 16-bit chips.
If you're including 'derivatives', then the 68HC11 and related parts (a 6800 with extra stuff) were/are wildly popular in the embedded space and are still produced today by NXP. You can buy them from stock at Digikey.
The 6809 never superseded the 6800 at anything in the market other than being cool. And then the 68000 ate whatever market it might have had left.
Agreed on the 8051. There's probably a dozen within 20' of you right now.
In the list of applications of the 6502 architecture there is one big missing entry: good old phone-line modems using Rockwell chipsets (as found in Hayes-compatible modems) that went up to 56 kbit/s. The central controller of the Rockwell chipset was an embedded MPU based on a 65C02 core, clocked at up to 75 MHz (the fastest I had seen). The fact is relatively unknown, as the NDAs with Rockwell were extremely tight.
Less impressive, but something which was still fun to me when I had an Amiga: At least some Amiga keyboards used a SOC with a 6502 core.
One of my Amigas had 4 CPU families: a 6502 core on the keyboard, a Z80 on the SCSI controller, an x86 on a bridge board (the Amiga 2000 had ISA slots, and one of them was in line with a Zorro slot, so you could get a board that let you run x86 software using a window on the Amiga desktop as the output; I don't remember if the bridge board itself was an 8088 or 8086, but I upgraded it with a 286 accelerator card) and of course the 68000 (+ a 68020 expansion).
The 1541 floppy drive for the C64 used a 6502, almost the same as the C64’s 6510 CPU. Some C64 software co-opted the floppy drive’s 6502 as a coprocessor.
I've seen that one before, but always love seeing this. And particularly love the touch of not using pre-prepared cables but casually unplugging the C64 and cutting apart and splicing the cables with the equipment on...
The 4040 drive for the Commodore business line had TWO 6502-equivalent CPUs. I used to tease a friend who owned one that his floppy drive was smarter than his computer.
> Some C64 software co-opted the floppy drive’s 6502 as a coprocessor.
In that sense, the 8-bit Commodores were a bit like mainframes - one could load a data transform program to run on the floppy drive and run it without bothering the host. I believe it goes a bit further than the Atari family in that.
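For the curious, the mechanism was Commodore DOS's memory commands, sent over the drive's command channel: "M-W" writes bytes into the drive's RAM and "M-E" makes the drive's 6502 jump to an address, so the host can upload a routine and start it running on the drive. A Python sketch of how those command strings are assembled (the framing details here are from memory, so treat this as illustrative):

```python
def mem_write(addr, data):
    """Build a Commodore DOS "M-W" command: write len(data) bytes of
    6502 code/data into drive RAM at addr (little-endian address)."""
    assert len(data) <= 34  # drives only accept small chunks per command (assumed limit)
    return b"M-W" + bytes([addr & 0xFF, addr >> 8, len(data)]) + bytes(data)

def mem_execute(addr):
    """Build an "M-E" command: make the drive's 6502 jump to addr."""
    return b"M-E" + bytes([addr & 0xFF, addr >> 8])

# Upload a tiny routine to $0500 (a drive buffer) and start it.
upload = mem_write(0x0500, [0xA9, 0x00, 0x60])  # LDA #$00 : RTS
start = mem_execute(0x0500)
```

On the real machine those byte strings would be PRINTed to the drive's command channel (secondary address 15); here they just demonstrate the layout.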
Now I got curious: I have the impression the 2000 wouldn't be able to see the ISA bus unless a bridgeboard with an x86 was there. Is that so or could the 2000 see the ISA boards regardless of the bridge board?
That is correct, power only. Even with the bridge board installed the Amiga bus could not talk directly to the ISA side, any communication went through dual port ram on the bridgeboard - signals were in no way multiplexed between the two buses directly.
Such a missed opportunity... It'd have allowed Amigas to easily tap the graphics hardware becoming available to ISA machines. Even a plain (and cheap) bridge board with just the bus interfaces would be a huge enabler that could take the Amiga out of its multimedia (and NTSC-timings) niche.
There is an even bigger missing entry: toys. The Chinese company Sunplus/GeneralPlus has had a near monopoly on toy microcontrollers for over two decades at this point and the vast majority of their lower-end offerings are based on trimmed down 6502 derivatives, usually expanded with dedicated hardware for tasks like audio playback. Notably, the original Furby [1] and many Tamagotchi models [2] use such chips.
(Most of his money is made in licensing the soft core, but I love that he continues to support the hobbyist market by making ridiculously obsolete 40-pin PDIPs!)
These are iconic in the sense of "historical and influential". But I think only the Z80 and 6502 still have a big community still excited about them as 8-bit processors specifically, which I thought was the basis of the piece.
I wonder if to this we should add the Atmel ATmega328P, which is omnipresent in microcontroller projects, not least the Arduino. It's always described as "8-bit", though it has wider addressing than that and is capable of doing 16-bit and 32-bit math. It's a Harvard architecture.
I like the AVR-8 which is a nice bookend to the 8-bit era in a lot of ways. It has more registers than the IBM 360 and also gets great performance with that Harvard architecture avoiding the very complex system of caching and pipelining you see in modern computers to fight latency.
Unlike many of those 8-bit computers, the AVR-8 doesn't have any 16-bit arithmetic beyond the increment/decrement addressing modes that use the X, Y and Z register pairs, unless you count the 8x8 MUL instruction that returns a 16-bit result, or the MOVW instruction, which copies a register pair rather than a single register.
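So wider arithmetic is synthesized byte by byte, propagating the carry (an ADD on the low bytes followed by an ADC on the high bytes in the actual instruction set). A Python sketch of the byte-wise 16-bit add that such code performs (illustrative, not AVR syntax):

```python
def add16_bytewise(a_lo, a_hi, b_lo, b_hi):
    """A 16-bit add built from 8-bit operations, the way an AVR does
    it: ADD on the low bytes, then ADC on the high bytes plus carry."""
    lo = a_lo + b_lo
    carry = lo >> 8          # carry out of the low byte
    lo &= 0xFF
    hi = (a_hi + b_hi + carry) & 0xFF
    return lo, hi

# 0x01FF + 0x0001 = 0x0200: the carry ripples into the high byte.
lo, hi = add16_bytewise(0xFF, 0x01, 0x01, 0x00)
```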
Today I would make the distinction between those that are readily end-user programmable (microprocessors) and those that aren't (microcontrollers).
Certainly many in this article were originally designed for what would now be microcontroller applications, but the fact that they couldn't yet include on-chip ROM let hobbyists repurpose them into general-purpose microcomputers.
I think the distinction then was more like non-existent. Most chips then didn't have on-board peripherals (for some value of "then"), so a CPU-intended-to-be-a-microcontroller and a CPU-intended-to-be-a-microprocessor looked very similar.
Now we've added things to microcontrollers (serial ports, GPIO pins, timers, etc.) and other things to microprocessors (FPUs and memory managers). But back in the day, was a Z80 a microcontroller or a microprocessor? Yes.
I have always assumed the distinction was whether it had its own on-board memory or talked over the bus to external memory. That's a kind of who-cares thing in my mind.
The Game Boy's CPU is not a Z80; it's a separate 8080-descendant sibling.
The NES used an essentially unmodified 6502 core; the only change was that the binary-coded decimal mode was hardwired off (to avoid infringing on the one patent that MOS filed; chip design was not copyrightable at the time).
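For context on what was disabled: with the D flag set, a stock 6502's ADC treats each nibble of a byte as a decimal digit, adjusting any nibble sum over 9. A rough Python sketch of that decimal-mode add for valid packed-BCD inputs (simplified; the real 6502's flag behavior in decimal mode is famously quirky):

```python
def bcd_adc(a, b, carry_in=0):
    """Decimal-mode add of two packed-BCD bytes, as the 6502's ADC
    behaves with the D flag set: any nibble sum over 9 is adjusted
    by adding 6, carrying into the next digit."""
    lo = (a & 0x0F) + (b & 0x0F) + carry_in
    if lo > 9:
        lo += 6                      # decimal-adjust the low digit
    hi = (a >> 4) + (b >> 4) + (lo >> 4)
    if hi > 9:
        hi += 6                      # decimal-adjust the high digit
    return ((hi & 0x0F) << 4) | (lo & 0x0F), hi > 0x0F

# 0x19 + 0x28 = 0x47 in BCD (i.e. 19 + 28 = 47 in decimal)
result, carry = bcd_adc(0x19, 0x28)
```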
The 6809 was the last of the ‘classic’ 8-bit microprocessors, used in home and small business computers of the 1970s and early 1980s. It was backwards compatible with the 6800 but added more instructions.
This isn't quite correct. The 6809 was backwards compatible with the 6800 only at the assembly source code level, not the opcode level. That is, I believe you could run your 6800 code through a 6809 assembler and probably get a valid program, but 6800 binaries would not run; the opcode encoding was different.
Also does a bit of a disservice to the 6809 to just call it a successor to the 6800, as it is in many ways the most "deluxe" of all the 8 bit processors. It wasn't just more instructions, but more/wider registers, as well.
I still have a Motorola 6809 TRM that has artwork right on the cover about how the 6809 was a "bridge" processor to transition you from the 6800 to the 68000.
Kind of weird though: the 6809 was developed concurrently with the 68k, by an entirely different team at Motorola, and its instruction set looked nothing like the 68000's. The only things they had in common were the "68" prefix and big-endianness.
I guess there was also the ability to interface the 68k with 6800/6809 support chips on the bus, though.
Substack could be the Medium replacement we all seek, but this pop up proves it isn't that. It should "just" add the publisher to a sidebar. I don't pay attention to my email either. This just creates more Internet Exhaust.
I loved the 6809: more addressing modes than you could shake a stick at. But most of the assembler code I wrote in the 1980s was for the Z80 (which I also liked) and the 6502 (which I didn't).
Stupid question: what's so special about the 8 in 8-bit CPUs?
> Simplicity
>
> With some practice, you could keep the whole of an 8-bit processor’s instruction set in your head.
Is it just a matter of replacing all instructions/chips/buses to be 32 bit/lane? Or is it just that in practice the 32 bit CPUs have more complex ISAs?
The reason I ask is that I'm learning RV32I, and writing a simple implementation, and wondering if there's any additional didactic value in me learning 8-bit CPUs.
My take: in the real world, code deals with 8-bit byte values all the time. So an 8-bit CPU is sort of the minimum "practical" real-world architecture for getting things done. It's interesting from that POV.
Also because of the above, 16/32/64-bit ISAs have to support dealing with bytes, so they end up either including instructions to explicitly deal with them, or constantly having to mask the upper bits of values/registers.
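For instance, loading a single byte into a wide register means either clearing or sign-filling the upper bits; RV32I has both LB (sign-extending) and LBU (zero-extending) for exactly this reason. A Python sketch of what those two loads do to the register (illustrative):

```python
def zero_extend8(byte):
    """LBU-style byte load: the upper 24 bits of the register are cleared."""
    return byte & 0xFF

def sign_extend8(byte):
    """LB-style byte load: bit 7 is copied into the upper 24 bits
    (result kept to 32 bits)."""
    b = byte & 0xFF
    return ((b ^ 0x80) - 0x80) & 0xFFFFFFFF
```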
Unfortunately, 8-bit CPUs were also almost always tied to 16-bit address buses. Back when memory was super costly, this was fine. But later on it became their biggest limitation, and all sorts of awkward paging/segmenting schemes were bolted on top to make them work with memory sizes greater than 64K.
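A toy Python model of that paging trick: a bank register points a fixed 16K window of the 16-bit address space at one of several physical pages, so the CPU can reach more RAM than its bus can name (the window location and sizes here are made up for illustration):

```python
class BankedMemory:
    """A 16-bit CPU bus (64K) fronting more RAM than it can address:
    0x0000-0xBFFF is fixed RAM, while 0xC000-0xFFFF is a 16K window
    that a bank register points at one of 16 pages (256K paged)."""
    PAGE = 0x4000

    def __init__(self, pages=16):
        self.fixed = bytearray(0xC000)
        self.pages = [bytearray(self.PAGE) for _ in range(pages)]
        self.bank = 0  # which page the window currently shows

    def read(self, addr):
        if addr < 0xC000:
            return self.fixed[addr]
        return self.pages[self.bank][addr - 0xC000]

    def write(self, addr, value):
        if addr < 0xC000:
            self.fixed[addr] = value
        else:
            self.pages[self.bank][addr - 0xC000] = value
```

The awkwardness is visible even in the sketch: the same CPU address reads different bytes depending on a latch the programmer has to track by hand.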
I sometimes wonder how things would have gone if we'd settled on having "bytes" be 12-bits (like the PDP-8) or something instead, with the first gen of address buses being 24 bits. That would have made the first generation of home computers have a lot more longevity.
Hell, I believe the PDP-11 only had 16-bit registers, imagine if we'd just started our journey that way.
One of my "some day", "probably way more effort than I want to spend" projects would be to create a 40-ish-pin DIP module that contains an RV32 core and has a 6502-like pinout, so you can build small homebrew computers that use the RV32 ISA. There are a ton of compromises that would have to be made, but that's part of the fun.
Most of the small RV32 microcontrollers have a low pin count and lots of on-board devices, and it's not as straightforward as gluing together a CPU, RAM, ROM, and some I/O devices on a breadboard.
Basically, an answer to the question "how would a breadboard computer built in 1977 look if the RV32 ISA had existed?"
8-bit CPUs have 8 data pins (D0-D7) and anything going in or out of the CPU is doing so 8 bits at a time. This includes all external accesses such as RAM, ROM, and I/O.
But 8-bit CPUs have more than 8 address lines, because 256 bytes total for combined RAM, ROM and I/O space is not useful. That number, I think, is typically 16, although the Signetics 2650 had only 12 (with the instruction set only supporting 12-bit addresses), and the Atari 6507 (a 6502 derivative) had 13 (the instruction set still supports 16-bit addresses, but the upper 3 bits of addresses were basically ignored).
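The effect of those missing pins is that memory "mirrors": on a 6507 the top three address bits simply aren't connected, so external hardware sees every 16-bit address modulo 8K. A Python sketch of what the bus sees (illustrative):

```python
ADDRESS_PINS = 13  # the 6507 brings out A0-A12 only

def bus_address(cpu_address):
    """What external hardware sees: the CPU's 16-bit address with the
    top three bits not connected, so memory mirrors every 8K."""
    return cpu_address & ((1 << ADDRESS_PINS) - 1)
```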
> Is it just a matter of replacing all instructions/chips/buses to be 32 bit/lane?
Depends on the 8-bit CPU really.
- The Z80 lets you combine specific register pairs to work with 16 bits and address memory through them.
- The 6502 does not, but has the whole "zero page" thing where the first 256 bytes of RAM can contain 16-bit data and pointers.
- Both the Z80 and 6502 have a stack pointer register (the 6502's being 8-bit and fixed to point to RAM locations 256-511, i.e. $0100-$01FF). But the 2650 had an internal 8-byte stack and stack pointer.
- The 8051 (and the 8048, I think) has lots of instructions for manipulating individual bits in registers and RAM, and also separates the memory that opcodes are fetched from and the data memory. None of the above work like that (the F8 might).
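The Z80 register-pair trick in the first bullet is just byte concatenation: H is the high byte of HL and L the low byte, and 16-bit operations read and write both at once. A Python sketch of that pairing (illustrative):

```python
class PairRegs:
    """Two 8-bit registers exposed together as a 16-bit pair,
    Z80-style: H is the high byte of HL, L the low byte."""
    def __init__(self):
        self.h = 0
        self.l = 0

    @property
    def hl(self):
        return (self.h << 8) | self.l

    @hl.setter
    def hl(self, value):
        self.h = (value >> 8) & 0xFF
        self.l = value & 0xFF

r = PairRegs()
r.hl = 0x4000   # point the pair at an address
r.l += 1        # the 8-bit halves remain individually accessible
```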
I recall the Signetics 2650 as having 15-bit internal addressing with the 16th bit used for an indirect addressing mode. But yes, not all 15 bits being routed out to pins it had pretty limited memory potential.
I think it's the combination of those cheap CPUs with 80s home computer hardware, where the hardware was designed as a "game engine API": you just needed a handful of instructions to define a sprite and move it around on screen, without any library or driver in between. And those systems were hard real-time. All timing was predictable down to the clock cycle, which allowed you to synchronize rendering code with the video beam by counting cycles.
These are things that are no longer possible on modern computers (but in return we gained a lot of performance by decoupling hardware components).
* "Graphics and sound weren’t hidden behind ‘APIs’."
* Programs were written in 8 bit assembly
* Generally, the 8 bit CPU was considered a complete package, rather than just one small piece as RV32I is.
Those sound nice to have for didactic purposes. 32-bit could do that, but the RISC-V ecosystem/community seems keener on integration with the wider world than on keeping a small, complete system.
I'll check out some 8 bit ecosystems/communities. Video game space seems active? Or maybe there's a RV32I community for writing retro style games, with some simple graphics support?
I started an FPGA hobby project a few years ago (Before Covid[tm]) that tied a PicoRV32 core to my custom hand-made 80s-style "video chip" (a VDP type) + direct access to SRAM, with the hopes of making something like a "Retro-V": boot straight to a BASIC (or Lua or something) prompt, etc. I had it supporting text generation, and most of the stuff to do simple tiles/sprites, and booting into a little custom "kernel."
I stalled once I started trying to integrate with the SD Card on my dev board. It got un-fun and then I got distracted by the apocalypse.
But I still think it's a neat idea, to tie RISC-V to that kind of "instant on" hobbyist architecture.
EDIT: looks like this "IceStation" project ended up doing mostly what I was intending, and started not long after me, but actually shipped something. Mine was targeting a Xilinx Artix-7 board, tho.
Surprisingly, in the list on that page OKI isn't even mentioned.
I had a small OKI databook once with a variety of 4-bit CPUs in it (most if not all CMOS, IIRC). Obviously geared towards deeply embedded uses like toys and sensors: basically, battery-powered uses where speed isn't important but every µA counts.
Although it's possible those OKI parts were just second sources of ones in the WP list.
And bit-slicing microprocessors, where you could build up your register width in multiples of 4 bits just by running several of them in parallel: https://en.wikipedia.org/wiki/AMD_Am2900
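The bit-slice idea in code: each 4-bit slice adds its nibble and passes a carry to the next slice, so chaining N slices gives you a 4N-bit ALU. A Python sketch (illustrative; the real Am2901 ALU does far more than add):

```python
def slice_add(a_nibble, b_nibble, carry_in):
    """One 4-bit ALU slice: add two nibbles plus carry-in, producing
    a nibble result and a carry-out for the next slice up."""
    total = a_nibble + b_nibble + carry_in
    return total & 0xF, total >> 4

def wide_add(a, b, slices=4):
    """Chain `slices` 4-bit slices into a 4*slices-bit adder."""
    result, carry = 0, 0
    for i in range(slices):
        nib, carry = slice_add((a >> 4 * i) & 0xF, (b >> 4 * i) & 0xF, carry)
        result |= nib << 4 * i
    return result, carry

# Four slices make a 16-bit adder; eight would make a 32-bit one.
total, carry_out = wide_add(0x1234, 0x0FFF)
```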
And the empty upper right corner was a fabrication test structure; it's missing in the simulation because it was completely isolated from the rest of the chip. You can see it in the die shots here: