RISC vs. CISC
RISC has been the neat new thing for eons now. RISC stands for Reduced Instruction Set Computer -- the reduced instruction set makes it easier to make processors run fast, since less logic needs to be baked into the silicon than a CISC, or Complex Instruction Set Computer, requires.

In the very beginning there were really only RISC computers, but they weren't called RISC at the time. They were just very simple computers. The reduction was part and parcel of the tiny amount of logic that was even available to be used. Shortly after that there grew to be computers with pretty complex instruction sets -- all you have to do is look at the DEC VAX series and the IBM System/3x0 mainframes to see it play out. My favorite example is that the VAX had an instruction for polynomial evaluation. Crazy.

Then there was the lowly Intel 8086. It began life mostly in industrial-controller roles, competing with the Zilog Z80 and the Motorola 68xx series. That isn't really surprising, since in the mid-70s the idea of a personal computer was a foreign concept. Each of these processors was a simple CISC design. Due to limitations in memory they had variable-length instructions (to save precious bytes), and much of the internals ran on microcode -- which is what the core processor actually executed. The in-memory instructions merely controlled which microcoded operations ran internally.

The disconnect between the in-memory instructions and the microcode is really the dividing line between RISC and CISC. RISC got rid of the microcode and ran the instructions directly. Two of the most famous early examples are the MIPS processors and the DEC Alpha. They shed the added burden of running microcode, simplified the decode step, and executed programs directly. Memory became cheap, relatively speaking, and the added size of the programs was no longer an issue.

RISC was the next big thing.

Yet, excepting the ARM processor, we're presently in a CISC world. The Intel architecture, which started with the 808x, went on to become the leading "real" processor. It grew to 32 bits with the 80386 and moved to 64 bits with the help of AMD.

Why did it win?

My theory is that the abstraction is really what made it win. When you have a compact and expressive instruction set, like you get with CISC, you are writing the equivalent of pseudocode for the processor. As processors grow and change, the code you write gets translated differently. The state of the art can evolve without the outside world being involved.

Contrast that with one of the famous failures of RISC -- the Intel Itanium processor. This was RISC taken to the n-th degree.

The concept was beautiful. The problem is that once the instruction set is laid out, it's fixed at that point in time. All of the compromises are baked in forever. As the Itanium architecture matured, it went from having effectively no decode stage to growing back decode and instruction-scheduling stages. All of the advantages you get from RISC are thrown away after a generation or two of development.

Another advantage is that with CISC you really don't need to re-optimize between generations of processors. You can get some middling gains, but they're typically not stunning. With RISC, oftentimes you would need to recompile to really get the advantages.

CISC simply embraces the warts and moves on. It's not perfect, but it's adaptable. Sometimes adaptable is what it takes to win.

The market tends to do a reasonable job of deciding winners. And now, for the big processors at least, CISC has handily won.
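To make the microcode divide concrete, here's a toy sketch in C. Nothing in it corresponds to a real ISA; the POLY-style instruction, the micro-op names, and the expansion are all made up for illustration. The "CISC" path takes one complex polynomial-evaluate instruction and expands it inside the core into a stream of simple micro-ops; the "RISC" path is just those simple operations written out directly by the compiler.

/* Toy illustration only -- not any real ISA or microcode format. */
#include <stdio.h>

/* The simple operations the core actually knows how to execute. */
typedef enum { UOP_LOAD_COEFF, UOP_MUL_X, UOP_ADD, UOP_DONE } uop;

/* "CISC with microcode": one complex POLY-style instruction (evaluate a
 * polynomial via Horner's rule) is expanded into micro-ops and then run. */
static double cisc_poly(const double *coeff, int degree, double x) {
    double acc = 0.0;
    for (int i = degree; i >= 0; i--) {
        /* micro-op sequence the decoder would emit for each Horner step */
        uop step[] = { UOP_MUL_X, UOP_LOAD_COEFF, UOP_ADD, UOP_DONE };
        double loaded = 0.0;
        for (int j = 0; step[j] != UOP_DONE; j++) {
            switch (step[j]) {
            case UOP_MUL_X:      acc *= x;          break;
            case UOP_LOAD_COEFF: loaded = coeff[i]; break;
            case UOP_ADD:        acc += loaded;     break;
            default:             break;
            }
        }
    }
    return acc;
}

/* "RISC": the program itself is already the simple loads, multiplies, and
 * adds; there is no hidden expansion step inside the chip. */
static double risc_poly(const double *coeff, int degree, double x) {
    double acc = 0.0;
    for (int i = degree; i >= 0; i--)
        acc = acc * x + coeff[i];   /* same Horner step, written out directly */
    return acc;
}

int main(void) {
    /* 3x^2 + 2x + 1 at x = 5: both paths print 86. */
    double coeff[] = { 1.0, 2.0, 3.0 };   /* coeff[i] multiplies x^i */
    printf("cisc: %g\n", cisc_poly(coeff, 2, 5.0));
    printf("risc: %g\n", risc_poly(coeff, 2, 5.0));
    return 0;
}

The arithmetic is identical either way; the only question is whether the expansion into simple operations happens inside the chip, where it can change from generation to generation, or in the compiled program, where it's frozen until you recompile.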