The Central Processing Unit (CPU or processor) is essentially the brain of a computer.
Overview

Overall, most central processing units contain:

A control unit (CU), which fetches and decodes instructions and directs the other components.
An arithmetic logic unit (ALU), which performs arithmetic and logical operations.
Registers, a small amount of very fast storage for the data currently being worked on.
A Brief History

Early programmable computers were built with individual components, starting with vacuum tubes until discrete transistors were invented in the late 1950s. It wasn't until 1971 that Intel developed the first fully integrated (single-chip) microprocessor, the Intel 4004. During the 1980s, microprocessor companies found more and more things to integrate into the same package, until it evolved into the processor that exists today. Extensions of this include the microcontroller, which contains a CPU core with many peripherals, such as timers, analog sampling, serial communication, and general-purpose IO, to drive small embedded devices. The microcontroller later evolved into the System-on-a-Chip (SoC), which can be considered a self-contained computer. These power a lot of today's portable electronics. See the analysis page for more in-depth information.
Some famous CPUs over the years:
Vacuum tubes (1900s-1910s): The first electronic components.
Discrete circuits (1950s): Transistors, diodes (both previously vacuum tubes, now much smaller solid-state components), resistors, and capacitors wired together to build logic modules.
Logic chips (1960s): The first chips were simple one-operation logic modules, like digital LEGO bricks.
Intel 4004 (1971): The first complete single-chip CPU, integrating both the CU and the ALU.
Intel 8008 (1972): First 8-bit CPU.
Signetics 2650 (1973): An early, minicomputer-like CPU.
Intel 8080 (1974): First truly usable CPU, and the first to be used in hobbyist microcomputers, the ancestors of today's PCs.
Motorola 6800 (1974): First Motorola CPU.
MOS 6502 (1975): First cheap CPU. An improved Motorola 6800. The standard CPU for 8-bit videogame consoles and early Western home computers.
Zilog Z80 (1976): An improved Intel 8080. The main competitor to the MOS 6502, and more popular in Eastern countries.
Intel 8085 (1977): An improved 8080. While it, along with the 8080, was eclipsed by the Z80, it enjoyed success in industrial applications, including a radiation-hardened version that NASA used in the '90s.
Motorola 6809 (1978): Similar to the 6800, but with some 16-bit registers.
Intel 8086 (1978): The 16-bit evolution of the 8080. First in Intel's long line of x86 processors.
Motorola 68000 (1979): First in Motorola's 68k family.
Intel 80186 (1981): A slightly upgraded version of the 8086 (with an 8-bit version, the 80188), it integrated a lot of the external chips used with the 8086. It wasn't any faster per clock than its predecessor, meaning that most PC makers skipped over it and went straight to the 80286, but it saw wide use in embedded devices due to the integration of the other chips.
Intel iAPX 432 (1981): Intel's first 32-bit processor, and an extremely ambitious object-oriented design. Unfortunately, it was far too ambitious for the manufacturing technologies available at the time, resulting in the CPU part alone having to be split into three chips, and forcing the system to be clocked relatively low in order to keep the three parts in sync. This, along with the architecture's poorly-optimized compiler, resulted in performance around half that of the 8086, meaning that it was a failure pretty much out of the gate.
Intel 80286 (1982): The x86 processor that introduced Protected Mode, thanks to an integrated memory management unit (MMU), as well as memory addressing beyond 1MB. Its performance was double that of its predecessor. However, its design was somewhat rushed (due to Intel putting most of its resources toward the i432 "micro-mainframe"), and it had a number of quirks that had to be worked around for the sake of 8088 compatibility, particularly the "A20 line" issue (which was later put to good use on AT-class machines as the "high memory area"). Also, there was no way to return to real mode from protected mode, as the mindset was "Why would you want that?". The officially intended way back, if you really wanted it, was a command to the keyboard controller that reset the processor. However, Microsoft and others found it was much faster and cleaner to intentionally crash the CPU. This method is still used by some operating systems as a last-resort way to reset the computer.
Motorola 68020 (1984): A fully 32-bit 68000.
NEC V30 (1984): A pin-compatible clone of the Intel 8086. At the same clock speed it could run code somewhat faster due to improved internal logic. It also had a few extra features, including one that allowed it to emulate the Intel 8080.
Western Design Center 65816 (1984): A 16-bit derivative of the MOS 6502.
Berkeley RISC (1984): The first processor built around the RISC design philosophy, developed as a government-funded project in the early '80s. It was commercialized soon afterwards as Sun's SPARC processor. When word spread that RISC was very powerful (the second iteration outperformed Motorola's 68000 by anywhere from 140% to 400%), many companies followed suit and built their own RISC chips, among them Intel's i960, AMD's 29000, DEC's Alpha, Motorola's 88000, and the PowerPC.
Intel 80386 (1985): The first 32-bit x86 processor, the 80386 also fixed several of the 80286's deficiencies; it could switch from protected mode to real mode without intentionally crashing the machine, and it supported 32-bit segments, meaning that the 80286's rather odd segmenting model could be avoided almost entirely. The 80386 also added "virtual 8086" mode, a way to run 16-bit code and 32-bit code simultaneously while in protected mode, which paved the way for Windows 3.x/9x's and 32-bit Windows NT's DOS box support. Came in two variants: the more powerful DX version (the original design), and the lower cost SX version with a 16-bit data bus (most motherboards at the time were 16-bit).
Intel i960 (1985): An early Intel RISC CPU; mostly used as an embedded microcontroller and in military applications, and never promoted for general use.
MIPS R2000 (1985): First in the MIPS family of RISC CPUs, spun off from a Stanford University project to develop a RISC processor at the same time Berkeley was developing theirs.
Acorn ARM2 (1986): First in the ARM family of RISC CPUs, the most produced CPU family in history.
Motorola 68030 (1987): A 68020 with an integrated memory controller.
MIPS R3000 (1988)
Intel 80486/i486 (1989): The first x86 processor with built-in level-1 cache and a built-in Floating Point Unit (except in the SX variant, which was a 486 with the FPU disabled, initially because it was faulty). It introduced Pipelining to the x86 line, and was one of the first CPUs to use a clock multiplier (in the DX2 and DX4 models). Due to its performance in games (notably thanks to the FPU), it was in high demand in the early '90s.
Intel i860 (1989): Introduced along with the 80486, this was Intel's second attempt at a from-scratch RISC architecture. While promising in theory, it suffered from pipeline issues that hurt performance. Its floating-point performance, however, was good enough for it to be used in several high-performance computing projects, and features it implemented influenced later Intel designs. Allegedly, its codename (N-Ten) was where the name Windows NT came from.
NEC V60 (late 1980s): The first 32-bit general-purpose microprocessor mass-produced in Japan. Unlike NEC's earlier V20 and V30 processors, it does not use the x86 architecture.
Motorola 68040 (1990): Motorola's first 680x0 CPU with a built-in FPU. Faster than the i486 clock-per-clock, but ran notoriously hot (and thus was among the first desktop CPUs to require a heat sink). Came in several variants: the EC version (which dropped the FPU and MMU) used in Embedded Systems, and the low cost LC version (which dropped just the FPU; however this proved to be its undoing as buggy software searching for the FPU would crash the system).
MIPS R4000 (1991): First 64-bit RISC microprocessor.
Advanced RISC Machines ARM6 (1991)
Apple-IBM-Motorola PowerPC 601 & 603 (1992): The first two major forms of the PowerPC family. The 603 corrected most of the 601's flaws, and its revised incarnation, the 603e, fixed the remainder. It still lacked official support for multiprocessing, though that didn't stop some resourceful designers.
DEC Alpha (1992): The first famous 64-bit CPU, and the champion of Clock Speed and floating-point performance throughout the 1990s. Was killed by Compaq after the DEC merger in favor of HP and Intel's then-upcoming Itanium CPU.
Intel Pentium (1993): The first superscalar x86 processor, with an integrated cache controller and a 64-bit data bus for faster memory access. Later models introduced the MMX instruction set.
Hitachi SH-2 (1993): Another low-cost RISC CPU. While its notable claim to fame is being used in video game systems (namely the Sega 32X and Saturn), it often found use in home entertainment electronics and engine control systems.
Advanced RISC Machines ARM7TDMI (1994): Gained fame as one of the most widely used embedded applications processors.
Motorola 68060 (1994): Motorola's last 680x0 CPU before venturing full time into the PowerPC family. Was actually more famously used in TV Character Generator systems. Like its predecessor the '040 (there was never a 68050), it came in EC and LC variants. Notably, Apple skipped this CPU entirely and went straight to the PowerPC.
NEC V810 (1994): Part of the V800 series of RISC CPUs.
MIPS R4300i (1995)
Intel Pentium Pro (1995): Internally a RISC-style core, with a front end that translated x86 CISC instructions into simpler micro-operations. Optimized for fully 32-bit OSes such as Windows NT and UNIX, where it was an excellent performer, but it failed in the desktop market due to its high production cost and lackluster 16-bit performance under Windows 3.x and 95.
Cyrix 6x86 (1995): The first non-cloned x86 processor to pose a serious threat to Intel. Noted for its low price and excellent integer performance; however, its floating-point performance was lackluster, which became a problem as more games started to make use of FP code. It sold very well for its first two years, allowing Cyrix to take second place in the CPU market for a while. Unfortunately, Cyrix failed to significantly update the design (other than a relatively small refresh with the 6x86MX in 1997), meaning that it soon got left in the wake of the Celeron and K6-2 as the decade went on, and Cyrix dropped out of the market in 2000.
Hitachi SH-4 (1997): An evolution to the SuperH family of processors. It was designed for high-performance use.
AIM PowerPC 740/750 (1997), aka PowerPC G3: An evolutionary derivative of the PowerPC 603e, the 740 was completely pin-compatible with the 603e and was therefore available from some after-market vendors as a drop-in upgrade. The major improvement that the 740 and 750 both had over the 603e was the addition of the PowerPC 604ev "Mach V" chip's extensive dynamic branch-prediction logic. However, because the G3 was based on the 603ev, it lacked the PowerPC 604's support for multiprocessing.
AMD K6 (1997): AMD's first real challenge to Intel since the 486 days, and the beginning of its run as a Worthy Opponent. Eventually expanded into the K6-2 with "3DNow!" (floating-point SIMD) capability to make up for its somewhat weak standard FPU, and the rare but fast K6-3 with built-in level-2 cache.
Cyrix MediaGX (1997): This featured the same CPU core as the 6x86, but added graphics, sound, memory and PCI controllers onto the very chip itself. While it was ahead of its time in many aspects, it had the bad luck of being launched when the 3D accelerator revolution was taking place, and the combination of a rather basic 2D graphics controller and the CPU's uninspiring 3D gaming performance meant it only found any usage in laptops and bargain-basement PCs. In retrospect, this chip may have been Cyrix's shining achievement; while the rest of their CPU line was purchased by Via for a relatively low price, this one actually got purchased by AMD, who continue to sell it to this day under the "Geode" name.
Intel Pentium II (1997) and Pentium III (1999): Improved versions of the Pentium Pro, with the 16-bit speed problems fixed and a new, easier-to-make cartridge design. The Pentium III added "SSE" instructions for floating-point DSP work (following AMD's lead with "3DNow!"), and spawned a minor controversy over the use of embedded serial numbers that Intel eventually dropped. Later versions of the III were available in much smaller "flip-chip" packages, which were easier to install and cool. The Pentium II was made especially famous by what was probably Intel's craziest publicity gimmick, the "Dancing Bunny Suit" advertisements. This generation also saw the introduction of the Xeon line of server processors, which added multi-processor abilities, usually along with more cache (and later, more cores).
Intel Celeron (1998): A cheaper version of the Pentium II (and later the III). The initial version flopped, as it performed worse than the original Pentium, but the second version pioneered the use of on-die level-2 cache (in the Pentium Pro and II it had been on the package but off-die), meaning it performed as well as its more expensive brethren and became known as the Hypercompetent Sidekick of the CPU world. The line continues to this day, usually as heavily stripped-down counterparts of Intel's higher-end chips.
AMD Athlon & Athlon XP (1999): The processor that finally bested Intel, with much-improved floating-point performance over the K6 and a system bus design borrowed from the DEC Alpha to avoid legal issues with Intel. Later versions of the Athlon included the XP, the MP (dual-processor capable), and the value-priced Sempron. It was also the first processor to challenge the notion that clock speed equalled performance; so much so that AMD labeled its later processors with "PR numbers", model ratings that supposedly indicated which Pentium 4 clock speed each chip matched.
Sony Emotion Engine (1999): Based on the MIPS ISA; the CPU at the heart of the PlayStation 2.
Motorola PowerPC 7400/7410/744X/745X (1999): Motorola's solo venture in the PowerPC family. With IBM lagging on the design of a successor, Apple went with Motorola's design. It added multiprocessing support and a new feature Motorola branded AltiVec, the answer to Intel's SSE and AMD's 3DNow!. It was notably marketed by Apple as supposedly performing at 1 GFLOPS and being banned from export as a "weapons grade supercomputer". Indeed, in one demonstration that Apple referred to as "Debunking the Megahertz Myth", a side-by-side comparison ran the same Adobe Photoshop operation on an 867 MHz Power Mac G4 and on a 1.7 GHz Pentium 4 PC, with the G4 finishing faster.
Intel Pentium 4 Willamette and Northwood (2000-2002): Intel's first major blunder. The processor was marketed on its Clock Speed alone, and the first generation, Willamette, performed on par with, or worse than, its predecessors. The high clock speed also made the processor run ridiculously hot. Intel's under-the-table deals kept AMD from winning overnight, however, later causing Intel massive legal backlash. Things improved with the second generation, Northwood. Along with much faster bus and memory speeds, it introduced HyperThreading, which simulated a Multi Core Processor. This combination let it compete well against the Athlon XP and even the early Athlon 64s. Another small introduction in the Pentium 4 line was the Integrated Heat Spreader, a small metal cap that supposedly widened the heat dissipation area, though it was more appreciated by enthusiasts for making it much harder to accidentally damage the processor while installing the heatsink. Later in this generation Intel also introduced the first enthusiast-oriented desktop CPU, the Pentium 4 Extreme Edition, a repackaged Xeon server CPU with more cache.
AMD Duron (2000): AMD's response to Intel's Celeron. Essentially a low cost Athlon.
Intel Itanium (2001): Intel's second major blunder. On paper it looked good, with 64-bit technology, massive floating-point performance, the ability to access ludicrous amounts of memory, and backward compatibility with existing 32-bit code. In practice, however, the first version barely equaled a similarly-clocked Pentium III in native mode, and couldn't even convincingly outperform an 80486 in 32-bit mode. Subsequent editions were more successful, but it remained a very niche product due to its high price and the difficulty of generating optimal code for the architecture, and the Itanium gradually sank into irrelevancy as multi-core x86-64 chips equaled and then surpassed its performance at a much lower price point.
Transmeta Crusoe (2001): Perhaps more noted for the years of hype that surrounded its introduction, but noteworthy for being the first CPU designed from the ground up for laptops. Rather than the hybrid CISC/RISC design used by contemporary x86 processors, the Crusoe used a Very Long Instruction Word (VLIW) design and relied on software to handle the instruction translation. Transmeta claimed that when their hardware and software became mature enough, they would be able to produce CPUs that were cheaper, much faster and much more power efficient than anything their rivals could make. Unfortunately, they never got the chance; after the Crusoe got a decidedly mixed reaction (it indeed had a very low power draw, but also pretty low performance, which is a problem when you're sitting around waiting for your laptop to do stuff on limited battery power), Intel, AMD and newcomers Via quickly moved into the "ultraportable laptop" market which the Crusoe had helped solidify, and Transmeta left the business in 2005.
Via C3 (2001): Via's debut in the CPU market, and a somewhat more successful implementation of what Transmeta had tried with the Crusoe. Initially this was branded the Cyrix III, but the name changed after troubles in the development process. Initially it was just another budget CPU design, but Via quickly emphasised its low power usage, creating very small CPU packages and the still widely-used Mini ITX motherboard form factor to allow PCs to be built in the kind of space that had not previously been possible.
IBM PowerPC 970 (G5) (2003): The first 64-bit-capable PowerPC CPU intended for desktop use, the PowerPC 970 was announced by IBM in 2002. It was a cut-down, single-core version of IBM's POWER4 server CPU, with PowerPC compatibility and Motorola's AltiVec instructions added. Dubbed the "G5" by Apple, it first appeared in the Power Mac G5 in 2003, with single and dual-processor versions available. Apple was never quite pleased with the G5. While it was a powerful CPU, it was also a power hog and required extreme cooling measures because it ran so hot; some machines even shipped with water cooling stock, something that had never been needed before on a consumer-level machine. Apple lobbied IBM and Motorola to make a low-power version of the G5, something that would be suitable for the PowerBook, but neither was interested, forcing Apple to consider ARM and Intel. They went with Intel, and history was made; see the Core entry for more.
Intel Pentium M (2003): Intel's Ensemble Darkhorse during the age of the Pentium 4. This processor was designed from the ground up to be used in laptops, rather than being a scaled-down desktop part. Superficially it looks like a Pentium III with the Pentium 4's system bus and a huge cache bolted on, but every aspect of the chip was carefully hand-tuned to provide the best possible balance of performance and energy efficiency. As a result, it outperformed everything on the market in terms of performance-per-watt on release, and, humiliatingly for Intel, it often outperformed the "Prescott" Pentium 4 models even at stock speeds. This led to companies producing desktop Pentium M boards for enthusiasts who wanted to get in on the act. The Pentium M would subsequently form the basis for the Core 2 line which eventually took the market back for Intel, and many believe that had Intel not had the Pentium M in development when they did, they would probably have never regained the performance lead.
AMD Athlon 64 (2003): Introduced the x86-64 (also known as AMD64 and Intel 64) instruction set, giving the x86 64-bit capabilities while remaining backwards compatible with 32-bit x86 programs. Also the first x86 processor to have the memory controller on-die, making memory access faster than going through the system bus. Noted for being by far the most successful AMD processor, and probably remembered more fondly by enthusiasts than any other CPU, ever. The impact of its server counterpart, the Opteron, was even more keenly felt (64-bit addressing being more useful there), and AMD dominated that market for most of the decade.
Intel Pentium 4 Prescott and Cedar Mill (2004): Intel's third major blunder, and widely regarded as their worst x86 processor. The design was an evolution of the previous generations, lengthening the pipeline with even more stages. However, this only made the heat and power problems of the Pentium 4 worse while lowering per-clock performance, and caused Intel to cancel another generation (Tejas) which was intended to reach even higher clock speeds. This was the final blow to the Pentium brand, and Intel reduced it to a budget chip (just above the Celeron). The only significant changes elsewhere were the introduction of the Land Grid Array socket into Intel's lineup and the addition of x86-64 in later versions. Most of the major issues were fixed in the core's second version, Cedar Mill, though that didn't last long on the market before Intel released its Core 2 line.
AMD Sempron (2004): The successor to the AMD Duron. Initially these were severely crippled Athlon 64s, with only one memory channel and no 64-bit support, but after the introduction of multi-core Athlon 64s the brand instead began to be used for AMD's single-core chips.
Intel Pentium D (2005): Technically the first dual-core x86 CPU, though it was just two processor dies in one package, whereas AMD's Athlon X2 was two cores in one die. There were two versions of the Pentium D; the first was "Smithfield," which was even more ridiculously hot than its Prescott forerunner and far slower than the Athlon 64 X2. The second version was "Presler," which didn't improve on a performance front, but had more manageable heat levels and actually won over some enthusiasts due to the insane overclocks that were feasible.
IBM Xenon (2005): Based on IBM's PowerPC architecture, this is actually a tri-core processor, with each core being a modified PPE unit of the Cell Broadband Engine (see below).
Intel Core (2006): The point where Intel started to be taken seriously again. It evolved from the Pentium M, with innovations including the ability to downclock one of the two cores to save power and a single cache shared by both cores. Notable also for being Intel's debut in Apple Macintosh computers, which could now run much faster than with the PowerPC series and, to the delight of many, be fully compatible with Microsoft Windows. A server version was also produced, but was generally ignored in that market due to the lack of 64-bit functionality, a problem Intel would address quickly.
Sony-Toshiba-IBM Cell Broadband Engine (2006): A joint effort between the three companies, based on IBM's POWER architecture. Development started in 2001 as an ambitious effort to build something between a general-purpose CPU and specialized, high-performance processors (like GPUs). Basically, it contains one general-purpose core that controls 8 smaller co-processor cores which do the real work. However, since it emphasizes performance over everything else, it never achieved wide success, due to the difficulty of programming applications for it (although the proliferation of the PS3 would lead one to think otherwise...).
Intel Core 2 (2006): Intel leap-frogs AMD after a 6-year dalliance with the inefficient and impractically hot-running Pentium 4. This processor went back to Intel's roots by overhauling the Pentium M and the original Core's design, making it faster, and adding AMD's x86-64 instructions. Only months later, Intel turned things Up to Eleven by releasing the first quad-core desktop CPU, using the same dual-die approach previously employed for the Pentium D. In addition, Intel would continue to create lower power-consumption mobile variants for laptops. The Pentium brand, meanwhile, was reintroduced as a low-end CPU line, only a rung above the Celeron.
AMD Phenom and Phenom II (2007-2008): The successor to the Athlon 64, and the world's first native (single-die) quad-core x86 chip. In addition, it was the first CPU that allowed the individual cores to be clocked wholly independently of one another, or even powered down altogether. AMD also offered tri-core versions in order to recoup the production costs of chips which had a defective core. The first version of the Phenom was late to market, clocked far too low to compete with Intel's offerings, and suffered from a bug which could crash the whole system in certain rare circumstances. However, the second revision was much more successful; while it couldn't convincingly outperform the Core 2 or the later Core i7, AMD were able to offer more cores for the same price as an equivalent Intel chip, meaning that they dominated the low end for several years.
Intel Atom (2008): Years prior, Intel had begun shrinking its mobile processors for Microsoft's push for tiny, handheld computers. However, Intel developed the Atom from scratch, rather than basing it on an existing part. When first revealed, the entire package was often compared to a penny, and it sipped power (the chipset supporting it was larger and consumed more power). It started the Netbook phase of laptops and occasionally finds itself in "Nettops" (small computers, usually mounted on monitors). Currently Intel is using Atom to compete with ARM in the mobile and microserver sectors. After years of comparatively slow, incremental updates, Intel is poised to finally bring Atom up to speed with a schedule similar to the tick-tock cadence of the Core platform, as well as a general overhaul of the architecture, bringing in integrated graphics (either PowerVR SGX or Intel HD Graphics), out-of-order execution, and x86-64 instructions.
ARM Cortex (2008): Not so much a processor as an architectural design, as ARM does not make its own chips but licenses its designs instead. It has become one of the most widely used processor families in low-power electronics such as embedded devices, Netbook and Nettop computers, smartphones, and portable game systems. Chances are, if it's not using x86, it's probably using an ARM Cortex design. The family comes in three flavors: (A)pplication, (R)eal-time, and (M)icrocontroller.
Intel Nehalem & Westmere (2008-2010): Intel started to implement many features of then-current and previous x86 processors, including on-die memory controllers, HyperThreading, and single-die quad-core designs. Models with a K at the end had unlocked clock multipliers at prices within many enthusiasts' budgets, rather than that being a feature reserved for the "Extreme Edition" parts that commanded luxury prices. New to Intel's lineup were an overclock-as-needed feature called Turbo Boost and, for dual-core models, graphics on the processor package itself. One of two issues was that Intel had an unclear branding scheme, particularly with the i5 series. The other was that motherboard sockets were split between markets, which drew criticism among AMD fans, and had a staggered release, with the mainstream following the high end by a year. Also of note: the very first consumer 6-core models.
First Generation MCST Elbrus 2000 (1999-2011): A Russian attempt to revive the country's struggling microelectronics industry, this CPU had been in development since the early 2000s, but was largely sidelined by a lack of modern production facilities, as Russian laws at the time prevented its large-scale production at foreign plants, and its major applications were military. Another attempt to create a competitive VLIW-based CPU, this one looks like it was finally done right: back in 2005, one of these chips, clocked at 300 MHz, outpaced a 500 MHz Pentium III in x86 compatibility mode, and was competitive with a 2 GHz P4 when running native code. As is common for VLIW CPUs, it is internally split into several pipelined execution units that are directly driven by the command stream, without the costly transcoding and rearranging of complex CISC (and even RISC) commands. Its secret, however, lies in the extremely efficient compiler algorithms that correctly arrange this stream. It also emulates the x86 architecture via a binary translator that runs as a hardware-supported shadow task in the OS and translates x86 binaries on the fly with a minimal penalty.
Intel Sandy Bridge & Ivy Bridge (2011): Intel starts to focus on power efficiency and the use of integrated graphics to accelerate compute tasks. Ivy Bridge was the first processor to use Intel's new tri-gate transistors, which increased power efficiency and allowed Intel to create the Ultrabook initiative, PCs inspired by Apple's MacBook Air. Carrying over from the previous generation, Intel also released K versions of the processors with unlocked clock multipliers. The high-end sockets, denoted by an -E suffix, left off the graphics but packed up to six cores. Ivy Bridge also used thermal paste under the heatspreader, which caused higher temperatures but allowed the heatspreader to be removed more easily; previously they were soldered on.
AMD Fusion (Bobcat, Llano, Jaguar) (2011): After AMD bought ATi in 2006, it set out to merge the logic portion of the processor with the computational powerhouse of the GPU. AMD eventually came out with a solution similar to Intel's in 2011, named Fusion. AMD's solution has more of a GPU focus; die shots show that AMD's "Accelerated Processing Units" (APUs) are roughly half GPU and half CPU. These chips are rather popular in laptops, as graphical and computing performance increase dramatically with little impact on battery life. In late 2012, the updated Trinity series of APUs was released, finally making the line viable for desktop users.
AMD FX-series (2011): AMD takes a radical approach to CPU organization. Instead of "cores", this processor has "modules", each consisting of two basic integer execution units with a shared FPU; AMD claimed this used die space more efficiently. While the first version, "Bulldozer", did excel in multi-threaded tasks (matching Intel's second-generation Core i7 in some cases), in other tasks it was only slightly better (or even slightly slower) than its predecessor. The second version, "Piledriver", improved matters — especially after AMD released the 9000-series with ludicrously high clock speeds — consistently equalling or beating the equivalent Core i7s in heavily threaded situations, albeit at the cost of much higher power consumption (about 2.5x that of Intel's offerings) and still-low single-thread performance. However, AMD appear to have conceded that the line has no long-term future, and by all indications have pulled the plug on FX versions of the next two planned cores, Steamroller and Excavator (though the Steamroller core found its way into their APUs, and Excavator likely will too).
Second Generation Elbrus 2000 (2011): An updated e2k CPU with a smaller process node and increased clock speed. Comparable in x86 emulation mode to low-end Intel chips clocked several times faster, and lightning-fast on native VLIW code. Also extremely cool-running for its performance, radiating ~15-20W at full load. Recent versions also include several DSP cores to speed up image and signal processing, a vestige of its previous use as a radar CPU. While still mainly used in military applications, MCST has recently demonstrated a PC-standard mini-ITX motherboard, indicating a wish to enter the general PC market, especially multimedia servers, where the additional DSP cores would be especially useful.
Intel Xeon Phi (2012): The Xeon Phi has its roots in the ambitious Larrabee GPU project from 2007, which Intel intended to do real-time ray tracing (the Holy Grail of real-time graphics). That project fell through, resurfacing on and off until 2010. The end result, though, is what Intel dubbed the Many Integrated Core (MIC) processor, built for highly parallel processing. Using a revamped P5 architecture (the same one in the original Pentium), even the low-end Phi can achieve over 1 TFLOPS.
Intel Haswell (2013): Intel continues its push for lower power and more capable integrated graphics. While it didn't improve much over the previous generation, some new instructions were added and the CPU voltage regulators were moved onto the processor package. The higher-end GPU, dubbed Iris and Iris Pro, could go toe-to-toe with lower-end dedicated GPU offerings from AMD and NVIDIA. In addition, having the voltage regulators on the processor package allows fine tuning of power states for even more power saving. This does, however, present a problem for overclockers, who sometimes need to adjust CPU voltage, as does the continued use of thermal paste under the heatspreader. These issues led Intel to produce an intermediate revision, Devil's Canyon (clocked higher, with better thermal interface material and an improved electrical design), ahead of the core's major overhaul, Broadwell.