The Central Processing Unit (CPU or processor) is essentially the brain of a computer.
Overview
Overall, most central processing units contain:
A Brief History
Early programmable computers were built with individual components, starting with vacuum tubes until discrete transistors were invented in the late 1950s. It wasn't until 1971 that Intel developed the first fully integrated (one-chip) microprocessor, the Intel 4004. During the 1980s, microprocessor companies found more and more things to integrate into the same package until it evolved into the processor that exists today. Extensions of this include the microcontroller, which contains a CPU core with many peripherals such as timers, analog sampling, serial communication, and general-purpose IO, to drive small embedded devices. The microcontroller later evolved into the System-on-a-Chip (SoC), which can be considered a self-contained computer. These power a lot of today's portable electronics. To help processors perform their tasks more quickly, several features were developed:
Some famous CPUs over the years:
Vacuum tubes (1890s-1910s): The first electronic components.
Discrete circuits (1950s): Transistors, diodes (both previously vacuum tubes, now much smaller solid-state components), resistors, and capacitors wired together to build logic modules.
Logic chips (1960s): The first chips were simple one-operation logic modules, like digital LEGO bricks.
Intel 4004 (1971): First complete CPU (control unit and ALU) on a single chip.
Intel 8008 (1972): First 8-bit CPU.
Signetics 2650 (1973): An early, minicomputer-like CPU.
Intel 8080 (1974): First truly usable CPU, and the first to be used in hobbyist microcomputers, the ancestors of today's PCs.
Motorola 6800 (1974): First Motorola CPU.
MOS 6502 (1975): First cheap CPU. A simplified, improved derivative of the Motorola 6800. The standard CPU for 8-bit videogame consoles and early Western home computers.
Zilog Z80 (1976): An improved Intel 8080. The main competitor to the MOS 6502, and more popular in Eastern countries.
Motorola 6809 (1978): Similar to the 6800, but with some 16-bit registers.
Intel 8086 (1978): The 16-bit evolution of the 8080. First in Intel's long line of x86 processors.
Motorola 68000 (1979): First in Motorola's 68k family.
Intel 80186 (1981): A slightly upgraded version of the 8086 (with an 8-bit version, the 80188), it integrated a lot of the external chips used with the 8086. It wasn't any faster per clock than its predecessor, meaning that most PC makers skipped over it and went straight to the 80286, but it saw wide use in embedded devices due to the integration of the other chips.
Intel 80286 (1982): The x86 processor that introduced Protected Mode, thanks to an integrated memory management unit (MMU), as well as memory addressing beyond 1MB. Its performance was double that of its predecessor. However, its design was somewhat rushed (due to Intel putting most of its resources toward the i432 "micro-mainframe"), and it had a number of quirks that had to be worked around for the sake of 8088 compatibility, particularly the "A20 line" issue (which was later put to good use on AT-class machines as the "high memory area"). Also, there was no way to return to real mode from protected mode; the OS had to reset the CPU, then reload its state from memory, something the PC/AT BIOS allowed for but which still took time.
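The A20 quirk above is just address arithmetic: real-mode software forms a physical address from a 16-bit segment and 16-bit offset, and with only 20 address lines the 8088 silently wraps anything past 1MB back to zero. The 286's 21st address line (A20) removes that wraparound, exposing the "high memory area". A minimal sketch (the function name and values are illustrative, not from any real BIOS or chipset):

```python
def real_mode_linear(segment, offset, a20_enabled):
    """Compute the physical address for a real-mode segment:offset pair.

    With only 20 address lines (8086/8088), addresses past 1 MB wrap
    around to zero; the 286's extra A20 line removes that wraparound.
    """
    addr = (segment << 4) + offset   # segments are 16-byte "paragraphs"
    if not a20_enabled:
        addr &= 0xFFFFF              # force 20-bit wraparound, 8088-style
    return addr

# FFFF:0010 wraps to address 0 on an 8086, but reaches just past 1 MB on a 286:
assert real_mode_linear(0xFFFF, 0x0010, a20_enabled=False) == 0x00000
assert real_mode_linear(0xFFFF, 0x0010, a20_enabled=True) == 0x100000
# The ~64 KB reachable this way (up to 0x10FFEF) is the "high memory area":
assert real_mode_linear(0xFFFF, 0xFFFF, a20_enabled=True) == 0x10FFEF
```

Software written for the 8088 sometimes relied on the wraparound, which is why AT-class machines had to make A20 switchable rather than simply wiring it up.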
Motorola 68020 (1984): A fully 32-bit 68000.
NEC V30 (1984): A pin-compatible clone of the Intel 8086. At the same clock speed it could run code somewhat faster due to improved internal logic. It also had a few extra features, including one that allowed it to emulate the Intel 8080.
Western Design Center 65816 (1984): A 16-bit derivative of the MOS 6502.
Berkeley RISC (1984): The first processor designed on RISC principles, developed as a government-funded research project in the early 80s. It was commercialized soon afterwards as Sun's SPARC processor. When word spread that RISC was very powerful (the second iteration outperformed Motorola's 68000 by anywhere from 140% to 400%), many companies followed suit and built their own RISC chips. Among the results were Intel's i960, AMD's 29000, DEC's Alpha, Motorola's 88000, and the PowerPC.
Intel 80386 (1985): The first 32-bit x86 processor, the 80386 also fixed several of the 80286's deficiencies; it could switch from protected mode to real mode without intentionally crashing the machine, and it supported 32-bit segments, meaning that the 80286's rather odd segmenting model could be avoided almost entirely. The 80386 also added "virtual 8086" mode, a way to run 16-bit code and 32-bit code simultaneously while in protected mode, which paved the way for Windows 3.x/9x's and 32-bit Windows NT's DOS box support. Came in two variants: the more powerful DX version (actually the original design), and the lower cost SX version with a 16-bit data bus (because most motherboards at the time were 16-bit).
Intel i960 (1985): Intel's second CPU with RISC architecture; mostly used as an embedded microcontroller and in military applications, and never promoted for general use.
MIPS R2000 (1985): First in the MIPS family of RISC CPUs. This was spawned off from a project from Stanford University to develop a RISC processor at the same time Berkeley was developing theirs.
Acorn ARM2 (1986): First in the ARM family of RISC CPUs, the most produced CPU family in history.
Motorola 68030 (1987): A 68020 with an integrated memory controller.
MIPS R3000 (1988)
Intel 80486/i486 (1989): First x86 processor with built-in level-1 cache and a built-in Floating Point Unit (except in the SX variant, where the FPU was disabled), introduced Pipelining, and was one of the first CPUs to use a clock multiplier (in the DX2 and DX4 models). Due to its performance in games (notably with the FPU), it was in high demand in the early 90s.
Intel i860 (1989): Introduced along with the 80486, this was Intel's second attempt at a from-scratch RISC architecture. While it looked good on paper, in practice, it suffered from the same issues with pipeline spills that the later Pentium 4 and Itanium did, making its performance in everyday computing sub-par. However, like the Itanium, its floating-point performance was good, and it found its way into several high-performance computing projects. Its design was also influential on the Pentium, which used a 64-bit data path and later added 860-like SIMD features as "MMX". The 860 was known as the "N10" during its development; this had an influence on operating system history, as the i860 was the CPU chosen for initial development of what was then "OS/2 3.0". (It was completely different from the 80386, and Microsoft's OS/2 3.0 development team wanted a clean break from PC architecture.) This led to Microsoft contracting "N-Ten" to "NT" internally, and the name stuck.
NEC V60 (late 1980s): The first 32-bit general-purpose microprocessor mass-produced in Japan. Unlike NEC's earlier V20 and V30 processors, it does not use the x86 architecture.
Motorola 68040 (1990): Motorola's first 680x0 CPU with a built-in FPU. Faster than the i486 clock-per-clock, but ran notoriously hot (and thus was among the first desktop CPUs to require a heat sink). Came in several variants: the EC version (which dropped the FPU and MMU) used in Embedded Systems, and the low cost LC version (which dropped just the FPU; however this proved to be its undoing as buggy software searching for the FPU would crash the system).
MIPS R4000 (1991): First 64-bit RISC microprocessor.
Advanced RISC Machines ARM6 (1991)
Apple-IBM-Motorola PowerPC 601 & 603 (1992): The first two major forms of the PowerPC family. The 603 addressed and corrected most of the 601's flaws, and then fixed the remainder of them in its revised incarnation, the 603e. Still lacked official support for multiprocessing, though that didn't stop some resourceful designers.
DEC Alpha (1992): The first famous 64-bit CPU, and the champion of Clock Speed and floating-point performance throughout the 1990s. Was killed by Compaq after the DEC merger in favor of HP and Intel's then-upcoming Itanium CPU.
Intel Pentium (1993): The first superscalar x86 processor, with an integrated cache controller and a 64-bit data bus for faster memory access. Later introduced the "MMX" instruction set.
Hitachi SH-2 (1993): Another low-cost RISC CPU.
Advanced RISC Machines ARM7TDMI (1994): Gained fame as one of the most widely used embedded applications processors.
Motorola 68060 (1994): Motorola's last 680x0 CPU before venturing full time into the PowerPC family. Was actually more famously used in TV Character Generator systems. Like its predecessor the '040 (there was never a 68050), it came in EC and LC variants. Notably, Apple skipped this CPU entirely and went straight to the PowerPC.
NEC V810 (1994): Part of the V800 series of RISC CPUs.
MIPS R4300i (1995)
Intel Pentium Pro (1995): An internally RISC-like core fronted by a decoder that translated the CISC x86 ISA into simpler internal operations. Optimized for fully 32-bit OSes such as NT and UNIX, where it was an excellent performer, but failed in the desktop market due to its high production cost and lackluster performance under Windows 3.x and 95.
Cyrix 6x86 (1995): The first non-cloned x86 processor to pose a serious threat to Intel. Noted for its low price and excellent integer performance; however, its floating-point performance was lackluster, which became a problem as more games started to make use of FP code. It sold very well for its first two years, allowing Cyrix to take second place in the CPU market for a while. Unfortunately, Cyrix failed to significantly update the design (other than a relatively small refresh with the 6x86MX in 1998), meaning that it soon got left in the wake of the Celeron and K6-2 as the decade went on, and Cyrix dropped out of the market in 2000.
Hitachi SH-4 (1997)
AIM PowerPC 740/750 (1997), aka PowerPC G3: An evolutionary derivative of the PowerPC 603e, the 740 was completely pin-compatible with the 603e and was therefore available from some after-market vendors as a drop-in upgrade. The major improvement that the 740 and 750 both had over the 603e was the addition of the PowerPC 604ev "Mach V" chip's extensive dynamic branch-prediction logic. However, because the G3 was based on the 603ev, it lacked the PowerPC 604's support for multiprocessing.
AMD K6 (1997): AMD's first real challenge to Intel since the 486 days, and the beginning of its run as a Worthy Opponent. Eventually expanded into the K6-2 with "3DNow!" (floating-point SIMD) capability to make up for its somewhat weak standard FPU, and the rare but fast K6-3 with built-in level-2 cache.
Cyrix MediaGX (1997): This featured the same CPU core as the 6x86, but added graphics, sound, memory and PCI controllers onto the very chip itself. While it was ahead of its time in many aspects, it had the bad luck of being launched when the 3D accelerator revolution was taking place, and the combination of a rather basic 2D graphics controller and the CPU's uninspiring 3D gaming performance meant it only found any usage in laptops and bargain-basement PCs. In retrospect, this chip may have been Cyrix's shining achievement; while the rest of their CPU line was purchased by Via for a relatively low price, this one actually got purchased by AMD, who continue to sell it to this day under the "Geode" name.
Intel Pentium II (1997) and Pentium III (1999): Improved versions of the Pentium Pro, with the 16-bit speed problems fixed and a new, easier-to-make cartridge design. The Pentium III added "SSE" instructions for floating-point DSP work (following AMD's lead with "3DNow!"), and spawned a minor controversy over the use of embedded serial numbers that Intel eventually dropped. Later versions of the III were available in much smaller "flip-chip" packages, which were easier to install and cool. The Pentium II was made especially famous by what was probably Intel's craziest publicity gimmick in the form of their "Dancing Bunny Suit" advertisements.
Intel Celeron (1998): A cheaper version of the Pentium II (and later III). The initial version flopped, as it performed worse than the original Pentium, but the second version pioneered the use of on-die level-2 cache (in the Pentium Pro and II it had been on the package but off-die), meaning it performed as well as its more expensive brethren and became known as the Hypercompetent Sidekick of the CPU world.
AMD Athlon & Athlon XP (1999): The processor that finally bested Intel, with much-improved floating-point performance over the K6 and a system bus design borrowed from the DEC Alpha to avoid legal issues with Intel. Later versions of the Athlon included the XP, MP (dual-processor capable) and value-priced Sempron. It was also the first processor to challenge the notion that clock speed meant performance, so much so that AMD gave the Athlon XP models "PR numbers", which supposedly represented the clock speed at which a Pentium 4 would match their performance.
Sony Emotion Engine (1999): Based on MIPS ISA.
Motorola PowerPC 7400/7410/744X/745X (1999): Motorola's solo venture in the PowerPC family. As IBM was lagging in designing a successor, Apple went with Motorola's design. It added multiprocessing support and a new feature that Motorola branded AltiVec, the answer to Intel's SSE and AMD's 3DNow!. Apple marketed it heavily, claiming it performed at 1 GFLOPS and was banned for export as a "weapons grade supercomputer". Indeed, in one demonstration, which Apple referred to as "Debunking the Megahertz Myth", a side-by-side comparison was made of an operation in Adobe Photoshop running on an 867 MHz Power Mac G4 and on a 1.7 GHz Pentium 4 PC, with the G4 finishing faster.
Intel Pentium 4 (2000): Intel's first major blunder. The processor was marketed on its Clock Speed alone, and the first generation (a.k.a. Willamette) performed on par with, or worse than, its predecessors. The high clock speed also made the processor run ridiculously hot. Intel's under-the-table deals kept AMD from winning overnight, however, later causing Intel massive legal backlash. While things improved with the second generation (Northwood), they got worse with the third, known as Prescott, notable for being both a power hog and a very hot processor. To mitigate these issues, Intel created HyperThreading, which simulated a Multi Core Processor, and SpeedStep, which clocked down the processor when it wasn't doing much in order to save power. Later models of the Prescott also introduced the "Land Grid Array" (LGA) package, where the pins moved into the socket while the chip itself had mere copper lands that rested on the pins.
AMD Duron (2000): AMD's response to Intel's Celeron. Essentially a low cost Athlon.
Intel Itanium (2001): Intel's second major blunder. On paper it looked good, with 64-bit technology, massive floating-point performance, the ability to access ludicrous amounts of memory, and backward compatibility with existing 32-bit code. In practice, however, the first version barely even equaled a similarly-clocked Pentium III in native mode, and couldn't even convincingly outperform an 80486* in 32-bit mode. Subsequent editions were more successful, but it remained a very niche product due to its high price and the difficulty of generating optimal code for the architecture, and the Itanium gradually sank into irrelevancy as multi-core x86-64 chips equalled and surpassed its performance at a much lower price point.
Transmeta Crusoe (2001): Perhaps more noted for the years of hype that surrounded its introduction, but noteworthy for being the first CPU designed from the ground up for laptops. Rather than the hybrid CISC/RISC ("CRISC") design of contemporary x86 chips, the Crusoe used a Very Long Instruction Word (VLIW) design and handled the x86 instruction translation in software. Transmeta claimed that when their hardware and software became mature enough, they would be able to produce CPUs that were cheaper, much faster and much more power efficient than anything their rivals could make. Unfortunately, they never got the chance; after the Crusoe got a decidedly mixed reaction (it indeed had a very low power draw, but also pretty low performance, which is a problem when you're sitting around waiting for your laptop to do stuff on limited battery power), Intel, AMD and newcomers Via quickly moved into the "ultraportable laptop" market which the Crusoe had helped solidify, and Transmeta left the business in 2005.
Via C3 (2001): Via's debut in the CPU market, and a somewhat more successful implementation of what Transmeta had tried with the Crusoe. Initially this was branded the Cyrix III, but changed after troubles in the development process*. Initially it was just another budget CPU design, but Via quickly emphasised its low power usage, creating very small CPU packages and the still widely-used Mini ITX case design to allow PCs to be built in the kind of space that had not previously been possible.
IBM PowerPC 970 (G5) (2003): The first 64-bit-capable PowerPC CPU intended for desktop use, the PowerPC 970 was announced by IBM in 2002. It was a cut-down, single-core version of IBM's POWER4 server CPU, with PowerPC compatibility and Motorola's AltiVec instructions added. Dubbed the "G5" by Apple, it first appeared in the Power Mac G5 in 2003, with single and dual-processor versions available. Apple was never quite pleased with the G5. While it was a powerful CPU, it was also a power hog and required extreme cooling measures because it ran so hot; some machines even shipped with water cooling stock, something that had never been needed before on a consumer-level machine. Apple lobbied IBM and Motorola to make a low-power version of the G5, something that would be suitable for the PowerBook, but neither was interested, forcing Apple to consider ARM and Intel. They went with Intel, and history was made; see the Core entry for more.
Intel Pentium M (2003): Intel's Ensemble Darkhorse during the age of the Pentium 4. Before, Intel had taken its desktop processors and tweaked them for laptops; this, however, was the first one designed from the ground up for laptops. Superficially it looks like a Pentium III with the Pentium 4's system bus and a huge cache bolted on, but every aspect of the chip was carefully hand-tuned to provide the best possible balance of performance and efficiency. As a result, it outperformed everything on the market in terms of performance-per-watt on release, and, humiliatingly for Intel, it often outperformed the "Prescott" Pentium 4 models even at stock speeds. This led to companies producing desktop Pentium M boards for enthusiasts who wanted to get in on the act. The Pentium M would subsequently form the basis for the Core 2 line which eventually took the market back for Intel, and many believe that had Intel not had the Pentium M in development when they did, they would probably never have regained the performance lead.
AMD Athlon 64 (2003): Introduced the x86-64 (also known as AMD64 and Intel 64) instruction set, giving the x86 64-bit capabilities while remaining backwards compatible with 32-bit x86 programs. Also the first x86 processor to have the memory controller on-die, making memory access faster than going through the system bus. Noted for being by far the most successful AMD processor, and probably remembered more fondly by enthusiasts than any other CPU, ever. The impact of its server counterpart, the Opteron, was even more keenly felt (64-bit addressing being more useful there), and AMD dominated that market for most of the decade.
AMD Sempron (2004): The successor to the AMD Duron. Initially these were severely crippled Athlon 64s, with only one memory channel and no 64-bit support, but after the introduction of multi-core Athlon 64s the brand instead began to be used for AMD's single-core chips.
Intel Pentium D (2005): Technically the first dual-core x86 CPU, though it remains a point of contention between Intel and AMD fans as to whether the Pentium D (which was just two "Prescott" Pentium 4 dies slapped together on the same chip), or AMD's Athlon 64 X2 (which was engineered from the ground up to be a dual-core processor) really deserves that distinction. There were two versions of the Pentium D; the first was "Smithfield," which was even more ridiculously hot than its "Prescott" forerunner and far slower than the Athlon 64 X2. The second version was "Presler," which didn't improve on a performance front, but had more manageable heat levels and actually won over some enthusiasts due to the insane overclocks that were feasible.
IBM Xenon (2005): Based on PowerPC.
Intel Core (2006): The point where Intel started to be taken somewhat seriously again. The basic cores (of which there were two) were pretty much the same as the one designed three years earlier for the Pentium M, but the already-good performance of that chip combined with incremental improvements meant that the Core was able to draw level with the Athlon 64 by most measures. Other innovations included being able to downclock one of the two cores to save power, and a single cache that was shared by both cores. Notable also for Intel's debut on Apple Macintosh computers, which now enabled them to run much faster than the PowerPC series, and to the delight of many, be fully compatible with Microsoft Windows. A server version was also produced, but was generally ignored in this market due to the lack of 64-bit functionality; a problem which Intel would address quickly.
Sony-Toshiba-IBM Cell Broadband Engine (2006): A joint effort between the three companies, based on IBM's Power architecture. Development started in 2001 as an ambitious effort to build something between a general-purpose CPU and specialized, high-performance processors (like GPUs). Basically, it contains one general-purpose CPU core that controls 8 smaller co-processor cores which do the real work. However, since it emphasizes performance over everything else, it hasn't achieved wide success, due to the difficulty of programming applications for it (although the proliferation of the PS3 might lead one to think otherwise...)
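The control-core-plus-workers layout described above can be sketched in miniature with ordinary processes standing in for the Cell's units. This is only an analogy: the names spe_task and ppe_control are invented for the sketch, and real Cell development used IBM's SDK and explicit DMA to each co-processor's local memory, not Python.

```python
from multiprocessing import Pool

def spe_task(chunk):
    """Stand-in for the number-crunching an SPE-like worker does on its chunk."""
    return sum(x * x for x in chunk)

def ppe_control(data, workers=8):
    """PPE-like control core: split the data and farm chunks out to 8 workers,
    then combine the partial results."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        return sum(pool.map(spe_task, chunks))

if __name__ == "__main__":
    # Same answer as doing it all on one core, just split 8 ways first.
    print(ppe_control(list(range(1000))))
```

The hard part on real hardware was exactly what this sketch hides: deciding how to carve up data so each worker's small local store is used well, which is why the Cell earned its reputation for being difficult to program.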
Intel Core 2 (2006): Intel leap-frogs AMD after a 6-year dalliance with the inefficient and impractically hot-running Pentium 4. This processor went back to Intel's roots by overhauling the Pentium M and Core 1's design, making it faster, and adding AMD's x86-64 instructions. In addition, Intel would continue to create lower power-consumption mobile variants for laptops, each designated M in their product numbering, a trend continued in subsequent generations to this very day. The Pentium brand, meanwhile, was reintroduced as a low-end CPU line, only a rung above the Celeron.
AMD Phenom (2007): The successor to the Athlon 64, and the world's first native four-core chip (Intel's competing Core 2 Quads were a pair of Core 2 Duos slapped together on the same chip). In addition to this, it was the first CPU that allowed the individual cores to be clocked wholly independently of one another (Core and Core 2 offered downclocking options, but at fixed clock rates), or even powered down altogether. Unlike Intel, whose core numbers came in powers of two, AMD started offering triple-core versions in order to make up the production costs of chips which had a defective core. The first version of Phenom was late to the market, clocked far too slow to compete with Intel's offerings, and suffered a glitch which could crash the whole system in certain rare circumstances. However, the second revision was much more successful; while it couldn't convincingly outperform the Core 2 or later Core i7, AMD were able to offer more cores for the same price as an equivalent Intel chip, meaning that they dominated the low-end for several years.
ARM Cortex (2008): ARM's most widely used processor for embedded applications and consumer electronics. This family is as ubiquitous in those areas as the x86/x64 is to laptops and desktops.
First generation Intel Core i3/i5/i7 (2008-2010): Intel consolidated many of the features of its x86 processors into one, including an on-die memory controller and HyperThreading for some models. Quad-cores were now on a single die, in contrast to the dual-module Core 2 Quads. They also added Turbo Boost, the ability for the processor to overclock some cores if needed. This generation also introduced unlocked clock multipliers (locked since the early 90s) on selected mainstream and performance models (given the K suffix), as opposed to only the "Extreme Edition" parts. It also marked the introduction of on-board GPUs on the i3 series, as well as the dual-core i5s that would debut the following year with the 32nm die shrink, though the GPU wasn't fully integrated into the core yet (instead using the "on-chip but off-die" approach Intel had taken to cache on the Pentium Pro and II). The downsides to this family were a slightly unclear branding scheme* and one of the more notable motherboard-socket (and market) splits, between the enthusiast segment (the i7-9xx chips) and the mainstream, which drew particular criticism from fans of AMD and their consistently supported AM3 design*.
First Generation MCST Elbrus 2000 (1999-2011): A Russian attempt to revive the country's struggling microelectronics industry. This CPU had been in development since the early 2000s, but was largely sidelined by a lack of modern production facilities, as Russian law at the time prevented large-scale production at foreign plants, and its major application was military. Another attempt to create a competitive VLIW-based CPU, this one looks like it was finally done right: back in 2005, one of these chips, clocked at 300 MHz, outpaced a 500 MHz Pentium III in x86 compatibility mode, and was competitive with a 2 GHz P4 when running native code. As is common for VLIW CPUs, it is internally split into several pipelined execution units that are directly driven by the command stream, without the costly decoding and reordering that complex CISC (and even RISC) instructions require. Its secret, however, lies in extremely efficient compiler algorithms that arrange this stream correctly. It also emulates the x86 architecture via a binary translator that runs as a hardware-supported shadow task in the OS and translates x86 binaries on the fly with a minimal penalty.
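The VLIW idea described above, where the compiler rather than the hardware packs independent operations into one wide instruction word, can be illustrated with a toy interpreter. Everything here is invented for the sketch and has nothing to do with the real Elbrus (or Itanium) instruction set:

```python
def run_vliw(bundles, regs):
    """Execute "very long instruction words": each bundle holds one
    (op, dst, src1, src2) slot per execution unit, all issued together.
    The "compiler" guaranteed the slots in a bundle are independent."""
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    for bundle in bundles:
        # Read all sources before any write: slots execute in parallel,
        # so no slot may see a result produced within the same bundle.
        results = [(dst, ops[op](regs[s1], regs[s2]))
                   for (op, dst, s1, s2) in bundle if op != "nop"]
        for dst, value in results:
            regs[dst] = value
    return regs

# Two bundles for a two-unit machine; independent ops share a bundle,
# and a unit with nothing to do gets an explicit "nop" slot.
regs = run_vliw(
    [[("add", "r3", "r0", "r1"), ("mul", "r4", "r1", "r2")],
     [("add", "r0", "r3", "r4"), ("nop", None, None, None)]],
    {"r0": 2, "r1": 3, "r2": 4, "r3": 0, "r4": 0})
print(regs["r0"])  # (2+3) + (3*4) = 17
```

Note that the hardware here does no dependency checking at all; if the compiler packs two dependent operations into one bundle, the result is simply wrong. That is the bargain VLIW strikes: simpler, cooler-running silicon in exchange for a much smarter compiler.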
Second Generation Intel Core i3/i5/i7 (aka Sandy Bridge/Ivy Bridge) (2011): During this generation, Intel, knowing full well their competition and the future that lay in smaller computer devices, began a two-front assault against ARM and AMD, focusing more on improving power efficiency and on-die graphics than on improving performance*. This allowed for, among other things, smaller-form notebooks as part of their new Ultrabook initiative, inspired in no small part by the Macbook Air. The onboard graphics not only now encompassed the entire line (with a few exceptions), but also improved vastly, enough to be taken somewhat seriously, if not enough to rival AMD/ATi (see below). What's more, they now eclipsed high-end graphics cards at video transcoding via QuickSync. Late in the year, Intel released the high-end variant of the platform, Sandy Bridge-E, which nixes the onboard graphics in favor of up to 8 cores and a greater cache count per core. The platform received a mostly favorable reception for easy overclocking via the K models using the multiplier-ratio method, the only criticism being that the CPU clock can only be tuned within 5-7% due to being tied to the voltage regulator. Ivy Bridge, the platform's "tick", introduced even better onboard graphics and tri-gate transistors (the latter of which reduced TDP), but ran hotter due to the use of thermal paste under the heat spreader.
AMD Fusion (Bobcat/Llano) (2011): After AMD bought ATi in 2006, it set a goal of merging the general-purpose logic of the CPU with the computational powerhouse of the GPU. AMD eventually came out with a solution similar to Intel's in 2011, named Fusion. AMD's solution has more of a GPU focus; die shots show that AMD's "Accelerated Processing Units" (APUs) are roughly half GPU and half CPU. These chips are rather popular in laptops, as graphical and computing performance increases dramatically with little impact on battery life. In late 2012, the updated Trinity series of APUs was released, finally making the line viable for desktop users.
AMD FX-series (2011): AMD takes a radical approach to CPU organization in this processor. Instead of "cores", it has "modules", each consisting of two basic execution units with a shared FPU; AMD claimed this used die space more efficiently. While the first version, "Bulldozer", did excel in multi-threaded tasks (matching Intel's second-generation Core i7 in some cases), in other tasks it was only slightly better (or even slightly slower) than its predecessor. The second version, "Piledriver", improved matters, consistently equalling or beating the equivalent Core i7s in heavily threaded situations, albeit at the cost of much higher power consumption and still-low single-threaded performance.
Second Generation Elbrus 2000 (2011-): An updated e2k CPU built on a smaller process node and with increased clock speed. Comparable to low-end Intel chips clocked several times faster when in x86 emulation mode, and lightning-fast on native VLIW code. It also runs extremely cool for its performance, radiating ~15-20W at full load. Recent versions also include several DSP cores to speed up image and signal processing, a vestige of its previous use as a radar CPU. While still mainly used in military applications, MCST has recently demonstrated a PC-standard mini-ITX motherboard, indicating its wish to enter the general PC market, especially multimedia servers, where the additional DSP cores would be especially useful.
Intel Xeon Phi (2012): The Xeon Phi has its roots in the ambitious Larrabee GPU of 2007, which Intel intended for real-time ray tracing (the Holy Grail of real-time graphics). That project fell through, resurfacing on and off until 2010. The end result, though, is what Intel dubbed the Many Integrated Core (MIC) processor, aimed at highly parallel workloads. Using a revamped P5 architecture (the same one in the original Pentium), even the low-end Phi can achieve over 1 TFLOPS.