Graphics Processing Unit
A GPU is the common term for the piece of computer/console/arcade hardware that is dedicated to drawing things, i.e. graphics.

The general purpose of a GPU is to relieve the CPU of the responsibility for a significant portion of rendering. This has the double benefit of freeing up CPU time for other work while simultaneously offloading rendering to a chip designed specifically for that task.

Both consoles and regular computers have had various kinds of GPUs. The two platforms started out with divergent kinds of 2D GPUs, but their designs converged with the advent of 3D rendering.

The term "GPU" was coined by nVidia upon the launch of their GeForce line of hardware. This was generally a marketing stunt, though the GeForce did have some fairly advanced processing features in it. However, the term GPU has become the accepted shorthand for any graphics processing chip, even pre-GeForce ones.

Arcade 2D GPU

The first 2D GPU chipsets appeared in early Arcade Games of the 1970's. The earliest known example was the Fujitsu MB14241, a video shifter chip that was used to handle graphical tasks in Taito & Midway titles such as Gun Fight (1975) and Space Invaders (1978). Taito and Sega began manufacturing dedicated video graphics boards for their arcade games from 1977. In 1979, Namco and Irem introduced tile-based graphics with their custom arcade graphics chipsets, while Nintendo's Radar Scope video graphics board was able to display hundreds of colors on screen for the first time.

By the mid-80's, Sega were producing Super Scaler GPU chipsets with advanced pseudo-3D sprite-scaling graphics that would not be rivalled by any home computers or consoles until the mid-90's.

From the 1970s to the 1990s, arcade GPU chipsets were significantly more powerful than GPU chipsets for home computers and consoles, both in terms of 2D and 3D graphics. It was not until the mid-90's that home systems rivalled arcades in 2D graphics, and not until the early 2000s that home systems rivalled arcades in 3D graphics.

Console 2D GPU

This kind of GPU, introduced to home systems by the TMS 9918/9928 (see below) and popularized by the NES, Sega Master System and Sega Genesis, forces a particular kind of look onto the games that use them. You know this look: everything is composed of a series of images, tiles, that are used in various configurations to build the world.

This enforcement was a necessity of the times. Processing power was limited, and while tile-based graphics were somewhat limited in scope, they were far superior to what could be done without this kind of GPU.

In this GPU, the tilemaps and the sprites are all built up into the final image by the GPU hardware itself. This drastically reduces the amount of processing power needed — all the CPU needs to do is upload new parts of the tilemaps as the user scrolls around, adjust the scroll position of the tilemaps, and say where the sprites go.
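
For the curious, here is a minimal sketch in C of roughly what the CPU side of one frame looks like on this kind of hardware. The register addresses, sprite table layout and helper names are all made up for illustration; no real console has exactly this memory map.

    /* Hypothetical tile-based 2D GPU: the CPU only pokes a few memory-mapped
       registers and tables each frame; the chip composes the final image. */
    #include <stdint.h>

    #define SCROLL_X_REG (*(volatile uint8_t *)0x2005)   /* made-up addresses */
    #define SCROLL_Y_REG (*(volatile uint8_t *)0x2006)

    struct sprite { uint8_t y, tile, attr, x; };          /* one sprite-table entry */
    extern volatile struct sprite sprite_table[64];       /* read by the GPU every frame */

    void per_frame_update(uint8_t cam_x, uint8_t cam_y, const struct sprite *player)
    {
        SCROLL_X_REG = cam_x;        /* tell the GPU where the tilemap window is */
        SCROLL_Y_REG = cam_y;
        sprite_table[0] = *player;   /* say where a sprite goes; the GPU draws it */
        /* ...upload any newly visible tilemap columns here, then wait for vblank */
    }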

Computer 2D GPU

Computers had different needs. Computer 2D rendering was driven by the needs of applications more so than games, so rendering needed to be fairly generic. Such hardware had a framebuffer, an image in memory representing what the user sees, and it also had video memory for storing extra images that programs could draw from.

Such hardware had fast routines for drawing colored rectangles and lines. But the most useful operation was the blit or BitBlt: a fast video memory copy. Combined with video memory, the user could store an image in VRAM and copy it to the framebuffer as needed. Some advanced 2D hardware had scaled-blits (so the destination location could be larger or smaller than the source image) and other special blit features.
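
In software terms, a blit is just a rectangle-shaped memory copy. Here is a rough sketch in plain C of what the operation does, assuming 8-bit pixels and ignoring clipping, transparency, and the hardware acceleration that made it fast:

    #include <stdint.h>
    #include <string.h>

    /* Copy a w-by-h pixel rectangle from a source image into a destination
       image (e.g. from VRAM into the framebuffer), row by row. */
    void blit(uint8_t *dst, int dst_pitch, int dx, int dy,
              const uint8_t *src, int src_pitch, int sx, int sy, int w, int h)
    {
        for (int row = 0; row < h; row++)
            memcpy(dst + (size_t)(dy + row) * dst_pitch + dx,
                   src + (size_t)(sy + row) * src_pitch + sx,
                   (size_t)w);
    }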

Some 2D GPUs combined these two approaches, having both a framebuffer and a tilemap: they could output hardware-accelerated sprites and tiles and also perform tile transformation routines over what was stored in the framebuffer. These were the most powerful and advanced of the bunch, but they were usually pretty specialized and tied to specific platforms (see below). In the end the more general approach won out, being more conducive to various performance-enhancing tricks and better able to take advantage of ever-increasing computing horsepower.

The CPU effort is more involved in this case. Every element must be explicitly drawn by a CPU command. The background was generally the most complicated. This is why many early computer games used a static background. They basically had a single background image in video memory which they blitted to the framebuffer each frame, followed by a few sprites on top of it. Later PC games before the 3D era managed to equal or exceed the best contemporary consoles like the Super NES both through raw power (the 80486DX2/66, a common gaming processor of the early 90s, ran at 66 MHz, almost 10 times the clock speed of the Sega Genesis, and could run 32-bit code as an "extension" to 16-bit DOS) and through various programming tricks that took advantage of quirks in the way early PCs and VGA worked. John Carmack once described the engine underpinning his company's breakout hit Wolfenstein 3D as "a collection of hacks", and he was not too far off. (It was also the last of their games that could not only run, but was comfortably playable on an 80286 PC with 1 MB RAM — a machine that was considered low-end even in 1992 — which serves as a testament to the efficiency of some of those hacks.)

Before the rise of Windows in the mid-1990s, most PC games couldn't take advantage of newer graphics cards with hardware blitting support; the CPU had to do all the work, and this made both a fast CPU and a fast path to the video RAM essential. PCs with local-bus video and 80486 processors were a must for games like Doom and Heretic; playing them on an old 386 with ISA video was possible, but wouldn't be very fun.

Basic 3D GPU

The basic 3D-based GPU is much more complicated. It isn't as limiting as the NES-style 2D GPU.

This GPU concerns itself with drawing triangles: specifically, meshes of triangles that together approximate 3D shapes. These GPUs have special hardware that allows the user to map images across the surface of a triangle mesh, so as to give it surface detail. When an image is applied in this fashion, it is called a texture.

The early forms of this GPU were just triangle/texture renderers. The CPU had to position each triangle properly each frame. Later forms, like the first GeForce chip, incorporated triangle transform and lighting into the hardware. This allowed the CPU to say, "here's a bunch of triangles; render them," and then go do something else while they were rendered.
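
As a rough illustration of the difference, here is some C-style pseudocode; every function and type name below is invented, and this is not any real API, just the general shape of the work:

    /* Pre-T&L GPU: the CPU transforms and lights every vertex itself, then
       hands the GPU finished screen-space triangles to texture and fill. */
    for (int i = 0; i < num_triangles; i++) {
        screen_tri t = transform_and_light_on_cpu(world_tris[i], camera, lights);
        gpu_draw_textured_triangle(t);
    }

    /* Hardware T&L GPU (GeForce 256 era): the CPU just points the GPU at the
       whole mesh and the matrices, and the chip does the per-vertex math. */
    gpu_set_camera_and_lights(camera, lights);
    gpu_draw_mesh(world_tris, num_triangles);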

Modern 3D GPU

Around the time of the GeForce 3 GPU, something happened in GPU design.

Take the application of textures to a polygon. The very first 3D GPUs had a very simple function for this. For each pixel of a triangle:

color = textureColor * lightColor

A simple equation. But then, developers wanted to apply 2 textures to a triangle. So this function became more complex:

color = texture1 * lightColor * texture2

Interesting though this may be, developers wanted more say in how the textures were combined. That is, developers wanted to insert more general math into the process. So GPU makers added a few more switches and complications to the process.

The GeForce 3 basically decided to say "Screw that!" and let the developers do arbitrary stuff:

color = Write it Yourself!

What used to be a simple function had now become a user-written program. The program took texture colors and could do fairly arbitrary computations with them.
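
To make that concrete, here is a sketch of what such a user-written per-pixel program might look like, written as C-style pseudocode rather than any real shading language; the color/vec2 types and the sample() helper are invented for illustration:

    /* Runs once per pixel of the triangle. The fixed-function combiner is
       gone; the developer writes whatever math they want. */
    color shade_pixel(vec2 uv, color lightColor)
    {
        color base   = sample(texture1, uv);
        color detail = sample(texture2, uv * 8.0f);
        color c      = base * lightColor;
        c = c * (0.5f + 0.5f * detail.r);   /* an arbitrary blend the old
                                               fixed pipeline couldn't express */
        return c;
    }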

In the early days, "fairly arbitrary computations" were quite limited. Nowadays, not so much. These GPU programs, called shaders, commonly do things like video decompression and other sundry activities. Modern GPUs can even be used as something called a General-Purpose GPU (GPGPU), where people take advantage of the GPU's massive calculation performance to do work that would take a CPU much longer to do.

Difference between GPU and CPU

GPUs and CPUs are built around some of the same general components, but they're put together in very different ways. A chip only has a limited amount of space to put circuits on, and GPUs and CPUs use the available space in different ways. The differences can be briefly summarized as follows:

  • Execution units: These are the things that do things like add, multiply, and other actual "work". A GPU has dozens of times as many of these as a CPU, so it can do a great deal more total work than a CPU in a given amount of time, if there's enough work to do.
  • Control units: These are the things that read instructions and tell the execution units what to do. CPUs have many more of these than GPUs, so they can execute individual instruction streams in more complicated ways (out-of-order execution, speculative execution, etc.), leading to much greater performance for each individual instruction stream.
  • Storage: GPUs have vastly more fast storage (registers) than CPUs, but far less cache. This means that they're ridiculously fast on workloads that can fit in their registers, but if more data is required, latency will shoot through the roof.

In the end, CPUs can execute a wide variety of programs at acceptable speed. GPUs can execute some special types of programs far faster than a CPU, but anything else they will execute much slower, if they can execute it at all.

The Future

GPUs today can execute a lot of programs that formerly only CPUs could, but with radically different performance characteristics. A typical home GPU can run hundreds of threads at once, while a typical home CPU can run two to four. On the other hand, each GPU thread progresses far more slowly than a CPU thread. Thus if you have thousands of almost identical tasks you need to run at once, like many pixels in a graphical scene or many objects in a game with physics, a GPU might be able to do work a hundred times faster than a CPU. But if you only have a few things to do and they have to happen in sequence, a CPU-style architecture will give vastly better performance. As general-purpose GPU programming progresses, GPUs might get used for more and more things until they're nearly as indispensable as CPUs. (Or maybe not.)
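
A toy illustration of that difference, in plain C rather than real GPU code: brightening an image is thousands of identical, independent per-pixel tasks, exactly the shape of work a GPU thrives on, while a single CPU thread has to walk through them one at a time.

    #include <stdint.h>

    /* CPU style: one thread grinds through every pixel in sequence. */
    void brighten_cpu(uint8_t *pixels, int count, int amount)
    {
        for (int i = 0; i < count; i++)
            pixels[i] = (pixels[i] + amount > 255) ? 255 : (uint8_t)(pixels[i] + amount);
    }

    /* GPU style (conceptually): the same tiny body runs as thousands of
       threads at once, one per pixel, with i supplied by the hardware. */
    void brighten_one_pixel(uint8_t *pixels, int i, int amount)
    {
        pixels[i] = (pixels[i] + amount > 255) ? 255 : (uint8_t)(pixels[i] + amount);
    }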

Some notable GPUs over the years:


    1970s 
Fujitsu MB14241 (1975)

This was the first example of a video shifter chip, used by Taito and Midway to handle graphical tasks in early Arcade Games such as Gun Fight (1975) and Space Invaders (1978). Gun Fight was one of the first games to use sprites, while Space Invaders was the first to render up to 64 sprites on screen.


Motorola 6845 (1977)

Not actually a GPU on its own, but an important building block of one. The 6845 was a CRT controller, a chip that generated timing signals for the display and video memory. It was originally meant for dumb terminals (and has a few features that would have only made sense on a 1970s terminal, such as light-pen support), but since it didn't impose any restrictions on memory or resolution other than the size of its counters, it was very, very tweakable. This chip ended up being the basis of all of IBM's display adapters and their follow-ons and clones, and still exists in modern VGA cards in one form or another. It was also used by a few non-IBM machines such as the Commodore CBM range, Amstrad CPC and BBC Micro.


Motorola 6847 (1978)

Not based on the 6845, the 6847 was a "Video Display Generator" that could produce low-resolution graphics in up to 4 colors. Used in the TRS-80 Color Computer (which it may have originally been designed for), Acorn Atom and NEC PC-6001/NEC Trek. The 6847's interlaced output was designed for NTSC televisions, which for British companies like Acorn meant the inconvenience of adding an NTSC-to-PAL converter that produced graphical artifacts.


Namco Galaxian (1979)

The Namco Galaxian GPU chipset was produced by Namco for the popular Arcade Game Galaxian (1979). It introduced and popularized the use of tile-based rendering, and it was the first graphics chipset to use fully RGB color graphics, display 32 colors on screen, and render up to 64 multi-colored sprites on screen. From 1979 to 1982, the Namco Galaxian chipset was widely used in arcade games from Namco, Midway, Konami, Sega and Taito.


Nintendo Classic (1979)

An Arcade Game video board designed by Nintendo, originally for Radar Scope in 1979, and later better known for its use in the original Donkey Kong in 1981. It introduced a large RGB color palette, able to display up to 256 colors on screen out of a total palette of up to 768 colors. It also featured basic sprite-scaling capabilities. This was utilized to full effect in Radar Scope, which featured a pseudo-3D third-person perspective, with enemies zooming in as they approach the player, and had a background with a smooth gradient effect.


NEC µPD7220 GDC (1979)

The NEC µPD7220 GDC (Graphic Display Controller), designed by NEC, was one of the first implementations of a GPU as a single Large Scale Integration (LSI) integrated circuit chip. Ahead of its time, it became popular in the early 80's as the basis for early high-end computer graphics boards. It had its own instruction set and direct access to system memory, like a blitter. It was also capable of advanced graphical features such as horizontal scrolling and integer zooming. It also featured high display resolutions up to 1024x1024 pixels, and the color palette was also large for its time, able to display 16 colors on screen out of a total palette of up to 4096 colors. The GDC was first commercially released for NEC's own PC-98 computer in 1982, before becoming available for other computer platforms such as the Epson QX-10 and IBM PC. Intel licensed the NEC µPD7220 design and called it the 82720 graphics display controller, which was the first of what would become a long line of Intel GPUs.


Texas Instruments 9918 (1979) and 9928 (1981)

The first home GPUs to implement tile-based rendering, the 9918 and 9928 were originally designed for and introduced with TI's 99/4 home computer in 1979. It used a 16-color Y Pb Pr palette and could handle up to 32 sprites. While the 99/4 was a flop, the 9918 and 9928 were far more successful, being used in the Colecovision and Sega SG-1000 consoles and the MSX, Sord M5 and Tomy Tutor computers; the GPUs in the Sega Master System and Sega Genesis consoles are improved versions of the 99x8. Also, the NES's PPU is an improved (but incompatible) 9918 workalike, with support for a 256-color palette (some of which are duplicates, making the usable number on NTSC somewhere around 55) and more sprites available at once (though only at one size, 8×16).

The 9918 had its own VRAM bus, and the NES in particular wired the VRAM bus to the cartridge slot, making it so that the tiles were always available for use. The NES had only 2 kilobytes of VRAM; since the tiles were usually in the game cartridge's CHR-ROM, the VRAM was only needed for tile maps and sprite maps. Other consoles (including Sega's machines and the SNES) stuck with the more traditional 9918-esque setup and required all access to the GPU to go through the main bus; however, the SNES and the Genesis can use DMA for this, something the NES didn't have.


Atari ANTIC & CTIA (1979) and GTIA (1981)

The first programmable home-computer GPU. ANTIC was ahead of its time; it was a full microprocessor with its own instruction set and direct access to system memory, much like the blitter in the Amiga 6 years later (which, not coincidentally, was designed by the same person). By tweaking its "display list" or instruction queue, some very wild special effects were possible, including smooth animation and 3D effects. CTIA and GTIA provided color palettes up to 128 or 256 colors, respectively, a huge number for home systems at the time; the CTIA was initially able to display up to 5 colors on screen, while the GTIA was later able to display up to 16 colors on screen.

    1980s 
Namco Pac-Man (1980)

A follow-up to the Namco Galaxian chipset, Namco introduced the Namco Pac-Man GPU chipset for the Arcade Games Pac-Man and Rally-X. Consisting of two Namco NVC 293 Custom Video Shifter chips and a Namco VRAM Addresser, the Namco Pac-Man chipset introduced a palette of up to 512 colors, along with hardware support for multi-directional scrolling and a second tilemap layer.


IBM Monochrome Display Adapter and Color Graphics Adapter (1981)

These were the GPUs offered with the original IBM PC. The MDA quickly gained a very good reputation for its crisp, high-resolution text display, but lacked the ability to render graphics of any sort. For that you needed the CGA adapter, which was based on 1970s technology (the Motorola 6845 with some additional logic chips thrown in) and only supported four simultaneous colors in its low-resolution mode, chosen from two freaky, unnatural-looking color palettes (one of which didn't even have a true white), and couldn't display color at all in "high-resolution" mode. It was, however, capable of 16 colors in text mode. There was also a special hack that was technically a 16-color text mode, but shrunken down to an effective resolution of 160x100. Only one well-known game at the time used this mode: Round 42. A lesser-known shareware game in the 80s, a Breakout clone called Bricks, also used it, and another developer used it to create a Pac-Man clone in 2012. Not all clones of the CGA card supported this mode, however, and they display graphics erratically when it is invoked.

Due to the lack of colors available, hackers tried to work out tricks that would allow the card to generate more color than it was officially capable of. One of them was discovered soon after the card launched, but it only works with CGA cards hooked up to an NTSC color TV: the program takes advantage of artifacting to generate colors. By careful use of dithering in either CGA mode, the pixel patterns interfere with the NTSC color burst signal and produce extra colors. Notably, this was used by several Sierra adventure games like King's Quest. Another method, which works similarly to the Amiga's Hold-and-Modify mode, was discovered soon after: with proper timing, it is possible to switch palettes during the drawing of each line, allowing each individual line on the screen to have its own set of three colors. California Games from Epyx is one game known to use this trick. Both methods have flaws, however: the former only works with NTSC TVs, meaning that if you have a PAL TV or an RGB monitor instead, you're shafted, while the latter relies on the precise timing of the CPU, so if a faster CPU is used the method falls flat on its face and you get a flickering mess instead.

This ensured that the early PCs had a pretty dismal reputation for graphics work, and was a major factor in the Macintosh and the Amiga taking over that market. The CGA was superseded by the Enhanced Graphics Adapter (EGA) in 1984, and while that chip was at least usable for graphics work, it was still very limited (it was again based on the same Motorola 6845, but this time with a different, improved set of logic chips and its own memory instead of sharing memory with the CPU); it wouldn't be until 1987 that the PC finally got an affordable, world-class color GPU.

On a side note: an IBM Monochrome Display Adapter and a Color Graphics Adapter can co-exist on the same PC if you can work out the conflict (which is not really difficult, but it involves messing with the ANSI.SYS driver). DOS will boot onto the monochrome display, but then switch to the CGA display when a program requiring graphics is run. Notably, a program called SignMaster supports this configuration and allows you to edit your text on the monochrome screen while displaying the resulting sign on the CGA screen. This setup requires two displays, an IBM Monochrome Monitor and either an IBM RGB Monitor or a color TV, making it one of the earliest occurrences of a multi-display setup. The fact that little software supported the configuration (with some programs even refusing to run if two graphics cards were found in the system) ensured that it was only used in very niche markets.

Namco Pole Position (1982)

The Namco Pole Position video board's GPU chipset consists of the following graphics chips: three Namco 02xx Custom Shifter chips, two Namco 03xx Playfield Data Buffer chips, a Namco 04xx Address Buffer, and a Namco 09xx Sprite RAM Buffer. The video board also utilized an additional 16-bit Zilog Z8002 microprocessor dedicated to handling graphical tasks. Created for Namco's Pole Position Arcade Games, it was the most advanced GPU chipset of its time, capable of multi-directional scrolling, pseudo-3D sprite-scaling, two tilemap layers, and 3840 colors on screen.


Hercules Graphics Card (1982)

You may have worked out from the summary above that early PC users had something of a dilemma with their GPU card choices — either the "excellent text display but no real graphics" of the MDA or the "crappy text display but with some actual graphics (which are also horrible)" of the CGA. And setting up a PC with both an MDA and a CGA card installed could be a daunting task, not to mention taking up more space on your desk (since you would need two monitors, or a monitor and a TV) and leaving you with one less expansion slot in the PC. Hercules decided to Take a Third Option and produced their own graphics card, which emulated the MDA's text mode and added a high-resolution (720x348) monochrome graphics mode of its own, making it suitable for basic graphics work. This made the Hercules a hugely popular card, and the most popular solution for PC users for about five years. Hercules also later released an add-on CGA-compatible Hercules Color Card, which included circuitry that allowed it to coexist with the HGC.


IBM Video Gate Array / Tandy Graphics Adapter (1984)

An upgraded version of IBM's earlier CGA adapter, introduced with their PCjr line. It offered the same resolutions as its predecessor, but with an expanded and more versatile 16-color palette, as opposed to the two 4-color palettes on CGA. The PCjr ended up bombing due to being too expensive and making too many design compromises, and IBM never used the Video Gate Array again, but the standard was co-opted by Tandy in their Tandy 1000 line of computers, which ended up being a much bigger hit than the PCjr. Despite being originally created by IBM, the standard is usually referred to as the Tandy Graphics Adapter (TGA) to avoid confusion with IBM's later Video Graphics Array card.


Sega Super Scaler (1985)

Sega's Super Scaler series of Arcade Game graphics boards, and their custom GPU chipsets, were capable of the most advanced pseudo-3D sprite-scaling graphics of the era, not rivalled by any home systems up until the mid-90's. These allowed Sega to develop pseudo-3D third-person games that worked in a similar manner to textured polygonal 3D graphics, laying the foundations for the game industry's transition to 3D.

The initial Super Scaler chipset for Hang-On in 1985 was able to display 6144 colors on screen out of a 32,768 color palette, and render 128 sprites on screen. Later that year, Space Harrier improved it further, with shadow & highlight capabilities tripling the palette to 98,304 colors, and enough graphical processing power to scale thousands of sprites per second. The Sega X Board for After Burner later added rotation capabilities, and could display up to 24,576 colors on screen. The last major Super Scaler graphics board for the Sega System Multi 32 in 1992 could render up to 8192 sprites on screen, and display up to 196,608 colors on screen.


Yamaha V9938/V9958 AKA MSX Video (1985)

An improved follow-on of the aforementioned TI 99x8, this GPU (actually Yamaha called it a VDP, or Video Display Processor) was probably the pinnacle of hybrid 2D chips that handled both tile and framebuffer graphics. Allowing the high (at the time) 512x212/424 resolution and up to 256 colors (indexed mode supported 16 out of 512 colors, and the V9958 allowed up to 19,268 colors in some circumstances), it also had hardware scrolling capabilities, a programmable blitter, and accelerated line draw/area fill, giving the MSX2 computer, for which it was developed, the prettiest graphics of all 8-bit machines, superior to VGA-based PCs at the time and rivalling the Amiga. Unfortunately, it wasn't much used outside of the MSX platform and faded away with its demise in the early 90's.


IBM Enhanced Graphics Adapter (1985)

Not wanting to be outdone by the Tandy 1000's graphics capabilities — least of all with a graphics standard that IBM themselves had created — IBM released an updated graphics standard for PC/AT class computers. The EGA easily outgunned the old CGA standard, offering nearly double the resolution and a massively expanded color range, although it was quickly equalled by the cheaper Atari ST and then surpassed by the Amiga. Most EGA hardware was, for the most part, backwards compatible with CGA and MDA modes.


IBM Professional Graphics Controller (1985)

It was more similar to a modern-day GPU than to a simple display adapter, supporting full 2D hardware acceleration and high display resolutions (up to 640x480 pixels, and up to 256 colors on screen out of a 4096-color palette) that wouldn't be equaled on an IBM-compatible PC until the arrival of the Hitachi HD63484 ACRTC the following year and the Super VGA standard four years later. In addition, it effectively included a full computer on the card (including an Intel 8088 CPU), making it fully programmable in a way that most other cards wouldn't be until the turn of the century. Unfortunately, there was one very big flaw — the combined cost of the card and the proprietary IBM monitor that you had to use was $7,000, almost four times the cost of the Commodore Amiga that debuted a couple of months later. You can probably guess which of the two became the must-have solution for graphics professionals.


Commodore Denise (OCS) and Super Denise (ECS) (1985)

The GPUs of Commodore's Amiga OCS chipsets, designed by the late Jay Miner (who oversaw the design of the ANTIC, GTIA and CTIA of the Atari computer graphics subsystem mentioned above). Capable of 320x240@p60 or 320x256@p50 up to 640x480@i60 and 640x512@i50 with up to a whopping 4096 colors via Hold-And-Modify mode, the OCS chipset was good value for money and offered high quality graphics at a relatively cheap price for its time.

The Denise was used in the Amiga 500, 1000, 2000 and CDTV console.

In 1990, the OCS was given a minor upgrade called ECS, and the Super Denise replaced the regular Denise. Super Denise allowed for 640x480@p50 or p60 (otherwise known as VGA) as well as Super-HiRes 1280x200@p60, 1280x256@p50, 1280x400@i60 and 1280x512@i50 resolutions, albeit with only 4 colors. Unlike OCS (which had separate parts that only supported 50Hz or 60Hz resolutions), ECS supported both PAL and NTSC resolutions. It also had a mode that allowed for user-definable resolutions.

The Super Denise was used in the Amiga 500+, Amiga 600, the revised Amiga 2000, and Amiga 3000.


Hitachi HD63484 ACRTC (1986)

The Hitachi HD63484 ACRTC (Advanced CRT Controller) was the most advanced GPU available for an IBM-compatible PC in its time. Its display resolution went up to 1024x768 pixels, and it could display up to 256 colors on screen out of a palette of 16,777,216 colors. It also had advanced features such as a blitter, hardware scrolling in horizontal and vertical directions, and integer zooming.


Sharp-Hudson X68000 Chipset (1987)

The Sharp X68000 GPU chipset consists of four graphics chips developed by Sharp and Hudson Soft: Cynthia (sprite controller), Vinas (CRT controller), Reserve (video data selector), and VSOP (video controller). It was the most advanced GPU available on a home system in the 80's, and remained the most powerful home computer GPU well into the early 90's. Its resolution could go up to 1024x1024 pixels, and it could display up to 65,536 colors on screen. It could also render 128 sprites on screen, and display 2 tilemap layers and up to 4 bitmap layers. Because of its power, the X68000 received almost arcade-quality ports of many Arcade Games, and it even served as a development machine for Capcom's CPS arcade system in the late 80's to early 90's.


IBM Video Graphics Array (1987)

Introduced with the IBM PS/2 line, the VGA finally offered the IBM-compatible PC graphics abilities that outshone all its immediate competitors. Many noted that it would probably have been a Killer App for the PS/2, had it not been for the Micro-Channel Architecture fiasco (more on that elsewhere). In any case, this established most of the standards that are still used for graphics cards to this day, offering a then-massive 640x480 resolution with 16 colors, or 320x200 with an also-massive 256 colors. Subsequent cards included the Super VGA and XGA cards, which extended the maximum resolutions to 800x600 and 1024x768 respectively, but they are generally considered to fall under the VGA umbrella. The VGA also had a cheaper sister card, the Multi-Color Graphics Array, which was heavily stripped down and much nearer EGA in terms of specs, but compatible with VGA monitors.


IBM 8514/A (1987)

For all intents and purposes this was the successor of the Professional Graphics Controller, though without the absurd price tag and requirement to use a proprietary monitor. Specs-wise it was mostly equivalent to the VGA, but added in 256-color modes at 640x480 and 1024x768, though the latter mode was heavily compromised, requiring a special monitor and only running at an interlaced 43hz. Most importantly, the card retained the PGC's 2D hardware acceleration, and while it was still much more expensive than a VGA card, it made 2D acceleration affordable enough that graphic design apps began making widespread use of it. Since IBM's version was limited to their short-lived MCA bus, several other manufacturers began producing low-cost clones that used the ISA bus; in particular, one of ATI's first big successes was a cheaper incarnation of this card.


Texas Instruments TMS34010 (1987)

The first fully programmable graphics accelerator chip, the TMS34010 (followed in 1988 by the TMS34020) was featured in some workstations, the "TIGA" (Texas Instruments Graphics Architecture) PC cards of the early 1990s, and all of Atari and Microprose's early 3D Arcade Games; Midway Games actually used it as a CPU in many of their arcade games.


    1990s 
S3 86C911 (1991)

One of the most popular 2D accelerators, and the one that made them matter. Originally, graphics chips on PCs were limited to either a higher-resolution text mode or a lower-resolution graphics mode. A 2D accelerator, however, offered a graphics mode in which the hardware itself could manipulate any arbitrary 2D image you wanted. The chip was named after the Porsche 911, and it delivered on its claims so well that by the mid-90s every PC made had a 2D accelerator. This chipset also received many revisions, the most famous and widespread being the Trio series. It's also so well documented that many PC emulators at least emulate this chipset for 2D video.


Commodore Lisa (AGA) (1992)

The Lisa GPU replaced the Super Denise, and marked the start of Commodore's technological slip. While the chipset had much improved resolutions (up to 1440x560@i50 with 18-bit color via HAM8 mode, but a full 16.7-million-color palette), it was slow as it only ran on a 16-bit bus, it lacked chunky graphics mode support (which caused slowdowns in some programs), and the only modes that supported progressive scan were the 240-line, 256-line and 480-line modes. While Commodore was developing several chipsets to address the shortcomings of the AGA, it went bankrupt before any of the ideas could be put into production.

This chipset was used in the Amiga 1200, 4000 and the ill-fated CD 32 console.


Fujitsu TGP MB86234 (1993)

The Sega Model 2 Arcade Game system was powered by the Fujitsu TGP MB86234 chipset. This gave Sega's Model 2 the most advanced 3D graphics of its time. It was the first gaming system to feature filtered, texture-mapped polygons, and it also introduced other advanced features such as anti-aliasing as well as T&L effects such as diffuse lighting and specular reflection. With all effects turned on, the Model 2 was capable of rendering 300,000 quad polygons per second, or 600,000 triangular polygons per second. Its 3D graphical capabilities were not surpassed by any gaming system until the release of its own arcade successor, the Sega Model 3, in 1996.


nVidia NV1 (1995)

One of the earliest 3D accelerators on the PC, manufactured and sold as the Diamond Edge 3D expansion card. It was actually more than just a graphics card - it also provided sound, as well as a port for a Sega Saturn controller. It was revolutionary at the time, but very difficult to program due to its use of quadrilateral rendering (which the Sega Saturn's GPU also used) instead of the triangles (polygons) used today. It was also very expensive and had poor sound quality. It was eventually killed when Microsoft released DirectX and standardized polygon rendering. It was supposed to be followed by the NV2, which never made it past the design stage.


S3 ViRGE (1995)

The first 3D accelerator chip to achieve any real popularity. While it could easily outmatch CPU-based renderers in simple scenes, usage of things like texture filtering and lighting effects would cause its performance to plummet to the point where it delivered far worse performance than the CPU alone would manage. This led to it being scornfully referred to as a "3D decelerator," though it did help get the market going.


Rendition Vérité V1000 (1995)

This chip included hardware geometry setup years before the GeForce. While this was used as its main marketing point against 3dfx's Voodoo chip, its performance was fairly abysmal when it came to games. It was still the only 3D accelerator supported by a few games released at the time, like Indy Car Racing II (though its original release required a patch). It was used mostly in workstation computers.


NEC-VideoLogic PowerVR (1996)

The PowerVR GPU chipset developed by NEC and VideoLogic was the most advanced PC GPU when it was revealed in early 1996, surpassing the PlayStation and approaching near arcade-quality 3D graphics. It was showcased with a demo of Namco's 1995 Arcade Game Rave Racer, a port of which was running on a PowerVR-powered PC, with graphics approaching the quality of the arcade original (though at half the frame rate). It would not be rivalled by any other PC graphics cards until the release of the 3dfx Voodoo later in the year.


Real3D PRO-1000 (1996)

The graphics chipset that powered the Sega Model 3 Arcade Game system, released in 1996, this was the most powerful GPU for several years. Its advanced features included trilinear filtering, trilinear mipmapping, high-specular Gouraud shading, multi-layered anti-aliasing, alpha blending, perspective texture mapping, trilinear interpolation, micro texture shading, and T&L lighting effects. With all effects turned on, it was able to render over 1 million quad polygons/sec, over 2 million triangular polygons/sec, and 60 million pixels/sec.

A cheaper consumer version called the Real3D 100 was planned for PC, and was the most powerful PC GPU when it was demonstrated in late 1996. However, after many delays, it was cancelled. Later in 1998, a cheaper stripped-down version was released as the Intel i740 (see below).


Matrox Mystique (1996)

Matrox's first venture into the 3D graphics market. It was notable for being bundled with a version of MechWarrior 2 that utilized the card. It did not perform well against the 3dfx Voodoo, however, and lacked a number of features the competition had. Matrox tried again with the slightly more powerful Mystique 220, but to no avail. The product line was soon after labelled the "Matrox Mystake" due to its dismal performance.


3dfx Voodoo (1996)

The chip that allowed 3D accelerators to really take off. Its world-class performance at a decent price helped ensure that it ascended to market dominance. However, it lacked a 2D graphics chip, meaning one had to have a separate card for essentially any non-video game task. Fortunately most users already had a graphics card of some kind, so this wasn't a major issue, and 3dfx cleverly turned it to their advantage in their marketing campaign by pointing out that the Voodoo wouldn't require users to throw their old graphics cards away. 3dfx did produce a single-card version called the Voodoo Rush, but it used a rather cheap and nasty 2D GPU which had mediocre visual quality and actually prevented the Voodoo chipset from working optimally. As a result, they stuck to making 3D-only boards at the high end until 1999's Voodoo3.


SGI Reality Co-Processor (1996)

Developed for the Nintendo 64, what this GPU brought to home video gaming was anti-aliasing and trilinear mipmapping (which helped textures look less pixelated up close). Mipmapping wasn't a standard feature for PCs until 1998, and anti-aliasing didn't show up until 2000, though it remained impractical until 2002.


Fujitsu TGPx4 MB86235 (1996) and FXG-1 Pinolite MB86242 (1997)

The Fujitsu TGPx4 MB86235 was used in Sega's Model 2C CRX Arcade Game system in 1996, introducing true T&L geometry processing. It was then adapted for PC as the FXG-1 Pinolite MB86242 geometry processor in 1997, pioneering consumer hardware support for T&L, making near arcade-quality 3D graphics possible on a PC.

Rendition soon utilized the Fujitsu FXG-1 for their Hercules Thriller Conspiracy, which was to be the first consumer GPU graphics card featuring T&L, but its release was eventually cancelled. This was years before the GeForce 256, released in 1999, popularized T&L among PC graphics cards.


nVidia Riva 128 (1997)

nVidia's first real major foray into the 3D graphics market, and the chip that helped establish nVidia as a major 3D chipmaker. It was one of the first GPUs to fully utilize DirectX. The Riva 128 performed fairly well, but not well enough to really challenge the 3dfx Voodoo. An improved version, the Riva 128 ZX, released the following year, had more onboard memory, a faster memory clock, and full OpenGL support.


3dfx Voodoo2 (1998)

Aside from introducing dual-texturing (which would become multi-texturing), Voodoo2 also introduced the idea of putting two cards together to share the rendering workload. While it was termed SLI, the abbreviation stood for scan-line interleave. nVidia revived the trademark later. It also had a derivative where two of the GPUs were mashed into one unit. Like the first Voodoo, 3dfx created a single card solution, derived from the Voodoo2 and known as the Banshee. This one had the 2D and 3D chips combined into one unit and was actually somewhat successful in the OEM market; while it didn't catch on in the high-end, it provided the basis for the following year's Voodoo3.


Intel i740 (1998)

Intel first introduced this card in order to promote its new Accelerated Graphics Port (AGP) slot. It was based on the 1996 chipset Real3D 100 (see above), but was stripped down considerably. The Intel i740's 3D graphics performance was weaker than the competition's, and it was quickly shelved months later. Oddly enough, its 2D performance was actually very good, albeit not up to the standard of the Matrox G200, the 2D market leader at the time. It did, however, give Intel enough experience to incorporate graphics into its chipsets as integrated graphics, notably the GMA series.


NEC-VideoLogic PowerVR Series 2 (1998)

The GPU that drove the Dreamcast. What set it apart from the others was that it used PowerVR's tile-based rendering and a different approach to pixel shading. Tile-based rendering works on only a small subset of the 3D scene at a time, which allows it to make do with narrower memory buses. It also sorted the depth of polygons first and then colored them, rather than coloring them and figuring out the depth later. Though the Dreamcast failed, and PowerVR stopped making PC cards as well, this card is notable in that it showed PowerVR that the true strength of their tile-rendering hardware was in embedded systems. This led to their Series 4 and 5 lines (see below). One notable feature it had, thanks to its tile-based renderer, was Order-Independent Transparency, which allows transparent objects to move in front of each other and still look realistic. DirectX 11, released in 2009, made this feature a standard.
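
For a very loose idea of what "tile-based" means in practice, here is some C-style pseudocode; the tile size, function names and data structures are all invented, and real PowerVR hardware is far more sophisticated than this:

    /* The screen is carved into small tiles. Each tile resolves depth first,
       entirely in fast on-chip memory, and only then colors the pixels that
       actually won, so hidden surfaces never get shaded at all. */
    for (int ty = 0; ty < screen_h; ty += TILE_SIZE) {
        for (int tx = 0; tx < screen_w; tx += TILE_SIZE) {
            clear_tile_depth_buffer();
            for (int i = 0; i < num_triangles_touching_tile(tx, ty); i++)
                rasterize_depth_only(triangle_in_tile(tx, ty, i));
            shade_visible_pixels_in_tile(tx, ty);
        }
    }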


ATI Rage 128 (1998)

While ATI were very successful in the OEM market for most of the 1990s, most enthusiasts didn't give them a second thought. The Rage 128 started to change things, by being the first chip to support 32-bit color in both desktop and games, along with hardware DVD decoding. On top of that, ATI offered a variant called the "All-in-Wonder" which integrated a TV tuner. The Rage 128 quickly became popular as a good all-round solution for multimedia-centric users, but its rather average gaming performance (and ATI's then-notoriously bad drivers) meant most high-end gamers looked elsewhere.


S3 Savage (1998)

S3's major attempt at breaking into the 3D graphics market. This chip introduced the forgotten MeTal API, but is mostly remembered for the now industry-standard S3 Texture Compression (S3TC) algorithm, which allowed very large and highly detailed textures to be rendered even with relatively little video RAM on board. The Savage itself, however, suffered from poor chip yields and buggy drivers, so it never really took off.


3dfx Voodoo3 (1999)

3dfx's first "performance" graphics card to contain a 2D graphics core, and actually more the successor to the Banshee than the Voodoo2. Despite shaky DirectX support and a lack of 32-bit color, it was still the fastest card around (the slowest version being as fast as two Voodoo2s in SLI) when it was released. However, this coincided with 3dfx making what many consider to be the mistake that sent the company down in flames: buying out board manufacturer STB and making all their own Voodoo-based graphics cards in-house. Combined with ATI and Matrox doing the same and most of the other GPU makers going bust at this time, this basically handed nVidia a monopoly on supplying the other graphics card makers and allowed them to utterly dominate the market for the next four years.


Matrox G400 (1999)

The first graphics chip which could natively drive two monitors. On top of that the G400 MAX version was the most powerful graphics card around on its release, meaning you could game across two monitors with just one card, something that neither ATi nor nVidia came up with for several more years. Unfortunately, management wrangles meant that Matrox's subsequent product releases were botched, and nothing ever came from the G400's strong positioning.


nVidia GeForce 256 (1999)

The first graphics processing unit to carry the term "GPU". What differentiated this from most other graphics processors was the inclusion of transform (placing polygons) and lighting in the hardware itself; with prior hardware, the CPU normally handled those. The GeForce 2, an evolution of the 256, introduced two other kinds of graphics cards: budget (MX series) and professional (Pro series). The impact of the GeForce 256 was so great that many games up until 2006 only required its successor, the GeForce 2.

    2000s 
nVidia GeForce 2 (2000)

The baseline GeForce 2 GTS was largely the same as its predecessor, except clocked a lot higher and with the addition of a second texturing unit, allowing it to easily defeat anything else on the market at the time. Even higher-clocked versions, the GeForce 2 Pro and Ultra were introduced later in the year. However, easily the most influential model from this family was the GeForce 2 MX, which offered not only respectable performance and full T&L functionality, but dual-monitor support and even better multimedia functionality than the GTS and its sister chips. This made the MX a massive success, and the best-selling GPU ever made until that point.


3dfx Voodoo5 (2000)

What was notable about this video card was that in a desperate attempt to take the crown, 3dfx wanted this GPU to be scalable. Graphics cards sporting two GPUs were released, and it almost culminated in a four-GPU card, the Voodoo5 6000, but the company folded and was acquired by Nvidia before that happened. One major factor that contributed to this demise was the sudden rise in the prices of RAM chips, and the Voodoo5 needed LOTS of it (because each GPU needed its own bank of RAM; they couldn't share it), which would have driven production costs through the roof. On a more positive note, the Voodoo5 was the first PC GPU to support anti-aliasing, though enabling it resulted in a major performance hit. A single-GPU version called the Voodoo4 was also released, but got steamrollered by the equally-priced, far more powerful GeForce 2 MX that came out at the same time.


ATi Radeon (2000)

The Radeon was ATI's first real contender for a well performing 3D graphics chip, and basically started the road that put ATI in the place of 3dfx as nVidia's main rival. Notably, it was the first GPU to compress all textures to make better use of memory bandwidth, and also optimized its rendering speed by only rendering pixels that were visible to the player. Despite being much more technologically elegant than the GeForce 2, it lacked the brute force of its rival, and only outperformed it at high resolutions where neither chip really delivered acceptable performance.


ATi Flipper (2001)

This was the GPU for the Nintendo GameCube, and was superficially similar to the GeForce 2 and the original Radeon. Where it stood out was that it integrated a megabyte of RAM into the actual graphics processor itself, making it possible to carry out low levels of Anti-Aliasing with virtually zero performance loss. The Flipper itself was recycled and slightly upgraded to serve in the GameCube's successor, the Wii.


nVidia GeForce 3 (2001)

As mentioned further up the page, this was the point where full vertex and pixel shaders were introduced, in line with the Direct3D 8 standard. However, there were a couple of other new features introduced with this line, including more efficient anti-aliasing routines which made use of edge sampling and blur filters in order to make the performance loss a little less crippling. There was no real "budget" version of this family (the GeForce 2 MX continued to fill that role), save for the later Ti200 model, which was clocked lower than the original model but retained all its functionality and proved very popular among enthusiasts. An even faster version, the Ti500, was also released at the same time.


nVidia nForce (2001)

While not exactly a GPU, the nForce was a brand of motherboard chipsets with one of NVIDIA's GPUs integrated in it. This meant that motherboards with integrated graphics no longer had abysmal performance in 3D applications. People who wanted to build entry-level computers with some graphical capabilities could now do it without a discrete graphics card. The nForce was exclusive to AMD processors until about 2005, when they started making Intel versions as well.

Eventually the nForce became a motherboard chipset without graphics, hosting NVIDIA-exclusive features such as SLI. When they brought a GPU back into the chipset, they also included a new feature known as HybridPower, which let the computer switch to the integrated graphics when not doing a 3D-intensive task in order to save power. Eventually, NVIDIA dropped out of the chipset business (Intel withdrew permission for NVIDIA to make chipsets for their CPUs, while AMD bought out rival ATI and co-opted their chipset line, resulting in NVIDIA quickly being abandoned by AMD enthusiasts), but most of their features were licensed or revamped. SLI is now a standard feature in Intel's Z68 chipset, and HybridPower lives on in laptops as Optimus, though rumor has it that NVIDIA is making a desktop version called Synergy.


ATi Radeon 8500 (2001)

The first product by ATi which made a real stab at the performance lead, which it actually held for a few weeks before the much pricier GeForce 3 Ti500 arrived on the scene. It extended the Direct3D functionality to version 8.1, boasted better image quality and texture filtering, and improved multi-monitor support. Only a few weeks after it launched, however, it was discovered that the chip was using a driver cheat to post better benchmarks than the GeForce 3 in Quake III: Arena (ATi's performance in OpenGL-based games had historically lagged quite a bit behind that of nVidia), which turned enthusiasts against the chip and ruined any chance of it outselling the GeForce 3 Ti200. Once the initial Internet Backdraft had calmed down, however, it did gain some popularity thanks to its reasonable price and improving drivers, and a cut-down version of the core was later released as the cheaper Radeon 9000.


Matrox Parhelia (2002)

Matrox's last major design. Like the G400 before it, the Parhelia included a lot of new features, including a 256-bit memory bus (which allowed Anti-Aliasing with a much smaller performance loss), quad-texturing (which was nice in theory, but completely pointless in execution since games were now using pixel shaders to simulate the effect of multi-layered textures) and support for gaming across three displays. While the Parhelia should have been more than a match for the competing GeForce 4 Ti series, Matrox made a serious blunder and neglected to include any sort of memory optimization system, crippling the Parhelia to the point where it struggled to match the GeForce 3. ATi and nVidia's subsequent efforts would implement the Parhelia's features more competently, and Matrox quickly dropped away to being a fringe player in the market. They released a few modified versions of the Parhelia (adding in support for PCI-Express and wide ranges of display connectivity) over the next decade, before announcing in 2014 that they were pulling the plug on their GPU line and would become a graphics card manufacturer instead.


nVidia GeForce 4 (2002)

Came in two versions; the higher-end "Ti" family was basically the same chip as the GeForce 3, but with higher clocks, better memory controllers and dual-monitor gaming support. Again, it was the lower-end MX chip that made the headlines... but for the wrong reasons this time, as they were in fact just heavily overclocked GeForce 2 MXes, with Direct3D functionality a generation-and-a-half behind the Ti models. While they still sold a lot of MX chips, nVidia took a lot of heat over this, and so ensured that all future budget chips would retain the same functionality as their higher-end counterparts.


ATi Radeon 9700 (2002)

What was actually stunning about this graphics card was that it supported the new DirectX 9.0 before it was officially released. But not only that: due to nVidia making a critical error (see below), it was a Curb-Stomp Battle against the GeForce FX in any game using DirectX 9 (in particular, Half-Life 2). Moreover, it still offered exceptional performance in older games, thanks to ATi ditching the until-then popular multitexture designs in favor of a large array of single-texturing processors, which generally offered better performance and remains the standard approach for GPUs to this day. Sweetening the deal was a much more competent execution of the 256-bit memory bus that Matrox had attempted with the Parhelia, finally making Anti-Aliasing a commonly used feature. This cemented ATi as a serious competitor in graphics processors. A slightly revised version, the Radeon 9800, was released the following year, and ATi also started selling their graphics chips to third-party board makers, ending nVidia's monopoly on that front.


nVidia GeForce FX (2003)

After an unimpressive launch with the overheating, under-performing FX 5800 model, the succeeding FX 5900 was on par with ATi's cards in DirectX 7 and 8 games, but nVidia made some ill-advised decisions in implementing the shader processor across the series. Direct3D 9 required a minimum of 24-bit accuracy in computations, but nVidia's design was optimized around 16-bit math. It could do 32-bit, but only at half performance. nVidia had assumed that developers would write code specifically for their hardware. They didn't, and it resulted in the card performing barely half as well as the competing Radeons in Half-Life 2 and Far Cry.

The aforementioned FX 5800 introduced the idea of GPU coolers which took up a whole expansion slot all by themselves, which is now standard in anything higher than an entry level card. Unfortunately, nVidia got the execution of that wrong as well, using an undersized fan which constantly ran at full speed and made the card ridiculously loud. This eventually gave way to a more reasonable cooler in the FX 5900, and some fondly remembered Self-Deprecation videos from nVidia. In a bit of irony, the GeForce FX was developed by the team that came from 3dfx, whom nVidia bought a few years earlier.


XGI Volari (2004)

Notable mostly for being the last GPU which made a serious attempt to take on ATi and nVidia at the high-end, it followed in the footsteps of the Voodoo5 by offering two graphics chips on one board. Their top-end SKU was targeted at the GeForce FX 5900 and Radeon 9800, with the same range of Direct3D 9 functionality. Unfortunately, any chance XGI had of establishing themselves as serious competitors was dashed within a few weeks, when it ran into its own version of the driver scandal that had engulfed the Radeon 8500, only turned Up to Eleven. With XGI's driver hacks the chip was somewhat competitive with the FX 5900 and Radeon 9800, albeit with noticeably worse graphics quality, and without the hacks its performance was pretty dismal.


nVidia GeForce 6800 (2004)

Learning from its mistakes, nVidia this time released a series built for the then-latest DirectX 9.0c standard, ensuring that it was more than capable of meeting the standard and turning the tables on ATi for a while, as the GeForce 6800 Ultra had double the performance of the Radeon 9800 XT at the same $500 price point. This GPU also marked the re-introduction of the idea that GPUs could be linked up much like 3dfx's SLI, this time as Scalable Link Interface, alongside the introduction of the PCI-Express interface to displace the existing Accelerated Graphics Port interface. ATi countered with CrossFire; while it offered more flexibility than SLI (which only worked with identical GPUs at the time), it was clunky to set up, as one needed to buy a master CrossFire card and a funky dongle to connect to the other card, and it only offered a very limited set of resolutions (they subsequently implemented CrossFire much better in the following year's Radeon X1900 family).

On the other hand, setting up SLI was clunky as well at that time: it required that the owner have an nForce chipset motherboard with two PCI-Express slots. In a bid to plug their own chipset, Nvidia crippled the driver so that the SLI option was not offered if the motherboard wasn't using an nForce chipset. ATi used this restriction to their marketing advantage by not crippling their drivers and claiming that CrossFire would work on any motherboard with two PCI-Express slots. Nvidia responded by easing the restriction, allowing other chipset manufacturers to license SLI support, and then removing the SLI restriction altogether when they left the chipset business at the turn of the decade.


S3 Chrome (2004)

This was basically S3 waking up, looking around, realizing they'd slipped to last place, and shouting We're Still Relevant, Dammit! after being sold to VIA and sleeping for four years: S3 released this evolution of the Savage line in 2004 to the sound of crickets chirping. It was never very popular outside of the budget PC market. But damn, does S3 care. This was followed the next year by cards supporting MultiChrome, S3's take on SLI and CrossFire, but one requiring no special chipset or master and slave cards: you could take any two or more identical Chrome cards, put them in any motherboard with two or more PCI-Express slots, and it would do MultiChrome. Naturally, ATi cards could do the same thing once the Radeon X1900 series came out.

S3 also introduced AcceleRAM, their take on Nvidia's TurboCache and ATi's HyperMemory. With TurboCache and HyperMemory both being flops due to the performance penalties they placed on the GPU, copying the idea didn't win S3 any additional favors.


ATi Xenos (2005)

The GPU that powers the Xbox 360. What set this apart from its competitors was the unified shader architecture. Traditionally, vertex and pixel shading work is done on separate, fixed pipelines. Unified shading allows the GPU to assign any number of pipelines to do pixel or geometry work; thus if one part of a scene needs more pixel output, the GPU allocates more pipelines to it, and vice versa for geometry work. This let it use the same number of transistors more efficiently.


nVidia GeForce 7950GX2 (2006)

Refining their SLI technology, nVidia took their second-best GPU, built two graphics boards around it, and stacked them together, essentially creating a dual-GPU solution that fits in a single PCI Express slot. Then they took it Up to Eleven by allowing two of these cards to work in SLI, creating quad-SLI (although this wasn't officially supported at first). Since then, both nVidia and ATi have introduced at least one graphics card of this type in subsequent generations. It should be noted, though, that dual-GPU versions of the Voodoo2, the ATi RAGE 128 and even nVidia's own 6600 GT existed earlier, but most of these (the ATi card excepted) were built by the board makers, not the chip makers.


nVidia GeForce 8800 and ATi Radeon HD 2900 (2006-2007)

This is where the unified shader pipeline really took off. These two graphics processors also took another approach: instead of using vector units, they incorporated large numbers of scalar processors. A vector unit can work on several pieces of data at once, while a scalar processor only works on one datum, but scalar processors are much simpler, so far more of them can be packed onto a chip. It also turned out that scalar processors are more efficient than vector ones, as they can reach near-100% utilization on heavy workloads.

Since scalar processors are essentially agnostic about the work they perform (you just throw them some data and an operation; they don't care what it's for), this led to another kind of "shader", the compute shader. It handles work that doesn't make sense to do in the vertex, geometry, or pixel shader, such as physics simulations. This in turn gave rise to general-purpose GPU (GPGPU) computing, with both NVIDIA and ATi making graphics cards purely for computational work. GPUs now provide so much of the raw computation in the supercomputer market that a machine with supercomputer-class number-crunching can be put together for around $2,000.

It should be noted, however, that GPUs are only good for raw computation; they are not very good at making decisions (i.e., branch-heavy logic and control flow), which remains the CPU's job.
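
To give a rough idea of what "compute" work looks like, here is a minimal, purely illustrative sketch in CUDA, the GPGPU programming model NVIDIA introduced alongside the GeForce 8 series. The kernel, its name and the numbers are invented for this example: each scalar thread handles one particle of a trivial physics-style update.

    // Minimal GPGPU sketch in CUDA; names and values are purely illustrative.
    // Each thread acts as a scalar "shader" doing one piece of a physics-style update.
    #include <cuda_runtime.h>

    __global__ void integrate(float *pos, const float *vel, float dt, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per particle
        if (i < n)
            pos[i] += vel[i] * dt;  // the same simple operation across tens of thousands of data items
    }

    int main() {
        const int n = 1 << 16;
        float *pos, *vel;
        cudaMalloc(&pos, n * sizeof(float));
        cudaMalloc(&vel, n * sizeof(float));
        // A real program would upload initial positions/velocities with cudaMemcpy here.
        integrate<<<(n + 255) / 256, 256>>>(pos, vel, 0.016f, n);
        cudaDeviceSynchronize();
        cudaFree(pos);
        cudaFree(vel);
        return 0;
    }

Note that there isn't a single interesting branch in the kernel: threads execute in lockstep groups, so code that constantly makes different decisions per thread wastes most of the hardware, which is exactly why the control-heavy parts of a program stay on the CPU.
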
nVidia GeForce 8800 GT (2007)

Released out of the blue as a refresh of the GeForce 8 series, this card's claim to fame is that it performed almost as well as the high-end 8800 GTX while costing half as much: $250 at the time. And, a rarity for high-end cards of the era, it occupied only one PCI Express slot. In a one-two punch, an upgraded version was released for $300 a few weeks later. This started the trend of the "sweet spot" card, where $300 would get you really good performance. AMD countered with the Radeon HD 3850, which had all the charms of the 8800 GT.

The 8800 GT was such a good GPU design, at least in NVIDIA's eyes, that it was reused across three generations of graphics cards, and it is one of the last video cards to be remembered so fondly.


Intel Larrabee (2008)

In 2008, Intel announced they would try their hand at the dedicated graphics market once more, with a radical approach. Traditionally, lighting is approximated with shading calculations run on each pixel; Intel's approach was to use ray tracing, which at the time was hugely expensive to do in real time. The hardware design took the old Pentium architecture, shrank it down to modern process sizes, modified it with graphics-related instructions, and replicated it many times across the chip. A special version of Enemy Territory: Quake Wars was used to demonstrate it. The project was axed in late 2009.

nVidia later tried their hand at "real time" ray tracing on the GeForce GTX 480, using a proprietary API.
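
To give a rough sense of why real-time ray tracing was considered so expensive, here is a minimal, purely illustrative sketch of the per-pixel work involved. It is written as a CUDA kernel only to stay consistent with the other sketches on this page (Larrabee itself was a many-core x86 design), the scene is reduced to a single sphere, and every name and number is an assumption for illustration, not Intel's or nVidia's actual code.

    // Purely illustrative per-pixel ray tracing sketch. Every pixel fires a ray
    // and intersects it with the scene, here reduced to one sphere.
    #include <cuda_runtime.h>

    __global__ void trace(unsigned char *image, int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        // Ray from a pinhole camera at the origin, through this pixel.
        float dx = (x - width  * 0.5f) / width;
        float dy = (y - height * 0.5f) / height;
        float dz = 1.0f;

        // Intersect with a sphere at (0, 0, 3), radius 1: solve |t*d - c|^2 = r^2 for t.
        float cx = 0.0f, cy = 0.0f, cz = 3.0f, r = 1.0f;
        float a = dx * dx + dy * dy + dz * dz;
        float b = -2.0f * (dx * cx + dy * cy + dz * cz);
        float c = cx * cx + cy * cy + cz * cz - r * r;
        float disc = b * b - 4.0f * a * c;

        // Hit: shade white; miss: shade black.
        image[y * width + x] = (disc >= 0.0f) ? 255 : 0;
    }

    int main() {
        const int width = 640, height = 480;
        unsigned char *image;
        cudaMalloc(&image, width * height);

        dim3 block(16, 16);
        dim3 grid((width + 15) / 16, (height + 15) / 16);
        trace<<<grid, block>>>(image, width, height);
        cudaDeviceSynchronize();

        cudaFree(image);
        return 0;
    }

A real renderer intersects each ray against millions of triangles and then recursively spawns more rays for shadows and reflections, which is why doing this at playable framerates was such a tall order in 2008.
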


Intel GMA X3000/X3100 (2007)

It's not a very good performer, as expected of integrated graphics, but it's notable in that it took Intel eight years for their integrated graphics line to gain hardware transform & lighting, a critical DirectX 7 feature and the defining feature of the GeForce 256 mentioned above. Now they can at least run older games at a decent framerate.

Note that the earlier GMA 950, while touted as DirectX 9-compliant and equipped with hardware pixel shaders, has neither hardware T&L nor hardware vertex shaders.


S3 Chrome 430 (2008)

Still struggling to stay relevant, S3 embraced the unified shader model two years after nVidia and ATi, again to the sound of crickets, and again only adopted by budget card manufacturers. S3 was eventually resold to cellphone maker HTC in 2011, but continues to make 3D chipsets for budget motherboard and video card manufacturers to this day.


ATi Radeon HD 5870 (2009)

This GPU finally let ATi retake the GPU performance lead, having been pretty much continually behind nVidia for the previous five years. More notably, it included Eyefinity, which made gaming across three displays a standard feature, and the special Eyefinity 6 edition took things Up to Eleven by supporting gaming across six displays.


PowerVR Series 5 (2005-2010)

After several generations of failed attempts to re-enter the PC graphics market, PowerVR moved into the embedded systems market instead. The Series 5 brought exceptional performance to mobile devices, including Apple's iPhone 4 and many Android-powered tablets and phones. The second generation, the SGX MP series, is arguably the first multi-core GPU: a dual-core variant powers the iPad 2, while a quad-core version powers Sony's PS Vita.


Intel HD Graphics 2000/3000 and AMD Radeon HD 6000D/G (2011)

While neither of these will set world-record benchmark scores, they are notable for their ability to accelerate computational work alongside the processor when needed; the most common use is speeding up video transcoding.

    2010s 
AMD Graphics Core Next (2011)
  • Implemented in: AMD Radeon HD 7000/8000, Radeon RX 200

AMD's GCN represents a radical change from its previous architectures. Unlike the earlier Evergreen and Northern Islands cards, which used VLIW shaders, GCN uses simpler RISC-style shaders. While this increases the transistor count, it substantially improves GPGPU performance. Other major features include an even lower power state for when the GPU has been idle for a long time, and support for unified memory (either through Unified Virtual Memory or the Heterogeneous System Architecture). AMD's Mantle API was also introduced on GCN later, in 2013. GCN has received two updates so far: GCN 1.1, added on later APUs and cards, introduced an audio processing core called TrueAudio, while GCN 1.2 brought incremental efficiency improvements.
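
As a rough illustration of what unified memory buys the programmer, here is a minimal sketch. It uses CUDA's managed memory as a stand-in rather than AMD's own UVM/HSA stack, simply to stay consistent with the earlier sketches on this page; the underlying idea, one allocation visible to both CPU and GPU with no explicit copies, is the same.

    // Sketch of the unified-memory programming model, using CUDA managed memory
    // as an analogue of AMD's UVM/HSA. Names and sizes are illustrative only.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;                        // the GPU touches the very same allocation
    }

    int main() {
        const int n = 1 << 20;
        float *data = nullptr;
        cudaMallocManaged(&data, n * sizeof(float));  // one pointer, visible to CPU and GPU
        for (int i = 0; i < n; ++i)
            data[i] = 1.0f;                           // written directly by the CPU, no cudaMemcpy

        scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);
        cudaDeviceSynchronize();                      // wait for the GPU before the CPU reads it back

        printf("data[0] = %f\n", data[0]);            // prints 2.000000
        cudaFree(data);
        return 0;
    }

The point is the step that isn't there: no explicit host-to-device or device-to-host copies, which is exactly the bookkeeping that unified memory is meant to eliminate.
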

NVIDIA Kepler (2012)
  • Implemented in: GeForce 600/700/TITAN, Tegra K1

Kepler represented a fairly radical redesign from NVIDIA once it realized that its previous GPU designs were becoming inefficient in terms of power and performance. Previously, the design philosophy was to use a smaller number of shaders clocked at twice the base clock. This eventually caught up with them: since clock frequency is a huge power hog (dynamic power rises roughly in proportion to frequency, and faster clocks also demand higher voltage), Kepler started by halving the shader clock. That meant at least doubling the shader count to break even on performance, which posed a problem with chip area. Thankfully, transistor sizes had shrunk enough that they could pack twice the shaders into about the same area as the previous generation.

Then they went looking for more things to cut. They took the same route as Transmeta (see their entry on the CPU page) and refined it: the complex hardware job scheduler was cut in favor of a simpler one, with software handling most of the scheduling instead. While this approach has had mixed results in CPU land, the kind of work a GPU does is actually a perfect fit for it. Unlike some of NVIDIA's previous radical designs, this one worked very well on the first try.

Like previous designs (and like AMD's GCN architecture), Kepler is designed to be modular and scalable, so much so that its implementations show up not only in the highest-performing supercomputers but also well down into the mobile market. NVIDIA claims that a 192-shader mobile Kepler GPU consumes less than 2 W, yet performs as well as the Xbox 360 or PlayStation 3.

Intel Iris Pro (2013)

Iris Pro is the result of Intel getting serious about competing in the integrated graphics market. By adding a pool of fast eDRAM instead of relying solely on main memory, Iris Pro performs well enough to compete with even lower-end discrete graphics cards. As with the HD 2000/3000 series above, though, its real role is to supplement the CPU's compute performance: a processor with Iris Pro can perform about as well as higher-end processors or discrete graphics solutions in such workloads while drawing roughly 30%-40% less power.

NVIDIA Maxwell (2014)
  • Implemented in: GeForce GTX 750 and GeForce 900

Refining the principles of Kepler, Maxwell aimed for efficiency by being designed for mobile first. Despite being on the same 28nm process as Kepler and GCN, some clever engineering allowed Maxwell to deliver up to twice the performance of the GTX 680 at virtually the same power draw. A much-touted feature was real-time global illumination, which NVIDIA demonstrated by debunking several Moon landing conspiracy theories (particularly those about the lighting). In an odd move, NVIDIA released a lower-end version first, presumably to get something out the door as a "beta" product and make refinements later.