Graphics Processing Unit
GPU is the common term for a piece of computer/console/arcade hardware dedicated to drawing things, i.e. graphics. The general purpose of a GPU is to relieve the CPU of responsibility for a significant portion of rendering. This also brings performance benefits, since it frees up CPU time for core program code, which in turn may be able to tell the GPU what to do more often. Both consoles and regular computers have had various kinds of GPUs. The two developed divergent kinds of 2D GPUs, but these converged with the advent of 3D rendering.

The term "GPU" was coined by nVidia upon the launch of their GeForce line of hardware. This was generally a marketing stunt, though the GeForce did have some fairly advanced processing features in it. However, the term GPU has become the accepted shorthand for any graphics processing chip, even pre-GeForce ones.

See the trivia page for notable graphics processing units over the years.

Arcade 2D GPU

The first 2D GPU chipsets appeared in early Arcade Games of the 1970s. The earliest known example was the Fujitsu MB14241, a video shifter chip used to handle graphical tasks in Taito and Midway titles such as Gun Fight (1975) and Space Invaders (1978). Taito and Sega began manufacturing dedicated video graphics boards for their arcade games starting in 1977. In 1979, Namco and Irem introduced tile-based graphics with their custom arcade graphics chipsets, while Nintendo's Radar Scope video graphics board was able to display hundreds of colors on screen for the first time.

By the mid-'80s, Sega was producing Super Scaler GPU chipsets with advanced pseudo-3D sprite-scaling graphics that would not be rivalled by any home computer or console until the mid-'90s.

From the 1970s to the 1990s, arcade GPU chipsets were significantly more powerful than those in home computers and consoles, in terms of both 2D and 3D graphics. It was not until the mid-'90s that home systems rivalled arcades in 2D graphics, and not until the early 2000s that they did so in 3D graphics.

Console 2D GPU

This kind of GPU, introduced to home systems by the TMS 9918/9928 (see below) and popularized by the NES, Sega Master System and Sega Genesis, forces a particular look onto the games that use it. You know this look: everything is composed of small repeated images, called tiles, that are arranged in various configurations to build the world.

This enforcement was a necessity of the times. Processing power was limited, and while tile-based graphics were somewhat limited in scope, they were far superior to what could be done without this kind of GPU.

In this GPU, the tilemaps and the sprites are all built up into the final image by the GPU hardware itself. This drastically reduces the amount of processing power needed — all the CPU needs to do is upload new parts of the tilemaps as the user scrolls around, adjust the scroll position of the tilemaps, and say where the sprites go.
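
To make this concrete, here is a rough C sketch of the CPU's entire per-frame workload on such a system. The "registers" below are plain variables standing in for memory-mapped hardware, and the layout is invented for illustration; every real console (NES, Master System, Genesis) has its own specific register map and access rules.

    /* Conceptual sketch of what the CPU does each frame on a tile-based
       2D GPU.  These variables stand in for memory-mapped hardware
       registers and tables; the layout is hypothetical. */

    #include <stdint.h>

    static uint8_t scroll_x, scroll_y;     /* hardware scroll registers      */
    static uint8_t tilemap[32 * 32];       /* 32x32 grid of tile indices     */
    static uint8_t sprite_table[64][4];    /* per sprite: x, y, tile, flags  */

    void per_frame_update(uint8_t cam_x, uint8_t cam_y,
                          uint8_t player_x, uint8_t player_y)
    {
        /* 1. Point the "camera": the GPU scrolls the whole tilemap in
              hardware, so nothing has to be redrawn. */
        scroll_x = cam_x;
        scroll_y = cam_y;

        /* 2. Upload only newly revealed tile indices as the view scrolls
              (choosing which ones is the game's job, omitted here). */
        tilemap[0] = 42;                   /* e.g. a grass tile */

        /* 3. Position the sprites; the GPU composites them over the
              tilemap when it builds the final picture. */
        sprite_table[0][0] = player_x;
        sprite_table[0][1] = player_y;
        sprite_table[0][2] = 7;            /* tile index to draw   */
        sprite_table[0][3] = 0;            /* flip/priority flags  */
    }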

Computer 2D GPU

Computers had different needs. Computer 2D rendering was driven by the needs of applications more than of games, so rendering had to be fairly generic. Such hardware had a framebuffer, an image in memory representing what the user sees, along with extra video memory for storing additional images to be used later.

Such hardware had fast routines for drawing colored rectangles and lines, but the most useful operation was the blit, or BitBlt: a fast video-memory copy. Combined with video memory, this meant a program could store an image in VRAM and copy it to the framebuffer as needed. Some advanced 2D hardware offered scaled blits (so the destination could be larger or smaller than the source image) and other special blit features.
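
The core operation is simple enough to show directly. Below is a plain-C sketch of what a blit does; on real 2D accelerators this rectangle copy is a single hardware command rather than a CPU loop, and details like clipping and transparent pixels are left out.

    /* A blit: copy a rectangle of pixels from a source image to a
       destination (e.g. the framebuffer).  One byte per pixel is
       assumed; "pitch" is the width in bytes of one row of the image. */

    #include <stdint.h>
    #include <string.h>

    void blit(uint8_t *dst, int dst_pitch, int dst_x, int dst_y,
              const uint8_t *src, int src_pitch, int src_x, int src_y,
              int width, int height)
    {
        for (int row = 0; row < height; ++row) {
            memcpy(dst + (dst_y + row) * dst_pitch + dst_x,
                   src + (src_y + row) * src_pitch + src_x,
                   (size_t)width);
        }
    }

    /* Typical per-frame use: blit the stored background image to the
       framebuffer, then blit a few sprites on top of it. */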

Some 2D GPUs combined these two approaches, having both a framebuffer and a tilemap: they could output hardware-accelerated sprites and tiles and also perform tile-transformation routines on what was stored in the framebuffer. These were the most powerful and advanced of the lot, but they were usually quite specialized and tied to specific platforms (see below), and in the end the more general approach won out, being more conducive to various performance-enhancing tricks and better able to take advantage of ever-increasing computing horsepower.

The CPU is more involved in this case: every element must be explicitly drawn by a CPU command, and the background was generally the most complicated part. This is why many early computer games used a static background: they kept a single background image in video memory, blitted it to the framebuffer each frame, then drew a few sprites on top of it.

Later PC games before the 3D era managed to equal or exceed the best contemporary consoles like the Super NES, both through raw power (the 80486DX2/66, a common gaming processor of the early '90s, ran at 66 MHz, almost 10 times the clock speed of the Sega Genesis, and could run 32-bit code as an "extension" to 16-bit DOS) and through various programming tricks that took advantage of quirks in the way early PCs and VGA worked. John Carmack once described the engine underpinning his company's breakout hit Wolfenstein 3D as "a collection of hacks", and he was not too far off. (It was also the last of their games that not only ran, but was comfortably playable, on an 80286 PC with 1 MB of RAM, a machine considered low-end even in 1992, which is a testament to the efficiency of some of those hacks.)

Before the rise of Windows in the mid-1990s, most PC games couldn't take advantage of newer graphics cards with hardware blitting support; the CPU had to do all the work, and this made both a fast CPU and a fast path to the video RAM essential. PCs with local-bus video and 80486 processors were a must for games like Doom and Heretic; playing them on an old 386 with ISA video was possible, but wouldn't be very fun.

Basic 3D GPU

The basic 3D GPU is much more complicated, but it isn't as limiting as the NES-style 2D GPU.

This GPU concerns itself with drawing triangles: specifically, meshes of triangles that approximate the shapes of 3D objects. It has special hardware that lets the developer map images across the surface of a triangle mesh to give it surface detail. An image applied in this fashion is called a texture.

The early forms of this GPU were just triangle/texture renderers. The CPU had to position each triangle properly each frame. Later forms, like the first GeForce chip, incorporated triangle transform and lighting into the hardware. This allowed the CPU to say, "here's a bunch of triangles; render them," and then go do something else while they were rendered.
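
For a sense of what "transform and lighting" actually involves, here is a simplified C sketch of the per-vertex math. Before hardware T&L the CPU ran something like this for every vertex of every triangle, every frame; the GeForce moved it onto the GPU. The matrix layout and single directional light are illustrative simplifications, and the perspective divide is omitted.

    /* Per-vertex "transform and lighting", simplified. */

    typedef struct { float x, y, z; } Vec3;

    /* Transform a position by a 4x4 row-major matrix (rotation and
       translation folded together; perspective divide omitted). */
    Vec3 transform(const float m[16], Vec3 v)
    {
        Vec3 r;
        r.x = m[0]*v.x + m[1]*v.y + m[2]*v.z  + m[3];
        r.y = m[4]*v.x + m[5]*v.y + m[6]*v.z  + m[7];
        r.z = m[8]*v.x + m[9]*v.y + m[10]*v.z + m[11];
        return r;
    }

    /* Simple diffuse lighting: brightness depends on the angle between
       the surface normal and the light direction. */
    float light(Vec3 normal, Vec3 light_dir)
    {
        float d = normal.x*light_dir.x + normal.y*light_dir.y + normal.z*light_dir.z;
        return d > 0.0f ? d : 0.0f;
    }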

Modern 3D GPU

Around the time of the GeForce 3 GPU, something happened in GPU design.

Take the application of textures to a polygon. The earliest 3D GPUs used a very simple fixed function. For each pixel of a triangle:

color = textureColor * lightColor

A simple equation. But then, developers wanted to apply 2 textures to a triangle. So this function became more complex:

color = texture1 * lightColor * texture2

Interesting though this may be, developers wanted more say in how the textures were combined. That is, developers wanted to insert more general math into the process. So GPU makers added a few more switches and complications to the process.

The GeForce 3 basically decided to say "Screw that!" and let the developers do arbitrary stuff:

color = Write it Yourself!

What used to be a simple function had now become a user-written program. The program took texture colors and could do fairly arbitrary computations with them.
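
To illustrate the difference, here is the idea written out as ordinary C. Real shaders are written in dedicated languages such as GLSL or HLSL and run on the GPU for every pixel in parallel; the point is only that the "how do the colors combine?" step is now an arbitrary user-written function instead of a fixed formula.

    /* The old fixed function versus a user-written "shader",
       expressed as plain C for illustration. */

    typedef struct { float r, g, b; } Color;

    Color mul(Color a, Color b)
    {
        Color c = { a.r*b.r, a.g*b.g, a.b*b.b };
        return c;
    }

    Color mix_colors(Color a, Color b, float t)
    {
        Color c = { a.r + (b.r - a.r)*t,
                    a.g + (b.g - a.g)*t,
                    a.b + (b.b - a.b)*t };
        return c;
    }

    /* Fixed function: color = texture * light, and that's it. */
    Color fixed_function(Color tex, Color light_col)
    {
        return mul(tex, light_col);
    }

    /* A "shader": any math the developer wants, e.g. blend two
       textures by some factor, then apply lighting. */
    Color my_shader(Color tex1, Color tex2, Color light_col, float blend)
    {
        Color base = mix_colors(tex1, tex2, blend);
        return mul(base, light_col);
    }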

In the early days, "fairly arbitrary computations" was quite limited. Nowadays, not so much. These GPU programs, called shaders, commonly do things like video decompression and other sundry activities. At first the shader execution units that did shaders were separated into pipelines for strictly vertex (positioning the 3D models) and pixels (for coloring) though with some clever programming, general operations could be done. Shader units in modern GPUs became generalized to take on any work. This led to to the General Purpose GPU, or GPGPU, which could do calculations much faster than a several traditional CPUs in tandem.

Difference between GPU and CPU

GPUs and CPUs are built around some of the same general components, but they're put together in very different ways. A chip only has a limited amount of space to put circuits on, and GPUs and CPUs use the available space in different ways. The differences can be briefly summarized as follows:

  • Execution units: These do the actual work: adds, multiplies, and so on. A GPU has dozens of times as many of them as a CPU, so it can do a great deal more total work in a given amount of time, provided there is enough work to go around.
  • Control units: These are the things that read instructions and tell the execution units what to do. CPUs have many more of these than GPUs, so they can execute individual instruction streams in more complicated ways (out-of-order execution, speculative execution, etc.), leading to much greater performance for each individual instruction stream.
  • Storage: GPUs have much smaller caches and less available RAM than CPUs. However, GPU RAM bandwidth typically exceeds that of CPUs many times over, since GPUs need a constant, fast stream of data to operate at their fullest; any hiccup in the data stream causes stuttering.

In the end, CPUs can execute a wide variety of programs at acceptable speed. GPUs can execute some special types of programs far faster than a CPU, but anything else they will execute much more slowly, if they can execute it at all.

The Future

GPUs today can execute a lot of programs that formerly only CPUs could, but with radically different performance characteristics. A typical home GPU can run hundreds of threads at once, while a typical home CPU can run two to four. On the other hand, each GPU thread progresses far more slowly than a CPU thread. Thus if you have thousands of almost identical tasks you need to run at once, like many pixels in a graphical scene or many objects in a game with physics, a GPU might be able to do work a hundred times faster than a CPU. But if you only have a few things to do and they have to happen in sequence, a CPU-style architecture will give vastly better performance. As general-purpose GPU programming progresses, GPUs might get used for more and more things until they're nearly as indispensable as CPUs. (Or maybe not)