Graphics rendering is the process of composing the images that a player sees on the screen while playing a game. Quite a few tasks are needed to produce anything from a simple graphical game like Super Mario Bros. to a more modern game like Gears of War or Modern Warfare. Since 2D and 3D drawing methods are entirely different, they'll be covered in separate sections.
2D Graphics rendering
2D graphics can be summed up in two different methods: Raster graphics and Vector graphics.
Raster graphics is a very common method and involves drawing elements on the screen pixel by pixel based on what's in some memory buffer.
Initially there were only two major methods of raster graphics: text mode and direct drawing. In the former, the video processing chip had a table of characters in ROM that it knew how to draw, so rendering meant looking up values in this table and drawing them. As it was higher resolution than direct drawing, special characters were sometimes put in the table to allow for basic tiled graphics, though this was a hack job at best and smooth motion was practically impossible. Direct drawing originally required the software to supply each pixel's value just as the display was about to draw it. Later, when memory became more affordable, each pixel was written into memory (a framebuffer) before being sent to the display.
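The framebuffer idea can be sketched in a few lines of Python (illustrative names only): the screen is a flat block of memory, one byte per pixel, that the display hardware later scans out.

```python
# Minimal sketch of framebuffer-style direct drawing: every pixel is a
# byte in a flat memory buffer that is later scanned out to the display.
WIDTH, HEIGHT = 8, 8

def make_framebuffer():
    """One byte per pixel, initialised to colour 0 (black)."""
    return bytearray(WIDTH * HEIGHT)

def put_pixel(fb, x, y, color):
    """Write a colour value at (x, y); the display reads it later."""
    fb[y * WIDTH + x] = color

fb = make_framebuffer()
put_pixel(fb, 3, 2, 15)   # draw one pixel in colour 15
```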
2D consoles in the later 8-bit era started taking a hybrid approach to rendering graphics. The screen was divided up into layers that were static with simple animations and meant to be the foreground and background of the view, while sprites were meant for interactive objects that had complex animations. Everything was built up from tiles on a fixed grid from a tilemap, similar to text mode, in order to make it easier on memory requirements. As hardware got better, layers could be blended together for transparency effects.
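A rough sketch of the tile approach, with hypothetical names and tiny 2x2 tiles for brevity: the background is stored as a grid of tile indices, and each index selects a pixel pattern from a tile set, so a whole screen costs only one byte or so per grid cell.

```python
# Sketch of tile-based background rendering: a grid of tile indices
# selects patterns from a tile set, which is expanded into pixels.
TILE = 2   # tiny 2x2 "tiles" to keep the example short

tileset = {
    0: [[0, 0], [0, 0]],   # blank tile
    1: [[7, 7], [7, 7]],   # solid tile (colour 7)
}
tilemap = [
    [0, 1],
    [1, 0],
]

def render(tilemap, tileset):
    """Expand the grid of tile indices into a full pixel image."""
    rows = len(tilemap) * TILE
    cols = len(tilemap[0]) * TILE
    out = [[0] * cols for _ in range(rows)]
    for ty, row in enumerate(tilemap):
        for tx, index in enumerate(row):
            tile = tileset[index]
            for y in range(TILE):
                for x in range(TILE):
                    out[ty * TILE + y][tx * TILE + x] = tile[y][x]
    return out

image = render(tilemap, tileset)
```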
One limitation in all 2D consoles was how many sprites the video processor could handle at once per horizontal line. If there were too many, excess sprites were simply dropped and rendered invisible. The NES worked around the same sprites being dropped every frame by rotating which sprites got displayed, which resulted in flickering.
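The per-scanline limit can be sketched like this (the 8-sprite limit matches the NES; the function names are made up): sprites past the limit on a given line simply do not get drawn.

```python
# Sketch of a per-scanline sprite limit: only the first LIMIT sprites
# overlapping a scanline are drawn; the rest drop out (become invisible).
LIMIT = 8
SPRITE_HEIGHT = 8

def visible_on_line(sprites, line, limit=LIMIT):
    """Return the sprites actually drawn on this scanline."""
    on_line = [s for s in sprites if s["y"] <= line < s["y"] + SPRITE_HEIGHT]
    return on_line[:limit]

# Ten sprites all overlap scanline 10 -> only the first eight survive.
sprites = [{"y": 8, "id": i} for i in range(10)]
drawn = visible_on_line(sprites, 10)
```

Rotating which sprites come first in the list each frame is what spreads the dropping around and produces the familiar flicker.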
Today 2D raster graphics are drawn directly to the screen. Elements are typically pulled from a picture map that contains elements of known sizes. The difference between this and tiled 2D is that each element can now be placed at the pixel level rather than at a grid position that may cover, for example, a 4x4 pixel area, and the elements can be any arbitrary size.
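That "copy an arbitrary rectangle from the picture map to an arbitrary pixel position" operation is classic blitting; a minimal sketch (hypothetical names):

```python
# Sketch of blitting an arbitrary-size element from a picture map
# to any pixel position, with no tile grid involved.
sheet = [
    [1, 2, 0, 0],
    [3, 4, 0, 0],
]

def blit(dest, sheet, src_x, src_y, w, h, dst_x, dst_y):
    """Copy a w x h rectangle of the sheet into dest at (dst_x, dst_y)."""
    for y in range(h):
        for x in range(w):
            dest[dst_y + y][dst_x + x] = sheet[src_y + y][src_x + x]

screen = [[0] * 6 for _ in range(4)]
blit(screen, sheet, 0, 0, 2, 2, 3, 1)   # place the 2x2 element at pixel (3, 1)
```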
Vector graphics, on the other hand, is a mathematical approach. Everything is rendered on the fly using points and the lines that connect them. Some of the first games to use vector graphics were Atari arcade titles, and the Vectrex was a console based entirely on vector graphics. Early vector graphics were simply wireframes of the model or image being rendered, hence the lack of color or any features other than the outline. Eventually the enclosed spaces could be filled as hardware got more powerful.
The advantage of vector graphics is its infinite scalability. Because everything is created on the fly, a low resolution vector image will look just as good as a high definition one. Whereas if you scaled a low resolution raster graphics, you would get an interpolated (blurred) or pixelated high resolution one, with ugly results. It also allows an artist to draw rather freely, with the graphics software rendering each input as a vector. The downside is that vector graphics are computationally expensive, evident in high quality Flash movies or games.
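The scalability point can be made concrete with a tiny Python sketch (illustrative names): a vector shape is just stored points, so rendering at any size multiplies those points out and never interpolates pixels.

```python
# Sketch of why vector graphics scale cleanly: the shape is stored as
# points, and rendering at a new size just scales those points, so
# nothing is ever blurred or pixelated.
triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def scale(points, factor):
    """Re-render the same shape at a new size with no loss of quality."""
    return [(x * factor, y * factor) for x, y in points]

small = scale(triangle, 10)     # a 10-unit-tall triangle
large = scale(triangle, 1000)   # same shape, 100x larger, equally sharp
```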
Graphical user interface elements are often built from vectors because of their need to scale, mixed with some raster elements like icons. Fonts are also built from vectors for the same reason.
3D Graphics rendering
Much like 2D Graphics rendering, 3D has two main methods.
Voxel 3D Graphics
Voxel is a portmanteau of volume and pixel, or more precisely, volumetric picture element. In a 3D space, this is the most basic element, akin to a pixel being the smallest element in a picture. In fact, a voxel model works much like raster graphics do. It's an old concept that is still relatively unused due to hardware constraints (see the disadvantages below).
Voxels are advantageous for a few reasons:
- Voxels can represent a 3D object in a similar way that a picture represents a 2D one. Imagine what you can do to a 2D picture and apply it with another dimension.
- Since voxels fill up a space to represent an object, you could break apart objects without the need for creating new geometry as in a polygon based renderer. You would simply break off a chunk of the object.
- Voxels can have their own color, eliminating the need for textures entirely.
However, there's still a few things to overcome:
- Voxels require a lot more memory than a 2D image (or even 3D polygon models). A 16x16 image at 1 byte per pixel, for instance, requires 256 bytes to store. A 16x16x16 voxel model at 1 byte per voxel requires 4096 bytes. One way around this is to find groups of identical voxels and merge them into one big voxel.
- Detailed voxel models are computationally expensive to set up. Hence they are limited mostly to major industries that need the detail.
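The memory figures in the list above can be checked directly; the extra dimension multiplies storage, which is why voxel grids get expensive fast (helper names are made up):

```python
# Sketch of the memory comparison: adding a third dimension multiplies
# storage by the side length, so voxel grids grow much faster than images.
def raster_bytes(side, bytes_per_pixel=1):
    """Bytes needed for a square side x side image."""
    return side * side * bytes_per_pixel

def voxel_bytes(side, bytes_per_voxel=1):
    """Bytes needed for a cubic side x side x side voxel model."""
    return side * side * side * bytes_per_voxel

image_cost = raster_bytes(16)   # 256 bytes, as in the list above
voxel_cost = voxel_bytes(16)    # 4096 bytes: 16x as much
```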
Only a handful of voxel-based graphics engines have shown potential for games; other games have used voxel-based technology for certain elements.
Polygonal 3D Graphics
Much like 2D vector graphics, polygonal 3D graphics take a mathematical approach to representing the object. The polygons themselves have all the benefits of vector graphics; the other elements are typically constrained in the same ways as raster graphics.
Polygonal 3D graphics are composed of the following elements:
- Vertex: A point in space that represents one corner of a polygon. This is the smallest unit of a 3D scene.
- Polygon: A polygon is a 2D plane that occupies a space between 3 or 4 vertices. Early polygonal graphics used quadrilaterals as the most basic unit because it was computationally simple. Triangles are used today because they are the smallest polygonal element.
- Texture: Textures give polygons an image or color to help the 3D model look like something.
- Sprites and particles: These are 2D elements. Sprites usually face the player regardless of the viewpoint, something called "billboarding" in the industry. Particles are sprites that combine to create complex effects, like explosions and smoke.
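A minimal sketch of how the elements above fit together in data (the names are illustrative, not any real engine's format): a model is a vertex list plus triangles that index into it, and the cross product of two triangle edges gives the direction the polygon faces, which lighting later relies on.

```python
# Sketch of a polygonal model: vertices in 3D space, triangles as
# index triples, and a normal computed from two edges of a triangle.
vertices = [              # Vertex: a point in 3D space
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
]
triangles = [(0, 1, 2)]   # Polygon: three vertex indices

def normal(a, b, c):
    """Cross product of two edges gives the triangle's facing direction."""
    ux, uy, uz = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    vx, vy, vz = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

n = normal(*[vertices[i] for i in triangles[0]])   # this triangle faces +z
```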
In the early days of 3D graphics, the CPU was responsible for rendering everything, which was known as software rendering. Things didn't really take off until CPUs started including floating point units, which allowed for much smoother animation. As graphics hardware got more powerful, it gradually took over parts of the rendering process. This culminated in today's arrangement, where the CPU's only graphics rendering job is to create display lists, which are instructions on what the GPU needs to do in order to render the scene.
3D rendering steps
- Job batching: The CPU takes the data from the game code (physics solutions, AI decisions, user input etc.) and creates a batch of jobs called display lists. These are basically instructions on how to do everything.
- Transform: All vertices and polygons are positioned in 3D space.
- Geometry Instancing: Some models are loaded only once, and every entity that uses one refers to that single copy rather than duplicating it. This saves memory and rendering time.
- Primitives generation/Tessellation: Polygons can be added or subtracted as necessary to detail the silhouette. This saves memory, but not necessarily rendering time.
- Clipping: Once the world is set up, it would be very inefficient to do lighting calculations on everything. Clipping determines what the player can see at the moment and removes everything that cannot be seen. For example, if the player is in a house, the 3D engine will not render rooms that are not immediately visible.
- Rasterization: This takes the 3D scene and creates a 2D representation of it for the monitor. This is necessary until we can invent actual 3D displays.
- Lighting: The color of each pixel is figured out.
- Forward rendering: The gist of forward rendering is that for every piece of geometry, all shader operations are done on a single render target, which becomes the final output. While very simple to implement, it scales poorly with the number of geometry entities and lights.
- Deferred rendering: Geometry is rendered without lighting to generate multiple render targets. These render targets are then combined and lit. While more complicated to implement, its cost scales basically with just the number of lights. The downside is that it has trouble with transparency and some anti-aliasing methods.
- Post Processing: Special coloring effects are applied to the 2D output.
- Final Output: The image is buffered to be sent to the monitor.
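The step list above can be compressed into a toy Python pipeline (all names are illustrative, not any real engine's API): position the model's vertices in world space, cull what the camera cannot see, then project the survivors onto a 2D pixel grid.

```python
# Very compressed sketch of the rendering steps: transform, clip,
# then rasterise. Real pipelines do far more per stage.

def transform(vertices, dx, dy, dz):
    """Position model vertices in world space (translation only, for brevity)."""
    return [(x + dx, y + dy, z + dz) for x, y, z in vertices]

def clip(vertices, near=0.1):
    """Drop vertices behind the camera so later stages skip them."""
    return [v for v in vertices if v[2] > near]

def rasterize(vertices, width=4, height=4):
    """Project each surviving vertex to a 2D pixel (no shading)."""
    pixels = set()
    for x, y, z in vertices:
        px, py = int(x / z * width), int(y / z * height)
        if 0 <= px < width and 0 <= py < height:
            pixels.add((px, py))
    return pixels

model = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0)]
world = transform(model, 0, 0, 2.0)   # push the model in front of the camera
visible = clip(world)                 # both vertices survive here
pixels = rasterize(visible)           # the 2D output for the monitor
```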
Types of textures
- Diffuse map: Basically this is just a picture to provide what the object looks like. For example, a brick wall picture could be placed on a polygon to resemble one.
- Specular map: Allows for simple shiny effects.
- Cube map: A texture with an approximation of the world's surroundings. This is used in simple reflections.
- Bump map: A texture that affects how lighting is done on the surface of a polygon. It can make a flat surface look like it has features.
- Normal map: A type of bump map that stores the direction each pixel is facing. This can make a seemingly flat object look like it was made of many polygons.
- Parallax map: A type of bump map that stores a pixel's "depth". Most commonly used with bricks, where the bricks look like they can obscure the cement holding them, but the entire wall is actually flat.
- Height/Displacement Map: Usually used for large landscapes, this texture describes how much a polygon or vertex should stick out.
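The bump/normal map trick above can be sketched in a few lines (hypothetical names): lighting intensity is the dot product of the light direction with a surface normal, and a normal map simply substitutes a stored per-pixel normal for the polygon's real one.

```python
# Sketch of why a normal map changes lighting on a flat surface: the
# lighting term uses the *stored* per-pixel normal, not the polygon's.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

light_dir = (0.0, 0.0, 1.0)       # light shining straight at the surface
flat_normal = (0.0, 0.0, 1.0)     # the polygon really faces the light
bumped_normal = (0.6, 0.0, 0.8)   # normal-map texel tilted sideways

flat_shade = max(0.0, dot(light_dir, flat_normal))     # fully lit
bump_shade = max(0.0, dot(light_dir, bumped_normal))   # dimmer, looks angled
```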
Types of lighting
- Flat shading: Lighting is calculated once per polygon. It has a very faceted, polygonal look.
- Gouraud (per-vertex) lighting: Lighting is computed at each vertex, and each pixel is shaded by interpolating the light the surrounding vertices received. It looks better than flat shading, but occasionally produces poor results, such as highlights that fall between vertices being missed.
- Per-pixel lighting: Lighting done on each individual pixel.
- Ray tracing: The "holy grail" of realistic computer graphics. Each pixel casts a ray and the color is calculated by how this ray bounces around objects. This considers all pixels and all objects, which is computationally expensive.
- Ray casting: A lighter version of ray tracing. Color is determined by the first object that the ray intersects.
- High dynamic range lighting: A sub-type of lighting. In standard lighting, all light sources are clamped to a dynamic range of 256 values. This causes strange artifacts. For instance, if a surface reflects 50% of light, then light from the sun (which is brighter than anything) can look as dim as a flashlight. With HDR lighting, the dynamic range increases and is only sampled down at the end. This allows light from the sun to remain as bright as it should be, even if the reflectivity is low.
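The clamping problem described in the HDR entry can be demonstrated numerically (illustrative functions, not a real renderer's API): clamp-then-reflect erases the difference between bright sources, while reflect-then-clamp preserves it.

```python
# Sketch of the HDR idea: clamping light to 0-255 *before* shading makes
# the sun and a flashlight identical; keeping full precision and only
# mapping down at the end preserves the difference.
def ldr_reflect(light, reflectivity):
    """Clamp first, then reflect: bright sources lose their difference."""
    return min(light, 255) * reflectivity

def hdr_reflect(light, reflectivity):
    """Reflect at full precision, clamp to 0-255 only for display."""
    return min(light * reflectivity, 255)

sun, flashlight = 10000.0, 300.0
ldr_sun = ldr_reflect(sun, 0.5)          # same as the flashlight: 127.5
ldr_torch = ldr_reflect(flashlight, 0.5)
hdr_sun = hdr_reflect(sun, 0.5)          # still pinned at full brightness
hdr_torch = hdr_reflect(flashlight, 0.5)
```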