History MediaNotes / GraphicsRendering




* Absorption Map: defines where a type of volumetric shader should absorb light in the way smoke or fog might.
* Bump map: A bump map creates the effect of raised or lowered details on flat surfaces, such as tiny scratches on metal or glass, or the grain of wood. Works best with small details and simple distortions, and can become obvious at extreme viewing angles.



* Bump map: A bump map creates the effect of raised or lowered details on flat surfaces, such as tiny scratches on metal or glass, or the grain of wood. Works best with small details and simple distortions, and can become obvious at extreme viewing angles.
* Normal Map: Alters the angle at which the render engine "sees" the pixel- it might be used to create the effect of dents in metal, distortions in glass, or scales on a reptile. Works best for shallow details that need realistic shading, and bizarre AlienGeometry effects where part of an object interacts with light and shadow in strange ways.
* [[MotionParallax Parallax Map]]: Alters how "deep" the rendering engine sees the pixel. This allows even more three-dimensional detail to be seen on the surface of a polygon. For instance, if you had a parallax map of a grill over a vent, moving up and down will give the appearance that the vanes of the grill obscure each other. The illusion however fails in certain points, such as seeing the corner of a parallax mapped cube. Also if the map isn't detailed enough, you can see discrete steps in the depth of the pixel.
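The UV shift behind parallax mapping can be sketched in a few lines. This is a hedged toy version in Python rather than real shader code: `parallax_offset` and its `scale` factor are hypothetical names, and the height argument stands in for a value sampled from the actual parallax map.

```python
def parallax_offset(u, v, height, view_dir, scale=0.05):
    """Shift texture coordinates based on a height value and the view
    direction, so flat surface detail appears to have depth (a toy
    parallax-mapping sketch; real shaders run this per pixel on the GPU)."""
    vx, vy, vz = view_dir
    # Offset the UVs along the view direction, scaled by the sampled height.
    # Dividing by the z component exaggerates the shift at grazing angles,
    # which is also where the illusion starts to break down.
    u2 = u + (vx / vz) * height * scale
    v2 = v + (vy / vz) * height * scale
    return u2, v2

# Looking straight down (view z = 1, no sideways component): no shift at all.
print(parallax_offset(0.5, 0.5, height=1.0, view_dir=(0.0, 0.0, 1.0)))
# A grazing view (small z) produces a much larger shift.
print(parallax_offset(0.5, 0.5, height=1.0, view_dir=(1.0, 0.0, 0.2)))
```

The division by the view direction's z component is why the effect gets unstable at extreme viewing angles, the same failure case noted above.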



* Specular map: Shows where and how shiny the object is.
* Transparency map: Shows where an object should be translucent or transparent

to:

* Emission map: defines areas of an object to emit light instead of interact with it.



* Subsurface Scattering map: defines a type of internal translucent glow seen in materials like skin or plastic

to:

* Normal Map: Alters the angle at which the render engine "sees" the pixel- it might be used to create the effect of dents in metal, distortions in glass, or scales on a reptile. Works best for shallow details that need realistic shading, and bizarre AlienGeometry effects where part of an object interacts with light and shadow in strange ways.
* [[MotionParallax Parallax Map]]: Alters how "deep" the rendering engine sees the pixel. This allows even more three-dimensional detail to be seen on the surface of a polygon. For instance, if you had a parallax map of a grill over a vent, moving up and down will give the appearance that the vanes of the grill obscure each other. The illusion however fails in certain points, such as seeing the corner of a parallax mapped cube. Also if the map isn't detailed enough, you can see discrete steps in the depth of the pixel.



* Emission map: defines areas of an object to emit light instead of interact with it.
* Absorption Map: defines where a type of volumetric shader should absorb light in the way smoke or fog might.



* Shadow Map: Adds shadows dynamically to surfaces, rather than rely on the diffuse map to bake this in. While the calculation itself is cheaper than doing ray-traced based shadowing, shadow maps can become expensive to do since it requires basically re-rendering the scene from the light's point of view.

to:

* Shadow Map: Adds shadows dynamically (i.e., calculated per frame) to surfaces, rather than relying on the diffuse map to bake this in. This is generated by essentially re-rendering the scene from the light's point of view, minus several steps since this only needs to see what's blocking the light. While cheaper than ray-traced shadows for each individual light, calculations can easily pile up with more lights and the resolution of the shadow map to the point where ray tracing becomes cheaper.
* Specular map: Shows where and how shiny the object is.
* Subsurface Scattering map: defines a type of internal translucent glow seen in materials like skin or plastic
* Transparency map: Shows where an object should be translucent or transparent
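The two shadow-map passes described above can be sketched with a toy depth buffer. This is a simplified, hypothetical model (a 1D light with a handful of "texels", not a real renderer): pass one records the nearest occluder per texel, and pass two compares each surface's distance to the light against that record.

```python
def build_shadow_map(occluders, n_texels):
    # Pass 1: from the light's point of view, keep only the nearest depth
    # seen in each texel (direction bucket) of the shadow map.
    shadow_map = [float("inf")] * n_texels
    for texel, depth in occluders:
        shadow_map[texel] = min(shadow_map[texel], depth)
    return shadow_map

def in_shadow(shadow_map, texel, depth, bias=0.01):
    # Pass 2: a point is shadowed if something nearer to the light was
    # recorded for its texel. The small bias avoids "shadow acne" caused
    # by depth precision errors.
    return depth > shadow_map[texel] + bias

# An occluder at depth 2.0 in texel 3; a surface behind it is shadowed.
smap = build_shadow_map([(3, 2.0)], n_texels=8)
print(in_shadow(smap, 3, 5.0))  # True: blocked by the occluder
print(in_shadow(smap, 4, 5.0))  # False: nothing recorded in that texel
```

Since every light needs its own map (and every map its own mini-render), the pile-up with many lights follows directly from this structure.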
Page was moved from UsefulNotes.Graphics Rendering to MediaNotes.Graphics Rendering. Null edit to update page.

Added DiffLines:

* Shadow Map: Adds shadows dynamically to surfaces, rather than rely on the diffuse map to bake this in. While the calculation itself is cheaper than doing ray-traced based shadowing, shadow maps can become expensive to do since it requires basically re-rendering the scene from the light's point of view.


* Geometry Work: 3D models are moved to their new location and extra work is done on them to handle the amount of detail that will be seen. Some extra processing may be done to work on this:

to:

* Geometry Work: 3D models are moved to their new location, with some optimizations or other work done on the models with the ultimate goal of presenting as much detail as possible with the minimum amount of processing. Vertex, geometry, and mesh shading stages of the rendering pipeline happen here.



* Rasterization: This takes the 3D scene and creates a 2D representation of it for the monitor. This is done by laying the pixels over the 3D scene and if a triangle passes over a sample point in the pixel, the triangle will be rendered for that pixel. There are two types:

to:

* Rasterization: This takes the 3D scene and creates a 2D representation of it for the monitor. This is done by laying the pixels over the 3D scene and if a triangle passes over a sample point in the pixel, the triangle will be rendered for that pixel. In 3D graphics parlance, these samples are known as "fragments." There are two types of rasterization:
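The sample-point test described above is typically done with signed edge functions. Below is a rough Python sketch under the assumption of counter-clockwise triangle winding; `covers` is a hypothetical helper for illustration, not any engine's real API.

```python
def edge(ax, ay, bx, by, px, py):
    # Signed-area test: positive when point P lies to the left of edge A->B.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def covers(tri, px, py):
    """Does the triangle cover this sample point? A pixel whose sample
    point passes all three edge tests generates a fragment (a sketch of
    the coverage test hardware rasterizers perform per pixel)."""
    (ax, ay), (bx, by), (cx, cy) = tri  # counter-clockwise winding assumed
    return (edge(ax, ay, bx, by, px, py) >= 0 and
            edge(bx, by, cx, cy, px, py) >= 0 and
            edge(cx, cy, ax, ay, px, py) >= 0)

tri = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
print(covers(tri, 1.0, 1.0))  # True: sample inside, fragment generated
print(covers(tri, 3.5, 3.5))  # False: sample outside, no fragment
```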



* Lighting: The color of each pixel is figured out, usually based on the textures that were applied earlier.
** Forward rendering: The gist of forward rendering is for every object, shade it based on the lights in the scene to a single render target, which becomes the final output. While simple to implement, the computation complexity becomes objects multiplied by lighting. A common optimization is to limit how many lights affect that object.

to:

* Lighting: The color of each pixel is figured out, usually based on the textures that were applied earlier. The pixel shader stage happens here.
** Forward rendering: The gist of forward rendering is for every object, shade it based on the lights in the scene to a single render target, which becomes the final output. While simple to implement, the computation complexity becomes objects multiplied by lighting. A common optimization is to limit how far a light can influence objects, which may often result in either a low number of dynamic lights or dynamic lights with an unrealistically short range.
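The "objects multiplied by lighting" cost is easy to see as a toy loop. This sketch uses hypothetical 1D positions for objects and lights and a counter in place of real shading; the `max_range` parameter models the range-limit optimization mentioned above.

```python
def forward_render(objects, lights, max_range=None):
    # Count shading calculations: one per (object, light) pair that
    # survives the optional range cutoff.
    shades = 0
    for obj in objects:
        for light in lights:
            dist = abs(obj - light)  # toy 1D distance
            if max_range is not None and dist > max_range:
                continue  # range-limited light: skip distant objects
            shades += 1  # stand-in for an actual shading calculation
    return shades

objects = [0, 10, 20, 30]
lights = [0, 15, 100]
print(forward_render(objects, lights))                # 12 = 4 objects x 3 lights
print(forward_render(objects, lights, max_range=20))  # fewer, thanks to cutoff
```

The cutoff trims the inner loop, but as the entry notes, the price is fewer effective dynamic lights or unrealistically short ones.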



** Forward Plus: An improvement over Forward Rendering. The scene is broken up into tiles and then for each tile, figure out how many lights are actually influencing it. Afterwards, apply forward rendering as usual but use only the lights for that tile rather than consider every single light. This scales as good as deferred rendering for a smaller number of lights, but gains the advantage when more lights are used.

to:

** Forward Plus: An improvement over Forward Rendering. The scene is broken up into tiles and then for each tile, figure out how many lights are actually influencing it. Afterwards, apply forward rendering as usual but use only the lights for that tile rather than consider every single light. This scales as well as deferred rendering up to around 100 light sources, but scales better when going beyond that.
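The tile binning step can be sketched as follows. This is a deliberately simplified 1D version (real implementations bin lights into 2D screen tiles on the GPU), and `bin_lights` and its arguments are illustrative names only.

```python
def bin_lights(lights, tile_size, n_tiles):
    # Each light is (position, radius) along a toy 1D screen axis.
    # A light is added to every tile its radius overlaps, so the later
    # shading pass for a tile only loops over that tile's own list.
    tiles = [[] for _ in range(n_tiles)]
    for pos, radius in lights:
        first = max(0, int((pos - radius) // tile_size))
        last = min(n_tiles - 1, int((pos + radius) // tile_size))
        for t in range(first, last + 1):
            tiles[t].append((pos, radius))
    return tiles

tiles = bin_lights([(5, 2), (50, 3), (52, 3)], tile_size=10, n_tiles=8)
print(len(tiles[0]))  # tile 0 shades against 1 light, not all 3
print(len(tiles[5]))  # tile 5 sees only the two nearby lights
```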



* High dynamic range lighting: An enhancement for lighting. In standard lighting, all light sources are clamped to a dynamic range of 256 values. This causes strange artifacts in lighting. For instance, if a surface reflects 50% of light, then light from the sun can look as bright as a flashlight. With HDR lighting, lighting is done in a higher dynamic range, then is sampled down. This allows light from the sun to remain as bright as it should be, even if the reflectivity is low. Note this is different than outputting HDR for displays.
* Level of detail ([=LOD=]) and Mipmapping: An early optimization in rendering. If a 3D model or texture is far enough away from the player camera, it would be wasteful to render its full detailed version since the player can only see some of it. To prevent this from happening, the 3D model or texture is swapped with a lower detailed version of it. LOD exclusively refers to this optimization with 3D models while mipmapping is exclusively for textures.

to:

* High dynamic range lighting: An enhancement for calculating lighting. In standard lighting, all light sources are clamped to a dynamic range of 256 values, which can lead to strange artifacts. For instance, if a surface reflects 50% of light, then light from the sun can look as bright as a flashlight. With HDR lighting, lighting is done in a higher dynamic range, then is sampled down. This allows light from the sun to remain as bright as it should be, even if the reflectivity is low. Note this is different from outputting HDR for displays.
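One common way the "sampled down" step works is a tone-mapping curve such as Reinhard's operator. The sketch below assumes made-up relative intensities for a flashlight and the sun; the point is that both survive the squeeze into 0-255 with the sun still reading as brighter, instead of both clamping to the same maximum.

```python
def tone_map(hdr_value):
    # Reinhard's operator: x / (1 + x) maps [0, infinity) smoothly
    # into [0, 1), instead of clamping everything above 1.0.
    return hdr_value / (1.0 + hdr_value)

def to_8bit(hdr_value):
    # Compress the unclamped lighting result into a display's 0-255 range.
    return round(tone_map(hdr_value) * 255)

flashlight = 2.0  # hypothetical relative intensities
sunlight = 50.0   # far brighter than a 0-255 clamp could represent
print(to_8bit(flashlight), to_8bit(sunlight))  # 170 250
```

With simple clamping both lights would hit 255 and look identical; the curve preserves their relative ordering.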
* Level of detail ([=LOD=]) and Mipmapping: An early optimization in rendering. If a 3D model or texture is far enough away from the player camera, it would be wasteful to render its full detailed version since the player can only see some of it. To prevent this from happening, the 3D model or texture is swapped with a lower detailed version of it. The term LOD itself is generally only used for 3D models, while mipmapping is only used for textures, though LOD may also be used for textures.
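A mip chain and a level-selection rule can be sketched like so. The log2-based pick is a simplification of what GPUs actually derive from texture-coordinate derivatives, and the function names are made up for illustration.

```python
import math

def mip_chain_sizes(base_size):
    # Pre-shrunk copies of a texture: each level halves the previous
    # one, down to 1x1 (e.g. 16 -> 8 -> 4 -> 2 -> 1).
    sizes = [base_size]
    while sizes[-1] > 1:
        sizes.append(sizes[-1] // 2)
    return sizes

def pick_mip_level(texels_per_pixel):
    # Pick the level whose texel density roughly matches the screen:
    # 1 texel per pixel -> level 0, 4 texels per pixel -> level 2, etc.
    return max(0, int(math.log2(max(texels_per_pixel, 1.0))))

print(mip_chain_sizes(16))  # [16, 8, 4, 2, 1]
print(pick_mip_level(1.0))  # 0: close-up surface uses full detail
print(pick_mip_level(8.0))  # 3: distant surface uses a smaller copy
```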



* Transform: All vertices and polygons are positioned in 3D space.

to:

* Geometry Work: 3D models are moved to their new location and extra work is done on them to handle the amount of detail that will be seen. Some extra processing may be done to work on this:



* Ray Tracing (if used): From the viewpoint of the monitor, rays are cast out and they interact with the 3D scene. The final color of the pixel is determined by what the ray returns as visible along with lighting and coloring properties with it.
* Clipping: Once the world is setup, it would be inefficient to have to render things that can't be seen by the player. Clipping removes assets that can't be seen by the player so it won't be rendered. For example, if one is in a house, the 3D engine will not render rooms that are not immediately visible to the player. A common form of clipping is called backface culling, which is when the same polygon is invisible when viewed from one side but not the other.

to:

* Transform: The coordinates of models are transformed from world space (how the game sees the world) to camera space (how the player sees the world).
* Ray Tracing (if used): From the viewpoint of the player camera, rays are cast out and they interact with the 3D scene. The final color of the pixel is determined by what the ray returns as visible, along with its lighting and coloring properties.
* Clipping: After getting the scene from the camera's point of view, it'd be inefficient to have to render things that can't be seen by the player. Clipping removes assets that can't be seen by the player so they won't be rendered. For example, if one is in a house, the 3D engine will not render rooms that are not immediately visible to the player. A common form of clipping is called backface culling, which is when the same polygon is invisible when viewed from one side but not the other.
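The transform stage and backface culling can be illustrated in stripped-down form. This sketch assumes a camera that translates but doesn't rotate (a real engine uses a full 4x4 view matrix), and the culling convention shown is one of several used by different APIs.

```python
def world_to_camera(point, camera_pos):
    # With a translation-only camera, moving into camera space is just
    # subtracting the camera's world position from each coordinate.
    return tuple(p - c for p, c in zip(point, camera_pos))

def is_backface(normal, view_dir):
    # Backface culling test: a polygon facing away from the camera has a
    # normal pointing in roughly the same direction as the view ray, so
    # the dot product between them is positive and the polygon is skipped.
    dot = sum(n * v for n, v in zip(normal, view_dir))
    return dot > 0

# A point 10 units ahead of the camera ends up at z = 10 in camera space.
print(world_to_camera((10.0, 5.0, 30.0), camera_pos=(10.0, 5.0, 20.0)))
print(is_backface(normal=(0.0, 0.0, 1.0), view_dir=(0.0, 0.0, 1.0)))  # culled
```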



The overall point of a texture is to provide detail to polygons or meshes without needing to spend more polygons to represent the detail. There are downsides to this, but allow rendering to happen at higher performance. A texture type is usually called a "map", because it's a "map" on how the pixels should look on screen.

to:

The overall point of a texture is to provide detail to polygons or meshes without needing to spend more polygons to represent the detail. While there are some obvious downsides, the tradeoff is faster rendering times. A texture type is usually called a "map", because it's a "map" on how the pixels should look on screen.



* [[MotionParallax Parallax Map]]: Alters how "deep" the rendering engine sees the pixel. This allows even more three-dimensional detail to be seen on the surface of a polygon. For instance, if you had a parallax map of a grill over a vent, moving up and down will give the appearance that the vanes of the grill obscure each other. The illusion however fails in certain points, such as seeing the corner of a parallax mapped cube. Also if the map isn't detailed enough, the depth tends to look as it's in hard steps rather than a smooth transition.

to:

* [[MotionParallax Parallax Map]]: Alters how "deep" the rendering engine sees the pixel. This allows even more three-dimensional detail to be seen on the surface of a polygon. For instance, if you had a parallax map of a grill over a vent, moving up and down will give the appearance that the vanes of the grill obscure each other. The illusion however fails in certain points, such as seeing the corner of a parallax mapped cube. Also if the map isn't detailed enough, you can see discrete steps in the depth of the pixel.



* High dynamic range lighting: An enhancement for lighting. In standard lighting, all light sources are clamped to a dynamic range of 256 values. This causes strange artifacts in lighting. For instance, if a surface reflects 50% of light, then light from the sun can look as bright as a flashlight. With HDR lighting, lighting is done in a higher dynamic range, then is sampled down. This allows light from the sun to remain as bright as it should be, even if the reflectivity is low. Note this is different than outputting HDR for displays.
* Level of detail ([=LOD=]) and Mipmapping: An early optimization in rendering. If a 3D model or texture is far enough away from the player camera, it would be wasteful to render its full detailed version since the player can only see some of it. To prevent this from happening, the 3D model or texture is swapped with a lower detailed version of it. LOD exclusively refers to this optimization with 3D models while mipmapping is exclusively for textures.
* Materials: A composite type of property for objects which combines texture mapping, sound, and physics. For example, if the game engine comes with a wood material, applying it to an object makes it look like wood, scrape like wood, sound like wood, and break like wood. Likewise, applying a metallic material would make the same object look like metal, shine like metal, and sound like metal.



* Physically based rendering: A style of rendering gaining ground since 2012. It's not a more complex type of lighting, but a change in philosophy to make lights and lighting effects behave realistically. An example of this is "conservation of energy": a reflection of light cannot be brighter than the light itself, which may have been a case in some scenes to achieve a desired effect.



* Materials: A composite type of property for objects which combines texture mapping, sound, and physics. For example, if the game engine comes with a wood material, applying it to an object makes it look like wood, scrape like wood, sound like wood, and break like wood. Likewise, applying a metallic material would make the same object look like metal, shine like metal, and sound like metal.
* High dynamic range lighting: An enhancement for lighting. In standard lighting, all light sources are clamped to a dynamic range of 256 values. This causes strange artifacts in lighting. For instance, if a surface reflects 50% of light, then light from the sun can look as bright as a flashlight. With HDR lighting, lighting is done in a higher dynamic range, then is sampled down. This allows light from the sun to remain as bright as it should be, even if the reflectivity is low. Note this is different than outputting HDR for displays.
* Physically based rendering: A style of rendering gaining ground since 2012. It's not a more complex type of lighting, but a change in philosophy to make lights and lighting effects behave realistically. An example of this is "conservation of energy": a reflection of light cannot be brighter than the light itself, which may have been a case in some scenes to achieve a desired effect.


** Forward rendering: The gist of forward rendering is for every object, shade it based on the lights in the scene to a single render target, which becomes the final output. While simple to implement, the calculation complexity becomes objects multiplied by lighting. A common optimization is to limit how many lights affect that object.
** Deferred rendering: Geometry is rendered without lighting to generate multiple render targets, usually consisting of applied textures (see below) and depth. These render targets are combined, then shaded. While more complicated to implement, the complexity is now objects plus lighting. The downsides are it doesn't support transparent objects as it only considers the front-most thing the pixel "sees" and it only supports one lighting model so you can't mix say Main/CelShading with realistic lighting.

to:

** Forward rendering: The gist of forward rendering is for every object, shade it based on the lights in the scene to a single render target, which becomes the final output. While simple to implement, the computation complexity becomes objects multiplied by lighting. A common optimization is to limit how many lights affect that object.
** Deferred rendering: Geometry is rendered without lighting to generate multiple render targets, usually consisting of applied textures (see below) and depth. These render targets are combined, then shaded. While more complicated to implement, the computation complexity now increases by objects plus lighting. The downsides are it doesn't support transparent objects, as it only considers the front-most thing the pixel "sees", and it only supports one lighting model, so you can't mix say Main/CelShading with realistic lighting.
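A toy model of the two deferred passes, with pixels as dictionary keys and colors as single brightness floats (both hypothetical simplifications). The geometry pass keeps only the front-most surface per pixel, which is exactly the transparency limitation noted above, and the lighting pass then runs over the buffer once per light rather than once per object-light pair.

```python
def geometry_pass(surfaces):
    # surfaces: (pixel, depth, color) tuples. Fill the G-buffer with the
    # nearest surface per pixel; anything behind it is discarded, which
    # is why transparent objects don't fit this model.
    gbuffer = {}
    for pixel, depth, color in surfaces:
        if pixel not in gbuffer or depth < gbuffer[pixel][0]:
            gbuffer[pixel] = (depth, color)
    return gbuffer

def lighting_pass(gbuffer, light_intensities):
    # Accumulate each light's contribution over the finished G-buffer:
    # work scales with lights, independent of how many objects there were.
    total = sum(light_intensities)
    return {px: color * total for px, (depth, color) in gbuffer.items()}

# Two surfaces fight over pixel 0; only the nearer (depth 2.0) survives.
gbuf = geometry_pass([(0, 5.0, 0.5), (0, 2.0, 0.8), (1, 3.0, 0.4)])
print(lighting_pass(gbuf, [1.0, 0.5]))
```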



* Lighting: The color of each pixel is figured out.

to:

* Texture mapping: Various textures are applied to the polygons. This requires some complex math to project a 2D image onto a 3D surface properly. Since there's not much variation in how to map a texture to a polygon, [=GPUs=] have been able to do this really fast with dedicated, fixed function hardware.
* Lighting: The color of each pixel is figured out, usually based on the textures that were applied earlier.
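At its simplest, the texture-mapping projection boils down to scaling UV coordinates into texel indices. A minimal nearest-neighbor sketch, using a hypothetical 2x2 checker texture of brightness values (real GPUs do the same lookup, plus filtering, in fixed-function hardware):

```python
def sample_nearest(texture, u, v):
    # Map UV coordinates in [0, 1] to texel indices, clamping at the edge.
    h = len(texture)
    w = len(texture[0])
    x = min(w - 1, int(u * w))
    y = min(h - 1, int(v * h))
    return texture[y][x]

checker = [
    [0, 255],
    [255, 0],
]
print(sample_nearest(checker, 0.1, 0.1))  # 0: top-left texel
print(sample_nearest(checker, 0.9, 0.1))  # 255: top-right texel
```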



** Forward Plus: An improvement over Forward Rendering. The scene is broken up into tiles and then for each tile, figure out how many lights are actually influencing it. Afterwards, apply forward rendering as usual but use only the lights for that tile rather than consider every single light. This scales as good as deferred rendering for a smaller number of lights, but gains the advantage over when more lights are used.

to:

** Forward Plus: An improvement over Forward Rendering. The scene is broken up into tiles and then for each tile, figure out how many lights are actually influencing it. Afterwards, apply forward rendering as usual but use only the lights for that tile rather than consider every single light. This scales as good as deferred rendering for a smaller number of lights, but gains the advantage when more lights are used.



* Environment map: A texture with an approximation of the world's surroundings. This is used in simple reflections, and to create realism when accuracy is less important than looking good or matching a scene it will be composited into. It could be either in the form of sphere maps, cube maps or in some cases [[https://www.youtube.com/watch?v=8TMwKB7Qe54 paraboloid maps]], which can produce cubemap-like reflections while using fewer passes.

to:

* Environment map: A texture with an approximation of the world's surroundings. This is used in simple reflections, and to create realism when accuracy is less important than looking good or matching a scene it will be composited into. It could be either in the form of sphere maps, cube maps or in some cases [[https://www.youtube.com/watch?v=8TMwKB7Qe54 paraboloid maps]], which can produce cube map-like reflections while using fewer passes.



* Absorbtion Map: defines where a type of volumetric shader should absorb light in the way smoke or fog might.
* Scattering Map: Similar to an absorbtion map, but with a glow-like effect for simulating subtle atmospherics and haze

to:

* Absorption Map: defines where a type of volumetric shader should absorb light in the way smoke or fog might.
* Scattering Map: Similar to an absorption map, but with a glow-like effect for simulating subtle atmospherics and haze.



* Ray tracing: A family of algorithms where from the point of view, a ray or multiple rays are shot out to simulate light. Color is calculated by how these rays interacts with objects

to:

* Ray tracing: A family of algorithms where from the point of view, a ray or multiple rays are shot out to simulate light. Color is calculated by how these rays interact with objects.



** Path tracing: This is normally what's used when the term "ray tracing" is used. Multiple rays at random angles per pixel are shot out into the scene, with the culmination of these rays determining the final color of the pixel. This is considered to be the holy grail of real-time 3D rendering, and has been too computationally expensive to perform for real-time applications to an acceptable degree until late 2018.

to:

** Path tracing: This is normally what's used when the term "ray tracing" is used. Multiple rays at random angles per pixel are shot out into the scene, with the culmination of these rays determining the final color of the pixel. The more rays that can be shot, the better the result. This is considered to be the holy grail of real-time 3D rendering, and has been too computationally expensive to perform for real-time applications to an acceptable degree until late 2018.
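The "multiple rays, averaged" idea is just a Monte Carlo estimate. In this toy sketch the ray result is faked with random noise around a true brightness of 0.5 (a stand-in for actual scene intersection and bounce logic); averaging more rays per pixel pulls the estimate toward the truth, which is why ray counts matter so much for image quality.

```python
import random

def trace_random_ray(rng):
    # Stand-in for a real ray: a noisy sample around a true brightness
    # of 0.5, as if rays scatter off a half-bright surface at random angles.
    return 0.5 + rng.uniform(-0.4, 0.4)

def render_pixel(n_rays, seed=0):
    # Average many random samples, as a path tracer averages many rays.
    rng = random.Random(seed)
    return sum(trace_random_ray(rng) for _ in range(n_rays)) / n_rays

# The estimate's error typically shrinks as the ray count grows, which
# is why few-ray renders look noisy and many-ray renders look clean.
print(abs(render_pixel(4) - 0.5))
print(abs(render_pixel(10000) - 0.5))
```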


* Parallax Map: Alters how "deep" the rendering engine sees the pixel. This allows even more three-dimensional detail to be seen on the surface of a polygon. For instance, if you had a parallax map of a grill over a vent, moving up and down will give the appearance that the vanes of the grill obscure each other. The illusion however fails in certain points, such as seeing the corner of a parallax mapped cube. Also if the map isn't detailed enough, the depth tends to look as it's in hard steps rather than a smooth transition.

to:

* [[MotionParallax Parallax Map]]: Alters how "deep" the rendering engine sees the pixel. This allows even more three-dimensional detail to be seen on the surface of a polygon. For instance, if you had a parallax map of a grill over a vent, moving up and down will give the appearance that the vanes of the grill obscure each other. The illusion however fails in certain points, such as seeing the corner of a parallax mapped cube. Also if the map isn't detailed enough, the depth tends to look as if it's in hard steps rather than a smooth transition.


* Shader: a set of rules defining how light interacts with a surface or object, based on the textures they are given. Simple shaders might only calculate shadows or a particular type of reflected highlight, while complex principled shaders can take input from a stack of texture types to simulate a huge variety of materials in a unified process. When extreme detail and realism are required, many different shaders might be combined to compensate for each method's limitations.

to:

* Shader: a set of rules defining how light interacts with a surface or object, based on the textures they are given. Simple shaders might only calculate shadows or a particular type of reflected highlight, while complex principled shaders can take input from a stack of texture types to simulate a huge variety of materials in a unified process. When extreme detail and realism are required, many different shaders might be combined to compensate for each method's limitations. For instance, rather than create a glowing crystal by brute force simulation, an artist might pair a realistic glass shader for the surface with a simple shader that efficiently "fakes" the interior effects in a way that might look ''better'' than full physically-accurate simulation.

Revising some bits for clarity or to update the content.


Today 2D raster graphics directly draw on the screen. Elements are typically pulled from a picture map that contains known sizes of certain elements. The difference between this and tiled 2D is that each element can now be placed on the pixel level rather from a grid position that may have, for example, a 4x4 pixel area and that the elements can be any arbitrary size.

to:

Today 2D raster graphics directly draw on the screen with elements pulled from a picture map. Unlike early 2D consoles, these picture maps can have elements of arbitrary size, though for the sake of making things easier for the computer to process, are typically limited to a power of 2. Also unlike 2D consoles, these elements can be placed anywhere on the screen, rather than be forced into fixed positions based on a tile map.



The advantage of vector graphics is its infinite scalability. Because everything is created on the fly, a low resolution vector image will look just as good as a high definition one. Whereas if you scaled a low resolution raster graphics, you would get a blurry or pixelated high resolution one, often with ugly results (though there are AI based upscaling algorithms that do a really good job at preserving detail). Its major downside is it's more computationally expensive to render an image this way, since everything has to be calculated rather than plucked from a table. To put it in another way, raster graphics is like putting together a clip-art scene while vector graphics requires the artist to draw everything in.

Graphical user interfaces elements are built from vectors because of their need to scale with some raster elements like icons. Fonts are also built from vectors because of their need to scale as well.

to:

The advantage of vector graphics is its infinite scalability. Because everything is created on the fly, a low resolution vector image will look just as good as a high definition one. Whereas if you scaled a low resolution raster graphic, you would get a blurry or pixelated high resolution one, often with ugly results. While AI based upscaling algorithms can do a really good job of guessing the detail, it still won't make, say, Mario's sprite from the first Super Mario Bros. look like something in his later 2D incarnations on the DS.

Its major downside is it's more computationally expensive to render an image this way, since everything has to be calculated rather than plucked from a table. To put it in another way, raster graphics is like putting together a clip-art scene while vector graphics requires the artist to draw everything in.

A major use of vector graphics is graphical user interface elements, because of their need to scale. Raster elements may be used for things like icons or diagrams. Fonts are also built from vectors because of their need to scale as well.



[[http://advsys.net/ken/voxlap.htm Voxlap]] and [[http://atomontage.com/ Atomontage]] are currently the only known voxel based graphical engines with potential for games. Other games used voxel based technology for certain elements. {{VideoGame/Minecraft}} uses voxels for map data, ''VideoGame/CommandAndConquerTiberianSun'' uses them for vehicles.

to:

Despite these limitations, voxels were used occasionally in games during the 90s, such as in the [[Creator/NovaLogic Comanche series]] and for mapping the ground detail in ''{{VideoGame/Outcast}}''. It gained more prominence towards the late 2000s and throughout the 2010s, especially when ''{{VideoGame/Minecraft}}'' hit the scene, which uses voxels for mapping data. Voxels are also used readily in polygonal 3D graphics to aid in lighting and other graphical effects.

Some attempts have been made to make a game engine using voxels entirely, examples including [[http://advsys.net/ken/voxlap.htm Voxlap]] and [[http://atomontage.com/ Atomontage]]. Some games have also used voxels for all assets as well, such as ''{{VideoGame/Teardown}}''.



* Sprites: These are 2D elements, typically made of one or two polygons in pure 3D engines. Sprites tend to always face the player in an effect called "billboarding." Sprites were common in early 3D games, especially for generally round objects (balls, trees) or far away objects.

to:

* Sprites: These are 2D elements, typically made of one or two polygons in pure 3D engines. Sprites tend to always face the player in an effect called "billboarding." Sprites were common in early 3D games to depict most objects that weren't part of the world geometry, such as enemies, items, and even greenery such as trees. Today they build up "fuzzy" things like grass, fur, and leaves, but are also used to depict far away objects.
