In computer graphics, a shader is a type of computer program that was originally used for shading (producing the appropriate levels of light, darkness, and color in a rendered image), but which now performs a variety of specialized functions in various areas of computer graphics special effects, or does video post-processing unrelated to shading, or even performs functions unrelated to graphics at all.
Shaders calculate rendering effects on graphics hardware with a high degree of flexibility. Most shaders are coded for a graphics processing unit (GPU), though this is not a strict requirement. Shading languages are commonly used to program the GPU's programmable rendering pipeline, which has largely superseded the fixed-function pipeline that allowed only common geometry transformation and pixel-shading functions; with shaders, customized effects can be used. The position, hue, saturation, brightness, and contrast of all pixels, vertices, or textures used to construct the final image can be altered on the fly, using algorithms defined in the shader, and can be modified by external variables or textures introduced by the program calling the shader.
Shaders are used widely in cinema post-processing, computer-generated imagery, and video games to produce a wide range of effects. Beyond simple lighting models, more complex uses include altering the hue, saturation, brightness, or contrast of an image, and producing blur, light bloom, volumetric lighting, normal mapping for depth effects, bokeh, cel shading, posterization, bump mapping, distortion, chroma keying (the so-called "bluescreen/greenscreen" effect), edge and motion detection, and psychedelic effects, among others.
History
The modern use of "shader" was introduced to the public by Pixar with the "RenderMan Interface Specification, Version 3.0", originally published in May 1988.
As graphics processing units evolved, major graphics software libraries such as OpenGL and Direct3D began to support shaders. The first shader-capable GPUs only supported pixel shading, but vertex shaders were quickly introduced once developers realized the power of shaders. The first video card with a programmable pixel shader was the Nvidia GeForce 3 (NV20), released in 2001. [1] Geometry shaders were introduced with Direct3D 10 and OpenGL 3.2. Eventually, graphics hardware evolved toward a unified shader model.
Design
Shaders are simple programs that describe the traits of either a vertex or a pixel. Vertex shaders describe the attributes (position, texture coordinates, colors, etc.) of a vertex, while pixel shaders describe the traits (color, z-depth, and alpha value) of a pixel. A vertex shader is called for each vertex in a primitive (possibly after tessellation); thus one vertex in, one (updated) vertex out. Each vertex is then rendered as a series of pixels onto a surface (block of memory) that will eventually be sent to the screen.
Shaders replace a section of the graphics hardware typically called the Fixed Function Pipeline (FFP), so called because it performed lighting and texture mapping in a hard-coded manner. Shaders provide a programmable alternative to this hard-coded approach.
The basic graphics pipeline is as follows:
- The CPU sends instructions (compiled shading language programs) and geometry data to the graphics processing unit, located on the graphics card.
- Within the vertex shader, the geometry is transformed.
- If a geometry shader is in the graphics processing unit and active, some changes to the geometries in the scene are performed.
- If a tessellation shader is in the graphics processing unit and active, the geometries in the scene can be subdivided.
- The calculated geometry is triangulated (subdivided into triangles).
- Triangles are broken down into fragment quads (one fragment quad is a 2 × 2 fragment primitive).
- Fragment quads are modified according to the fragment shader.
- The depth test is performed; fragments that pass are written to the screen and may be blended into the frame buffer.
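The stages above can be sketched as a tiny software pipeline. The following Python sketch is purely illustrative: the shader functions, the 2 × 2 "screen", and all names are hypothetical simplifications, not a real graphics API, and rasterization is reduced to treating each transformed vertex as a single fragment.

```python
def vertex_shader(v):
    # Transform the geometry: here, simply translate x by +1.
    x, y, z = v
    return (x + 1, y, z)

def fragment_shader(frag):
    # Color each fragment by its depth (darker = farther away).
    x, y, z = frag
    shade = max(0.0, 1.0 - z)
    return (x, y, z, (shade, shade, shade))

def render(vertices, width=2, height=2):
    # A tiny depth buffer and frame buffer for a 2x2 "screen".
    depth = {(x, y): float("inf") for x in range(width) for y in range(height)}
    color = {}
    shaded = [fragment_shader(vertex_shader(v)) for v in vertices]
    for x, y, z, rgb in shaded:
        if z < depth[(x, y)]:      # depth test
            depth[(x, y)] = z
            color[(x, y)] = rgb    # write into the "frame buffer"
    return color

# The last vertex lands on the same pixel as the third but is deeper,
# so the depth test rejects it.
framebuffer = render([(0, 0, 0.5), (-1, 1, 0.25), (0, 1, 0.75), (0, 1, 0.9)])
```

Running this leaves the nearer fragment's color at pixel (1, 1), showing the depth test and frame-buffer write from the last pipeline stage.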
The graphics pipeline uses these steps in order to transform three-dimensional (or two-dimensional) data into useful two-dimensional data for displaying. In general, this is a large pixel matrix or "frame buffer".
Types
There are three types of shaders in common use, with newer ones added more recently. While older graphics cards utilized separate processing units for each shader type, newer cards feature unified shaders that are capable of executing any type of shader. This allows graphics cards to make more efficient use of their processing power.
2D Shaders
2D shaders act on digital images, also called textures in the field of computer graphics. They modify attributes of pixels. 2D shaders may take part in rendering 3D geometry. Currently the only type of 2D shader is the pixel shader.
Pixel shaders
Pixel shaders, also known as fragment shaders, compute color and other attributes of each "fragment": a technical term usually meaning a single pixel. The simplest kinds of pixel shaders output one screen pixel as a color value; more complex shaders with multiple inputs/outputs are also possible. Pixel shaders range from always outputting the same color, to applying a lighting value, to doing bump mapping, shadows, specular highlights, translucency, and other phenomena. They can alter the depth of the fragment (for Z-buffering), or output more than one color if multiple render targets are active. In 3D graphics, a pixel shader alone cannot produce some kinds of complex effects, because it operates only on a single fragment, without knowledge of a scene's geometry. However, pixel shaders do have knowledge of the screen coordinate being drawn, and can sample the screen and nearby pixels if the contents of the entire screen are passed as a texture to the shader. This technique can enable a wide variety of two-dimensional postprocessing effects, such as blur, or edge detection/enhancement for cartoon/cel shaders. Pixel shaders may also be applied in intermediate stages to any two-dimensional images - sprites or textures - in the pipeline, whereas vertex shaders always require a 3D scene. For instance, a pixel shader is the only kind of shader that can act as a postprocessor or filter for a video stream after it has been rasterized.
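The idea can be sketched as a pure function applied independently to every fragment of an image. This is an illustrative Python sketch only; real pixel shaders are written in a shading language such as GLSL or HLSL and run on the GPU, and the function names here are hypothetical.

```python
def grayscale_shader(rgb):
    # Luminance-weighted grayscale, a classic per-pixel effect.
    r, g, b = rgb
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return (y, y, y)

def apply_pixel_shader(image, shader):
    # The shader sees one fragment at a time, with no knowledge
    # of the scene's geometry.
    return [[shader(px) for px in row] for row in image]

image = [[(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
         [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0)]]
out = apply_pixel_shader(image, grayscale_shader)
```

Because each fragment is processed independently, the same shader could run over all pixels in parallel, which is exactly what the GPU does.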
3D Shaders
3D shaders act on 3D models or other geometry but may also access the colors and textures used to draw the model or mesh. Vertex shaders are the oldest type of 3D shader, generally making modifications on a per-vertex basis. Geometry shaders can generate new vertices from within the shader. Tessellation shaders are newer 3D shaders that act on batches of vertices all at once to add detail - such as subdividing a model into smaller groups of triangles or other primitives at runtime, to improve things like curves and bumps, or change other attributes.
Vertex shaders
Vertex shaders are the most established and common kind of 3D shader, and are run once for each vertex given to the graphics processor. The purpose is to transform each vertex's 3D position in virtual space to the 2D coordinate at which it appears on the screen (as well as a depth value for the Z-buffer). Vertex shaders can manipulate properties such as position, color, and texture coordinates, but cannot create new vertices. The output of the vertex shader goes to the next stage in the pipeline, which is either a geometry shader if present, or the rasterizer. Vertex shaders can enable powerful control over the details of position, movement, lighting, and color in any scene involving 3D models.
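A minimal sketch of that core computation, assuming a simple pinhole projection in place of the usual 4 × 4 matrix multiply (the function name and focal-length parameter are hypothetical):

```python
def vertex_shader(position, focal_length=1.0):
    # Project a vertex position in view space to 2D screen
    # coordinates, plus a depth value kept for the Z-buffer.
    x, y, z = position
    sx = focal_length * x / z    # perspective divide
    sy = focal_length * y / z
    return (sx, sy, z)           # 2D coordinates + depth

# A vertex two units right, one up, and four units in front of
# the camera lands at (0.5, 0.25) on the screen plane.
screen_pos = vertex_shader((2.0, 1.0, 4.0))
```

Note that the shader maps one input vertex to exactly one output vertex, as described above; it cannot emit new ones.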
Geometry shaders
Geometry shaders are a relatively new type of shader, introduced in Direct3D 10 and OpenGL 3.2; they were previously available in OpenGL 2.0 with the use of extensions. This type of shader can generate new graphics primitives, such as points, lines, and triangles, from those primitives that were sent to the beginning of the graphics pipeline.
Geometry shader programs are executed after vertex shaders. They take as input a whole primitive, possibly with adjacency information. For example, when operating on triangles, the three vertices are the geometry shader's input. The shader can then emit zero or more primitives, which are rasterized and their fragments ultimately passed to a pixel shader.
Typical uses of a geometry shader include point sprite generation, geometry tessellation, shadow volume extrusion, and single-pass rendering to a cube map. A typical real-world example of the benefits of geometry shaders would be automatic mesh complexity modification. A series of line strips representing control points for a curve are passed to the geometry shader, and depending on the complexity required the shader can automatically generate extra lines, each of which provides a better approximation of the curve.
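The curve example can be sketched as follows: one input primitive (a line with its control points) goes in, and several finer line segments are emitted. This is an illustrative sketch only; a quadratic Bézier stands in for "the curve", and the function names are hypothetical.

```python
def bezier_point(p0, p1, p2, t):
    # Evaluate a quadratic Bezier curve at parameter t in [0, 1].
    u = 1.0 - t
    return (u*u*p0[0] + 2*u*t*p1[0] + t*t*p2[0],
            u*u*p0[1] + 2*u*t*p1[1] + t*t*p2[1])

def geometry_shader(p0, p1, p2, segments=4):
    # Emit `segments` line segments approximating the curve;
    # a higher segment count yields a better approximation.
    pts = [bezier_point(p0, p1, p2, i / segments) for i in range(segments + 1)]
    return list(zip(pts, pts[1:]))

# One input primitive (three control points) in, four segments out.
emitted = geometry_shader((0.0, 0.0), (1.0, 2.0), (2.0, 0.0))
```

Raising `segments` based on some complexity measure is the "automatic mesh complexity modification" the text describes: the amplification happens on the GPU rather than in the application.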
Tessellation shaders
As of OpenGL 4.0 and Direct3D 11, a new shader class called a tessellation shader has been added. It adds two new shader stages to the traditional model: tessellation control shaders (also known as hull shaders) and tessellation evaluation shaders (also known as domain shaders), which together allow simpler meshes to be subdivided into finer meshes at run-time according to a mathematical function. The function can be related to a variety of variables, most notably the distance from the viewing camera, to allow active level-of-detail scaling. This allows objects close to the camera to have fine detail, while those farther away can have coarser meshes, yet seem comparable in quality. It can also drastically reduce required mesh bandwidth by allowing meshes to be refined once inside the shader units instead of downsampling very complex ones from memory. Some algorithms can upsample any arbitrary mesh, while others allow for "hinting" in meshes to dictate the most characteristic vertices and edges.
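The level-of-detail idea can be sketched in two steps mirroring the two stages: a "control" step picks a subdivision level from the distance to the camera, and an "evaluation" step refines the mesh. Midpoint subdivision of triangles stands in for the real mathematical function here; all names and thresholds are hypothetical.

```python
def tess_level(distance, near=1.0, far=100.0, max_level=4):
    # Closer objects get higher tessellation levels (the "control" step).
    t = max(0.0, min(1.0, (far - distance) / (far - near)))
    return round(t * max_level)

def subdivide(tri):
    # Split one triangle into four via its edge midpoints.
    a, b, c = tri
    mid = lambda p, q: tuple((pi + qi) / 2 for pi, qi in zip(p, q))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def tessellate(tri, distance):
    # The "evaluation" step: refine the mesh to the chosen level.
    tris = [tri]
    for _ in range(tess_level(distance)):
        tris = [t for parent in tris for t in subdivide(parent)]
    return tris

near_mesh = tessellate(((0, 0), (1, 0), (0, 1)), distance=1.0)    # fine
far_mesh = tessellate(((0, 0), (1, 0), (0, 1)), distance=100.0)   # coarse
```

A nearby triangle is refined into many sub-triangles while a distant one stays as a single coarse primitive, and only the coarse mesh ever has to be sent over the bus.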
Primitive shader
The AMD Vega microarchitecture added support for a new shader stage - primitive shaders.
Other
Compute shaders
Compute shaders are not limited to graphics applications, but use the same execution resources for GPGPU. They may be used in graphics pipelines, e.g., for additional stages in animation or lighting algorithms (e.g., tiled forward rendering). Some rendering APIs allow compute shaders to easily share data resources with the graphics pipeline.
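The compute-shader model can be sketched as a kernel function dispatched over a grid of invocation IDs that reads and writes shared buffers, with no fixed place in the graphics pipeline. This is an illustrative sketch with hypothetical names; real compute shaders run on the GPU via GLSL, HLSL, or similar, and execute their invocations in parallel rather than in a loop.

```python
def dispatch(kernel, global_size, buffers):
    # A real GPU runs these invocations in parallel across its
    # shader units; here they are simply looped over.
    for invocation_id in range(global_size):
        kernel(invocation_id, buffers)

def saxpy_kernel(i, buffers):
    # Classic GPGPU workload: y[i] = a * x[i] + y[i].
    a, x, y = buffers["a"], buffers["x"], buffers["y"]
    y[i] = a * x[i] + y[i]

buffers = {"a": 2.0, "x": [1.0, 2.0, 3.0], "y": [10.0, 10.0, 10.0]}
dispatch(saxpy_kernel, 3, buffers)
```

Because each invocation touches only its own index, the work is trivially parallel, which is why such kernels map well onto the same execution resources that run graphics shaders.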
Parallel processing
Shaders are written to apply transformations to a large set of elements at a time, for example, to each pixel in an area of the screen, or for every vertex of a model. This is well suited to parallel processing, and most modern GPUs have multiple shader pipelines to facilitate this, vastly improving computation throughput.
A programming model with shaders is similar to a higher-order function for rendering, taking the shaders as arguments and providing a specific dataflow between intermediate results, enabling both data parallelism (across pixels, vertices, etc.) and pipeline parallelism (between stages). (See also map reduce.)
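That higher-order-function view can be made concrete in a short sketch: the renderer takes shader functions as arguments and wires the data stream between stages, mapping each stage over its elements. All names here are hypothetical illustrations, not any real API.

```python
def render(vertex_shader, fragment_shader, vertices):
    # Data parallelism: each stage maps independently over elements.
    # Pipeline parallelism: the fragment stage consumes the vertex
    # stage's output stream.
    transformed = [vertex_shader(v) for v in vertices]   # vertex stage
    return [fragment_shader(v) for v in transformed]     # fragment stage

double = lambda v: tuple(2 * c for c in v)       # a toy "vertex shader"
to_red = lambda v: (v, (1.0, 0.0, 0.0))          # a toy "fragment shader"

pixels = render(double, to_red, [(0, 0), (1, 1)])
```

Each list comprehension is an embarrassingly parallel map, which is why the same model translates directly to thousands of GPU shader units.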
Programming
The language in which shaders are programmed depends on the target environment. The official OpenGL and OpenGL ES shading language is the OpenGL Shading Language, also known as GLSL, and the official Direct3D shading language is the High Level Shader Language, also known as HLSL. Cg, a deprecated third-party shading language developed by Nvidia, outputs both OpenGL and Direct3D shaders. Apple released its own shading language called Metal Shading Language as part of the Metal framework.
See also
- GLSL
- SPIR-V
- HLSL
- Compute kernel
- Shading language
- GPGPU
- List of common shading algorithms
- Vector processor
References
Further reading
- Upstill, Steve. The RenderMan Companion: A Programmer's Guide to Realistic Computer Graphics. Addison-Wesley. ISBN 0-201-50868-0.
- Ebert, David S.; Musgrave, F. Kenton; Peachey, Darwyn; Perlin, Ken; Worley, Steven. Texturing and Modeling: A Procedural Approach. AP Professional. ISBN 0-12-228730-4.
- Fernando, Randima; Kilgard, Mark. The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics. Addison-Wesley Professional. ISBN 0-321-19496-9.
- Rost, Randi J. OpenGL Shading Language. Addison-Wesley Professional. ISBN 0-321-19789-5.
External links
- OpenGL geometry shader extension
- Riemer's DirectX & HLSL Tutorial: HLSL tutorial using DirectX with a lot of sample code
- Pipeline Stages (Direct3D 10)