What are shaders in OpenGL and what do we need them for?

Published 2019-02-01 22:54

Question:

I'm not a native English speaker, and when I try to get through the OpenGL wiki and the tutorials on www.learnopengl.com, I never end up with an intuitive understanding of how the whole concept works. Can someone explain to me in a more abstract way how it works? What are the vertex shader and the fragment shader, and what do we use them for?

Answer 1:

The OpenGL wiki gives a good definition:

A Shader is a user-defined program designed to run on some stage of a graphics processor.

In the past, graphics cards were non-programmable pieces of silicon which performed a set of fixed algorithms: points / colors / lights came in, and a 2D image came out, computed by a fixed algorithm (typically along the lines of the Phong reflection model: https://en.wikipedia.org/wiki/Phong_reflection_model).
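As a rough illustration of what such a fixed algorithm computed, here is the scalar Phong reflection equation sketched in plain Python. The vectors, material constants, and function names are hypothetical, chosen only to show the ambient + diffuse + specular structure:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def reflect(l, n):
    # Reflection of the light direction l about the surface normal n.
    d = 2.0 * dot(l, n)
    return tuple(d * nx - lx for lx, nx in zip(l, n))

def phong(normal, to_light, to_viewer,
          ka=0.1, kd=0.7, ks=0.2, shininess=32.0):
    """Scalar Phong intensity: ambient + diffuse + specular.

    ka/kd/ks and shininess are made-up material constants.
    """
    n = normalize(normal)
    l = normalize(to_light)
    v = normalize(to_viewer)
    diffuse = max(dot(n, l), 0.0)
    r = reflect(l, n)
    specular = max(dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular

# Light and viewer directly above a surface facing up:
print(round(phong((0, 0, 1), (0, 0, 1), (0, 0, 1)), 3))  # → 1.0
```

A surface facing away from the light only receives the ambient term, which is why the backs of objects are not pitch black under this model.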

But that was too restrictive for programmers who wanted to create many different visual effects.

So as technology advanced, GPU vendors started allowing some parts of the rendering pipeline to be programmed in programming languages like GLSL.

Those languages are then compiled to semi-undocumented instruction sets that run on small "CPUs" built into those newer GPUs.

Until not long ago, shader languages were not even Turing complete.

The term General Purpose GPU (GPGPU) refers to this increased programmability of modern GPUs.

In the OpenGL model, only the blue stages of the following diagram are programmable:

[Diagram: the OpenGL rendering pipeline, with the programmable stages in blue]

Shaders take the input from the previous pipeline stage (e.g. vertex positions, colors, and rasterized pixels) and customize the output to the next stage.

The two most important ones are:

  • vertex shader:
    • input: positions of points in 3D space
    • output: the 2D projection of those points (computed with 4×4 matrix multiplication). See: https://stackoverflow.com/a/36046924/895245
  • fragment shader:
    • input: the 2D positions of all pixels of a triangle + (colors at the vertices or a texture image) + lighting parameters
    • output: the color of every pixel of the triangle (if it is not occluded by another, closer triangle), usually interpolated between the vertices
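The vertex-shader step above can be sketched in plain Python: treat the position as a homogeneous 4-vector, multiply it by a 4×4 matrix, and perform the perspective divide (on real hardware the divide happens in fixed-function circuitry after the shader runs). The matrix and function names here are hypothetical, chosen only to show the mechanics:

```python
def mat_vec(m, v):
    # Multiply a 4x4 matrix (list of rows) by a 4-vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def project(point3d, d=1.0):
    """Map a 3D point to 2D screen-plane coordinates.

    Uses a toy perspective matrix: pinhole camera at the origin,
    image plane at z = d, so x and y end up divided by depth.
    """
    x, y, z = point3d
    perspective = [
        [1.0, 0.0, 0.0,     0.0],
        [0.0, 1.0, 0.0,     0.0],
        [0.0, 0.0, 1.0,     0.0],
        [0.0, 0.0, 1.0 / d, 0.0],  # w' = z / d drives the divide
    ]
    cx, cy, cz, cw = mat_vec(perspective, [x, y, z, 1.0])
    # Perspective divide (done after the vertex shader on a real GPU):
    return (cx / cw, cy / cw)

# A point twice as far away lands at half the screen offset:
print(project((1.0, 1.0, 2.0)))  # → (0.5, 0.5)
```

In a real vertex shader the matrix would be a full model-view-projection matrix passed in as a uniform, but the arithmetic is the same.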

The name shader is not very descriptive for current architectures: "shaders" in GLSL also manage vertex positions, not to mention the OpenGL 4.3 GL_COMPUTE_SHADER, which allows arbitrary calculations completely unrelated to rendering, much like OpenCL.

TODO: could OpenGL be efficiently implemented with OpenCL alone, i.e., by making all stages programmable? Of course, there must be a performance / flexibility trade-off.

The first GPUs with shaders used different specialized hardware for vertex and fragment shading, since those have quite different workloads. Current architectures, however, use a single type of hardware (basically small CPUs) for all shaders, which saves some hardware duplication. This concept is known as the unified shader model: https://en.wikipedia.org/wiki/Unified_shader_model

[Diagram: unified shader architecture]

To truly understand shaders and all they can do, you have to look at many examples and learn the APIs. https://github.com/JoeyDeVries/LearnOpenGL, for example, is a good source.

In modern OpenGL 4, even hello-world triangle programs use super simple shaders, instead of the older, deprecated immediate-mode APIs like glBegin and glColor. Here is an example: https://stackoverflow.com/a/36166310/895245

One classic, cool application of a non-trivial shader is dynamic shadows:

[Image: dynamic shadows]



Answer 2:

Shaders basically give you the correct coloring of the object you want to render, based on lighting equations. So if you have a sphere, a light, and a camera, the camera should see some shadows, some shiny parts, etc., even if the sphere has only one color. Shaders perform the lighting-equation computations that produce these effects.

The vertex shader transforms each vertex's 3D position in virtual space (your 3D model) into the 2D coordinate at which it appears on the screen.

The fragment shader basically gives you the coloring of each pixel by doing light computations.
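The per-pixel inputs the fragment shader works with are interpolated across the triangle from the three vertices' outputs. That interpolation can be sketched in plain Python using barycentric weights; the triangle, the colors, and the function names below are hypothetical illustrations:

```python
def barycentric(p, a, b, c):
    """Barycentric weights of 2D point p inside triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w_a = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    w_b = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return w_a, w_b, 1.0 - w_a - w_b

def interpolate_color(p, tri, colors):
    """Blend the three vertex colors by p's barycentric weights,
    the way the rasterizer feeds interpolated values to the
    fragment shader (RGB tuples in [0, 1])."""
    wa, wb, wc = barycentric(p, *tri)
    return tuple(wa * ca + wb * cb + wc * cc
                 for ca, cb, cc in zip(*colors))

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
colors = ((1.0, 0.0, 0.0),  # red at vertex a
          (0.0, 1.0, 0.0),  # green at vertex b
          (0.0, 0.0, 1.0))  # blue at vertex c
# The centroid gets an equal blend of all three vertex colors:
print(interpolate_color((1 / 3, 1 / 3), tri, colors))  # each channel ≈ 1/3
```

A real GPU does this interpolation in fixed-function hardware between the vertex and fragment stages; the fragment shader then combines the interpolated values with lighting math like the Phong equation.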



Tags: opengl shader