I am trying to do / understand all the basic mathematical computations needed in the graphics pipeline to render a simple 2D image from a 3D scene description like VRML. Is there a good example of the steps needed, like model transformation (object coordinates to world coordinates), view transformation (from world coordinate to view coordinate), calculation of vertex normals for lighting, clipping, calculating the screen coordinates of objects inside the view frustum and creating the 2D projection to calculate the individual pixels with colors.
I am used to OpenGL-style render math, so I will stick to it (all renderers use almost the same math anyway).

First, some terms to explain:
Transform matrix

Represents a coordinate system in 3D space, where:

- X(xx,xy,xz) is the unit vector of the X axis in GCS (global coordinate system)
- Y(yx,yy,yz) is the unit vector of the Y axis in GCS
- Z(zx,zy,zz) is the unit vector of the Z axis in GCS
- P(x0,y0,z0) is the origin of the represented coordinate system in GCS

The transformation matrix is used to transform coordinates between GCS and LCS (local coordinate system):

-> LCS: Al = Ag * m
<- LCS: Ag = Al * (m^-1)

where:

- Al (x,y,z,w=1) is a 3D point in LCS, in homogeneous coordinates
- Ag (x,y,z,w=1) is a 3D point in GCS, in homogeneous coordinates
- the homogeneous coordinate w=1 is added so a 3D vector can be multiplied by a 4x4 matrix
- m is the transformation matrix
- m^-1 is the inverse transformation matrix

In most cases m is orthonormal, which means the X, Y, Z vectors are perpendicular to each other and of unit size; this can be used to restore matrix accuracy after rotations, translations, etc. For more info see Understanding 4x4 homogenous transform matrices.
Render matrices

These matrices are usually used:

- model - represents the actual rendered object's coordinate system
- view - represents the camera coordinate system (the Z axis is the view direction)
- modelview - model and view multiplied together
- normal - the same as modelview but with x0,y0,z0 = 0, used for normal vector computation
- texture - manipulates texture coordinates for easy texture animation and effects; usually a unit matrix
- projection - represents the projection of the camera view (perspective, ortho, ...). It should not include any rotations or translations; it is more like a camera sensor calibration instead (otherwise fog and other effects will fail ...)

The rendering math
To render a 3D scene you need 2D rendering routines, like "draw a 2D textured triangle". The renderer converts the 3D scene data to 2D and renders it. There are more techniques out there, but the most usual is the use of a boundary model representation + boundary rendering (surface only). The 3D -> 2D conversion is done by projection (orthogonal or perspective) and a Z-buffer or Z-sorting. So the pipeline is as follows:
1. obtain the actual rendered data from the model: vertex positions v, normals n, texture coordinates t
2. convert them to the appropriate space:
   - v = projection*view*model*v ... camera space + projection
   - n = normal*n ... global space
   - t = texture*t ... texture space
3. clip the data to the screen
   This step is not necessary, but it prevents rendering off-screen stuff, for speed. Face culling is usually also done here: if the normal vector of a rendered triangle is opposite to the polygon winding rule set, the triangle is ignored.
4. render the 3D/2D data
   Use only the v.x,v.y coordinates for screen rendering and v.z for the z-buffer test/value. The perspective division for perspective projections also goes here: v.x/=v.z, v.y/=v.z
A Z-buffer works like this: the Z-buffer (zed) is a 2D array with the same size (resolution) as the screen (scr). Any pixel scr[y][x] is rendered only if (zed[y][x] >= z); in that case scr[y][x] = color; zed[y][x] = z;. The if condition can be different (it is changeable). When triangles or higher primitives are used for rendering, the resulting 2D primitives are converted to pixels in a process called rasterization.
[Notes]

Transformation matrices are multiplicative, so if you need to transform N points by M matrices you can create a single matrix = m1*m2*...mM and transform the N points by this resulting matrix only (for speed). Sometimes a 3x3 transform matrix + shift vector is used instead of a 4x4 matrix; it is faster in some cases, but you cannot multiply more transformations together as easily. For transformation matrix manipulation, look for the basic operations like Rotate or Translate. There are also matrices for rotations inside the LCS, which are more suitable for human control input, but these are not native to renderers like OpenGL or DirectX (because they use the inverse matrix).

All of the above was for standard polygonal rendering (surface boundary representation of objects). There are also other renderers out there, like volumetric renderers or (backward) ray-tracers and hybrid methods. Also, the scene can have any dimensionality, not just 3D.
You can have a look at Chapter 15 of the book Computer Graphics: Principles and Practice, Third Edition by Hughes et al. That chapter