The difference between glOrtho and glViewport in OpenGL

Posted 2020-05-21 04:41

Question:

I am struggling to figure something out. Let's say I'm rendering an image that has a height of 100 and a width of 100.

In scenario A

I use glOrtho(0, 100, 0, 100, -100, 100) and glViewport(0, 0, 50, 50), where glOrtho is defined as (left, right, bottom, top, zNear, zFar) and glViewport is defined as (lower-left corner x, lower-left corner y, width, height).

In scenario B

I use glOrtho(0, 50, 0, 50, -100, 100) and glViewport(0, 0, 100, 100), with the same parameter conventions as above.

That basically means that in scenario A the image will be rendered at a smaller width and height than it requires (i.e. it will be rendered such that every two pixels of the original image are mapped to one pixel on the destination "surface"), but the entire image will still be visible.

In scenario B, however, the image will be clipped, so only the upper-left quarter of it will be visible. Am I correct? Just to be clear, this is a question from a CG test I'm having tomorrow and I want to make sure I understand OpenGL correctly... (I've already read the API... =\)
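For reference, a minimal sketch of how the two scenarios could be set up in legacy fixed-function OpenGL (the 100x100 image is assumed to be drawn as a quad from (0,0) to (100,100); drawImageQuad() is a hypothetical helper, not something from the question):

    #include <GL/gl.h>

    /* Scenario A: 100x100 world units squeezed into a 50x50-pixel viewport.
     * The whole image stays visible, but roughly every 2x2 block of source
     * pixels lands in one destination pixel. */
    void scenarioA(void)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, 100, 0, 100, -100, 100);
        glViewport(0, 0, 50, 50);
        /* drawImageQuad();  hypothetical: draws the quad (0,0)-(100,100) */
    }

    /* Scenario B: only 50x50 world units are visible, drawn into a
     * 100x100-pixel viewport, so the part of the quad outside
     * [0,50]x[0,50] is clipped away. */
    void scenarioB(void)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, 50, 0, 50, -100, 100);
        glViewport(0, 0, 100, 100);
        /* drawImageQuad(); */
    }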

Answer 1:

glViewport is in screen pixel units: that's it, it has nothing to do with the 3D world "inside" your graphics card. It just tells which part of the window will be used for rendering (i.e. which part will be visible).

glOrtho, instead, changes the "inner" world and is in OpenGL units: more OpenGL units will fit into the visible part of the screen, so "bigger" objects will fit easily into the viewable area if you increase the ortho size.

Modifying the viewport does not change the frustum; in fact, the same image will just be stretched to fit the new viewport.

Explanatory images:

Picture 1: the viewport is half the window.

Picture 2: if I just double the viewport, the image becomes stretched (the same frustum filling a different surface).

So the only solution that keeps the aspect ratio is to double the ortho size too (in this case I double the left and right values).

Picture 3: the final result (note that a bigger part of the 3D world is now visible):
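A rough sketch of that idea, assuming a simple legacy-GL resize handler (baseHeight is an arbitrary choice of visible world height; width and height are whatever the windowing toolkit reports):

    #include <GL/gl.h>

    /* Keep the world-to-pixel aspect ratio: scale the ortho extents with
     * the viewport size instead of letting the image stretch. */
    void onResize(int width, int height)
    {
        const double baseHeight = 100.0;   /* assumed visible world height */
        double aspect;

        if (height == 0)
            height = 1;                    /* avoid division by zero */
        aspect = (double)width / (double)height;

        glViewport(0, 0, width, height);   /* fill the whole window */

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        /* widen left/right with the aspect ratio so objects keep their shape */
        glOrtho(0.0, baseHeight * aspect, 0.0, baseHeight, -100.0, 100.0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }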

Further details are available at the well-known OpenGL tutorial site, NeHe Productions.



Answer 2:

These two things affect different stages of GL's coordinate transformation pipeline. OpenGL uses a viewing frustum which, in normalized device coordinates (NDC), is a cube spanning [-1,1] along all three dimensions. The glOrtho() call is typically used to set up a projection matrix, which transforms eye-space coordinates into clip space. GL then transforms internally from clip space to NDC. In the orthographic case you can even treat clip space and NDC as the same thing. The viewport describes the transformation from NDC to window space, which is where rasterization happens.
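To make the last step concrete, the mapping that glViewport(x, y, w, h) establishes from NDC to window coordinates is roughly the following (a sketch of the standard viewport transform; the depth range handled by glDepthRange is omitted):

    /* Sketch of the NDC-to-window mapping set up by glViewport(x, y, w, h).
     * NDC x and y lie in [-1,1]; they are mapped linearly onto the viewport. */
    void ndcToWindow(double ndcX, double ndcY,
                     int x, int y, int w, int h,
                     double *winX, double *winY)
    {
        *winX = (double)x + (ndcX + 1.0) * 0.5 * (double)w;
        *winY = (double)y + (ndcY + 1.0) * 0.5 * (double)h;
    }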

"Am I correct? Just to be clear, this is a question from a CG test I'm having tomorrow and I want to make sure I understand OpenGL correctly..."

You are probably correct for case A. In case B, it is probably the bottom-left quarter that is seen. But actually, the question is unanswerable without further information. You say that the image has a "width" and "height" of 100. Typically, such dimensions are interpreted as the number of pixels in each direction. In this case, though, the question seems to imply that the quad textured with the image is also rendered so that it ends up in eye space spanning (0,0) to (100,100) (either by using those values directly as object coordinates, or through some other model and/or view transform). It is also not specified how the image is mapped onto the quad; it could, for example, be rotated, which makes it impossible to say with any reasonable confidence which part of the image is seen in scenario B.

Another thing worth noting is that glOrtho() multiplies the current matrix by an orthographic projection matrix. So if the initial state of that matrix is not known, it is impossible to say what the resulting transform will be.
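This is why typical legacy-GL code resets the projection matrix first, so that glOrtho() alone defines the transform (a common pattern, not something the question states it does):

    #include <GL/gl.h>

    /* Reset to the identity before glOrtho() so the result does not depend
     * on whatever projection matrix happened to be active before. */
    void setupProjection(void)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, 100, 0, 100, -100, 100);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }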

I hope the real test will not contain such ill-specified questions.