When rendering a sky with a fixed texture in 3D games, people often create 6 textures in a cube map first, and then render a cube around the camera. In GLSL, you can access the pixels in the textures with a normal instead of a texture coordinate, and you can easily get this normal by normalizing the fragment position relative to the camera. However, this process can be done with any shape that surrounds the camera, because when you normalize each position it will always result in a sphere. Now I'm wondering: Why is it always a cube and not a tetrahedron? Rendering a cube takes 12 triangles, a tetrahedron only 4. And as I already said, any shape that surrounds the camera works. So tetrahedrons take less VRAM and are faster to render, without any downsides? Why not use them?
You don't need any environment geometry at all. All you need to do is draw a full-screen quad and compute the correct texture coordinates for it. With modern GL, we don't even need to supply any vertex data for this; we can use attributeless rendering:
Vertex Shader:
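A minimal sketch (the quad corners are reconstructed from `gl_VertexID`, so no vertex buffers are needed; `invPV` is the uniform described below):

```glsl
#version 330 core

// World-space view direction for this corner, interpolated per fragment
out vec3 dir;

// inverse(Projection * View), uploaded by the application
uniform mat4 invPV;

void main()
{
    // Reconstruct the four corners of a fullscreen quad from gl_VertexID
    // (triangle-strip order: (-1,-1), (1,-1), (-1,1), (1,1))
    vec2 pos = vec2(gl_VertexID & 1, (gl_VertexID & 2) >> 1) * 2.0 - 1.0;

    // Un-project the corner onto the near and far clip planes
    vec4 front = invPV * vec4(pos, -1.0, 1.0);
    vec4 back  = invPV * vec4(pos,  1.0, 1.0);

    // Direction of the view ray through this corner
    dir = back.xyz / back.w - front.xyz / front.w;

    // Put the quad on the far plane so it never covers scene geometry
    // (draw it with GL_LEQUAL or with the depth test disabled)
    gl_Position = vec4(pos, 1.0, 1.0);
}
```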
where `invPV` is `inverse(Projection*View)`, so it takes your camera orientation as well as the projection into account. This can in principle be even further simplified, depending on how many constraints you can put on the projection matrix.

Fragment Shader:
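For a cube map, the fragment shader only needs to sample with the interpolated direction; a minimal sketch (the sampler name `skybox` is just an example):

```glsl
#version 330 core

in vec3 dir;            // interpolated view direction from the vertex shader
out vec4 color;

// Cube-map sampler; the name "skybox" is an assumption
uniform samplerCube skybox;

void main()
{
    // Cube maps are sampled with a direction vector; it need not be normalized
    color = texture(skybox, dir);
}
```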
To use this, you simply need to bind an empty VAO and your texture, upload your `invPV` matrix, and call `glDrawArrays(GL_TRIANGLE_STRIP, 0, 4)`.

This approach could of course also be used for spherical texture mapping instead of cube maps.
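For instance, with an equirectangular (longitude/latitude) sky texture, only the fragment shader would change. A minimal sketch, assuming a Y-up convention and a `sampler2D` called `sphereTex` (both are assumptions, not part of the original answer):

```glsl
#version 330 core

in vec3 dir;            // interpolated view direction from the vertex shader
out vec4 color;

// Equirectangular (longitude/latitude) sky texture; name is an assumption
uniform sampler2D sphereTex;

const float PI = 3.14159265358979;

void main()
{
    vec3 d = normalize(dir);

    // Longitude around the up axis maps to u, latitude maps to v
    float u = atan(d.z, d.x) / (2.0 * PI) + 0.5;
    float v = asin(clamp(d.y, -1.0, 1.0)) / PI + 0.5;

    color = texture(sphereTex, vec2(u, v));
}
```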
It is a question of the view depth and shape.

The best skybox shape is a (half) sphere, because its rendered surface projects to camera space almost without distortion. If you use any other shape, projection artifacts will occur, especially near the corners; for example, most apps/games use a cube skybox. Look at a sun with finite radius (not just a single dot) and rotate the view so the sun moves from the middle to the side of the view. Usually the sun then gets distorted from a circular/disc shape into an elliptic/oval shape:

This is due to the changing distance between the skybox and the camera. If you compare it to a directly rendered star:

then you can see the difference. The first image is the first relevant image found by Google (from some game), the second is a screenshot from Space Engineers I think, and the last one is rendered by my astro app; see

So the farther the shape is from a sphere, the more distortion you get.

Using a 4-sided pyramid is even worse than a cube, because the angles between the sides are worse, creating even bigger artifacts. Another problem is that you need a bigger pyramid to cover the same space. If you use the depth buffer for some purpose during skybox rendering, you could significantly affect its precision by having to increase the Z_far plane.
Overhead
The difference between 6 and 4 polygons is not that much, because the skybox is huge (it covers the whole view) and the speed is determined mostly by the number of pixels/texels filled on screen, not by the vertex count. So the pyramid could even be slower than the cube, because it needs bigger faces (more interpolator iterations are needed). And if you want to use a spherical skybox, then you also need a spherical texture, because if you used the standard cube texture the distortion would still be present; such textures are harder to create and maintain, and that is why cubes are used more often.
Spherical skyboxes
They need a different type of texture. A hemispherical texture looks like this: