I am trying to implement an octree traversal scheme using OpenGL and GLSL, and would like to keep the data in textures. While there is a big selection of formats to use for the texture data (floats and integers of different sizes), I have some trouble figuring out whether there is a way to get more precise control over the bits and thus achieve greater efficiency and more compact storage. This might be a general problem, not only applying to OpenGL and GLSL.
As a simple toy example, let's say that I have a texel containing a 16-bit integer. I want to encode two 1-bit booleans, one 10-bit integer value and one 4-bit integer value into this texel. Is there a technique to encode these components when creating the texture, and then decode them when sampling the texture in a GLSL shader?
Edit: Looks like I am in fact looking for bit manipulation techniques. Since they seem to be supported, I should be fine after some more research.
Integer types and bit manipulation inside GLSL shaders have been supported since OpenGL 3 (thus present on DX10-class hardware, if that tells you more). So you can just do this bit manipulation on your own inside the shader.
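For the 16-bit layout from the question (two 1-bit flags, a 10-bit value and a 4-bit value), the decoding is just shifts and masks. Here is a minimal sketch, assuming the fields are packed from the high bit down (the exact bit order and the names are my assumption, not something fixed by the question):

```glsl
// GLSL 1.30+ sketch: unpack two 1-bit flags, a 10-bit and a 4-bit value
// from one 16-bit texel. The chosen bit layout (high to low:
// flagA | flagB | value10 | value4) is an assumption, adjust to taste.
void unpackTexel(uint bits, out bool flagA, out bool flagB,
                 out uint value10, out uint value4)
{
    flagA   = (bits & 0x8000u) != 0u;   // bit 15
    flagB   = (bits & 0x4000u) != 0u;   // bit 14
    value10 = (bits >> 4u) & 0x3FFu;    // bits 4..13
    value4  =  bits & 0xFu;             // bits 0..3
}
```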
But working with integers is one thing, getting them out of the texture is another. The standard OpenGL texture formats (that you may be used to) either store floats directly (like GL_R16F) or normalized fixed-point values (like GL_R16, effectively integers for the uninitiated ;)), but reading from them (using texture, texelFetch or whatever) will net you float values in the shader, from which you cannot easily or reliably deduce the original bit pattern of the internally stored integer.

So what you really need is an integer texture, which requires OpenGL 3, too (or maybe the GL_EXT_texture_integer extension, but hardware supporting that will likely have GL3 anyway). So for your texture you need an actual integer internal format, like e.g. GL_R16UI (for a 1-component 16-bit unsigned integer), in contrast to the usual fixed-point formats (like e.g. GL_R16 for a normalized [0,1] color with 16 bits of precision).

And then in the shader you need to use an integer sampler type, like e.g. usampler2D for an unsigned integer 2D texture (and likewise isampler... for the signed variants), to actually get an unsigned integer from your texture or texelFetch calls:

CPU:
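A minimal sketch of the upload side, assuming a GL 3.x context with a loader like GLEW already initialized; the function and variable names are placeholders, and the bit layout matches the unpacking sketch above:

```c
#include <GL/glew.h>   /* or whatever OpenGL loader you use */
#include <stdint.h>

/* Pack one texel: two 1-bit flags, a 10-bit and a 4-bit value.
 * The field order (high to low) is an assumption. */
static uint16_t packTexel(int flagA, int flagB, uint16_t value10, uint16_t value4)
{
    return (uint16_t)(((flagA   & 1)     << 15) |
                      ((flagB   & 1)     << 14) |
                      ((value10 & 0x3FF) <<  4) |
                       (value4  & 0xF));
}

GLuint createPackedTexture(int width, int height,
                           const uint16_t *packedTexels /* width*height values */)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Integer textures must use NEAREST filtering, otherwise they are incomplete. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* GL_R16UI internal format with GL_RED_INTEGER/GL_UNSIGNED_SHORT upload
     * keeps the bit pattern intact (no normalization to [0,1]). */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, width, height, 0,
                 GL_RED_INTEGER, GL_UNSIGNED_SHORT, packedTexels);
    return tex;
}
```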
GPU:
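And a matching fragment-shader sketch; the uniform name, the bit layout and the idea of indexing by gl_FragCoord (texture assumed to match the viewport size) are just for illustration:

```glsl
#version 130
// Sketch: read the packed 16-bit texel from an unsigned integer texture
// and split it back into its fields with shifts and masks.
uniform usampler2D packedTex;   // the GL_R16UI texture created on the CPU side

out vec4 fragColor;

void main()
{
    // texelFetch on a usampler2D returns a uvec4; the packed bits are in .r
    uint bits = texelFetch(packedTex, ivec2(gl_FragCoord.xy), 0).r;

    bool flagA   = (bits & 0x8000u) != 0u;  // bit 15
    bool flagB   = (bits & 0x4000u) != 0u;  // bit 14
    uint value10 = (bits >> 4u) & 0x3FFu;   // bits 4..13
    uint value4  =  bits & 0xFu;            // bits 0..3

    // Just visualize the decoded fields so the shader does something observable.
    fragColor = vec4(float(value10) / 1023.0, float(value4) / 15.0,
                     flagA ? 1.0 : 0.0, flagB ? 1.0 : 0.0);
}
```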