Precise control over texture bits in GLSL

Posted 2019-02-16 17:20

I am trying to implement an octree traversal scheme using OpenGL and GLSL, and would like to keep the data in textures. While there is a big selection of formats to use for the texture data (floats and integers of different sizes), I have some trouble figuring out whether there is a way to get more precise control over the individual bits and thus achieve greater efficiency and more compact storage. This might be a general problem, not only applying to OpenGL and GLSL.

As a simple toy example, let's say that I have a texel containing a 16-bit integer. I want to encode two 1-bit booleans, one 10-bit integer value and one 4-bit integer value into this texel. Is there a technique to encode these components when creating the texture, and then decode them when sampling the texture in a GLSL shader?

Edit: Looks like I am in fact looking for bit manipulation techniques. Since they seem to be supported, I should be fine after some more research.

1 answer

叼着烟拽天下
Answered 2019-02-16 17:59

Integer types and bit manipulation inside GLSL shaders have been supported since OpenGL 3 (and are thus present on DX10-class hardware, if that tells you more). So you can just do this bit manipulation yourself inside the shader.

But working with integers is one thing; getting them out of the texture is another. The standard OpenGL texture formats (the ones you may be used to) either store floats directly (like GL_R16F) or normalized fixed-point values (like GL_R16, effectively integers for the uninitiated ;)), but reading from them (using texture, texelFetch or whatever) will net you float values in the shader, from which you cannot easily or reliably reconstruct the original bit pattern of the internally stored integer.

So what you really need is an integer texture, which requires OpenGL 3, too (or maybe the GL_EXT_texture_integer extension, but hardware supporting that will likely have GL3 anyway). For your texture you therefore need an actual integer internal format, e.g. GL_R16UI (a 1-component 16-bit unsigned integer), in contrast to the usual fixed-point formats (e.g. GL_R16 for a normalized [0,1] color with 16 bits of precision).

And then in the shader you need to use an integer sampler type, e.g. usampler2D for an unsigned integer 2D texture (and likewise isampler... for the signed variants), to actually get unsigned integers out of your texture or texelFetch calls:

CPU:

glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, ..., GL_RED_INTEGER, GL_UNSIGNED_SHORT, data);
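
A slightly more complete setup could look like the following sketch (the texture handle, width, height and data pointer are placeholder names, not anything from the question). Note the GL_RED_INTEGER pixel transfer format, which is required for integer internal formats, and the GL_NEAREST filters, since only nearest sampling is defined for integer textures:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
/* only nearest filtering is defined for integer textures */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
/* width, height and data (an array of packed 16-bit texels) are placeholders */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, width, height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_SHORT, data);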

GPU:

uniform usampler2D tex;

...
uint value = texture(tex, ...).r;
bool b1 = (value & 0x8000u) == 0x8000u,   // flag in bit 15
     b2 = (value & 0x4000u) == 0x4000u;   // flag in bit 14
uint i1 = (value >> 4) & 0x3FFu,          // 10-bit value in bits 4..13
     i2 = value & 0xFu;                   // 4-bit value in bits 0..3
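
For completeness, the matching encode on the CPU side (before uploading with the glTexImage2D call above) is just the reverse shifting and masking. Here is a minimal sketch in C, using the same bit layout the shader code above decodes (top two bits for the flags, bits 4..13 for the 10-bit value, bits 0..3 for the 4-bit value); the function name is made up for illustration:

#include <stdbool.h>
#include <stdint.h>

/* Pack two 1-bit flags, a 10-bit value and a 4-bit value into one 16-bit
   texel, matching the layout decoded in the shader above. */
uint16_t pack_texel(bool b1, bool b2, unsigned i1, unsigned i2)
{
    return (uint16_t)(((b1 ? 1u : 0u) << 15) |
                      ((b2 ? 1u : 0u) << 14) |
                      ((i1 & 0x3FFu)  <<  4) |
                       (i2 & 0xFu));
}

Filling an array of such packed values and passing it as the data pointer of the upload then hands the shader exactly the bit pattern written here.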