I'm looking to parallelize some complex math, and WebGL looks like the perfect way to do it. The problem is, you can only read 8-bit integers from textures. I would ideally like to get 32-bit numbers out of the texture. I had the idea of using the 4 color channels to get 32 bits per pixel, instead of 4 times 8 bits.
My problem is that GLSL doesn't have a "%" operator or any bitwise operators!
TL;DR: How do I convert a 32-bit number into four 8-bit numbers using the operators available in GLSL?
Some extra info on the technique (using bitwise operators):
How to store a 64 bit integer in two 32 bit integers and convert back again
In general, if you want to pack the significant bits of a floating-point number into bytes, you have to consecutively extract packages of 8 bits from the significant bits and store each package in a byte.
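For instance, the individual bytes of an integer-valued float can be peeled off with floor and mod instead of >> and & (a sketch; the helper name is illustrative):

```glsl
// Split an integer-valued float into 4 bytes, most significant first.
// Note: a 32-bit float only carries 24 significant bits, so very large
// values cannot be represented exactly in the first place.
vec4 toBytes(float value)
{
    float b3 = floor(value / (256.0 * 256.0 * 256.0));
    float b2 = floor(mod(value, 256.0 * 256.0 * 256.0) / (256.0 * 256.0));
    float b1 = floor(mod(value, 256.0 * 256.0) / 256.0);
    float b0 = mod(value, 256.0);
    return vec4(b3, b2, b1, b0) / 255.0;  // normalized so it can be written to RGBA8
}
```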
Encode a floating point number in a predefined range
In order to pack a floating-point value into 4 * 8-bit buffers, the range of the source values must first be specified.
If you have defined a value range [minVal, maxVal], it has to be mapped to the range [0.0, 1.0]:
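For example (a minimal sketch; the helper name is illustrative):

```glsl
// Map a value from the range [minVal, maxVal] to the range [0.0, 1.0].
float mapToUnitRange(float value, float minVal, float maxVal)
{
    return clamp((value - minVal) / (maxVal - minVal), 0.0, 1.0);
}
```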
The function Encode packs a floating point value in the range [0.0, 1.0] into a vec4:
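A sketch of such a function (one common base-256 scheme, not necessarily the exact code referred to here); it assumes the value lies in [0.0, 1.0):

```glsl
// Spread a value in [0.0, 1.0) over the four channels of a vec4,
// 8 bits of precision per channel once written to an RGBA8 texture.
vec4 Encode(in float value)
{
    vec4 enc = fract(value * vec4(1.0, 256.0, 256.0 * 256.0, 256.0 * 256.0 * 256.0));
    enc.xyz -= enc.yzw / 256.0;  // remove the bits already carried by the next channel
    return enc;
}
```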
The function Decode extracts a floating point value in the range [0.0, 1.0] from a vec4:
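A matching sketch for the reverse direction:

```glsl
// Reassemble the value in [0.0, 1.0) from the four channels.
float Decode(in vec4 pack)
{
    return dot(pack, vec4(1.0, 1.0 / 256.0, 1.0 / (256.0 * 256.0), 1.0 / (256.0 * 256.0 * 256.0)));
}
```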
The following functions pack and extract a floating point value in and from the range [minVal, maxVal]:
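For instance, building on the Encode and Decode sketches above (the names EncodeRange and DecodeRange are illustrative):

```glsl
// Normalize a value from [minVal, maxVal] to [0.0, 1.0], then pack it.
vec4 EncodeRange(in float value, in float minVal, in float maxVal)
{
    return Encode(clamp((value - minVal) / (maxVal - minVal), 0.0, 1.0));
}

// Unpack and map the result back to [minVal, maxVal].
float DecodeRange(in vec4 pack, in float minVal, in float maxVal)
{
    return Decode(pack) * (maxVal - minVal) + minVal;
}
```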
Encode a floating point number with an exponent
Another possibility is to encode the significant bits into 3 * 8 bits of the RGB values and the exponent into the 8 bits of the alpha channel.
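A sketch of that idea, assuming a strictly positive value (the names EncodeExp and DecodeExp are illustrative):

```glsl
// Store the normalized mantissa (in [0.5, 1.0)) in RGB and a biased exponent in A.
vec4 EncodeExp(in float value)
{
    int exponent   = int(floor(log2(value))) + 1;  // value / 2^exponent lies in [0.5, 1.0)
    float mantissa = value / exp2(float(exponent));
    vec3 enc       = fract(mantissa * vec3(1.0, 256.0, 256.0 * 256.0));
    enc.xy        -= enc.yz / 256.0;               // remove the bits carried by the next channel
    return vec4(enc, (float(exponent) + 127.0) / 255.0);
}

float DecodeExp(in vec4 pack)
{
    int exponent   = int(floor(pack.a * 255.0 + 0.5)) - 127;
    float mantissa = dot(pack.rgb, vec3(1.0, 1.0 / 256.0, 1.0 / (256.0 * 256.0)));
    return mantissa * exp2(float(exponent));
}
```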
Note, since a standard 32-bit IEEE 754 number has only 24 significant bits, it is completely sufficient to encode the significand in 3 bytes.
See also the answers to the following questions:
You can bitshift by multiplying/dividing by powers of two.
As pointed out in the comments, the approach I originally posted worked but was incorrect. Here's one by Aras Pranckevičius; note that the source code in the post itself contains a typo and is HLSL, so this is a GLSL port with the typo corrected:
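Something along these lines (a sketch of that base-255 RGBA packing; the input is assumed to be in [0.0, 1.0), and the linked post has the exact corrected version):

```glsl
// Encode a float in [0.0, 1.0) into an RGBA color, one byte of precision per channel.
vec4 EncodeFloatRGBA(float v)
{
    vec4 enc = fract(v * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}

// Recover the float from the RGBA color.
float DecodeFloatRGBA(vec4 rgba)
{
    return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}
```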