How do I convert a vec4 rgba value to a float?

Posted 2020-01-29 08:41

Question:

I packed some float data in a texture as unsigned bytes, which was my only option in WebGL. Now I would like to unpack it in the vertex shader. When I sample a pixel I get a vec4 that is really one of my floats. How do I convert from the vec4 to a float?

Answer 1:

The following code is specifically for the iPhone 4 GPU using OpenGL ES 2.0. I have no experience with WebGL, so I can't claim to know how the code will behave in that context. Furthermore, the main problem here is that highp float is not 32 bits but only 24 bits.

My solution is for fragment shaders. I didn't try it in the vertex shader, but it shouldn't be any different. In order to use the code below you will need to fetch the RGBA texel from a sampler2D uniform and make sure that the values of the R, G, B and A channels are each between 0.0 and 255.0. This is easy to achieve as follows:

highp vec4 rgba = texture2D(textureSamplerUniform, texcoordVarying)*255.0;

You should be aware, though, that the endianness of your machine dictates the correct order of your bytes. The above code assumes that floats are stored in big-endian order. If your results come out wrong, just swap the byte order by writing

rgba.rgba=rgba.abgr;

immediately after the line where you set it. Alternatively, swap the indices on rgba; I find the line above more intuitive and less prone to careless errors, though. I am not sure it works for all inputs: I tested a large range of numbers and found that decode32 and encode32 are NOT exact inverses. I've left out the code I used to test it.

#pragma STDGL invariant(all) 

// Pack a float into four byte-valued channels (each in [0.0, 255.0]),
// following the IEEE 754 single-precision layout: 1 sign bit,
// 8 exponent bits (biased by 127) and 23 mantissa bits.
highp vec4 encode32(highp float f) {
    highp float F = abs(f);
    highp float Sign = step(0.0, -f);            // 1.0 if f is negative, else 0.0
    highp float Exponent = floor(log2(F));
    highp float Mantissa = exp2(-Exponent) * F;  // normalised to [1.0, 2.0)
    // Biased exponent; floor(log2(Mantissa)) corrects it by -1 when
    // rounding errors leave the mantissa just below 1.0.
    Exponent = floor(log2(F) + 127.0) + floor(log2(Mantissa));
    highp vec4 rgba;
    rgba[0] = 128.0 * Sign + floor(Exponent * exp2(-1.0));
    rgba[1] = 128.0 * mod(Exponent, 2.0) + mod(floor(Mantissa * 128.0), 128.0);
    rgba[2] = floor(mod(floor(Mantissa * exp2(23.0 - 8.0)), exp2(8.0)));
    rgba[3] = floor(exp2(23.0) * mod(Mantissa, exp2(-15.0)));
    return rgba;
}

// Reverse of encode32: reassemble sign, exponent and mantissa bits
// from four byte-valued channels into a single float.
highp float decode32(highp vec4 rgba) {
    highp float Sign = 1.0 - step(128.0, rgba[0]) * 2.0;
    highp float Exponent = 2.0 * mod(rgba[0], 128.0) + step(128.0, rgba[1]) - 127.0;
    highp float Mantissa = mod(rgba[1], 128.0) * 65536.0 + rgba[2] * 256.0 + rgba[3] + float(0x800000);
    return Sign * exp2(Exponent) * (Mantissa * exp2(-23.0));
}

// Round-trip smoke test: encode a constant and decode it again.
void main()
{
    highp vec4 rgba = encode32(-10.01);
    highp float result = decode32(rgba);
}
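
To tie this back to the original question: in a real shader you would sample the packed bytes from the texture, rescale them to [0.0, 255.0] as described above, and then decode. A minimal sketch (the uniform and varying names are just the placeholders used earlier):

uniform sampler2D textureSamplerUniform;
varying highp vec2 texcoordVarying;

void main()
{
    // texture2D returns channels normalized to [0, 1]; scale back to byte values first.
    highp vec4 rgba = texture2D(textureSamplerUniform, texcoordVarying) * 255.0;
    highp float value = decode32(rgba);
    gl_FragColor = vec4(value); // use the value so the compiler cannot optimize it away
}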

Here are some links on IEEE floating-point precision that I found useful: Link1, Link2, Link3.



Answer 2:

Twerdster posted some excellent code in his answer, so all credit goes to him. I'm posting this new answer because comments don't allow for nicely syntax-highlighted code blocks and I wanted to share some code. But if you like the code, please upvote Twerdster's original answer.

In his previous post, Twerdster mentioned that the decode and encode might not work for all values.

To test this further and validate the result, I wrote a Java program. While porting the code I tried to stay as close as possible to the shader code (hence the helper functions). Note: I also use store/load functions to simulate what happens when you write to and read from a texture.

I found out that:

  1. You need a special case for zero.
  2. You might also need a special case for infinity, but I did not implement that to keep the shader simple (i.e., fast).
  3. Because of rounding errors the result was sometimes wrong. Therefore:
    • subtract 1 from the exponent when rounding leaves the mantissa not properly normalised (i.e., mantissa < 1)
    • change float Mantissa = (exp2(- Exponent) * F); to float Mantissa = F/exp2(Exponent); to reduce precision errors
    • use float Exponent = floor(log2(F)); to calculate the exponent (simplified by the new mantissa check)

With these small modifications I got identical output for almost all inputs, and only small errors between the original and the encoded/decoded value when things do go wrong; in Twerdster's original implementation, rounding errors often produced the wrong exponent (making the result off by a factor of two).

Please note that this is a Java test application which I wrote to test the algorithm. I hope it will also work when ported to the GPU. If anybody tries to run it on a GPU, please leave a comment with your experience.

And here is the code, with a simple test that tries different numbers until it fails.

import java.io.PrintStream;
import java.util.Random;

public class BitPacking {

    //decode four [0,1] channel values back into the float they encode
    public static float decode32(float[] v)
    {
        float[] rgba = mult(255, v);
        float sign = 1.0f - step(128.0f, rgba[0]) * 2.0f;
        float exponent = 2.0f * mod(rgba[0], 128.0f) + step(128.0f, rgba[1]) - 127.0f;
        if(exponent == -127)
            return 0; //special case: an all-zero exponent encodes zero
        float mantissa = mod(rgba[1], 128.0f) * 65536.0f + rgba[2] * 256.0f + rgba[3] + ((float)0x800000);
        return sign * exp2(exponent - 23.0f) * mantissa;
    }

    //encode a float into four [0,1] channel values (IEEE 754-style layout)
    public static float[] encode32(float f) {
        float F = abs(f);
        if(F == 0){
            return new float[]{0, 0, 0, 0}; //special case for zero
        }
        float Sign = step(0.0f, -f);
        float Exponent = floor(log2(F));

        float Mantissa = F / exp2(Exponent); //more precise than exp2(-Exponent) * F

        if(Mantissa < 1) //rounding left the mantissa denormalised
            Exponent -= 1;

        Exponent += 127; //IEEE 754 exponent bias

        float[] rgba = new float[4];
        rgba[0] = 128.0f * Sign + floor(Exponent * exp2(-1.0f));
        rgba[1] = 128.0f * mod(Exponent, 2.0f) + mod(floor(Mantissa * 128.0f), 128.0f);
        rgba[2] = floor(mod(floor(Mantissa * exp2(23.0f - 8.0f)), exp2(8.0f)));
        rgba[3] = floor(exp2(23.0f) * mod(Mantissa, exp2(-15.0f)));
        return mult(1 / 255.0f, rgba);
    }

    //shader built-ins

    public static float exp2(float x){
        return (float) Math.pow(2, x);
    }

    public static float[] step(float edge, float[] x){
        float[] result = new float[x.length];
        for(int i=0; i<x.length; i++)
            result[i] = x[i] < edge ? 0.0f : 1.0f;
        return result;      
    }

    public static float step(float edge, float x){
        return x < edge ? 0.0f : 1.0f;
    }

    public static float mod(float x, float y){
        return x-y * floor(x/y);
    }

    public static float floor(float x){
        return (float) Math.floor(x);
    }

    public static float pow(float x, float y){
        return (float)Math.pow(x, y);
    }

    public static float log2(float x)
    {
        return (float) (Math.log(x)/Math.log(2));
    }

    public static float log10(float x)
    {
        return (float) (Math.log(x)/Math.log(10));
    }

    public static float abs(float x)
    {
        return (float)Math.abs(x);
    }   

    public static float log(float x)
    {
        return (float)Math.log(x);
    }

    public static float exponent(float x)
    {
        return floor((float)(Math.log(x)/Math.log(10)));
    }

    public static float mantissa(float x)
    {
        return x / pow(10f, exponent(x));
    }

    //scalar-times-vector multiplication, for brevity
    private static float[] mult(float scalar, float[] w){
        float[] result = new float[4];
        for(int i=0; i<4; i++)
            result[i] = scalar * w[i];
        return result;
    }

    //simulate storage and retrieval in 4-channel/8-bit texture 
    private static float[] load(int[] v)
    {
        return new float[]{v[0]/255f, v[1]/255f, v[2]/255f, v[3]/255f};
    }

    private static int[] store(float[] v)
    {
        return new int[]{((int) (v[0]*255))& 0xff, ((int) (v[1]*255))& 0xff, ((int) (v[2]*255))& 0xff, ((int) (v[3]*255))& 0xff};       
    }

    //testing until failure, and some specific hard-cases separately
    public static void main(String[] args) {

        //for(float v : new float[]{-2097151.0f}){ //small error here 
        for(float v : new float[]{3.4028233e+37f, 8191.9844f, 1.0f, 0.0f, 0.5f, 1.0f/3, 0.1234567890f, 2.1234567890f, -0.1234567890f, 1234.567f}){
            float output = decode32(load(store(encode32(v))));
            PrintStream stream = (v==output) ?  System.out : System.err;
            stream.println(v + " ?= " + output);
        }   

        //System.exit(0);

        Random r = new Random();
        float max = 3200000f;
        float min = -max;
        boolean error = false;
        int trials = 0;
        while(!error){
            float fin = min + r.nextFloat() * ((max - min) + 1);
            float fout = decode32(load(store(encode32(fin))));
            if(trials % 10000 == 0)
                System.out.print('.');
            if(trials % 1000000 == 0)
                System.out.println();
            if(fin != fout){
                System.out.println();
                System.out.println("correct trials = " + trials);
                System.out.println(fin + " vs " + fout);                
                error = true;
            }
            trials++;
        }       
    }
}
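
For shader use, here is the encoder back-ported to GLSL with the same fixes applied (zero special case, Mantissa = F / exp2(Exponent), and the exponent decremented when rounding leaves the mantissa below 1). This is only a sketch mirroring the Java above; I have not run it on a GPU:

highp vec4 encode32(highp float f) {
    highp float F = abs(f);
    if (F == 0.0) {
        return vec4(0.0); //zero has no valid log2, handle it explicitly
    }
    highp float Sign = step(0.0, -f);
    highp float Exponent = floor(log2(F));
    highp float Mantissa = F / exp2(Exponent);
    if (Mantissa < 1.0) {
        Exponent -= 1.0; //rounding left the mantissa denormalised
    }
    Exponent += 127.0; //IEEE 754 exponent bias
    highp vec4 rgba;
    rgba[0] = 128.0 * Sign + floor(Exponent * exp2(-1.0));
    rgba[1] = 128.0 * mod(Exponent, 2.0) + mod(floor(Mantissa * 128.0), 128.0);
    rgba[2] = floor(mod(floor(Mantissa * exp2(23.0 - 8.0)), exp2(8.0)));
    rgba[3] = floor(exp2(23.0) * mod(Mantissa, exp2(-15.0)));
    return rgba / 255.0; //scale to [0,1] for writing to an RGBA8 target
}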


Answer 3:

Since you didn't deign to give us the exact code you used to create and upload your texture, I can only guess at what you're doing.

You seem to be creating a JavaScript array of floating-point numbers. You then create a Uint8Array, passing that array to the constructor.

According to the WebGL spec (or rather, the spec that the WebGL spec refers to when ostensibly specifying this behavior), the conversion from floats to unsigned bytes happens in one of two ways, based on the destination. If the destination is considered "clamped", then the number is clamped to the destination range, namely [0, 255] in your case. If the destination is not considered "clamped", then it is taken modulo 2^8 (so 300.0 would become 255 in the first case, but wrap to 44 in the second). The WebGL "specification" is sufficiently poor that it is not entirely clear whether the construction of a Uint8Array is considered clamped or not. Whether clamped or taken modulo 2^8, the decimal point is chopped off and the integer value stored.

However, when you give this data to WebGL, you tell it to interpret the bytes as normalized unsigned integer values. This means that the input values on the range [0, 255] will be accessed by users of the texture as [0, 1] floating-point values.

So if your input array had the value 183.45, the value in the Uint8Array would be 183. The value in the texture would be 183/255, or 0.718. If your input value was 0.45, the Uint8Array would hold 0, and the texture result would be 0.0.

Now, because you passed the data as GL_RGBA, every 4 unsigned bytes will be taken as a single texel. So every texture fetch will return those particular four values (at the given texture coordinate, using the given filtering parameters) as a vec4.
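
In shader terms, recovering the original byte values looks something like this sketch (names are mine; it assumes NEAREST filtering so no interpolation mixes neighbouring texels):

uniform sampler2D tex;   // hypothetical sampler name
varying highp vec2 uv;   // hypothetical texture coordinate

highp vec4 fetchBytes()
{
    highp vec4 texel = texture2D(tex, uv);  // channels normalized to [0, 1]
    return floor(texel * 255.0 + 0.5);      // recover the stored [0, 255] integers
}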

It is not clear what you intend to do with this floating-point data, so it is hard to make suggestions as to how best to pass float data to a shader. However, a general solution would be to use the OES_texture_float extension and actually create a texture that stores floating-point data. Of course, if it isn't available, you'll still have to find a way to do what you want.

BTW, Khronos really should be ashamed of themselves for even calling WebGL a specification. It barely specifies anything; it's just a bunch of references to other specifications, which makes finding the effects of anything exceedingly difficult.



Answer 4:

I tried Arjan's solution, but it returned invalid values for 0, 1, 2 and 4. There was a bug in the packing of the exponent, so I changed it: the exponent now takes a whole 8-bit channel and the sign is packed with the mantissa:

//unpack a 32bit float from 4 8bit, [0;1] clamped floats
float unpackFloat4( vec4 _packed)
{
    vec4 rgba = 255.0 * _packed;
    float sign = 1.0 - step(128.0, rgba[1]) * 2.0; //sign bit is the high bit of channel 1
    float exponent = rgba[0] - 127.0;
    if (abs(exponent + 127.0) < 0.001)
        return 0.0; //special case: zero
    float mantissa = mod(rgba[1], 128.0) * 65536.0 + rgba[2] * 256.0 + rgba[3] + float(0x800000);
    return sign * exp2(exponent - 23.0) * mantissa;
}

//pack a 32bit float into 4 8bit, [0;1] clamped floats
vec4 packFloat(float f)
{
    float F = abs(f);
    if(F == 0.0)
    {
        return vec4(0.0, 0.0, 0.0, 0.0); //special case for zero
    }
    float Sign = step(0.0, -f);
    float Exponent = floor(log2(F));

    float Mantissa = F / exp2(Exponent);
    //denormalized values if all exponent bits are zero
    if(Mantissa < 1.0)
        Exponent -= 1.0;

    Exponent += 127.0; //IEEE 754 exponent bias

    vec4 rgba;
    rgba[0] = Exponent;
    rgba[1] = 128.0 * Sign + mod(floor(Mantissa * 128.0), 128.0);
    rgba[2] = floor(mod(floor(Mantissa * exp2(23.0 - 8.0)), exp2(8.0)));
    rgba[3] = floor(exp2(23.0) * mod(Mantissa, exp2(-15.0)));
    return (1.0 / 255.0) * rgba;
}
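
A quick way to sanity-check the pair on your own hardware is a round-trip test in a fragment shader, along these lines (just a sketch; the test value and tolerance are arbitrary):

precision highp float; //assumes the device supports highp in fragment shaders

void main()
{
    float original = 2.0; //one of the values that previously failed
    vec4 enc = packFloat(original);
    float restored = unpackFloat4(enc);
    //green if the round trip is (nearly) exact, red otherwise
    gl_FragColor = abs(restored - original) < 0.001
        ? vec4(0.0, 1.0, 0.0, 1.0)
        : vec4(1.0, 0.0, 0.0, 1.0);
}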


Answer 5:

You won't be able to just reinterpret the 4 unsigned bytes as the bits of a float value (which I assume is what you want) in a shader (at least not in GLES or WebGL, I think). What you can do instead is store not the float's bit representation in the 4 ubytes, but the bits of its mantissa (i.e., a fixed-point representation). For this you need to know the approximate range of the floats; I'll assume [0,1] here for simplicity, otherwise you have to scale accordingly:

r = clamp(int(256.0 * f), 0, 255);           // top 8 bits (2^8)
g = int(mod(floor(65536.0 * f), 256.0));     // next 8 bits (2^16)
b = int(mod(floor(16777216.0 * f), 256.0));  // only have 24 bits of precision anyway (2^24)

Of course you can also work directly with the mantissa bits. Then in the shader you can reconstruct the value this way, using the fact that the components of the vec4 are all in [0,1]:

f = v.r + (v.g / 256.0) + (v.b / 65536.0);

Although I'm not sure whether this will reproduce exactly the same value, the powers of two should help a bit there.
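
For reference, here is a widely circulated four-channel variant of this fixed-point idea (the constants are powers of 255; the function names are mine, and it assumes f in [0,1)):

// Pack a float in [0,1) into four [0,1] channels, 8 bits each.
vec4 packUnitFloat(float f)
{
    vec4 enc = fract(f * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0); // carry correction
    return enc;
}

// Inverse: accumulate the channels back into a single float.
float unpackUnitFloat(vec4 v)
{
    return dot(v, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}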