Do OpenGL GLSL samplers always return floats from 0.0 to 1.0?

Posted 2019-02-13 15:49

Question:

I've created a couple of floating-point RGBA textures...

glBindTexture( GL_TEXTURE_2D, texid[k] );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_RGBA,
             GL_FLOAT, data);

and then I double-buffer, rendering into one texture while sampling the other in a shader program:

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, 
                       GL_TEXTURE_2D, texid[i], 0)

...

    state_tex_loc = glGetUniformLocation( program, "state_tex" )
    glUniform1i( state_tex_loc, 0 )
    glActiveTexture( GL_TEXTURE0 )
    glBindTexture( GL_TEXTURE_2D, texid[1-i] )
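
For context, the per-frame ping-pong that these fragments imply might look roughly like this (a hypothetical sketch, not the poster's actual code; fbo, draw_quad() and the loop structure are my assumptions):

    GLuint fbo;    /* one FBO, re-attached to a new texture each frame */
    int i = 0;     /* index of this frame's render target */

    for ( ;; )
    {
        /* render into texid[i]... */
        glBindFramebuffer( GL_FRAMEBUFFER, fbo );
        glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                GL_TEXTURE_2D, texid[i], 0 );

        /* ...while sampling last frame's result from texid[1-i] */
        glActiveTexture( GL_TEXTURE0 );
        glBindTexture( GL_TEXTURE_2D, texid[1-i] );
        glUniform1i( state_tex_loc, 0 );

        draw_quad();    /* hypothetical draw call */
        i = 1 - i;      /* swap roles for the next frame */
    }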

...

    void main( void )
    {
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        vec2 sample_pos = gl_Vertex.xy / vec2( xscale, yscale );
        vec4 sample = texture2D( state_tex, sample_pos.xy );
        sample.rgb = sample.rgb + vec3( 0.5, 0.5, 0.5 );
        if ( sample.r > 1.1 )
            sample.rgb = vec3( 0.0, 0.0, 0.0 );
        gl_FrontColor = sample;
    }

...

    void main( void )
    {
        gl_FragColor = gl_Color;
    }

Notice the check for sample.r being greater than 1.1. This never happens. It seems that either the call to texture2D or the output of the fragment shader clamps the value of sample.rgb to [0.0..1.0]. And yet, my understanding is that the textures themselves store full floating-point values.

Is there any way to avoid this clamping?

UPDATE:

As per instructions below, I've fixed my glTexImage2D() call to use GL_RGBA32F_ARB, but I still don't get a value greater than 1.0 out of the sampler.

UPDATE 2:

I just tried initializing the textures to values larger than 1.0, and it works! texture2D() returns the initial values greater than 1.0. So perhaps the problem is in writing to the texture from the fragment shader?

UPDATE 3:

I've tried changing the shaders, and this works:

    varying vec4 out_color;
    void main( void )
    {
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        vec2 sample_pos = gl_Vertex.xy / vec2( xscale, yscale );
        vec4 sample = texture2D( state_tex, sample_pos.xy );
        sample.rgb = sample.rgb + vec3( 0.5, 0.5, 0.5 );
        if ( sample.r > 1.1 )
            sample.rgb = vec3( 0.0, 0.0, 0.0 );
        out_color = sample;
    }

...

    varying vec4 out_color;
    void main( void )
    {
        gl_FragColor = out_color;
    }

Why does using a custom varying work, but using the built-in varying gl_FrontColor/gl_Color not work?

Answer 1:

I've created a couple of floating-point RGBA textures...

No, you did not.

glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_RGBA, GL_FLOAT, data);

This statement does not create a floating-point texture. Well, maybe it does if you're using OpenGL ES, but it certainly doesn't in desktop GL. Though I'm pretty sure OpenGL ES doesn't let you use "4" as the internal format.

In desktop GL, the third parameter to glTexImage2D defines the texture's internal format. It is this parameter that tells OpenGL whether the data is floating-point, integer, or whatever. When you use "4" (which you should never do; it's a terrible legacy way to specify the internal format, so always use a real sized internal format), you're telling OpenGL that you want 4 unsigned normalized integer components.

The last three parameters specify the format, data type, and location of the pixel data that you want to upload to the texture. In desktop GL, they have no effect on how the data is stored; you're just telling OpenGL what your input pixels look like. The OpenGL ES specification unwisely changes this: there, the last three parameters do have some effect on the internal format of the data.
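
For example (a sketch of desktop-GL semantics; float_pixels and byte_pixels are hypothetical buffers): both of the following calls allocate the same 32-bit float texture, and the differing type arguments only describe the source data, which OpenGL converts on upload:

    /* the internalformat (3rd argument) picks the storage; the last
       three arguments only describe the pixels being uploaded */
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                  GL_RGBA, GL_FLOAT, float_pixels );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                  GL_RGBA, GL_UNSIGNED_BYTE, byte_pixels );
    /* bytes are normalized to [0,1] and stored as 32-bit floats */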

In any case, if you want 32-bit floats, you should ask for them:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, data);
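
If in doubt, you can query what was actually allocated; a quick sanity check (not part of the original answer):

    GLint fmt = 0;
    glGetTexLevelParameteriv( GL_TEXTURE_2D, 0,
                              GL_TEXTURE_INTERNAL_FORMAT, &fmt );
    /* expect GL_RGBA32F (0x8814) for a genuine 32-bit float texture */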

Why does using a custom varying work, but using the built-in varying gl_FrontColor/gl_Color not work?

Because it's built-in. I haven't used built-in GLSL stuff in years, so I never even noticed that.

The 3.3 compatibility spec has a function glClampColor that defines vertex (and fragment) color clamping behavior. It only affects the built-ins. Personally? I'd avoid it and just not use built-in stuff at all.
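
If you do want to keep the built-ins, turning the clamp off would look something like this (a sketch; note that in the compatibility profile the vertex-color clamp defaults to GL_TRUE, which is exactly what bites here):

    /* ARB_color_buffer_float / GL 3.0 compatibility profile:
       stop clamping the built-in colors to [0,1] between stages */
    glClampColor( GL_CLAMP_VERTEX_COLOR, GL_FALSE );
    glClampColor( GL_CLAMP_FRAGMENT_COLOR, GL_FALSE );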