Render to texture, then render texture to screen in iOS

Asked 2019-05-30 01:19

I am trying to render a simple scene into a FBO that is backed by a texture as a color attachment and then draw a quad to the screen using this texture in iOS. This will help me do some post processing of the final scene. There are a few questions on SO that address similar (but not quite the same) questions and I've tried whatever I could understand. Nothing works.

I have two shader programs. The first, _program, simply takes the vertex positions and renders them with a single color. The second, quadProgram, takes a texture, the texture coordinates and the quad coordinates. (In the interest of brevity I am omitting the full shader code.)
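
Roughly, the quad program is the usual pass-through textured-quad pair. A simplified sketch (not the exact PostShader source; the varying name v_texcoord is just illustrative):

    // PostShader vertex shader (sketch)
    attribute vec3 a_position;
    attribute vec2 a_texcoord;
    varying vec2 v_texcoord;

    void main()
    {
        v_texcoord = a_texcoord;
        gl_Position = vec4(a_position, 1.0);
    }

    // PostShader fragment shader (sketch)
    precision mediump float;
    varying vec2 v_texcoord;
    uniform sampler2D s_diffuse;

    void main()
    {
        gl_FragColor = texture2D(s_diffuse, v_texcoord);
    }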

Both shaders work correctly. They independently produce correct results. However, when I try to render the "rendered texture" onto the quad instead of using a sample wood texture, I simply get a black screen. Here are the relevant bits of code. I set everything up like so:

- (void)setupGL
{
    [EAGLContext setCurrentContext:self.context];

    _program = [self createProgramWithVertexShader:@"Shader" fragShader:@"Shader"];
    GLuint GLKVertexAttribPosition = glGetAttribLocation(_program, "position");

    quadProgram = [self createProgramWithVertexShader:@"PostShader" fragShader:@"PostShader"];
    quadTexCoord = glGetAttribLocation(quadProgram, "a_texcoord");
    quadPosition = glGetAttribLocation(quadProgram, "a_position");
    diffuseTexture = glGetUniformLocation(quadProgram, "s_diffuse");

    glEnable(GL_DEPTH_TEST);

    glGenVertexArraysOES(1, &_vertexArray);
    glBindVertexArrayOES(_vertexArray);

    glGenBuffers(1, &_vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(gCubeVertexData), gCubeVertexData, GL_STATIC_DRAW);

    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));

    glBindVertexArrayOES(0);

    // ============
    // ---- for render to texture
    width = self.view.bounds.size.width;
    height = self.view.bounds.size.height;

    // create fbo
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    // create and attach backing texture
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,  width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);

    // create and attach depthbuffer
    glGenRenderbuffers(1, &depthRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);

    // Check if framebuffer was loaded correctly
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER) ;
    if(status != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"failed to make complete framebuffer object %x", status);
    }

    // sample texture
    NSError *theError;

    NSString *filePath = [[NSBundle mainBundle] pathForResource:@"wood_floor_256" ofType:@"jpg"]; // 1

    spriteTexture = [GLKTextureLoader textureWithContentsOfFile:filePath options:nil error:&theError]; // 2
}

And then my main loop:

- (void)update
{
    // First save the default frame buffer.
    static GLint default_frame_buffer = 0;
    glGetIntegerv(GL_FRAMEBUFFER_BINDING, &default_frame_buffer);

    // render to texture
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBindVertexArrayOES(_vertexArray);

    glUseProgram(_program);

    glDrawArrays(GL_TRIANGLES, 0, 36);

    // render to screen
    glBindFramebuffer(GL_FRAMEBUFFER, default_frame_buffer);
    glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUseProgram(quadProgram);

    const float quadPositions[] = {  1.0,  1.0, 0.0,
        -1.0,  1.0, 0.0,
        -1.0, -1.0, 0.0,
        -1.0, -1.0, 0.0,
        1.0, -1.0, 0.0,
        1.0,  1.0, 0.0 };
    const float quadTexcoords[] = { 1.0, 1.0,
        0.0, 1.0,
        0.0, 0.0,
        0.0, 0.0,
        1.0, 0.0,
        1.0, 1.0 };

    // stop using VBO and other bufs.
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glBindVertexArrayOES(0);

    glBindTexture(GL_TEXTURE_2D, texture);
    glActiveTexture(GL_TEXTURE_2D);

    // setup buffer offsets
    glVertexAttribPointer(quadPosition, 3, GL_FLOAT, GL_FALSE, 3*sizeof(float), quadPositions);
    glVertexAttribPointer(quadTexCoord, 2, GL_FLOAT, GL_FALSE, 2*sizeof(float), quadTexcoords);

    // ensure the proper arrays are enabled
    glEnableVertexAttribArray(quadPosition);
    glEnableVertexAttribArray(quadTexCoord);

    // draw
    glDrawArrays(GL_TRIANGLES, 0, 2*3);
}
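
For reference, my understanding is that the sampler wiring for the quad pass normally looks roughly like the sketch below, with an explicit texture unit and glUniform1i; I'm assuming the default active unit (GL_TEXTURE0) and the default sampler value of 0 cover this in my code:

    // Sketch of explicit texture/sampler wiring for the quad pass.
    // (quadProgram must be the current program when glUniform1i is called.)
    glActiveTexture(GL_TEXTURE0);            // select texture unit 0
    glBindTexture(GL_TEXTURE_2D, texture);   // bind the FBO color texture to that unit
    glUniform1i(diffuseTexture, 0);          // tell s_diffuse to sample from unit 0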

I don't even know how to tell if the backing texture contains the rendered screen.
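
(I assume one way to check would be to read a pixel back while the offscreen FBO is bound, along these lines, though I haven't verified this approach:)

    // Sketch: with the offscreen FBO bound, read back the center pixel.
    // GL_RGBA / GL_UNSIGNED_BYTE is always an accepted read format in ES 2.0.
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    GLubyte pixel[4];
    glReadPixels(width / 2, height / 2, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    NSLog(@"center pixel: %d %d %d %d", pixel[0], pixel[1], pixel[2], pixel[3]);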

The render-to-texture pass should render the following into the FBO: [screenshot: iPad render of the scene]

If I replace the two lines:

    glBindTexture(GL_TEXTURE_2D, texture);
    glActiveTexture(GL_TEXTURE_2D);

with

    glBindTexture(spriteTexture.target, spriteTexture.name);
    glEnable(spriteTexture.target);

I get the following:

[screenshot: quad rendered with the wood floor texture]

This indicates that the code for rendering a textured quad to the screen is correct.

I have no idea how to get the first screen rendered to the quad.

1 Answer

beautiful° · answered 2019-05-30 01:29

Since the texture you are rendering into is NPOT (its size comes from the view bounds), the default GL_REPEAT wrap mode leaves it incomplete under OpenGL ES 2.0, and sampling an incomplete texture returns black. It needs a non-default wrap parameter.

Add the following right below the other glTexParameteri call. It should solve the problem:

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
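
For completeness, the full sampler state that keeps an NPOT render-target texture usable in ES 2.0 looks roughly like this (the mag filter choice is a matter of taste; the essential points are clamped wrapping and a min filter that does not use mipmaps):

    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);    // no mipmaps
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // required for NPOT
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); // required for NPOT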