Bitmap rendering methodology in OpenGL?

Posted 2020-07-27 03:48

I've been working on a small bitmap parser lately in pure C, just to understand the low-level workings of simpler image formats. So far, using Wikipedia's article on the BMP file format, I've been able to parse the information correctly, or at least most of it, as far as I can tell.

The problem is that I'm not quite sure what to do from there: since I'm working with a 3.1 context, I have access to many more modern features, which is nice, but I'm still lost. I have a window set up with GLFW and so far haven't really rendered anything, because I've just been focusing on the parsing/low-level details.

Since I'm trying really hard to avoid looking at actual code examples, it'd be great if someone could explain what the process of rendering a bitmap involves, using just OpenGL/GLFW and the ISO C standard library.

While I have a couple of shaders in place and I'm able to load them without any issues, I'm thinking that what I need to do is render an [invisible] quad which conforms to the dimensions (width, height) of the image itself, and then pass the pixel data to OpenGL. The main problem, however, is that the shaders are set up like this:

Vertex Shader

#version 150
#extension GL_ARB_separate_shader_objects : enable

layout(location = 0) in vec2 Position;
layout(location = 1) in vec2 UV_In;

out vec2 UV;

void main()
{
    gl_Position = vec4( Position, 0.0f, 1.0f );
    UV = UV_In;
}

Fragment Shader

#version 150
#extension GL_ARB_separate_shader_objects : enable

in vec2 UV;

out vec3 Output;

uniform sampler2D TheSampler;

void main()
{
    Output = texture( TheSampler, UV ).rgb;
}

And I'm not sure how to obtain the actual UV coordinates the shader requires. I'm thinking I'll need to generate the vertices, store them in an array, and call something along the lines of glVertexAttribPointer(...) for the UV coordinates, but I'm not sure which data from the image I should use to obtain them, or whether I've even parsed that data already within the function. I would imagine it involves crawling the image with an inner/outer for loop (the outer representing x, the inner y) in row/column fashion. Still, I feel somewhat confused about this and I'm not sure whether this is what I need.

Either way, any advice on how to do this would be greatly appreciated.


The actual code to parse the image ( HEADER_LENGTH = 54 bytes ):

GLuint Image_LoadBmp( const char* fname, image_bmp_t* data )
{   
    uint8_t  header[ HEADER_LENGTH ];

    FILE* f = fopen( fname, "rb" );

    if ( !f )
    {
        printf( "ERROR: file \"%s\" could not be opened [likely] due to incorrect path. :/ \n", fname );

        return 0; // return false
    }

    data->filename = strdup( fname ); // TODO: write a wrapper for strdup which exits program on NULL returns

    const size_t num_bytes_read = fread( ( void* )header, sizeof( uint8_t ), HEADER_LENGTH, f );

    if ( num_bytes_read != HEADER_LENGTH )
    {
        printf( "ERROR: file \"%s\" could not be opened due to header size being " _SIZE_T_SPECIFIER " bytes; "\
                "this is an invalid format. \n", fname, num_bytes_read );

        return 0;
    }

    if ( header[ 0 ] != *( uint8_t* )"B" || header[ 1 ] != *( uint8_t* )"M" )
    {
        printf( "ERROR: file \"%s\" does NOT have a valid signature \n", fname );

        return 0;
    }

    data->image_size         = *( uint32_t* )&( header[ 0x22 ] );
    data->header_size        = ( uint32_t )( header[ 0x0E ] );
    data->width              = ( uint32_t )( header[ 0x12 ] );
    data->height             = ( uint32_t )( header[ 0x16 ] );
    data->pixel_data_pos     = ( uint32_t )( header[ 0x0A ] );
    data->compression_method = ( uint8_t )( header[ 0x1E ] );
    data->bpp                = ( uint8_t )( header[ 0x1C ] );

    // TODO (maybe): add support for other compression methods

    if ( data->compression_method != CM_RGB )
    {
        puts( "ERROR: file \"%s\" does NOT have a supported compression method for this parser; \n" \
              "\t Currently, the compression methods supported are: \n" \
              "\t - BI_RGB \n\n"
             );

        return 0;
    }



    return 1;
}

And the debug output of the image information gathered for the current test image looks as follows:

Info for "assets/sprites/nave/nave0001.bmp" {  
     Size        = 3612      Header Size = 40  
     Width       = 27      Height      = 43  
     Pixel Array Address = 54      Compression Method  = 0  
     Bits Per Pixel      = 24
 }

2 Answers
老娘就宠你
#2 · 2020-07-27 04:44

There's a lot here, and you may need to break this into multiple questions, but here's the overview:

You don't need to pass the actual pixel data to the shaders; what you need to do is create a texture object in GL up front using the pixel data, and then you reference that texture in your shader. The actual geometry you need to draw is (as you suspect) just one quad with its four corners and corresponding texture coordinates (which are trivial in this case, just 0.0 and 1.0 for each axis on the corners).
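
If it helps to see the shape of that setup, here is a rough sketch of how such a quad could be fed to the shaders shown in the question (attribute location 0 for Position, 1 for UV_In). The variable names are just placeholders, not anything your code has to use:

// Sketch: a full-window quad as two triangles, with interleaved position and UV
// matching layout(location = 0) Position and layout(location = 1) UV_In.
static const GLfloat quad[] = {
    /*   x      y      u     v  */
    -1.0f, -1.0f,   0.0f, 0.0f,
     1.0f, -1.0f,   1.0f, 0.0f,
     1.0f,  1.0f,   1.0f, 1.0f,

    -1.0f, -1.0f,   0.0f, 0.0f,
     1.0f,  1.0f,   1.0f, 1.0f,
    -1.0f,  1.0f,   0.0f, 1.0f,
};

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);

glEnableVertexAttribArray(0); // Position
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (void*)0);

glEnableVertexAttribArray(1); // UV_In
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));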

The magic of the shaders is that the fragment shader will run for every pixel in the output, and you'll just sample the texture at the varying texture coordinates that GL hands your shader.
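
Per frame, once the texture object exists, the draw itself then boils down to something like the following. program, vao and texName here are assumed to come from your own setup code; "TheSampler" is the uniform from the question's fragment shader:

glUseProgram(program);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texName);
glUniform1i(glGetUniformLocation(program, "TheSampler"), 0); // sample from texture unit 0

glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 6); // the two triangles of the quad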

(If you're new to GL, try drawing a simple quad first in some fixed color to get that working before you try getting BMP data into a texture.)

手持菜刀,她持情操
#3 · 2020-07-27 04:45

First let me say: your approach to reading the header is almost perfect. There are two drawbacks: your code doesn't deal with endianness, and you're truncating your header's fields (it will break for images larger than 255 pixels in either dimension).

Here's a fix:

data->image_size = (uint32_t)header[0x22] | (uint32_t)header[0x23] << 8 | (uint32_t)header[0x24] << 16 | (uint32_t)header[0x25] << 24;

And the same pattern applies to all other fields larger than 8 bits (see the sketch below). The cast on each header byte is necessary to prevent truncation before the shift; cast it to the destination variable's type. Also, don't worry about performance: modern compilers turn this into very efficient code.
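
For illustration, the pattern could be wrapped in small helpers and applied to the remaining fields like this. read_le32/read_le16 are made-up names, not standard functions; note that in the BMP header the compression method is a 4-byte field and bits-per-pixel a 2-byte field:

/* Hypothetical helpers: read little-endian values from the header buffer. */
static uint32_t read_le32( const uint8_t* p )
{
    return (uint32_t)p[0] | (uint32_t)p[1] << 8 | (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

static uint16_t read_le16( const uint8_t* p )
{
    return (uint16_t)( p[0] | p[1] << 8 );
}

/* Applied at the offsets the question already uses: */
data->image_size         = read_le32( &header[ 0x22 ] );
data->header_size        = read_le32( &header[ 0x0E ] );
data->width              = read_le32( &header[ 0x12 ] );
data->height             = read_le32( &header[ 0x16 ] );
data->pixel_data_pos     = read_le32( &header[ 0x0A ] );
data->compression_method = read_le32( &header[ 0x1E ] ); /* 4-byte field */
data->bpp                = read_le16( &header[ 0x1C ] ); /* 2-byte field */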

So far your function still lacks the part that actually reads the image data. I'll just assume it ends up in a field data->pixels later on.
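
For completeness, a minimal sketch of how that read could look, assuming image_bmp_t gains a uint8_t* pixels member. BMP rows are padded to a multiple of 4 bytes and stored bottom-up, which happens to match OpenGL's bottom-left texture origin:

/* Sketch only: read the raw pixel array into the hypothetical data->pixels field. */
const uint32_t row_stride  = ( ( data->width * data->bpp + 31 ) / 32 ) * 4; /* rows padded to 4 bytes */
const uint32_t pixel_bytes = row_stride * data->height;

data->pixels = malloc( pixel_bytes );

if ( !data->pixels )
{
    printf( "ERROR: out of memory while reading \"%s\" \n", fname );
    return 0;
}

fseek( f, ( long )data->pixel_data_pos, SEEK_SET );

if ( fread( data->pixels, 1, pixel_bytes, f ) != pixel_bytes )
{
    printf( "ERROR: file \"%s\" ended before the whole pixel array could be read \n", fname );
    return 0;
}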

After you've read in your image you can pass it to OpenGL. OpenGL manages its images in so-called "Texture Objects". The usual stanza is:

  1. Create a texture object name with glGenTextures
  2. Bind the texture object with glBindTexture
  3. Set pixel transfer parameters with glPixelStorei on all of the GL_UNPACK_… parameters
  4. Upload the texture with glTexImage2D
  5. Either turn off mipmapping, or generate mipmaps

This goes as follows

GLuint texName;
glGenTextures(1, &texName);
glBindTexture(GL_TEXTURE_2D, texName);

glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);
glPixelStorei(GL_UNPACK_LSB_FIRST, GL_FALSE);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);   // could also be set to the image width, but this is used only
glPixelStorei(GL_UNPACK_IMAGE_HEIGHT, 0); // if one wants to load a subset of the image
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_SKIP_IMAGES, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4); // that one's pretty important: for TrueColor DIBs the row alignment is 4

GLenum internalformat, format, type;
switch(data->bpp) {
case 24:
    internalformat = GL_RGB;
    format = GL_BGR;                  // 24 bit DIB pixels are B,G,R byte triplets
    type   = GL_UNSIGNED_BYTE;
    break;

case 32:
    internalformat = GL_RGBA;
    format = GL_BGRA;                 // 32 bit DIB pixels are packed B,G,R,A integers
    type   = GL_UNSIGNED_INT_8_8_8_8;
    break;
}

glTexImage2D(GL_TEXTURE_2D, 0, internalformat,
             data->width, data->height, 0,
             format, type, data->pixels);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

The GL_UNSIGNED_INT_8_8_8_8 type needs explanation. You see, DIBs treat a 32 bit unsigned integer as a compound color structure. In fact, on Windows you can find a color type (COLORREF), which is just a typedef'd integer, and that's what's contained in DIBs. By using the BGRA format with the 4×8 component integer type we make OpenGL unpack a pixel in exactly that format.
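
As for step 5's alternative: if you'd rather generate mipmaps than turn them off, the last two lines of the upload code would become something like:

glGenerateMipmap(GL_TEXTURE_2D); // build the full mipmap chain from the uploaded base level
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);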
