I'm confused. To use the Framebuffer Object extension (FBO) in OpenGL 1.x on Windows, which of these do I use?:
wglGetProcAddress("glGenFramebuffers");
// or
wglGetProcAddress("glGenFramebuffersEXT");
As far as I can tell from reports from users with different hardware, drivers vary: some export neither name, some export only one of the two, and some export both.
Which is the right one to use? Do some drivers really support one but not the other? Is it correct to try to fall back from one to the other if not found?
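One way to handle the fallback is to try the unsuffixed (core/ARB) name first and fall back to the EXT-suffixed name. A minimal sketch — `GetProcFn`, `LoadFramebufferProc` and `StubLoader` are made-up names; on Windows the loader passed in would be `wglGetProcAddress` with a valid context current, stubbed out here purely for illustration:

```cpp
#include <cstring>
#include <string>

// Hypothetical loader signature standing in for wglGetProcAddress.
typedef void* (*GetProcFn)(const char*);

// Try the core/ARB name first, then fall back to the EXT-suffixed name.
void* LoadFramebufferProc(GetProcFn getProc, const char* coreName)
{
    if (void* p = getProc(coreName))
        return p;
    return getProc((std::string(coreName) + "EXT").c_str());
}

// Stub for illustration: pretends the driver only exports the EXT name.
static void* StubLoader(const char* name)
{
    static int dummy;
    return std::strcmp(name, "glGenFramebuffersEXT") == 0 ? (void*)&dummy : nullptr;
}
```

With this stub, `LoadFramebufferProc(&StubLoader, "glGenFramebuffers")` still resolves via the EXT fallback. Whether falling back like this is *correct* is exactly the question, though — the two extensions differ in behaviour, not just in names.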
Edit: I'm still having serious problems with ATI Radeon cards and the code around this. We just launched a commercial editor using this code (www.scirra.com). It seems that no matter which combination of code I use for FBOs, a different set of users reports that they cannot see anything at all (i.e. nothing renders).
Here's the code where I detect whether to use the ARB functions (no suffix) or the EXT-suffixed functions. This runs on startup:
gl_extensions = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
gl_vendor = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
gl_renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
gl_version = reinterpret_cast<const char*>(glGetString(GL_VERSION));
gl_shading_language = reinterpret_cast<const char*>(glGetString(GL_SHADING_LANGUAGE_VERSION));
// If OpenGL version >= 3, framebuffer objects are core - enable regardless of extension
// (the flags are initialised to false)
if (atof(gl_version) >= 3.0)
{
support_framebuffer_object = true;
support_framebuffer_via_ext = false;
}
else
{
// Detect framebuffer object support via ARB (for OpenGL version < 3) - also uses non-EXT names
if (strstr(gl_extensions, "ARB_framebuffer_object") != 0)
{
support_framebuffer_object = true;
support_framebuffer_via_ext = false;
}
// Detect framebuffer object support via EXT (for OpenGL version < 3) - uses the EXT names
else if (strstr(gl_extensions, "EXT_framebuffer_object") != 0)
{
support_framebuffer_object = true;
support_framebuffer_via_ext = true;
}
}
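One thing worth noting about the detection above: strstr() also matches substrings of longer extension names in the space-separated list (e.g. searching for "GL_EXT_framebuffer" would match inside "GL_EXT_framebuffer_multisample"). A sketch of an exact token match, assuming you pass the full name including the "GL_" prefix (the helper name is made up):

```cpp
#include <cstring>

// Check for an extension by exact token match in the space-separated list,
// since strstr() alone can match a substring of a longer extension name.
bool HasExtension(const char* extensions, const char* name)
{
    if (!extensions || !name)
        return false;
    const size_t len = std::strlen(name);
    const char* p = extensions;
    while ((p = std::strstr(p, name)) != 0)
    {
        // A real match is delimited by the string boundaries or by spaces.
        const bool startOk = (p == extensions) || (p[-1] == ' ');
        const bool endOk   = (p[len] == '\0') || (p[len] == ' ');
        if (startOk && endOk)
            return true;
        p += len;
    }
    return false;
}
```

For the two names checked above ("ARB_framebuffer_object" and "EXT_framebuffer_object") a plain strstr() is unlikely to false-positive in practice, so this is belt-and-braces rather than the likely cause of the rendering failures.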
Then later on during startup it creates an FBO in anticipation of rendering to texture:
// Render-to-texture support: create a frame buffer object (FBO)
if (support_framebuffer_object)
{
// If support is via EXT (OpenGL version < 3), add the EXT suffix; otherwise functions are core (OpenGL version >= 3)
// or ARB without the EXT suffix, so just get the functions on their own.
std::string suffix = (support_framebuffer_via_ext ? "EXT" : "");
glGenFramebuffers = (glGenFramebuffers_t)wglGetProcAddress((std::string("glGenFramebuffers") + suffix).c_str());
glDeleteFramebuffers = (glDeleteFramebuffers_t)wglGetProcAddress((std::string("glDeleteFramebuffers") + suffix).c_str());
glBindFramebuffer = (glBindFramebuffer_t)wglGetProcAddress((std::string("glBindFramebuffer") + suffix).c_str());
glFramebufferTexture2D = (glFramebufferTexture2D_t)wglGetProcAddress((std::string("glFramebufferTexture2D") + suffix).c_str());
glCheckFramebufferStatus = (glCheckFramebufferStatus_t)wglGetProcAddress((std::string("glCheckFramebufferStatus") + suffix).c_str());
glGenerateMipmap = (glGenerateMipmap_t)wglGetProcAddress((std::string("glGenerateMipmap") + suffix).c_str());
// Create a FBO in anticipation of render-to-texture
glGenFramebuffers(1, &fbo);
}
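Even when the names resolve, it is worth verifying every pointer before flipping the support flag, so a single NULL doesn't crash later. A small sketch (the helper is made up; the idea is to gather the six pointers loaded above into an array and disable support_framebuffer_object if any is missing):

```cpp
#include <cstddef>

// After loading, verify that every entry point actually resolved; if any
// came back NULL, FBO support should be disabled rather than used.
bool AllResolved(void* const* procs, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
        if (!procs[i])
            return false;
    return true;
}
```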
I have been through many variations of this code, and I simply cannot get it to work for everyone. There is always a group of users who report nothing renders at all. ATI Radeon HD cards seem to be particularly problematic. I'm not sure if there's a driver bug involved, but I guess it's more likely my above code is making an incorrect assumption.
500 rep bounty and I'll send a free Business license to anyone who knows what's wrong! (Worth £99)
Edit 2: some more details. Here is a list of cards that this is known to fail on:
ATI Mobility Radeon HD 5650
ATI Radeon X1600 Pro
ATI Mobility Radeon HD 4200
No rendering to texture is actually done. It appears the glGenFramebuffers call alone stops rendering completely on these cards. I could defer creation of the FBO until the first time render-to-texture is actually done, but then presumably it would just stop rendering at that point instead.
I could use GLEW, but what does it do that my code doesn't? I had a look through the source and it seems to use a similar list of wglGetProcAddress calls. Function pointers are being returned in my case, otherwise glGenFramebuffers would be NULL and the call would crash. Any ideas...?
If the extension GL_EXT_framebuffer_object is present, then you can use wglGetProcAddress("glGenFramebuffersEXT"). If the OpenGL version is >= 3.0 (the version in which the FBO extension was promoted to core), then you can use wglGetProcAddress("glGenFramebuffers"). The code you have included is not the problem; the entry points should be obtained just as you already do.
If ARB_framebuffer_object is supported, use the entry points without the EXT suffix. If EXT_framebuffer_object is supported, use the entry points with the EXT suffix. If both are supported, you can select either implementation by fetching the corresponding entry points.
I am very interested in this issue (I had a similar doubt myself). If you compare the ARB specification with the EXT specification, you will notice a lot of differences.
Here, I will quote the most interesting ARB specification paragraphs on this topic.
The specification is very long, but the ARB variant contains many discussions of the EXT compatibility issues. Since your application runs, the entry points are probably correct, and the error (as suggested by Nicol Bolas) could be in framebuffer completeness.
Introduce every check possible, and review the implementation twice: once with the ARB spec in mind, and once with the EXT spec in mind.