Controlling the FPS limit in an OpenGL application

Published 2020-02-05 04:10

Question:

I am trying to find a solid method for setting exactly how many FPS I want my OpenGL application to render on screen. I can do it to some extent by sleeping for 1000/fps milliseconds, but that doesn't take into account the time needed to render. What is the most consistent way to limit FPS to the desired amount?

Answer 1:

You can sync to vblank by using wglSwapIntervalEXT (on Windows). It's not nice code, but it does work.

http://www.gamedev.net/topic/360862-wglswapintervalext/#entry3371062

// Requires <windows.h>, <string.h>, the GL headers and wglext.h
// (for the PFNWGL... typedefs); an OpenGL context must be current.
bool WGLExtensionSupported(const char *extension_name) {
    // returns the space-separated list of supported WGL extensions
    PFNWGLGETEXTENSIONSSTRINGEXTPROC _wglGetExtensionsStringEXT =
        (PFNWGLGETEXTENSIONSSTRINGEXTPROC)wglGetProcAddress("wglGetExtensionsStringEXT");

    if (_wglGetExtensionsStringEXT == NULL ||
        strstr(_wglGetExtensionsStringEXT(), extension_name) == NULL) {
        return false;
    }

    return true;
}

and

PFNWGLSWAPINTERVALEXTPROC    wglSwapIntervalEXT    = NULL;
PFNWGLGETSWAPINTERVALEXTPROC wglGetSwapIntervalEXT = NULL;

if (WGLExtensionSupported("WGL_EXT_swap_control"))
{
    // Extension is supported, init pointers.
    wglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    // this is another function from the WGL_EXT_swap_control extension
    wglGetSwapIntervalEXT = (PFNWGLGETSWAPINTERVALEXTPROC)wglGetProcAddress("wglGetSwapIntervalEXT");
}
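
Once the pointer is initialized, enabling vsync is a single call; an interval of 1 synchronizes buffer swaps to the monitor's vertical refresh, and 0 turns the sync off:

if (wglSwapIntervalEXT)
    wglSwapIntervalEXT(1); // sync SwapBuffers to the monitor's refresh rate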


Answer 2:

Since OpenGL is just a low-level graphics API, you won't find anything like this built into OpenGL directly.

However, I think your logic is a bit flawed. Rather than the following:

  1. Draw frame
  2. Wait 1000/fps milliseconds
  3. Repeat

You should do this:

  1. Start timer
  2. Draw frame
  3. Stop timer
  4. Wait (1000/fps - (stop - start)) milliseconds
  5. Repeat

This way you only wait exactly as long as you should, and you should end up very close to 60 (or whatever you're aiming for) frames per second.
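
For example, a minimal sketch of that loop using C++ <chrono> and <thread>, where drawFrame() is a placeholder for your rendering code:

#include <chrono>
#include <thread>

void runLoop(double fps)
{
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> frameBudget(1.0 / fps);

    while (true)
    {
        auto start = clock::now();   // 1. start timer
        drawFrame();                 // 2. draw frame
        auto stop = clock::now();    // 3. stop timer

        // 4. wait whatever is left of the frame budget
        if (stop - start < frameBudget)
            std::this_thread::sleep_for(frameBudget - (stop - start));
    }                                // 5. repeat
}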



Answer 3:

OpenGL itself doesn't have any functionality that allows limiting framerate. Period.

However, on modern GPUs there's a lot of functionality covering framerate, frame prediction, and so on. John Carmack pushed to get some of that functionality exposed to developers, and there's also NVIDIA's adaptive sync.

What does all that mean for you? Leave it up to the GPU. Assume that drawing time is totally unpredictable (as you should when sticking to OpenGL alone), time the events yourself, and keep the logic updates (such as physics) separate from drawing. That way users will be able to benefit from all those advanced technologies and you won't have to worry about it anymore.
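
As an illustration, here is a rough fixed-timestep sketch of that separation, assuming hypothetical updatePhysics() and render() functions and a running quit flag: logic advances in constant steps no matter how long a frame takes, while drawing happens once per loop iteration.

#include <chrono>

void gameLoop()
{
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> dt(1.0 / 120.0); // fixed logic step
    std::chrono::duration<double> accumulator(0.0);
    auto previous = clock::now();

    while (running) // hypothetical quit flag
    {
        auto now = clock::now();
        accumulator += now - previous;
        previous = now;

        // consume the accumulated real time in fixed-size logic steps
        while (accumulator >= dt)
        {
            updatePhysics(dt.count());
            accumulator -= dt;
        }

        render(); // draw as often as the driver lets us
    }
}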



Answer 4:

Don't use sleeps. If you do, then the rest of your application must wait for them to finish.

Instead, keep track of how much time has passed and render only once 1000/fps milliseconds have elapsed. If that threshold hasn't been met, skip the frame and do other things.

In a single-threaded environment it will be difficult to make sure you draw at exactly 1000/fps unless that is absolutely the only thing you're doing. A more general and robust way would be to have all your rendering done in a separate thread and launch/run that thread on a timer, as sketched below. This is a much more complex problem, but will get you the closest to what you're asking for.
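
For instance, a rough sketch of such a render thread driven by an absolute deadline; renderOneFrame() is a placeholder, and remember that the OpenGL context must be made current on the thread that draws:

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> running{true};

void renderThread(double fps)
{
    using clock = std::chrono::steady_clock;
    const auto frame = std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double>(1.0 / fps));
    auto next = clock::now() + frame;

    while (running)
    {
        renderOneFrame();                    // draw and swap buffers
        std::this_thread::sleep_until(next); // absolute deadline avoids drift
        next += frame;
    }
}

// launched from the main thread: std::thread t(renderThread, 60.0);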

Also, keeping track of how long it takes to issue the rendering would help in adjusting on the fly when to render things.

// timeGetTime() is the Windows multimedia timer (<windows.h>, winmm.lib)
static unsigned int render_time = 0;
static unsigned int last_render_time = 0;

unsigned int now = timeGetTime();
// start the next frame early by the cost of the last one,
// so the frame finishes on schedule
unsigned int elapsed_time = now - last_render_time + render_time;
if (elapsed_time > 1000 / fps) {
    unsigned int start_render = timeGetTime();

    // issue rendering commands...

    unsigned int end_render = timeGetTime();
    render_time = end_render - start_render;
    last_render_time = now;
}


Answer 5:

Put this after drawing and the call to swap buffers:

//calculate time taken to render last frame (and assume the next will be similar)
thisTime = getElapsedTimeOfChoice(); //the higher resolution this is the better
deltaTime = thisTime - lastTime;
lastTime = thisTime;

//limit framerate by sleeping. a sleep call is never really that accurate
if (minFrameTime > 0)
{
    sleepTime += minFrameTime - deltaTime; //add difference to desired deltaTime
    sleepTime = max(sleepTime, 0); //negative sleeping won't make it go faster :(
    sleepFunctionOfChoice(sleepTime);
}

If you want 60fps, minFrameTime = 1.0/60.0 (assuming time is in seconds).

This won't give you vsync, but it will mean that your app isn't running out of control, which can affect physics calculations (if they're not fixed-step), animation, etc. Just remember to process input after the sleep! I've experimented with trying to average frame times, but this has worked best so far.

For getElapsedTimeOfChoice(), I use what's mentioned here (a combined sketch follows the list):

  • LINUX: clock_gettime(CLOCK_MONOTONIC, &ts);
  • WINDOWS: QueryPerformanceCounter
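
A possible combined implementation returning seconds as a double; the function name matches the snippet above, everything else is an assumption:

#ifdef _WIN32
#include <windows.h>
double getElapsedTimeOfChoice() {
    static LARGE_INTEGER freq = { 0 };
    LARGE_INTEGER count;
    if (freq.QuadPart == 0)
        QueryPerformanceFrequency(&freq); // ticks per second, constant at runtime
    QueryPerformanceCounter(&count);
    return (double)count.QuadPart / (double)freq.QuadPart;
}
#else
#include <time.h>
double getElapsedTimeOfChoice() {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts); // unaffected by wall-clock changes
    return ts.tv_sec + ts.tv_nsec / 1e9;
}
#endif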


Answer 6:

Another idea is to use waitable timers (when possible, for instance on Windows).

The basic idea:

// Win32 sketch; error handling omitted
HANDLE myTimer = CreateWaitableTimer(NULL, FALSE, NULL);
LARGE_INTEGER dueTime;
// negative = relative time, in 100 ns units (desired_frame_duration in ms)
dueTime.QuadPart = -10000LL * desired_frame_duration;

while (true)
{
    SetWaitableTimer(myTimer, &dueTime, 0, NULL, NULL, FALSE);

    MSG msg;
    if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
    {
        if (msg.message == WM_QUIT)
            break;
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    else
    {
        Render();
        SwapBuffers(hdc);
    }

    WaitForSingleObject(myTimer, INFINITE);
}

More info: How to limit fps information



Answer 7:

An easy way is to use GLUT. This code may do the job, roughly.

static int redisplay_interval;

// re-arm the one-shot timer and ask GLUT for a redraw
void timer(int /*value*/) {
    glutPostRedisplay();
    glutTimerFunc(redisplay_interval, timer, 0);
}

void setFPS(int fps)
{
    redisplay_interval = 1000 / fps; // milliseconds between redisplays
    glutTimerFunc(redisplay_interval, timer, 0);
}