I've seen this code several times.
long lastTime = System.nanoTime();
final double ticks = 60D;
double ns = 1000000000 / ticks;
double delta = 0;
The code above takes the system time and stores it in lastTime. The 60 ticks should equate to the number of updates per second.
while(running){
    long now = System.nanoTime();
    delta += (now - lastTime) / ns;
    lastTime = now;
    if(delta >= 1){
        tick();
        delta--;
    }
}
It takes now and subtracts lastTime, then divides the difference by ns (one second in nanoseconds divided by 60). Is there some guarantee that adding that quotient to delta will make delta reach 1 exactly 60 times per second? I can't understand why tick() will run around 60 times per second. From my calculation, every time the loop runs, delta only increases by about 0.0025.
Without knowing what's in tick(), I can't be sure, but I'd guess that it also uses 60D to attempt to sleep for about the correct amount of time. So: no, there isn't a guarantee, which is what this code is there to fix.
It's saying: "If tick() slept for less than a tick, don't do anything; if it slept for a tick or more, tick once."
Presumably, if ticking runs over enough (for example, against a hypothetical 60 ns tick interval, two consecutive iterations that each take 80 ns: each triggers a tick and carries its leftover fraction forward in delta), then eventually there will be an iteration with only a 40 ns delta, and everything evens out.
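Here's a minimal sketch of that carry-over. The 60 ns interval and the 80/80/40 ns loop durations are hypothetical numbers chosen for illustration, not real timings:

// Hypothetical sketch: simulate the delta carry-over with made-up
// per-iteration durations (80, 80, 40 ns) against a 60 ns tick interval.
public class DeltaCarryOver {
    public static void main(String[] args) {
        double ns = 60;                 // hypothetical nanoseconds per tick
        long[] elapsed = {80, 80, 40};  // made-up loop durations in ns
        double delta = 0;
        int tickCount = 0;
        for (long e : elapsed) {
            delta += e / ns;            // accumulate fractional ticks
            if (delta >= 1) {           // same guard as the original loop
                tickCount++;            // tick() would run here
                delta--;                // keep the leftover fraction
            }
            System.out.printf("elapsed=%d ns, delta=%.2f, ticks=%d%n",
                    e, delta, tickCount);
        }
        // 200 ns total is 3.33 tick intervals: exactly 3 ticks fire,
        // and the remaining 0.33 carries into the next iteration.
    }
}

The key is that delta-- (rather than delta = 0) preserves the excess, so over time the number of ticks matches the elapsed time.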
It's making sure the tick() method executes only about 60 times per second. The technique is called delta timing, and it's referenced all over the game-development community. Delta timing is used to make games run at the same rate on all platforms: things like physics and animation need to advance at the same rate regardless of the speed of the system. If you have a slow system and the game didn't include delta timing, the physics would run slower than if the game were running on a fast system.
How it works
Figure out how many times you want to update per second; in your case it is 60 times per second.
Find the resolution of the language's built-in time function. Your code uses Java's System.nanoTime(), which returns a long count of nanoseconds from some fixed but arbitrary origin, so it is only meaningful for measuring elapsed time.
There are 1000000000 nanoseconds in a second, so the target time between calls to tick() is 1000000000/60, which is approximately 16666666 nanoseconds. This means that after tick() has been called, the program should wait about 16666666 nanoseconds before calling it again.
What you must do is find the time between the current frame and the last frame in nanoseconds. Dividing that by the timestep (16666666) gives a decimal fraction of how much of the necessary time has passed. Add this fraction to the delta variable; when delta >= 1, at least 1/60th of a second has passed, and the program can now call tick().
Finally, you subtract 1 from the delta variable rather than setting it to 0, because the elapsed time may have been longer than one timestep, leaving delta above 1; the leftover has to be accounted for the next time you loop through and call tick() (see the worked arithmetic sketch after these steps).
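To put numbers on those steps, here is a short worked example; the 4166667 ns per-pass figure is made up for illustration, not measured from your loop:

// Worked example of the timestep arithmetic from the steps above.
public class TimestepMath {
    public static void main(String[] args) {
        final double ticks = 60D;
        double ns = 1000000000 / ticks;   // nanoseconds per tick
        System.out.println(ns);           // 1.6666666666666666E7, i.e. ~16666666 ns

        // If one pass through the loop took, say, 4166667 ns (a made-up
        // figure), delta would grow by this fraction of a tick per pass:
        double perPass = 4166667 / ns;
        System.out.println(perPass);      // ~0.25, so delta reaches 1 after ~4 passes
    }
}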
I've commented the code a bit to show you more clearly what is happening.
//Get the current system time in nanoseconds
long lastTime = System.nanoTime();
//Target number of ticks per second
//Stored as a double so the division below is floating-point,
//and final so it can't be changed
final double ticks = 60D;
//Nanoseconds per tick: 1 second (1000000000 ns) divided by the tick rate
double ns = 1000000000 / ticks;
double delta = 0;
while(running){
    //Update the time
    long now = System.nanoTime();
    //Accumulate elapsed time since the last pass, measured in ticks
    delta += (now - lastTime) / ns;
    //Update the last known time
    lastTime = now;
    //Only tick once a full tick interval has accumulated
    if(delta >= 1){
        //Go through one tick
        tick();
        //Keep the leftover fraction for the next pass
        delta--;
    }
}
Now I'm pretty sure that's what this does, but I can't say for certain without seeing what tick() does.
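For completeness, here's a self-contained, runnable version of the loop with a stub tick(). The per-second counter and the three-second stop condition are my additions for demonstration, not part of the original code:

// Runnable version of the loop above. tick() is a stub; the per-second
// counter and the stop condition are demo additions.
public class GameLoop {
    private static int tickCount = 0;

    private static void tick() {
        tickCount++;   // real game update logic would go here
    }

    public static void main(String[] args) {
        long lastTime = System.nanoTime();
        final double ticks = 60D;
        double ns = 1000000000 / ticks;   // nanoseconds per tick
        double delta = 0;
        long timer = System.currentTimeMillis();
        long start = timer;
        boolean running = true;

        while (running) {   // note: this busy-waits; a real game loop
                            // would render or sleep between passes
            long now = System.nanoTime();
            delta += (now - lastTime) / ns;
            lastTime = now;
            if (delta >= 1) {
                tick();
                delta--;
            }
            // Once per second, report how many ticks actually ran.
            if (System.currentTimeMillis() - timer >= 1000) {
                timer += 1000;
                System.out.println("ticks this second: " + tickCount);
                tickCount = 0;
            }
            if (System.currentTimeMillis() - start >= 3000) {
                running = false;   // stop after ~3 seconds for the demo
            }
        }
    }
}

Run it and each line should report 60 ticks (give or take one), no matter how fast the empty loop itself spins.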