Gaffer suggests doing linear interpolation between the previous state and the current state for rendering, where the current state is one full fixed timestep ahead of the previous state. But why not just plug in the remaining dt into the normal simulator, render the scene, and then throw away the new state until a full step has elapsed?
A real-world example using Windows' QueryPerformanceCounter:
```cpp
QueryPerformanceCounter(&after);
double dt = (double)(after.QuadPart - before.QuadPart) / (double)freq.QuadPart;
before = after;

const double step = 1.0 / 60.0;
while (dt > step) {
    Simulate(step);
    dt -= step;
}
Render(dt); // dt now holds the leftover fraction of a step
```
I assume it's just a performance thing: instead of running the full simulation every rendered frame, you do a cheaper linear interpolation toward a state that you only have to compute once.
This isn't "just a performance thing". Running the integration step again at the end of the update loop would violate exactly what the article is advocating for: by passing a dt smaller than the fixed time step into the integrator, you invite problems with inconsistent physics simulation. Linear interpolation between the two most recent fixed-step states avoids this, because the integration step is always performed with the same fixed time step, and the leftover frame time is consumed by the interpolation instead.

> But why not just plug in the remaining dt into the normal simulator, render the scene, and then throw away the new state until a full step has elapsed?

This sentence is unclear to me. If by "remaining dt" the OP means the fractional dt left over after consuming full fixed time steps, then plugging that fractional dt into the integration step is exactly the problem described above.
On the other hand, maybe the OP means to consume the total frame time in fixed time steps and then throw away the remaining fractional dt without simulating or interpolating it. If that is the intent, the result could be stuttering, because the physics and the rendering are out of step: