The view update part was the most confusing, but I tried to understand it this way:
Imagine you had a magical game engine that rendered the entire world perfectly accurately for every point in space and direction a viewer could possibly be looking at. All you had to do was say:
RenderGameFrameForEveryPossiblePoint();
... // Who knows how much time
viewerPosition = QuicklyGetViewerPosition();
TellTheGPUToShowWorldAccordingTo(viewerPosition);
Well then, you could postpone calling those last two functions until the absolute last minute. This way you have very little or no movement of the viewer's head between when you read their position and when you show the view for that position.
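To convince myself the deferral actually matters, here's a toy model of that "postpone until the last minute" idea. All the numbers (head speed, frame time) are made up for illustration:

```python
# Toy model of late pose sampling: the later you read the viewer's
# position before the frame is shown, the smaller the error between the
# displayed view and where the head actually is.

HEAD_SPEED = 200.0   # degrees per second (a fast head turn)
FRAME_TIME = 0.016   # ~16 ms frame at 60 Hz

def head_angle(t):
    """True head yaw (degrees) at time t, for a constant-speed turn."""
    return HEAD_SPEED * t

def display_error(sample_time, display_time):
    """Angular error if we render using the pose sampled at sample_time
    but the frame is actually shown at display_time."""
    return abs(head_angle(display_time) - head_angle(sample_time))

# Sampling the pose at the start of the frame vs. just before scan-out:
early_error = display_error(0.0, FRAME_TIME)                # pose a full frame old
late_error = display_error(FRAME_TIME - 0.001, FRAME_TIME)  # pose 1 ms old

print(early_error, late_error)
```

With these made-up numbers the late sample is off by 0.2 degrees instead of 3.2, which is the whole point of reading the position as late as possible.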
But naturally, RenderGameFrameForEveryPossiblePoint() is slightly out of bounds of current technology. A lot of what Carmack was discussing, as I understood it, was simulating this effect as closely as possible: render the frame normally for a predicted position, then at the last moment have the GPU quickly re-project that already-rendered image for the viewer's latest position.
That final bit is just a perspective transformation of a bunch of rendering that was already computed and given to the GPU. But if the viewer moves too quickly, you can easily move somewhere in the world that wasn't actually rendered, or your perspective could shift such that an object that was once occluded is now visible, or vice versa. It seems that's where a lot of the complexity is.
The last thing he talked about, time warping, seems to be a similar thing only it's scanline by scanline. So in effect you're saying "hey, video card and display, I know you're going to force me to draw a whole frame at once, so I'm going to give you a frame where each scanline gets rendered a little bit into the future according to where the player is moving."
The effect on a monitor would probably look like a forward shear, but on an HMD (if done correctly), it would correct for the natural shear caused by having to "freeze frame" the viewer's perspective for one entire frame instead of just a scanline.
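A crude one-dimensional sketch of that re-projection idea, treating the rendered frame as a strip of pixels spanning a known horizontal field of view (everything here is invented for illustration; this only handles the simple rotational case, not translation or the occlusion problems mentioned above):

```python
# "Time warp" in one dimension: re-display an already-rendered strip of
# pixels, shifted to account for head rotation that happened after
# rendering (small-angle approximation).

FOV_DEGREES = 90.0

def timewarp_shift(pixels, yaw_delta_degrees):
    """Shift the image by the pixel count corresponding to the head
    rotation since the frame was rendered."""
    shift = round(yaw_delta_degrees / FOV_DEGREES * len(pixels))
    # Pixels shifted in from the edge were never rendered -- mark them.
    if shift >= 0:
        return pixels[shift:] + ["?"] * shift
    return ["?"] * (-shift) + pixels[:shift]

frame = list("ABCDEFGHI")           # tiny "framebuffer" rendered for the old pose
warped = timewarp_shift(frame, 20)  # head turned 20 degrees after rendering
print("".join(warped))
```

The "?" cells are exactly the "moved somewhere that wasn't actually rendered" problem: the warp can only slide and shear what it already has, so fast motion exposes unrendered edges.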
Some of this may be woefully incorrect, but it was how I explained it to myself. Please correct anything that's wrong or overly simplified.
That kind of time warping has already been implemented in the Lagless MAME project. It compensates for input and display lag by always rendering a few frames into the future. It's commonly used for games where frame-accurate timing is critical, notably 2D scrolling shmups and fighting games.
Lagless MAME renders into the future assuming that the state of the input controls remains constant over that future time, and saves the emulation state every frame. When a button is pressed or released or whatever, Lagless MAME rewinds to the saved state for that frame and quickly re-emulates from that point forward. So the result is to send your input back in time past the lag, to the moment in the emulation exactly synchronized to when you saw it on the screen. The experience isn't perfect -- your spaceship would jump a few pixels then move smoothly -- but by and large it's far superior to playing with the actual lag.
This technique could be used for lag compensation in almost any environment. The limiting factor is the cost of re-computing several frames of game state on every input action. Of course, as Carmack says, actually eliminating lag is far preferable to masking it with such techniques.
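The rewind-and-re-emulate loop described above can be sketched in a few lines. Here the whole "game" is a ship's x-position and the input is a velocity; the names and structure are invented for illustration, not taken from Lagless MAME:

```python
# Rollback sketch: predict frames assuming inputs stay constant, save the
# state every frame, then rewind and re-simulate when a late input arrives.

def step(state, inp):
    """Advance the game one frame: move the ship by the input velocity."""
    return {"x": state["x"] + inp}

def simulate(num_frames, inputs):
    """Run from frame 0, saving the state at the start of every frame."""
    saved = [{"x": 0}]
    for f in range(num_frames):
        saved.append(step(saved[-1], inputs[f]))
    return saved

# Predict 5 frames assuming the stick stays neutral (input 0)...
predicted = simulate(5, [0, 0, 0, 0, 0])

# ...then a "move right" press (input 1) arrives, timestamped at frame 2.
# Rewind to the saved state for frame 2 and re-simulate from there.
inputs = [0, 0, 1, 1, 1]
saved = predicted[:3]  # saved states up through frame 2 are still valid
for f in range(2, 5):
    saved.append(step(saved[-1], inputs[f]))

print(saved[-1])  # same end state as if the input had been known all along
```

The re-simulated history matches a run where the input was known from the start; the only visible artifact is the correction jump at the moment of rollback, which is the "spaceship would jump a few pixels" effect described above.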
The lag compensation in Guitar Hero and Rock Band games works essentially this way too.