There's nothing I want less than multi-frame generation. I guess some people want to feel like they're getting their money's worth from their 240 Hz monitors.
It's a great option to have. Once you reach the 2-7ms frame time territory, you're approaching the CPU bottleneck for many game engines even on the fastest hardware. For newer titles like GTA VI, framegen might be the only reliable path to 120+ FPS without pinning all of your cores.
Framegen is also a good fit for low-end hardware like the Steam Deck, which can hit 30 or 45 FPS in stuff like Elden Ring but falls far short of the OLED model's 90 Hz panel. For a handheld, trading a bit of 720p visual clarity for locked 90 Hz gameplay is a solid deal if you can get it working.
Would you say a game is running at 90fps if, 45 times per second, two frames are produced, the second of which is a linear interpolation of the frame before and after it?
How about if the two frames are 100% identical?
Does either of these situations differ substantially from what is being discussed, wherein the render pipeline can only produce a new render 45 times per second?
> the second of which is a linear interpolation of the frame before and after it
If I understand what you describe, this is generating a frame "in the past", an average between 2 frames you already generated, so not very useful? If you already have frames #1 and #2, you want to guess frame #3, not generate frame #1.5.
The higher the "real frame" rate, the smaller the differences from one to the next. This makes it easier to predict those differences, and "hide" a bad prediction. On the other hand if you have 10FPS you have to "guess" 100ms worth of changes to the frame which is a lot to guess or hide if the algorithm gets it wrong.
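To put rough numbers on that (just a toy sketch of my own, not anything a real framegen algorithm does): even a dumb linear guess of where things will be next frame drifts further from the truth the bigger the gap it has to cover, because the error grows with the square of the frame time for anything not moving at constant speed.

    # Toy illustration: guess the next position of an accelerating object by
    # linear extrapolation from the previous two "real" frames, and see how
    # the guess error grows as the real frame rate drops.
    def true_pos(t):                 # pixels; constant acceleration
        return 100.0 * t + 300.0 * t * t

    def guess_error(fps):
        dt = 1.0 / fps
        p0, p1 = true_pos(0.0), true_pos(dt)   # two real frames
        guess = p1 + (p1 - p0)                  # linear extrapolation
        return abs(guess - true_pos(2 * dt))    # error at the guessed frame

    for fps in (10, 30, 45, 90):
        print(f"{fps:3d} real FPS -> guess off by {guess_error(fps):6.2f} px")

At 10 real FPS the guess is off by ~6 px in this toy case; at 90 it's under a tenth of a pixel, which is trivially hidden.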
I chose the two scenarios I did to illustrate that "frames per second" is clearly not meant to be measured in terms of times the display refreshed, but rather in terms of times content was actually rendered by the game engine.
In my opinion it is quite difficult to provide a definition of "fps" that somehow makes 45-fps-native-with-frame-doubling be counted as 90 but doesn't also make either of the ludicrous examples I presented be counted as 90.
I understand now, but I think any full frame that comes out of the GPU frame buffer is a frame. A real rendered frame or a generated frame using some algorithm. Even in the silly "I duplicate each frame" example, you are outputting that number of FPS. If you stand still in a game and nothing changes in the frame you're still counting all those practically identical frames.
A measure for "FPS effectiveness" sounds interesting. Like how much detail, change, and information you can discretely convey per second, relative to what the game is continuously generating.
A Nyquist of sorts. Are you just duplicating samples? Are you sampling a high frequency signal (fast motion in the game) at high enough rate (lots of discrete FPS)?
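Sketching that as a toy metric (my own made-up definition, not an established measurement):

    # Toy "effective FPS": count displayed frames that actually differ from
    # the frame before them, i.e. frames carrying new information.
    def effective_frames(displayed_frames):
        unique, prev = 0, None
        for frame in displayed_frames:
            if frame != prev:
                unique += 1
            prev = frame
        return unique

    native = list(range(45))                        # 45 unique renders per second
    doubled = [f for f in native for _ in (0, 1)]   # each shown twice -> "90 FPS"
    print(len(doubled), "displayed,", effective_frames(doubled), "effective")
    # -> 90 displayed, 45 effective

The duplicate-every-frame case scores 45 no matter how many times the display refreshes; a good interpolated frame would land somewhere between duplication and a real render.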
My understanding is that frame generation uses motion vectors to (slightly?) adjust the scene to produce a "highly plausible" next frame to drop in before the following "real" frame.
I've only seen videos, so from a somewhat unrealistic perspective, it seems like an acceptable compromise for low end hardware in particular.
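A very stripped-down sketch of that idea (mine, not how DLSS/FSR actually implement it - real frame generation also deals with depth, disocclusions, and blending): push each pixel some fraction of the way along its motion vector to guess the generated frame.

    import numpy as np

    def warp(frame, motion_x, motion_y, t=0.5):
        # Scatter each pixel t of the way along its motion vector.
        h, w = frame.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        nx = np.clip((xs + t * motion_x).astype(int), 0, w - 1)
        ny = np.clip((ys + t * motion_y).astype(int), 0, h - 1)
        out = np.zeros_like(frame)
        out[ny, nx] = frame[ys, xs]
        return out

    frame = np.random.rand(90, 160, 3)    # stand-in for a rendered 160x90 frame
    mx = np.full((90, 160), 4.0)          # whole scene moving 4 px to the right
    my = np.zeros((90, 160))
    generated = warp(frame, mx, my)       # plausible frame to slot in between renders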
My comment isn't denigrating frame generation, which can be useful.
It's pointing out the absurdity of referring to "45fps plus 1-for-1 frame generation" as if it were in any sense "90fps". It's not, and you aren't hitting a 90Hz refresh rate target with it any more than you were without it. In point of fact, it lowers real FPS because it consumes resources that would have otherwise been available for the render pipeline.
I wish reviewers in particular would stop saying e.g. "120fps with DLSS FG enabled" and instead call out the original render rate. It makes the discourse very confusing.
120 Hz is around the point where I'd start to consider frame generation in the first place, assuming everything else in the system is optimized for minimal latency.
At 100 Hz or less, I've yet to experience frame generation in any form that doesn't result in unacceptably floaty input relative to the same system with framegen disabled.
If you have a high frame rate to start with it’s pretty nice and feels smoother. But a low frame rate turned into a high one looks good but feels laggy.
So arguably you never need frame gen for a game, since it only really works when it’s already pretty nice.
You will never ever get decent 1% lows in most titles; the software stack is architecturally fucked in the popular engines and can't do it. You would need a CPU that's literally 100x faster than today's top models for it to be able to compile shaders on demand within a single frame without hitching. (Or maybe it's more accurate to say there's a massive gulf between what the hardware/drivers need - compiled pipeline state objects built and known ahead of time - and what game engines actually do - building pipelines on the fly, on demand, surfacing new permutations frame by frame.)
This requires knowing what to compile, which these engines don't really do, because the necessary data is pooped out by arbitrary game logic / scripts. That's why precompiling shaders in e.g. UE5 basically relies on running the engine through a pre-recorded gameplay loop and then making a list of the shaders/PSOs used; those are then pre-compiled. Any shader not used in that loop will cause stutter. A newer UE5 technique is to have heuristics which try to guess which PSOs might be needed ahead of time.
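A toy model of why that breaks down (hypothetical names, just to illustrate the cache-miss problem; this is not UE5's actual code):

    import time

    class PsoCache:
        # Keys are shader/state permutations. Anything not precompiled from the
        # recorded gameplay pass must be compiled mid-frame - that's the hitch.
        def __init__(self, recorded_permutations):
            self.compiled = {k: self._compile(k) for k in recorded_permutations}

        def _compile(self, key):
            time.sleep(0.05)              # pretend compilation costs ~50 ms
            return f"pipeline<{key}>"

        def get(self, key):
            if key not in self.compiled:  # permutation never seen in the recording
                self.compiled[key] = self._compile(key)   # ~50 ms stall, mid-frame
            return self.compiled[key]

    cache = PsoCache({("rock_shader", "opaque"), ("water_shader", "alpha")})
    cache.get(("rock_shader", "opaque"))   # precompiled, returns instantly
    cache.get(("spell_vfx", "additive"))   # first time this effect fires: stutter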
If you read their proposed solutions, it's quite clear they only have patchy workarounds, and the inability to actually pre-compile the needed PSOs and avoid shader and traversal stutter is architectural. It should be noted that these engines are also stuttering on console, but it's not as noticeable since performance is generally much lower anyway.
If you're on Intel integrated graphics, it's a free potential upgrade that makes use of existing silicon, and you don't have to turn it on. I don't get the hate. Just don't turn it on if you don't want it.
I get that people want more real frames rather than more "fake" frames, but in that case you wouldn't be buying integrated graphics, or if you did end up with an iGPU, you'd be aware of the limits and be happy for any improvements arriving via software.
It's like people let their hate of the AI and LLM bubble blind them, and their brains can't compartmentalize good from bad news anymore.
> It's like people let their hate of the AI and LLM bubble blind them, and their brains can't compartmentalize good from bad news anymore.
DLSS is also AI and people like it.
People don't like framegen because the manufacturers are not being honest about it and are using it for deceptive hype marketing. Anyone with a brain knows that it introduces latency and is only useful if you're already at 40+ FPS, and we also know that companies will use it to pad benchmarks. NVIDIA themselves said that the 5070 had 4090 performance because it supports framegen.
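The latency side is basic arithmetic (rough numbers, ignoring the cost of generating the frame itself): interpolation-style framegen has to hold the newest real frame back by roughly one native frame interval before it can show the in-between frame.

    # Rough added latency for interpolation-style frame generation,
    # assuming roughly one native frame interval of extra hold time.
    for native_fps in (30, 40, 60, 100):
        print(f"{native_fps:3d} native FPS -> ~{1000.0 / native_fps:.0f} ms added")

That's ~33 ms on top of an already-sluggish 30 FPS baseline, versus ~10 ms at 100 FPS, which is why it only feels acceptable when the base frame rate is already decent.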
My concern is that it will make developers even lazier about optimising their code. What one hand giveth the other takes away. When has any advancement in hardware not led to the same or worse software performance a few years later? There surely must be a name for this paradox. This will not result in you getting 1000fps. You will end up with the same "acceptable" refresh rates with worse rendering through novel hacks.