I was wondering the same thing and have been thinking it over.
When AMD bought ATI they viewed the GPU as a potential differentiator on CPUs. They've invested a lot of effort into CPU-GPU fusion with their APU products. That has the potential to start paying off in a big way sometime - especially if they figure out how to fuse a high-end GPU and CPU and just offer a GPGPU chip to everyone. I can see why AMD might put their bets here.
But the trade-off was that Nvidia put a lot of effort into doing linear algebra quickly and easily on their GPUs, and AMD doesn't have a response to that - especially since their strategy was probably to do BLAS on an APU. But it turns out there were a lot of benefits to fast BLAS, and Nvidia is making all the money from that.
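To give a sense of what "quick and easy linear algebra on the GPU" means in practice, here's a minimal sketch of a single-precision matrix multiply through cuBLAS. The matrix size, the uninitialized device buffers, and the lack of error checking are illustrative simplifications, not how you'd write real code:

```c
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void) {
    const int n = 1024;                      /* illustrative matrix dimension */
    const float alpha = 1.0f, beta = 0.0f;
    size_t bytes = (size_t)n * n * sizeof(float);

    /* Allocate the matrices directly in GPU memory. A real program would
       cudaMemcpy actual data in and check every return code. */
    float *A, *B, *C;
    cudaMalloc((void **)&A, bytes);
    cudaMalloc((void **)&B, bytes);
    cudaMalloc((void **)&C, bytes);

    cublasHandle_t handle;
    cublasCreate(&handle);

    /* C = alpha * A * B + beta * C, computed entirely on the GPU */
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n,
                &alpha, A, n,
                B, n,
                &beta, C, n);

    cudaDeviceSynchronize();

    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

That one cublasSgemm call dispatches a kernel tuned for whatever GeForce or datacenter card you happen to have, which is roughly the value proposition AMD never matched on consumer hardware.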
In short, Nvidia solved a simpler problem that turned out to be really valuable; it would take AMD a long time to organise to do the same thing, and it may not fit their strategy. Hence ROCm sucks and I'm not part of the machine learning revolution. :(
AMD's graphics R&D is driven by consoles - literally. Microsoft and Sony pay huge sums in early-stage R&D and they get to set the direction of the R&D as a result. RDNA was run explicitly from the start as a semi-custom project (much to the chagrin of Raja Koduri, as this was not his fief). So was RDNA2.
As such, if the console market doesn't want it, it doesn't get built. AMD is not willing to put its own money into graphics research.
AMD does not really have the marketshare to get the PC market to adopt AMD-backed features that use accelerators that aren't present in the consoles. If AMD takes 20% of the market in a given year, and the PC market turns over every 6 years, a new accelerator would be present in only 0-3% of the PC installed base in its first years (AMD's ~20% share of the roughly one-sixth of the market that gets replaced each year) and 0% of the console market. So even if RDNA3 had a magic "DLSS-level" improvement that relied on some unique new accelerator they'd added in RDNA3, it'd be an uphill fight to get it adopted. Nor is AMD going to spend the money to just implement a bunch of software features anyway - they only even invested in FSR2 after it became a competitive disadvantage for them not to have something.
They won't even go the 16-series vs 20-series route of giving consoles a basic architecture (with size-reduced implementations of features) and then full-size/higher-performance implementations on PC dGPUs, with more full-fledged accelerators bolted on. For example they could have done this with the ML accelerators on RDNA3 - the basic RDNA3 has a slower (microcoded?) ML instruction, and they could have thrown a more full-fledged implementation into the dGPU parts, where there's more die area to spare.
But it's just not worth it to them to spend on any of that - it's a lot of R&D for the fairly narrow slice of the market that would be affected.
So yeah, I mean, she's just not that into you. Consoles set the direction of their graphics R&D. They'll tap a few other lucrative markets like HPC, but they're not going to make big spends without obvious ROI, and AMD doesn't really have the PC-gaming marketshare to care about dGPUs as an independent market worthy of R&D.
People ask "why does Intel need anything except iGPUs" and for AMD the question is "why do they need anything except consoles". The rest is interesting in a "someday" sense and potentially strategically important, but day-to-day it's pretty obvious which verticals are bringing in the bacon.
And for NVIDIA that's both dGPUs and datacenter - they still make a lot of money from consumer gaming, and it gives them a foothold for development to progress from curiosity to research project to business deployment. AI accelerators and CUDA being on consumer hardware has been a huge boon to R&D (contrast ROCm/HIP being essentially unusable outside enterprise hardware), and the commercial market has found uses for RT cores as well. That's because NVIDIA had the realization, a lot of years ago, that they are in fact a software company: one that writes the software that sells the hardware.
People mocked Jensen for that for a lot of years, but he was completely right and that's why he's succeeded while AMD has spun their wheels on GPGPU for 15 years now.
And the problem for AMD is, consoles won't pay for a 5% more expensive chip based on blue-sky prospects of something maybe being useful in 3+ years. Or at least not unless it gets an internal backer - the way DirectStorage/RDMA obviously got adopted despite an extremely slow burn on actual usage.
Optical Flow Accelerator is probably the most recent iteration of this - GCN actually had this capability as the "Fluid Motion" accelerator, but consoles wanted it taken back out because it was wasted space. Now it's the underpinning of DLSS3 and likely future work in DLSS4 - the principles of "variable temporal+spatial rate shading" AMD outlines in their recent GTC presentation seem like an obvious "DLSS2 for DLSS3". I have spoken about this idea before, and I think that is where NVIDIA is going with DLSS4, but AMD has to do it without the hardware optical flow engine (except, ironically, on older GCN cards).
What's the deal with that anyway? A lot of people want a real alternative to Nvidia, and AMD just... doesn't care?
I guess we'll have to wait for Intel to release something like CUDA, and then AMD will finally do something about the GPGPU demand.