I should also give a bit of background, because I find it funny that a surface-level API for Rust is portrayed as "solving" in three months the problem which C struggled with for years. (And by that I do not want to downplay Aria's impressive work; it is a very nice and clean interface.) The fundamental problem is that compiler writers built optimizations without a clear mathematical understanding of the rules. One could argue that this is the fault of the C standard for not being a mathematically precise specification, but that was also never really the goal of ISO C. The main issue is that compilers implemented rules that were not consistent even between different optimizer passes. WG14's main mission is to standardize existing practice; DR260 was a (failed) attempt to clarify some of this. I agree that this should have been done better, but the important thing to understand is that WG14, as a consensus-based standardization committee, is not really equipped to do this kind of work. The basic assumption is that compilers implement sound models, which can then later be standardized, only harmonizing differences between different compilers. But here we had to clean up the mess compiler vendors created. Not at all what WG14 was meant for.

To be able to start fixing compilers, one first had to find a common and consistent formulation of provenance. For C this also meant considering what all existing code needs and what optimizations different compilers implement, and then deciding which version to pick: which specific behavior is a useful optimization, which is a compiler bug, and which optimizations can be sacrificed to arrive at a consistent formulation. Only with this can one decide what is an optimizer bug and what is not. To the extent these bugs are still there, they affect Rust too, and to the extent they are fixed, Rust benefits. (Vice versa, C benefits from Rust's push towards safety, which also puts pressure on compilers to fix such bugs.)
But in any case, the API on top is not at all the most difficult part, and Aria's document is not precise enough to even differentiate between the subtle points of provenance.
I think you're correct that it's a big problem that the serious compilers don't actually have coherent internal behaviour, but I think that's a distinct problem which has been masked by the provenance problem in C and thus C++. It meant that real bugs in their software could be argued away as "works as intended" by pointing to DR260, rather than anybody needing to fix the compiler.
Once the GCC work is closer to finished, I expect we'll see the same there.
I disagree that you need to solve the C problem first to solve the compiler problem, and I think it was misguided to start there. You seem to have focused on the fact that exposure is even an option in Aria's implementation, but let me quote: "The goal of the Strict Provenance experiment is to determine whether it is possible to use Rust without expose_addr and from_exposed_addr". Setting PNVI-ae-udi as "the" provenance rule is your end goal for C, but there's a reason it's called the "Strict Provenance" experiment in Rust: the goal is something like what you call PNVI-plain.
APIs like map are key to that goal; Rust has them and N3005 does not.
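To make the map-style API concrete, here is a minimal sketch of pointer tagging with it; the helper name `tag_roundtrip` is mine, and this assumes a recent Rust toolchain where `pointer::map_addr` is stable:

```rust
// Sketch: pointer tagging via the strict-provenance API, with no
// integer-to-pointer cast (and hence no exposure) anywhere.

/// Set the lowest bit of a pointer's address as a tag, then clear it
/// again. `map_addr` rewrites the address while carrying the original
/// provenance along, which is what keeps the result dereferenceable.
fn tag_roundtrip(p: *const u64) -> *const u64 {
    // u64 is 8-byte aligned, so the low bits of the address are free.
    let tagged = p.map_addr(|a| a | 0b1);
    tagged.map_addr(|a| a & !0b1)
}

fn main() {
    let value: u64 = 42;
    let p = tag_roundtrip(&value);
    // The round-tripped pointer still has provenance for `value`.
    assert_eq!(unsafe { *p }, 42);
}
```

The point of the design is that the closure only ever sees a bare `usize`, so the provenance never has to survive a trip through integer-land.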
So like I said: rather than just being a "translation", I think the most that can be said is that you've got the same problem, albeit in a very different context, and your solutions are related in the way we'd expect when competent people attack similar problems.
We have no "end goal". PNVI-ae-udi is intended to capture the semantics of most existing C code. I mention terminology such as "exposure" etc. just because this makes it obvious that this work builds on C's provenance model (the terminology did not exist before, as far as I know). PNVI-plain is a stricter subset. You can already use it in C, and a lot of C code would work just fine with it. Note that some compiler people and formal-semantics people were pushing for PVI.
If you think that "map_addr" etc. is an important API, then I agree that this is an innovation on top of what we did. But I personally do not quite see the importance of this API. Yes, it allows some things within the scope of strict provenance that in C would now require PNVI-ae-udi. We envisioned future extensions that prevent exposure of pointers for certain operations, but this seems rather academic at this point in time. If you are not using hardware such as CHERI, it does not matter. On the other hand, PNVI-ae-udi makes most existing C code follow a precise provenance model, which I think is a huge step forward.
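For contrast, here is a minimal sketch of the exposure-based escape hatch that PNVI-ae-udi legitimizes; the helper name `int_roundtrip` is mine, and this assumes a recent stable Rust where the operations quoted earlier as `expose_addr` / `from_exposed_addr` are named `expose_provenance` / `with_exposed_provenance`:

```rust
// Sketch: round-tripping a pointer through a plain integer. This is
// exactly the pattern that strict provenance tries to avoid and that
// the exposure rule (PNVI-ae-udi in the C model) makes well-defined.

/// Escape a pointer's address as a `usize` and reconstruct a pointer
/// from it. `expose_provenance` marks the allocation as exposed, which
/// is what makes the later integer-to-pointer conversion legitimate.
fn int_roundtrip(p: *const u64) -> *const u64 {
    let addr: usize = p.expose_provenance();
    core::ptr::with_exposed_provenance(addr)
}

fn main() {
    let value: u64 = 7;
    let q = int_roundtrip(&value);
    // Defined only because the provenance was exposed above.
    assert_eq!(unsafe { *q }, 7);
}
```

The contrast with the map-style API is that here the compiler can no longer track which allocation the integer came from, which is precisely why the model needs an "exposed" notion at all.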