That worse-is-better is self-reinforcing and that it's the only stable strategy in an environment with less-than-perfect cooperation (i.e. it's the only Nash equilibrium) may both be true at the same time. In fact, if the latter is true then the former is almost certainly true.
The real question, then, is whether doing "the right thing" is a stable and winning strategy at all, i.e. viable and affordable. As you yourself suspect, the answer may well be no. Not only because it takes a few tries to figure out the right foundations, but also because which foundation is right is likely to shift over time as conditions change (e.g. as hardware architectures change, as programming practices -- such as the use of AI assistants -- change, etc.).