I wouldn't be surprised if this is largely (though not fully) true.
What the people who predicted that AI would change productivity drastically in just a year or two imho underestimate: just because a technology is capable of this in theory (and I think it is, even today), it doesn't mean we are willing or able to deploy it to its full potential at scale. A few individuals and very few companies will do so or come close.
Most companies still run a lot of processes manually that could have been automated with 80s technology. Why should a change in technology suddenly make everyone change their mindset, culture, and way of thinking? This may take one or two generations to fully play out. Maybe that's for the better: slower change is less destabilizing for society and individuals. I think of it as a defensive mechanism of the system to keep itself stable.
Daily tasks are then deterministic checklists inside the repo, e.g. “Create 3 drafts for next Tuesday with images in /assets/images and entries added to 2025-12-calendar.md under campaign X”.
This is nice, but the fact that it goes into a vendor-specific .codex/ folder is a bit of a drag.
I hope such things will be standardized across vendors. Now that they founded the Agentic AI Foundation (AAIF) and contributed AGENTS.md to it, I would hope that skills become a logical extension of that.
This is interesting. Also "Discover tools on-demand". Are there any stats or estimates on how many tools an LLM / agent could handle with this approach vs. loading them all into context as MCP tools?
(shameless plug: I'm building a cloud-based gateway where the set of servers exposed to an MCP client can be controlled using "profiles": https://docs.gatana.ai/profiles/)
Let me take the other position in this comment: I also see that the way MCP works really helped its quick adoption. Because you could just build a local MCP server as a proxy around existing APIs and functionality, there was no need to touch anything existing. And MCP often starts as an "MCP server": basically a software artifact that you just configure and run, often locally. I don't think that plain REST, or extending existing REST APIs, would have delivered this part of the MCP success story.
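The proxy idea can be sketched in a few lines. This is not the real MCP SDK, just a minimal stdlib illustration of the pattern: an existing API function is wrapped as a named tool and exposed through a single JSON-RPC-style dispatch function, so nothing upstream has to change. All names here (get_weather, the request shape) are illustrative.

```python
import json

def get_weather(city: str) -> dict:
    # Stand-in for a call to an existing REST API; in a real proxy
    # this would be an HTTP request to the upstream service.
    return {"city": city, "temp_c": 21}

# Registry mapping tool names to the wrapped existing functions.
TOOLS = {"get_weather": get_weather}

def handle(request_json: str) -> str:
    """Dispatch a tools/call-style request to the wrapped function."""
    req = json.loads(request_json)
    tool = TOOLS[req["tool"]]
    result = tool(**req.get("arguments", {}))
    return json.dumps({"result": result})

print(handle('{"tool": "get_weather", "arguments": {"city": "Berlin"}}'))
```

The point is that the proxy is purely additive: the wrapped function (and the API behind it) stays untouched.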
But now that many companies focus on MCP as a remote API, the question obviously comes up: why not just use standard API protocols for that and optimize the metadata for AI consumption?
What bothers me about MCP is that there is not even a standard way to describe an entire MCP server in a single JSON file, the way OpenAPI does for REST. This makes exchanging metadata and building catalogs unnecessarily ad hoc.
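To make the gap concrete, here is a hypothetical sketch of what such a single-file descriptor could look like. The tool entry mirrors the shape MCP already returns from a tools/list call (name, description, inputSchema); the top-level wrapper fields (name, version, transport) are invented for illustration and are exactly the part that has no standard today.

```json
{
  "name": "orders-mcp",
  "version": "1.0.0",
  "transport": "streamable-http",
  "tools": [
    {
      "name": "get_order",
      "description": "Fetch a single order by id",
      "inputSchema": {
        "type": "object",
        "properties": { "id": { "type": "string" } },
        "required": ["id"]
      }
    }
  ]
}
```

With a standard file like this, a catalog could index servers without connecting to each one first.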
The article also mentions that OpenAPI is too verbose. I totally see that, but you could address it by stripping an OpenAPI file down to the basics you need for LLM use, maybe even using the Overlay spec. Or you could convert your OpenAPI files to the https://www.utcp.io format that pylotlight mentioned.
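The stripping step is mechanical. A minimal sketch, assuming you only want enough for tool selection (operation id, method, path, summary, parameter names); the field names follow OpenAPI 3.x, but the inline spec and what counts as "the basics" are illustrative choices:

```python
def strip_openapi(spec: dict) -> dict:
    """Reduce an OpenAPI 3.x spec to a compact tool list for LLM use."""
    tools = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            tools.append({
                "name": op.get("operationId", f"{method} {path}"),
                "method": method.upper(),
                "path": path,
                "summary": op.get("summary", ""),
                "params": [p["name"] for p in op.get("parameters", [])],
            })
    return {"api": spec.get("info", {}).get("title", ""), "tools": tools}

# Tiny inline example spec (illustrative, not a real API).
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Orders API", "version": "1.0"},
    "paths": {
        "/orders/{id}": {
            "get": {
                "operationId": "getOrder",
                "summary": "Fetch a single order",
                "parameters": [{"name": "id", "in": "path", "required": True}],
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}
print(strip_openapi(spec))
```

Descriptions of response schemas, auth, and examples are dropped here deliberately; that is the verbosity the article complains about.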
Some "curation" of what's really relevant for AI consumption may be helpful anyway, as too many tools will also lead to problems in picking the right ones.
Thanks for posting the Reddit comment; it nicely explains the line of thinking, and the current adoption of MCP seems to confirm it.
Still, I think creating an MCP API around existing APIs should be an option, not a necessity. Sure, you can do REST APIs really badly, and OpenAPI has a lot of issues describing them (for example, you can't even express the concept of references / relations within and across APIs!).
REST APIs also don't have to be generic CRUD; you could follow the DDD idea of having actions and services that are their own operations, potentially grouping calls together and carrying clear "business semantics" that can be better understood by machines (and humans!).
My feeling is that MCP tries to fix a few things we should consider fixing with APIs in general - so that at least good APIs can be used by LLMs without any indirection.