Hacker News

I would say that's exactly my use case for LLMs. Breakages like these should be fixable by automation that can reason about the documentation and spit out code fixes.

Of course, I'm assuming the documentation is updated before new changes go live, which might be too much to ask :)



It's an interesting engineering problem. I can't imagine LLMs as they currently are working directly on the whole codebase without breaking it just as often as the APIs do. But perhaps you could have one maintain connectors/interfaces for each individual API, such that it can get one very wrong without ruining the whole program.
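The isolation idea above could look something like this: the rest of the program depends only on a narrow interface, and the LLM is only ever allowed to regenerate one concrete connector behind it. This is just a sketch; the `Connector` interface, `ExampleApiConnector`, and its URL are all hypothetical names invented for illustration.

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """Narrow interface the rest of the program depends on.
    The LLM only regenerates a single Connector subclass at a time,
    so a bad rewrite is contained to one API integration."""

    @abstractmethod
    def fetch_user(self, user_id: str) -> dict:
        ...

class ExampleApiConnector(Connector):
    # Hypothetical base URL; a real connector would point at the
    # third-party API whose endpoints keep shifting.
    BASE_URL = "https://api.example.com/v2"

    def fetch_user(self, user_id: str) -> dict:
        # A real implementation would issue an HTTP request here;
        # stubbed out so the sketch stays self-contained.
        return {"id": user_id, "source": self.BASE_URL}
```

If the model mangles `ExampleApiConnector`, only that integration's tests fail; callers coded against `Connector` are untouched.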

You could even have its success depend on a test suite, so that it iterates until the tests pass.
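That iterate-until-green loop can be sketched in a few lines. Everything here is hypothetical: `run_tests`, `generate_fix` (the LLM call), and `apply_fix` are placeholder callables standing in for whatever test runner and model API you actually use.

```python
def repair_loop(run_tests, generate_fix, apply_fix, max_attempts=5):
    """Retry an LLM repair until the connector's tests pass.

    run_tests()          -> (passed: bool, log: str), e.g. a pytest run
    generate_fix(log)    -> candidate connector source (the LLM call)
    apply_fix(source)    -> writes the candidate into place

    All three are placeholders; the loop itself is the whole idea.
    """
    for _ in range(max_attempts):
        passed, log = run_tests()
        if passed:
            return True  # tests green: keep the current connector
        # Feed the failure output back to the model and try again.
        apply_fix(generate_fix(log))
    return False  # give up after max_attempts; flag for a human
```

Capping the attempts matters: without `max_attempts` a model that never converges would loop (and bill) forever.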


For an “API list” that only needs its tests kept green, something like a shifting endpoint should be fixable with tests + an LLM.

So that’s the idea I proposed: a single dev with automation and an LLM should be able to maintain the “API list”, but maintaining arbitrary code that depends on those APIs is, I expect, beyond current LLMs.
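A "shifted endpoint" is tractable precisely because a contract test pins it down to one failing assertion. A minimal sketch, with an invented `ENDPOINT` constant and `build_url` helper: when the upstream API moves, say, `/v1/users` to `/v2/users`, this test fails with a precise diff that is exactly the signal the LLM loop repairs against.

```python
# Hypothetical path constant -- the single line a "shifted endpoint"
# change would force the LLM to update.
ENDPOINT = "/v1/users"

def build_url(base: str, user_id: str) -> str:
    """Resolve the full URL for a user lookup against ENDPOINT."""
    return f"{base}{ENDPOINT}/{user_id}"

def test_user_url_resolves():
    # A real suite would hit the API (or a recorded fixture) and start
    # returning 404s once the endpoint shifts; here we just pin the URL.
    assert build_url("https://api.example.com", "42") == \
        "https://api.example.com/v1/users/42"
```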



