I feel so strongly about this topic that I recorded a podcast episode on it with my then business partner[1]. I think the only compelling reason to hold a synchronous standup is to align on the biggest problem to solve in the next workday.
A standup in this model goes something like this: what is the goal for the day? What support is needed to make it happen? And so on.
It's been 2.5 years since ChatGPT came out, and so many projects still don't allow for easy switching of OPENAI_BASE_URL or related parameters.
There are so many inference servers exposing an OpenAI-compatible API that any new project locked into OpenAI alone is a big red flag for me.
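To make the point concrete, here's a minimal sketch of the configurability being asked for: resolve the base URL from the environment instead of hard-coding OpenAI's endpoint. The official OpenAI SDK already honors an `OPENAI_BASE_URL` environment variable; the `resolve_base_url` helper below is illustrative, not from any particular project.

```python
# Sketch: read the API base URL from the environment so any
# OpenAI-compatible server (vLLM, llama.cpp, Ollama, ...) can be
# swapped in without code changes. resolve_base_url is a hypothetical
# helper; the OpenAI SDK itself honors OPENAI_BASE_URL the same way.
import os

def resolve_base_url(default: str = "https://api.openai.com/v1") -> str:
    # Fall back to OpenAI's hosted endpoint only when nothing is set.
    return os.environ.get("OPENAI_BASE_URL", default)

# Point the client at a local vLLM server instead of OpenAI:
os.environ["OPENAI_BASE_URL"] = "http://localhost:8000/v1"
print(resolve_base_url())  # http://localhost:8000/v1
```

With this in place, switching providers is a deployment-time decision rather than a code change.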
Thanks for the feedback! Totally hear you on the tight OpenAI coupling - we're aware and already working to make bring-your-own-model (BYOM) easier. Just to echo what Zecheng said earlier: broader model flexibility is definitely on the roadmap.
Appreciate you calling it out - it helps us stay honest about the gaps.
Yes, there is a roadmap to support more models. For now there is an in-progress PR to support Anthropic models: https://github.com/traceroot-ai/traceroot/pull/21 (contributed by some active open-source contributors). Feel free to let us know which (open-source) model or framework (vLLM, etc.) you want to use :)
Adding a model provider abstraction would significantly improve adoption, especially for organizations with specific LLM preferences or air-gapped environments that can't use OpenAI.
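A minimal sketch of what such an abstraction could look like, under my own assumptions (the names `ChatProvider`, `OpenAICompatible`, and `AirGappedLocal` are hypothetical, and the network/inference calls are stubbed): the application codes against one narrow interface, and providers become swappable backends.

```python
# Hypothetical provider abstraction: the app depends only on
# ChatProvider; concrete backends are interchangeable.
from dataclasses import dataclass
from typing import Protocol

class ChatProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

@dataclass
class OpenAICompatible:
    base_url: str  # OpenAI, vLLM, llama.cpp server, ...
    model: str
    def complete(self, prompt: str) -> str:
        # Real code would POST to f"{self.base_url}/chat/completions";
        # stubbed here so the sketch is self-contained.
        return f"[{self.model}@{self.base_url}] {prompt}"

@dataclass
class AirGappedLocal:
    model_path: str  # e.g. a local weights file, no network at all
    def complete(self, prompt: str) -> str:
        # Real code would run local inference; stubbed here.
        return f"[local:{self.model_path}] {prompt}"

def answer(provider: ChatProvider, question: str) -> str:
    # Application logic never mentions a specific vendor.
    return provider.complete(question)
```

The point is that the air-gapped case falls out for free once nothing above the interface knows which backend is in play.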
Yep, you're spot on - and we're hearing this loud and clear across the thread. Model abstraction is on the roadmap, and we're already working on making BYOM smoother.
I sort of fell out of love with programming for a while, but lately I've been building a toy HTML engine in Rust[1]. I hope to eventually build a full Rust browser.
[1]: https://podcasts.apple.com/us/podcast/better-stand-ups/id163... -- skip to about 10:00 to hear the above.