Hacker News — natoucs's comments

It doesn't let you keep access to the native frontends


That makes sense. But you can't access the native frontend from inside a webapp


Why do you need the native ChatGPT frontend specifically?

There are apps that provide a similar frontend and use API keys from ChatGPT, Gemini, and others to provide all models under one web interface.


A few reasons: to keep access to each provider's frontend features, to keep the chats I already have in the individual frontend apps, to not have to trust a 3rd-party provider, and to not have to update the app each time a new model comes out


I would if I found a way to keep access to each LLM provider's frontend while staying on the web


Very nice! Do you still get access to the original LLM providers' frontends, and do you have to insert API keys?


You get access to a UI similar to ChatGPT's, and you connect the models you want to use by providing an API key.

Once configured, you can choose between the models of all connected providers from a dropdown in the chat.


iframe - you got it - it embeds the web apps
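
The embedding approach could be sketched roughly like this (the provider URLs are illustrative, and note that many providers send X-Frame-Options/CSP headers that forbid plain iframing, so a real app may need desktop webviews instead):

```python
# Minimal sketch of embedding each provider's web frontend side by side.
# URLs are illustrative; real providers often block framing via
# X-Frame-Options / CSP frame-ancestors, hence webview-based wrappers.

PROVIDERS = {
    "ChatGPT": "https://chatgpt.com",
    "Gemini": "https://gemini.google.com",
}

def build_embed_page(providers: dict) -> str:
    """Return an HTML page with one iframe per provider frontend."""
    panes = "\n".join(
        f'<iframe title="{name}" src="{url}" style="width:50%;height:100vh"></iframe>'
        for name, url in providers.items()
    )
    return f"<!DOCTYPE html>\n<html><body>\n{panes}\n</body></html>"

print(build_embed_page(PROVIDERS))
```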


A better feature would be to select one of the responses as the best one, and use it as the context for all LLMs, as if it had been sent by each one

But this would require API access instead of embedding web apps


Good idea! And yes, that's the issue


I think some people here are confused because they imagine financial/customer synthetic data, where the pattern to simulate is unclear, instead of computer vision, where the pattern to replicate is obvious because we see it before our eyes. This company seems to be focused on specific computer-vision synthetic-data use cases, so it makes sense imo.

