
Plugging a project of mine: I've been working on a similar idea for the era of LLMs: https://butterfi.sh.

It's much more bare-bones than Fig but perhaps useful if you're looking for an alternative! Send me feedback!



> Within Butterfish Shell you can send a ChatGPT prompt by just starting a command with a capital letter, for example:

This is a dangerous assumption. Not all commands are lowercase. Interaction with an external service should be a deliberate, discrete action on the user's part.
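To make the objection concrete, here is a minimal sketch of the capital-letter heuristic (illustrative only, not Butterfish's actual code), along with real commands it would misclassify:

```python
def is_prompt(command: str) -> bool:
    """Capital-letter heuristic: treat the line as a ChatGPT prompt
    if its first character is uppercase."""
    return command[:1].isupper()

# False positives: real commands that begin with a capital letter
# and would be sent to the LLM instead of the shell.
for cmd in ["R --version", "Xvfb :1", "VBoxManage list vms"]:
    assert is_prompt(cmd)
```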


Agreed, nothing wrong with something like an `llm` prefix.
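A sketch of what explicit-prefix dispatch could look like (the `llm` prefix and function names are illustrative, not Butterfish's actual API): only lines that start with the prefix go to the model; everything else runs as an ordinary command.

```python
def dispatch(line: str) -> str:
    """Route a line deliberately: `llm <prompt>` goes to the model,
    anything else is executed as a normal shell command."""
    if line.startswith("llm "):
        return f"send to model: {line[4:]}"
    return f"run in shell: {line}"

print(dispatch("llm explain tar -xzf"))  # send to model: explain tar -xzf
print(dispatch("Rscript analyze.R"))     # run in shell: Rscript analyze.R
```

The key property is that the user opts in per command, so an uppercase command name can never be silently shipped to an external service.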


I like that a lot! It would be awesome if the client running in goal mode could hit a search-engine API and do some crawling. Imagine pulling the info straight from up-to-date GitHub issues or the AWS docs.


Is this in any way related to the fish shell or is this just a very unfortunate name?


Just curious, do you have any intent on adding local model support?


I've experimented with it. The reason I haven't added it yet is that I want deployment to be seamless, and it's not trivial to ship a binary that efficiently supports both Metal and CUDA (without extra fuss or configuration) and downloads the models gracefully. This is of course possible, but it's still hard, and it's not clear it's the right place to spend energy. I'm curious how you think about it: is your primary desire to work offline, or to avoid sending data to OpenAI? Or both?


The latter, mostly. It's also free, uncensored, and can never disappear out from under me.

FWIW, from my understanding llama.cpp is pretty easy to integrate and reasonably fast for being API-agnostic. Ollama embeds it, for example. No pressure, just pointing it out :)
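If the shell delegated local inference to a running Ollama daemon, the integration is mostly an HTTP call. A minimal sketch (the endpoint and the `model`/`prompt`/`stream` fields are Ollama's documented `/api/generate` API; the wiring into any particular shell is hypothetical):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    # Request body for Ollama's /api/generate endpoint;
    # stream=False asks for a single JSON response.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running Ollama daemon, e.g.:
# generate("llama3", "why is the sky blue?")
```

This sidesteps the Metal/CUDA packaging problem entirely: the daemon handles model downloads and hardware acceleration, and the shell stays a thin client.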



