
> Are companies really just YOLOing and plugging LLMs into everything

Look we still can't get companies to bother with real security and now every marketing/sales department on the planet is selling C level members on "IT WILL LET YOU FIRE EVERYONE!"

If you gave the same sales treatment to sticking a fork in a light socket the global power grid would go down overnight.

"AI"/LLM's are the perfect shitstorm of just good enough to catch the business eye while being a massive issue for the actual technical side.



> Look we still can't get companies to bother with real security and now every marketing/sales department on the planet is selling C level members on "IT WILL LET YOU FIRE EVERYONE!"

Just recently one of our C-level people was in a discussion on LinkedIn about AI and asked: "How long until an AI can write full digital products?", probably meaning: how long until we can fire the whole IT/dev department. It was quite funny and sad at the same time to read.


The problem is that you cannot unteach it serving that shit. It's not like there is a file you can delete. "It's a model, that's what it has learned..."


If you are implementing RAG (which you should be, because training or fine-tuning models to teach them new knowledge is actually very ineffective), then you absolutely can unteach them things: simply remove those documents from the RAG corpus.
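To make the point concrete, here's a toy sketch of that "unteach" step. The `RagCorpus` class and its methods are made up for illustration; a real system would use an actual vector database, but the principle is the same: the model weights are untouched, you just delete the document and its index entry so retrieval can never surface it again.

```python
# Sketch of "unteaching" a RAG system: nothing happens to the model itself;
# the offending document is removed from both the raw-text store and the
# embedding index, so future retrievals cannot find it.

class RagCorpus:
    def __init__(self) -> None:
        self.docs: dict[str, str] = {}           # doc_id -> raw text
        self.index: dict[str, list[float]] = {}  # doc_id -> embedding vector

    def add(self, doc_id: str, text: str, embedding: list[float]) -> None:
        self.docs[doc_id] = text
        self.index[doc_id] = embedding

    def unteach(self, doc_id: str) -> None:
        """Delete a document from both stores; retrieval can no longer see it."""
        self.docs.pop(doc_id, None)
        self.index.pop(doc_id, None)

corpus = RagCorpus()
corpus.add("bad-doc", "Outdated or wrong information.", [0.1, 0.2])
corpus.unteach("bad-doc")
print(len(corpus.docs))  # 0 -- the "knowledge" is gone
```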


I still don't understand the hype behind RAG. Yeah, it's a natural language interface into whatever database is being integrated, but is that actually worth the billions being spent here? I've heard they still hallucinate even when you use RAG techniques.


Being able to ask a question in human language and get back an answer is the single most useful thing that LLMs have to offer.

The obvious challenge here is "how do I ensure it can answer questions about this information that wasn't included in its training data?"

RAG is the best answer we have to that. Done well it can work great.

(Actually doing it well is surprisingly difficult - getting a basic implementation of RAG up and running is a couple of hours of hacking, making it production ready against whatever weird things people might throw at it can take months.)


> Being able to ask a question in human language and get back an answer is the single most useful thing that LLMs have to offer.

I’m gonna add:

- I think this thing can become a universal parser over time.


I recognize it's useful. I don't think it justifies the cost.


Of course, it doesn't. Most of those questions are better answered using SQL and those which are truly complex can't be answered by AI.


What cost? A few cents per question answered?


The billions spent on R&D, legal fees, and inference?


There's no global power grid. There are lots of local power grids.


There's also no mass marketing campaign for sticking forks in electrical sockets in case anyone was wondering.


Pedantically, yes, but it doesn't really matter to OP's real message: The problematic effect would be global in scope, as people everywhere would do stupid things to an arbitrary number of discrete grids or generation systems.



