
I mean, it remains to be seen that the demand can't be satisfied by local AI.

Qwen + your laptop + 3 years is more interesting to me than offloading AI to some hyperscale datacenter. Yes, efficiency gains can work for both, but there's a certain level below which you may as well just run the app on your own silicon. AI might never meet the threshold for "apps on tap" if every user with an i7 and 32 GB of RAM is ably served locally.



There is a real possibility that local, free models keep trailing a few years behind the frontier. If that happens, then the math for building out all this capacity is that you need to make a profit within that window. Can they recoup a trillion dollars of data-center capex in 2-3 years?


Doesn't that require local hardware to trail a few years behind as well? I don't see consumer hardware being on the same level as a cluster of A100s for a very long time - mostly due to form factor. No one wants a laptop that's two inches thick and liquid cooled. :)


That’s true, but models have been getting more efficient over time as well.


I find the smaller Qwen models and waiting 30 seconds are actually quite tolerable.


You can't train new models on consumer hardware.


Right, so the consumer use of LLMs is really just covering for the commercial training of LLMs.

If training slowed down by two-thirds, would consumers be that much worse off?


$20/month for a timeshare on a high-end unit, electricity included, is really not a bad deal.

Buying hardware that covers 90% of my use pattern isn't going to pay for itself for 5 or 10 years. And the subscription has the added benefit that I can change my setup every month.

I strongly believe we're in a bubble, but even just buying stocks with the money seems a better investment in my situation.
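The payback math above can be sketched quickly. The numbers here are placeholders, not real prices, just to show the shape of the calculation:

```python
# Back-of-envelope: subscription vs. buying local hardware.
# All figures are hypothetical assumptions for illustration.
monthly_fee = 20          # assumed $/month for a hosted service
hardware_cost = 2000      # assumed one-off cost of a capable local box

years_to_break_even = hardware_cost / (monthly_fee * 12)
print(round(years_to_break_even, 1))  # ~8.3 years at these numbers
```

At an assumed $2,000 machine vs. $20/month, the hardware only breaks even after roughly eight years, and that ignores electricity and the option value of switching plans.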


I agree, except you probably don't do this for other apps. There's some threshold where, if it runs locally, you just run it locally. I have been having lots of fun with models that fit in 16 GB of RAM, and my next machine will have 32 GB just to be future-proof. Worst case, I get 12 extra Chrome tabs.


I saw a coworker using ChatGPT the other day - they were in marketing, trying to access a dashboard. One of the filters took a list of IDs, separated by commas. I watched as she repeatedly copied a list of IDs from a spreadsheet into ChatGPT, asked for them to be comma-separated, then pasted the result into the filter.

There are loads of use cases like this that will most definitely be solved by local LLMs.
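For what it's worth, that particular dance doesn't even need an LLM - it's a couple of lines in something like Python (the ID values here are made up):

```python
# Paste-in list of IDs, one per line, as copied from a spreadsheet column.
raw = """1001
1002
1003"""

# Join the non-empty lines with commas for the dashboard filter.
ids = ",".join(line.strip() for line in raw.splitlines() if line.strip())
print(ids)  # 1001,1002,1003
```

The point being: deterministic string munging is exactly the kind of task where a two-line script beats a probabilistic model.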


I'm convinced that most office work exists for social reasons, not for practical reasons. Just like every other office technology before, AI is going to replace workflows like that (and this: https://xkcd.com/763/) but it won't have an end effect on the total number of office workers, because they weren't really needed in the first place, and ebb and flow due to cycles of elite overproduction, not productivity.


>I'm convinced that most office work exists for social reasons, not for practical reasons

Disagree. The receptionist recording something in Excel manually vs. building an automation for it is a finance issue. It's opex vs. capex.

When people make huge assumptions about the future of technology, they tend to miss that lots of people don't want to fork out the capital to buy robots or build new tools when it works "just fine" having Karen enter it manually.


> I'm convinced that most office work exists for social reasons

The powers that be are not competent/powerful enough to engineer a society-wide conspiracy like that.

A lot of waste (especially in big organizations, and the bigger the organization the more capacity for waste) happens, and the more the income is detached from production (e.g. governments get tax revenue regardless of how well they spend it) the less people care about efficiency.

But for the most part people get hired because somebody thought that makes business sense to do so.


I think it's the opposite. The powers that be are not competent/powerful enough to engineer a company that prioritizes productivity over relationships. It's human nature to prioritize relationships over everything else, and it would take a lot of micromanaging to do otherwise, except that micromanaging itself is susceptible to human nature, so managers who try end up micromanaging relationships, not productive work.


Scary. A person aware of how LLMs work would not feel comfortable doing that. The model can hallucinate values or drop entries. It is just plain silly.


If it's 95% accurate on average for such tasks, then that's probably better than her odds of writing an Excel formula that's right on the first try. Especially for a marketer.


Can we agree that 95% accurate for such tasks is really bad? I agree that we probably don't care all that much if marketing gets something wrong, but imagine the same applied to taxes or pensions.


The issue is that it's 95% × 95% × 95%, and so on: error rates compound across steps.
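To make the compounding concrete, here's a quick sketch: if each step of a chained workflow is independently 95% accurate, the chance of an entirely correct result falls off fast with the number of steps.

```python
# Probability that an n-step workflow is fully correct,
# assuming each step is independently 95% accurate.
p = 0.95
for n in (1, 5, 10, 20):
    print(n, round(p ** n, 3))
# 1  -> 0.95
# 5  -> 0.774
# 10 -> 0.599
# 20 -> 0.358
```

By 20 chained steps you're down to roughly a coin flip's worth of confidence, which is why "95% accurate" is fine for one-off lookups and scary for pipelines.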


I run AI locally and it's a really useful tool, but I'm aware that it'll always be constrained to work on data embedded in the model, that I have locally, or that I can access via some sort of API integration. That makes it much less useful for a lot of tasks than a cloud-based tool that can give me insights into a pool of data that I can't see into directly. AI integrated into a SaaS app will usually have more value than a local model simply because specific and targeted information is better than general information.


Claude and ChatGPT can both search the web for you now. It’s actually quite handy as they can search for updated data on topics if prompted.

I agree, though, that personal LLM agents with access to our personal data would be much more effective. But perhaps we should give LLMs a few more years to mature, and safeguards a few years to be built, before letting one, say, gamble your house on the newest meme coin. ;)


I can get a 90 percent solution on my local machine. I like the privacy and the cost. I'm still struggling to see wide adoption of a paid-for service at mass scale.



