
There's nothing Mac-specific about running LLMs locally; Macs just happen to be a convenient way to get a ton of VRAM in a single small, power-efficient package.

On Windows and Linux, yes, you'll want at least 12GB of VRAM to get much utility, but the beefiest consumer GPUs still top out at 24GB, which is pretty limiting.
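To see why those VRAM figures matter, note that a model's weights alone take roughly parameter count × bytes per parameter, before any KV-cache or activation overhead. A minimal back-of-the-envelope sketch (the function name and the 4-bit figure, a common quantization level, are illustrative assumptions):

```python
# Rough VRAM needed for model weights alone: params * bits_per_param / 8.
# Real usage is higher (KV cache, activations, runtime overhead).

def weight_vram_gib(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight memory in GiB for a given parameter count."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 2**30

for params in (7, 13, 34, 70):
    fp16 = weight_vram_gib(params, 16)
    q4 = weight_vram_gib(params, 4)  # common 4-bit quantization
    print(f"{params:3d}B: fp16 ~ {fp16:5.1f} GiB, 4-bit ~ {q4:5.1f} GiB")
```

By this estimate a 7B model at fp16 already needs around 13 GiB, and even a 4-bit 70B model wants over 30 GiB of weights, beyond any 24GB consumer card but within reach of a high-memory Mac's unified memory.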


