> Passage is built by the founding engineers behind Plaid one of the most trusted financial platforms in the world that powers apps like Venmo, Coinbase, Robinhood, Acorns, and more.
> any time our instinct says "don't build that, it's not worth the time" fire off a prompt anyway, in an asynchronous agent session where the worst that can happen is you check ten minutes later and find that it wasn't worth the tokens.
They are right that new habits are needed, and this is where everyone should start. Sometimes a quick prompt has killed five hours of meetings about whether the thing was worth doing at all.
No, they are also inconsistent: Slack, VS Code, Zed, Claude, ChatGPT, Figma, Notion, Zoom, Docker Desktop, to name some I use daily. They all have different UI patterns and designs. The only things they have in common are that they are slow, laggy, difficult to use, and don’t respond quickly to the window manager.
Compare that to other Mac software such as Pages, Xcode, Tower, Transmission, Pixelmator, Mp3tag, TablePlus, Postico, Paw, HandBrake, etc. (the others I use): those are a delight to work with and give me the computing experience I was looking for when I bought a Mac.
Well put. What world are folks living in where it wouldn’t be the obvious choice?
Code is not the cost. Engineers are. Bugs come from hindsight, not foresight. Let’s divide resources between OSes. Let them all diverge.
> They are often laggy or unresponsive. They don’t integrate well with OS features.
> (These last two issues can be addressed by smart development and OS-specific code, but they rarely are. The benefits of Electron (one codebase, many platforms, it’s just web!) don’t incentivize optimizations outside of HTML/JS/CSS land.)
Give stats. "Often", "rarely" — which apps? I’d say rarely, often. People write bad native UIs too, or get constrained in features.
Claude offers a CLI tool. What product manager would say no to Electron in that situation?
This article makes no sense in context. The author surely gets that.
You can run it on consumer grade hardware right now, but it will be rather slow. NVMe SSDs these days have a read speed of 7 GB/s (EDIT: or even faster than that! Thank you @hedgehog for the update), so it will give you one token roughly every three seconds while crunching through the 32 billion active parameters, which are natively quantized to 4 bit each. If you want to run it faster, you have to spend more money.
High-end consumer SSDs can do closer to 15 GB/s, though only with PCIe Gen 5. On a motherboard with two M.2 slots, that's potentially around 30 GB/s from disk.
Edit: How fast everything runs depends on how much data needs to be loaded from disk, which is not always everything with MoE models.
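The arithmetic in the comments above can be sketched as follows. This is a back-of-envelope sketch under the thread's own assumptions (32 billion active parameters per token, 4-bit weights, and every active parameter streamed from disk each token, ignoring any caching), not a benchmark:

```python
# Rough tokens-from-disk estimate, per the assumptions in the thread:
# 32e9 active parameters per token, natively quantized to 4 bits each,
# all read from disk for every token (a best-case simplification).

ACTIVE_PARAMS = 32e9
BITS_PER_PARAM = 4

def seconds_per_token(read_gb_per_s: float) -> float:
    """Time to stream one token's worth of weights at a given read speed."""
    bytes_per_token = ACTIVE_PARAMS * BITS_PER_PARAM / 8  # 16 GB per token
    return bytes_per_token / (read_gb_per_s * 1e9)

# One Gen 4 SSD, one Gen 5 SSD, two Gen 5 SSDs combined:
for bw in (7, 15, 30):
    print(f"{bw} GB/s -> {seconds_per_token(bw):.2f} s/token")
```

At 7 GB/s this gives roughly 2.3 seconds per token, consistent with the "one token roughly every three seconds" estimate once real-world overhead is added.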
Yes, RAID 0 or RAID 1 could both work here to combine the disks. You would want to check the bus topology of the specific motherboard to make sure the slots aren't behind a hub or something like that.
You need 600 GB of VRAM + RAM (+ disk) to fit the full model, or 240 GB for the 1-bit quantized model. Of course this will be slow.
Through the Moonshot API it is pretty fast (much, much faster than Gemini 3 Pro and Claude Sonnet, probably faster than Gemini Flash), though. To get a similar experience they say you need at least 4x H200.
If you don't mind running it super slowly, you still need around 600 GB of VRAM + fast RAM combined.
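The memory figures being thrown around can be sanity-checked with simple arithmetic. A sketch, assuming a 1-trillion-parameter MoE checkpoint (the total parameter count is my assumption; the thread's ~600 GB figure would then include KV cache and runtime overhead on top of raw weights):

```python
# Raw weight footprint at different quantization levels.
# TOTAL_PARAMS is an assumed figure for illustration, not from the thread.
TOTAL_PARAMS = 1e12

def weights_gb(bits_per_param: float) -> float:
    """Gigabytes needed just for the weights at a given bit width."""
    return TOTAL_PARAMS * bits_per_param / 8 / 1e9

print(weights_gb(4))  # 500.0 GB raw weights at native 4-bit
print(weights_gb(2))  # 250.0 GB at ~2-bit quantization
```

That puts raw 4-bit weights at ~500 GB, so ~600 GB with overhead is plausible, and the ~240 GB figure for an aggressively quantized variant lands near the ~2-bit range.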
It's already possible to run 4x H200 in a domestic environment (it would be near-instantaneous for most tasks, unbelievable speed). It's just very, very expensive and probably challenging for most users, though manageable for the average Hacker News crowd.
Expensive, AND high-end GPUs are hard to source. If you manage to source them at the old prices, around $200,000 gets you maximum speed, I guess; you could probably run it decently on a bunch of high-end machines for, say, $40k (slow).
Cursor opened in config/ plus the Home Assistant MCP server is exceptionally good.
I have blundered along with Home Assistant over the years, but it lit up with the above setup for me the other day.
For giggles, I had it set all the lights into a disco.
Next, we vibed a markdown file containing a to-do list of all my upstairs lights that are abstractly named by the different integrations.
I put an x against a name and it turned the light off.
Once I identified it, I wrote a better name next to it. It updated the system.
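Under the hood, an agent like this typically flips lights through Home Assistant's REST service-call endpoint. A minimal sketch of that call, assuming a local instance at `homeassistant.local:8123` and a long-lived access token (both placeholders), with `light.upstairs_hall` as a hypothetical entity name:

```python
import json
from urllib import request

# Placeholders -- substitute your Home Assistant URL and long-lived token.
HA_URL = "http://homeassistant.local:8123"
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

def build_service_call(domain: str, service: str, entity_id: str):
    """Build the URL and JSON body for a Home Assistant service call."""
    url = f"{HA_URL}/api/services/{domain}/{service}"
    body = json.dumps({"entity_id": entity_id}).encode()
    return url, body

def call_service(domain: str, service: str, entity_id: str) -> int:
    """POST the service call and return the HTTP status code."""
    url, body = build_service_call(domain, service, entity_id)
    req = request.Request(url, data=body, method="POST", headers={
        "Authorization": f"Bearer {HA_TOKEN}",
        "Content-Type": "application/json",
    })
    with request.urlopen(req) as resp:
        return resp.status

# e.g. call_service("light", "turn_off", "light.upstairs_hall")
```

Ticking a checkbox in the markdown to-do list would map to exactly one such `turn_off` call for the matching entity.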
We vibed dashboards and routines.
The problem with Home Assistant is that once it works, you don't touch it for a year and are back to square one with the layers of concepts.
But I am left satisfied knowing I have backed up the conversation/context that we can pick up next year or whenever again.
Is only the binary available, or is the source available too? It is disingenuous to call it open source if only the binary is. How could this be supported into the future?
Hope I am wrong.