Hacker News | rc1's comments

> Available via XR engine binary only

Is only the binary available, or is the source available too? It is disingenuous to call it open source if only the binary ships. How could this be supported into the future?

Hope I am wrong.


Before writing it off...

> Passage is built by the founding engineers behind Plaid one of the most trusted financial platforms in the world that powers apps like Venmo, Coinbase, Robinhood, Acorns, and more.


> The whole thing cost about $1,100 in tokens.

I like that this is called out.


> any time our instinct says "don't build that, it's not worth the time" fire off a prompt anyway, in an asynchronous agent session where the worst that can happen is you check ten minutes later and find that it wasn't worth the tokens.

They are right that new habits are needed. And this is where everyone should start. Sometimes a quick prompt has killed five hours of meetings discussing whether it was worth it.


> have inconsistent style

You mean incongruent styles? As in, incongruent to the host OS.

There is no doubt electron apps allow the style to be consistent across platforms.


No, they are also inconsistent: Slack, VS Code, Zed, Claude, ChatGPT, Figma, Notion, Zoom, Docker Desktop, to name some that I use daily. They all have different UI patterns and designs. The only thing they have in common is that they are slow, laggy, difficult to use, and don't respond quickly to the window manager.

Compare that to other software on Mac such as Pages, Xcode, Tower, Transmission, Pixelmator, Mp3tag, TablePlus, Postico, Paw, HandBrake, etc. (the others I use); those are a delight to work with and give me the computing experience I was looking for when buying a Mac.


"Xcode and Pages are a delight in comparison to VSCode and Notion" is certainly one of the takes of all time.

Xcode is usually the first example that comes to mind of a terrible native app, in comparison to the much nicer VSCode.


You missed my point. Electron apps are incongruent to native OS apps.

Electron apps look the same on each platform; therefore they are consistent.

The meta point is the effort required to be consistent with the OS.

You listed macOS-only apps, emphasising the point.

Delivering a per-OS consistent experience is N times the effort.


Well put. What world are folks living in where it wouldn't be the obvious choice?

Code is not the cost. Engineers are. Bugs come from hindsight, not foresight. Let's divide resources between OSs. Let them all diverge.

> They are often laggy or unresponsive. They don’t integrate well with OS features.

> (These last two issues can be addressed by smart development and OS-specific code, but they rarely are. The benefits of Electron (one codebase, many platforms, it’s just web!) don’t incentivize optimizations outside of HTML/JS/CSS land

Give stats. "Often", "rarely": which apps? I'd say the opposite: rarely, often. People write bad native UIs too, or get constrained in features.

Claude offers a CLI tool. What product manager would say no to Electron in that situation?

This article makes no sense in context. The author surely gets that.


Isn’t it still? Anecdotally, I work with lots of creators who still prefer it because of its subjective qualities.


How long until this can be run on consumer-grade hardware or a domestic electricity supply, I wonder.

Anyone have a projection?


You can run it on consumer grade hardware right now, but it will be rather slow. NVMe SSDs these days have a read speed of 7 GB/s (EDIT: or even faster than that! Thank you @hedgehog for the update), so it will give you one token roughly every three seconds while crunching through the 32 billion active parameters, which are natively quantized to 4 bit each. If you want to run it faster, you have to spend more money.
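The "one token roughly every three seconds" figure follows from simple napkin math, sketched below under the assumptions stated in the comment (32 billion active parameters at 4 bits each, streamed from a 7 GB/s NVMe drive for every token):

```python
# Back-of-envelope: per-token time when the active weights must be
# streamed from disk on every decoding step.
active_params = 32e9        # 32 billion active parameters (MoE)
bits_per_param = 4          # native 4-bit quantization
ssd_read_gb_s = 7           # typical PCIe gen4 NVMe sequential read

bytes_per_token = active_params * bits_per_param / 8          # 16 GB
seconds_per_token = bytes_per_token / (ssd_read_gb_s * 1e9)
print(f"{seconds_per_token:.1f} s/token")                     # ≈ 2.3 s
```

16 GB per token at 7 GB/s works out to about 2.3 seconds, i.e. "roughly every three seconds" once any compute and overhead are added on top.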

Some people in the localllama subreddit have built systems which run large models at more decent speeds: https://www.reddit.com/r/LocalLLaMA/


High-end consumer SSDs can do closer to 15 GB/s, though only with PCIe gen 5. On a motherboard with two M.2 slots that's potentially around 30 GB/s from disk. Edit: How fast everything runs depends on how much data needs to be loaded from disk, which is not always everything on MoE models.


Would RAID zero help here?


Yes, RAID 0 or 1 could both work in this case to combine the disks. You would want to check the bus topology for the specific motherboard to make sure the slots aren't on the other side of a hub or something like that.
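A sketch of what striping buys here, assuming RAID 0 read bandwidth scales roughly linearly, both slots get full gen5 lanes, and all 32 B active 4-bit parameters are re-read per token (which, as noted in the edit above, MoE models don't always require):

```python
# Effect of striping two gen5 drives on the per-token disk read time.
bytes_per_token = 32e9 * 4 / 8        # 16 GB of active weights per token
gen5_drive_gb_s = 15                  # high-end PCIe gen5 NVMe sequential read

for drives in (1, 2):
    bandwidth = drives * gen5_drive_gb_s * 1e9    # ~linear RAID 0 scaling
    print(f"{drives} drive(s): {bytes_per_token / bandwidth:.2f} s/token")
```

So two striped drives would take the disk-bound floor from roughly 1.07 s/token to roughly 0.53 s/token, under those assumptions.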


You need 600 GB of VRAM + RAM (+ disk) to fit the full model, or 240 GB for the 1b-quantized model. Of course this will be slow.

Through the Moonshot API it is pretty fast (much, much faster than Gemini 3 Pro and Claude Sonnet, probably faster than Gemini Flash), though. To get a similar experience they say you need at least 4×H200.

If you don't mind running it super slow, you still need around 600 GB of VRAM + fast RAM combined.

It's already possible to run 4×H200 in a domestic environment (it would be near-instantaneous for most tasks, unbelievable speed). It's just very, very expensive and probably challenging for most users, though manageable/easy for the average Hacker News crowd.

Expensive AND hard to source: if you manage to get high-end GPUs at the old prices, it's around 200 thousand dollars for maximum speed, I guess. You could probably run it decently on a bunch of high-end machines for, let's say, 40k (slow).
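The memory figures above are consistent with simple weight-size arithmetic. A sketch, assuming a model of roughly one trillion total parameters (my illustrative figure; the exact count isn't stated in the thread), ignoring KV cache and runtime overhead:

```python
# Rough weight-only memory footprint at different quantization widths.
total_params = 1e12          # assumed ~1T-parameter MoE model (illustrative)

def weights_gb(bits: int) -> float:
    """Gigabytes needed to hold the raw weights at `bits` per parameter."""
    return total_params * bits / 8 / 1e9

print(round(weights_gb(4)))   # ≈ 500 GB -> ~600 GB with cache/overhead
print(round(weights_gb(2)))   # ≈ 250 GB, near the 240 GB quantized figure
```

The native 4-bit weights alone land around 500 GB, which matches the ~600 GB figure once activations and cache are included; a roughly 2-bit quant lands near the 240 GB figure.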


You can run it on a Mac Studio with 512 GB of RAM; that's the easiest way. I run it at home on a multi-GPU rig with partial offload to RAM.


I was wondering whether multiple GPUs make it go appreciably faster when limited by VRAM. Do you have some tokens/sec numbers for text generation?


The Oracle Org Chart by Manu Cornet springs to mind reading this: https://www.globalnerdy.com/2011/07/03/org-charts-of-the-big...


Cursor opened in config/ + HomeAssistant MCP is exceptionally good. I have blundered along with Home Assistant over the years, but it lit up with the above setup for me the other day.

For giggles, I had it set all the lights into a disco.

Next, we vibed a markdown file containing a to-do list of all my upstairs lights, which are abstractly named by the different integrations. I put an x against a name and it turned the light off.

Once I identified it, I wrote a better name next to it. It updated the system.

We vibed dashboards and routines.

The problem with Home Assistant is that once it works, you don't touch it for a year and are back to square one with the layers of concepts. But I am left satisfied knowing I have backed up the conversation/context that we can pick up next year or whenever again.

A memorable computer experience.
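The checklist workflow above could be glued together with a few lines of script. A hypothetical sketch (the entity ids, friendly names, and file format are invented for illustration; the actual round trip would go through the Home Assistant API or MCP):

```python
# Parse a markdown to-do list of lights: [x] means "turn this light off",
# and any text after the entity id becomes its new friendly name.
import re

def parse_light_todo(markdown: str):
    """Return (entity_id, checked, new_name) tuples from checklist lines."""
    actions = []
    for line in markdown.splitlines():
        m = re.match(r"- \[([ x])\] (\S+)(?: (.+))?", line.strip())
        if m:
            actions.append((m.group(2), m.group(1) == "x", m.group(3)))
    return actions

todo = """
- [x] light.hue_ambiance_3 Landing lamp
- [ ] light.tradfri_bulb_7
"""
print(parse_light_todo(todo))
# [('light.hue_ambiance_3', True, 'Landing lamp'),
#  ('light.tradfri_bulb_7', False, None)]
```

Each parsed tuple would then map to a service call (e.g. turning a light off, or renaming the entity), which is the part the agent handled via the MCP in the comment above.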

