Thanks. Have you used Cursor or Copilot recently? (Tab completion has gotten better.) I'm curious how this compares in actual performance. Last time I used Zed, this was a showstopper, as the completions were much worse (though if I configure it to use Copilot as my source, I guess it should perform the same as VS Code?).
Personally, I don't like this autocomplete or tab-completion thing. I find it very distracting. I understand why someone might like it, but it's just not my thing.
I mostly use Claude (and Codex) through ACP in Zed. My colleagues use Cursor and VSCode, and I don't feel like I'm missing anything at all.
I am primarily using Claude as well, but I still have my old fix from Cursor. As long as it is as accurate as Cursor, I like it; if it falls below that level of accuracy, I find it annoying.
Yeah, I need cross platform, and GTK looks quite foreign on Windows/macOS IMO. I toyed with custom themes, but couldn't find any I liked for a cross platform look (wanted something closer to Fluent UI).
While unfortunate, to me this just says user-requested features aren't going to get merged anytime soon. As is, it already runs on Windows/Linux/macOS, and will need to do so maturely for Zed to function. So to me this isn't that big of a deal, and when they need things like web support (on their roadmap), they will add it then.
I'm curious... does anyone have any PRs or features that they feel need merging in order to use GPUI in their own projects? (other than web support)
Sadly it doesn't actually look like gpui-ce has any activity, the maintainer merged one pull request (literally, #1) and then stopped. They should've just added more community maintainers to the GPUI repo directly rather than having a fork.
Really? It seems better than ever to me now that we have gpui-component. That seems to finally open the door to fully native GUIs that are polished enough for even commercial release. I haven't seen anything else that I would put in that category, but one choice is a start.
The problem is that Zed has understandably and transparently abandoned supporting GPUI as an open source endeavour except to the extent contributions align with its business mission.
I remember when that came out, but I'm not sure I understand the concern. They use GPUI, so therefore they MUST keep it working and supportable, even if updating it isn't their current priority. Or are you saying they have a closed source fork now?
Actually, this story is literally about them changing their renderer on Linux, so they are maintaining it.
> except to the extent contributions align with its business mission
Isn't that every single open source project that is tied to a commercial entity?
I don't know what the message means exactly, but I can't plan to build on GPUI with it out there, especially when crates that don't carry that caveat are suffering from being under-resourced.
They haven't. They are just heads down on other work. It wouldn't make sense for them to abandon it - they have no alternative. What that message was about was supporting _community_ PRs and development of gpui.
Focus ebbs and flows at Zed, they'll be back on it before long.
I tried gpui recently and I found it to be very, very immature. Turns out even things like input components aren't in gpui, so if you want to display a dialog box with some text fields, you have to write it from scratch, including cursor, selection, clipboard etc. — Zed has all of that, but it's in their own internal crates.
Do you know how well gpui-component supports typical use cases like that? Edit boxes, buttons, scroll views, tables, checkbox/radio buttons, context menus, consistent native selection and clipboard support, etc. are table stakes for desktop apps.
I do think gpui needs a native input element (enough that I wrote one (https://github.com/zed-industries/zed/pull/43576) just before they stopped reviewing gpui prs) but outside of that I think it is pretty ok and cool that gpui just exports the tools to make whatever components you need.
I could see more components being shipped first party if the community took over gpui, or if for some crazy reason a team was funded to develop gpui full time, but developing baseline components is an immense amount of work, both to create and maintain.
Buttons (any div can be a button), clipboard, scroll views (div, list, uniform_list) should all already be in gpui.
From the PR, it sounds like the switch to WGPU is only for linux. The team was reluctant to do the same for macOS/Windows since they felt their native renderer on those platforms was better and less memory intensive.
> This definitely would be worth some profiling. I don't think it's a given that their custom stacks are going to beat wgpu in a meaningful way.
They probably will for memory usage. Current wgpu seems to have a floor around ~100mb that isn't there with other rendering backends (and it was more like ~60mb with wgpu a few months / versions ago).
Not sure if this is fixable in wgpu, or to do with spec compatibility (my guess would be that it's fixable, just not a top priority for the team atm).
WGPU is just a layer over the top of the native APIs on any given platform, so unless Zed's DirectX/Metal renderers were particularly bad, it's unlikely WGPU will be better here.
I'm not saying it would be better; I'm saying it may not be much worse. Which still might make it worth simplifying everything by settling on one rendering abstraction.
WebGPU has some surprising performance problems (although I only checked Google's Dawn library, not Rust's wgpu), and the amount of code that's pulled into the project is massive. A well-made Metal renderer which only implements the needed features will easily be 100x smaller (in terms of linecount) and most likely faster.
There is also the issue that it is designed with JavaScript and browser sandbox in mind, thus the wrong abstraction level for native graphics middleware.
I am still curious how much uptake WebGPU will end up having on Android, or if Java/Kotlin folks will keep targeting OpenGL ES.
I don't think it would, but I don't think it's a given that their homegrown renderer is wildly more performant either - people tend to overestimate the performance of naive renderers.
wgpu isn't a renderer though, it's an abstraction layer. It's honestly hard for me to imagine it ever being faster than writing directx or metal directly. It has many advantages, like that it runs in browsers and is memory safe (and in the case of dawn, has great error messages). But it's hard for it to ever be as fast as the native APIs it calls for you.
I think most non-trivial cross-platform graphics applications eventually end up with some kind of hardware abstraction layer. The interesting part is comparing how wgpu performs vs. something custom developed for that application, especially if their renderer is mostly GPU-bound anyway. wgpu definitely has some level of overhead, but so do all of the other custom abstraction layers out there.
If I recall, Arrow is more or less a standardized representation in memory of columnar data. It tends to not be used directly I believe, but as the foundation for higher level libraries (like Polars, etc.). That said, I'm not an expert here so might not have full info.
You can absolutely use it directly, but it is painful. The USP of Arrow is that you can pass bits of memory between Polars, DataFusion, DuckDB, etc. without copying. It's Parquet but for memory.
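The zero-copy idea is easy to illustrate with Python's buffer protocol. This is a toy stand-in, not actual Arrow: a real Arrow column has a standardized binary layout that Polars, DataFusion, DuckDB, etc. all agree on, which is what lets them share it without copying.

```python
import array

# A "producer" builds a columnar buffer (here: a flat array of int64s),
# loosely analogous to one Arrow column.
prices = array.array("q", [100, 250, 75])

# A "consumer" takes a view of the same memory -- no copy involved.
# This sharing-by-view is the essence of what Arrow standardizes
# across libraries (and even across languages).
view = memoryview(prices)

# Both names refer to the same bytes:
assert view[1] == 250
prices[1] = 999          # mutate through the producer...
assert view[1] == 999    # ...and the consumer sees it immediately
```

The payoff in the real thing is that "handing a table to DuckDB" becomes passing a pointer plus a schema, rather than serializing and reparsing the data.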
This is true, and as a result IME the problem space is much smaller than Parquet, but it can be really powerful. The reality is most of us don't work in environments where Arrow is needed.
> If we look at segment 0800, we see the smoking gun: in and out instructions, meaning that the copy-protection routine is definitely here, and best of all, the entire code segment is a mere 0x90 bytes, which suggests that the entire routine should be pretty easy to unravel and understand. For some reason, Reko was not able to decompile this code into a C representation, but it still produced a disassembly, which will work just fine for our purposes. Maybe this was a primitive form of obfuscation from those early days, which is now confusing Reko and preventing it from associating this chunk of code with the rest of the program… who knows.
in/out instructions wouldn't have a C equivalent. My assumption would be it only translates instructions that a C compiler would typically create.
I would still hope for it to translate most of the code with a couple of asm blocks. But maybe the density of them was too high and some heuristic decided against it?
I feel like the title is a bit misleading. I think it should be something like "Using Rust's Standard Library from the GPU". The stdlib code doesn't execute on the GPU, it is just a remote function call, executed on the CPU, and then the response is returned. Very neat, but not the same as executing on the GPU itself as the title implies.
> For example, std::time::Instant is implemented on the GPU using a device timer
The code is running on the gpu there. It looks like remote calls are only for "IO", the compiled stdlib is generally running on gpu. (Going just from the post, haven't looked at any details)
Which is a generally valid implementation of IO. For instance on the Nintendo Wii, the support processor ran its own little microkernel OS and exposed an IO API that looked like a remote filesystem (including plan 9 esque network sockets as filesystem devices).
Flip on the pedantic switch. We have std::fs, std::time, some of std::io, and std::net(!). While the `libc` calls go to the host, all the `std` code in-between runs on the GPU.
I think it fits quite well. Kind of like how the Rust standard library runs on the CPU, this does partially run on the GPU. The post does say they fall back on syscalls for some things, but for others there are native calls on the GPU itself, such as Instant. It's the same way the standard library uses syscalls on the CPU instead of doing everything in process.
Exactly what I was thinking. I mean how can you produce something, esp. in bulk, when the exact ingredients and quantities aren't known? Assuming it is made in a typical factory, the machines would have to be programmed and that would typically mean someone has to know. I wonder if they split the knowledge over several different groups so a group only knows a single piece? Hmm....
This is how they do it. There was a documentary about coca-cola and they explained that they completely separated the supply pipeline. Operators manipulate unlabelled sources coming from separate parts of the company.
It's a myth that Coca-Cola is a closely held secret, though. Any food flavoring specialist can reconstruct the flavor of Coke almost exactly.
A few years ago I (not a specialist!) made lots of batches of OpenCola, which is based partly on the original Pemberton recipe, and it comes so close that nobody could realistically tell the difference. If anything, it tastes better, because I imagine Coke doesn't use fresh, expensive essential oils (like neroli) for everything.
The tricky piece that nobody else can do is the caffeine (edit: de-cocainized coca leaf extract) derived from coca leaves. Only Coke has the license to do this, and from what I gather, a tiny, tiny bit of the flavour does come from that.
> If anything, it tastes better, because I imagine Coke doesn't use fresh, expensive essential oils (like neroli) for everything.
I've not participated in Cola tasting, but assuming fresher tastes better isn't really a safe assumption. Lots of ingredients taste better or are better suited for recipes when they're aged. I've got pet chickens and their eggs are great, but you have to let them sit for many days if you want to hard boil them, and I'd guess baking with them may be tricky for sensitive recipes.
Anyway, even if it does taste better for whatever that means, that's not meeting the goal of tasting consistently the same as Coke, in whichever form. If you can't tell me if it's supposed to taste like Coke from a can, glass bottle, plastic bottle, or fountain, then you've told me all I need to know about how close you've replicated it.
I think my point flew past you: If I can make a 99% clone of Coke in my kitchen, any flavoring professional will do it 100%. The supposed secret recipe isn't why Coke is still around; it's the brand.
And by fresh I do mean: The OpenCola is full of natural essential oils (orange, neroli, cinnamon, lime, lavender, lemon, nutmeg), and real natural flavor oils have a certain potent freshness you don't get in a mass-produced product.
I'm merely making the point that there's nothing magical about the recipe. Anyone wanting to truly replicate it for mass production can simply use commodity flavor compounds.
Coca leaves contain various alkaloids, but not caffeine. Coca-Cola gets its caffeine from (traditionally) kola nuts, and (today, presumably) the usual industrial sources.
You had better luck than I did. I tried my hand at making OpenCola and put around $300 into it (between the carbonation rig and essential oils, primarily), and while I'd say it was "leaning towards Coke", I would also definitely say that nobody would mistake it for Coke.
I noticed it was incredibly important to get the recipe mixture exactly right, because even a slight measurement error resulted in weirdly wrong flavors.
I did my OpenCola experiment in the company office together with a colleague, and we ended up hooking it up to a beer tap, with a canister of CO2. I'm proud to say the whole office really got into it.
I've heard from others that this is how defense software engineering goes.
You write code for a certain part/spec that could go on a number of things (missile, airplane, etc.). You don't know if your code will be used in a missile or not.
I am a pretty serious "Rustacean", but I like to think "for the right reasons". A rewrite in Rust of the main project would make very little sense, unless there is some objective the project wants that can't be met with C (see below). This person presents a well thought out case on why it makes little sense to rewrite, especially in the final section. Rust is great for many things, but when you have something old that is already so well tested and accessible on all sorts of obscure platforms, it makes little sense to rewrite, and the likely result would be more bugs, not fewer, at least in the short term.
Limbo and Turso's other tools seem interesting, but the listed limitation of “no multi process database access” is pretty huge.
If you don’t need that, then great—Turso/Limbo might be for you! But there are a ton of use cases out there that rely on SQLite for simultaneous multiprocess access. And I’m not even talking about things that use it from forking servers or for coordination (though those are surprisingly common as well)—lots of apps that use SQLite 99.9% of the time from one process still need it to be multiprocess-authoritative for e.g. data exports, “can I open two copies of an app far enough to get a ‘one is already running’ error?”-type use cases, extensions/plugins, maintenance scripts, etc. Not having to worry about cross-process lock files, thanks to SQLite, is a significant benefit for those.
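That cross-process story really is zero-effort with stock SQLite. A minimal sketch using Python's stdlib `sqlite3`, where two independent connections to the same file stand in for two processes (say, the app and a maintenance script):

```python
import os
import sqlite3
import tempfile

# One database file shared by everyone; SQLite's own file locking
# coordinates access, so the application needs no lock files of its own.
path = os.path.join(tempfile.mkdtemp(), "app.db")

# "Process" 1: the app writing its data.
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE items (name TEXT)")
writer.execute("INSERT INTO items VALUES ('widget')")
writer.commit()

# "Process" 2: e.g. a data-export or maintenance tool opening the same DB.
reader = sqlite3.connect(path)
rows = reader.execute("SELECT name FROM items").fetchall()
assert rows == [("widget",)]

writer.close()
reader.close()
```

With WAL mode enabled, readers don't even block the writer, which is why so many desktop apps lean on this behavior for plugins and side tools.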
This seems like the way. Why would Rustaceans bother to "argue their case" before an unwilling board if they can just do the rewrite themselves? Maybe it will succeed, maybe not, but you don't need SQLite's blessing to test the proposition.
My hunch is that those aren't very serious Rustaceans. Even the original language developers developed Rust to interoperate with C as much as possible, and they often used to discourage the rewrite evangelism. If you are a serious Rustacean, you'd probably be worried about the safety tradeoffs in rewriting in Rust. As your parent commenter points out, a mature C codebase is already tested so well that rewriting it in Rust is likely to introduce more bugs - non-memory-safety bugs that are nevertheless serious enough. That's why I and many other Rustaceans don't recommend it. In fact, some Rustaceans even advise others to not rewrite Fortran 90 code (in order to preserve the performance advantage) and instead recommend integrating it with Rust using FFI.
Oh, SQLite (as a database) is easy compared to a client-server database, or an "embedded" database that runs in a separate process.
The issue is more of the object-relational impedance mismatch that happens when using any SQL database: ORMs can be slow / bloated, and hand-written SQL is time consuming.
I shipped a product on SQLite, and SQLite certainly lived up to its promise. What would have been more helpful was if it could index structured objects instead of rows, and serialize / deserialize the whole object. People are doing this now by putting JSON into SQLite. (Our competitors did it when I looked into their SQLite database.)
I could see it being useful for pure Rust projects once it's completed. I mean, in Java/Kotlin land, I prefer to use H2 in some cases over SQLite since it's native to the platform and H2 is kind of nice altogether. I could see myself only using this in Rust in place of SQLite if it's easy to integrate and use on the fly.
The equivalent "platform native" database for Clojure is Datalevin [0], which has a Datomic-like Datalog query language with SQLite-style semantics. Recommended.