For Golang, I highly recommend yzma to explore this surface. I’ve used it for embedding and summarization (with small models) and for just mucking around with an integrated LLM + BubbleTea TUI idea (with bigger models).
Although a maintainer answered you, watch the video from the blog. There's a WASM demo at the end, which is great. It also has a good explainer for those confused about the HTTP decision.
And I appreciate that Hannes still appreciates the magic of WASM. [And I keep hearing "quark", which makes me hungry for tangy, creamy German yogurt]
I've been playing with Golang and WASM lately; hands-on WASM was new to me.
I found that many dependencies in the ecosystem (especially older ones) do not support GOARCH=wasm with GOOS=js or GOOS=wasip1. I've had to fork them, add support, and then use go.mod replace directives. It can get messy.
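For reference, the replace directives look roughly like this (the module paths here are made up):

```
// in the app's go.mod: swap the upstream module for a fork that adds
// js/wasip1 support, or for a local checkout while hacking on it
replace github.com/example/somedep => github.com/yourfork/somedep v1.4.1
// replace github.com/example/somedep => ../somedep
```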
Golang build tags make it awesome to have different implementations for different systems.
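A minimal sketch of the pattern (the storage package and its Save API are invented here just to illustrate):

```go
// storage_default.go
//go:build !js

package storage

import "os"

// Save writes to the local filesystem on native targets.
func Save(name string, data []byte) error {
	return os.WriteFile(name, data, 0o644)
}
```

```go
// storage_js.go
//go:build js && wasm

package storage

import "syscall/js"

// Save falls back to the browser's localStorage when compiled for js/wasm.
func Save(name string, data []byte) error {
	js.Global().Get("localStorage").Call("setItem", name, string(data))
	return nil
}
```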
In the browser it's all single-threaded, so goroutines can starve each other. I had to put in "breaths" to keep things interactive.
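Concretely, a "breath" is just a periodic tiny sleep: on js/wasm, time.Sleep goes through the Go scheduler, so the browser event loop and other goroutines get a turn. A sketch (Item and process are stand-ins for whatever the app actually does):

```go
package work

import "time"

// Item and process are stand-ins for whatever work the app does.
type Item struct{ ID int }

func process(it Item) { _ = it.ID * 2 }

// processAll chews through a big batch without freezing the page.
func processAll(items []Item) {
	for i, it := range items {
		process(it)

		// Every few hundred items, take a "breath": sleeping yields to
		// the scheduler, so the browser can paint and other goroutines
		// (event handlers, timers) get to run.
		if i%256 == 255 {
			time.Sleep(time.Millisecond)
		}
	}
}
```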
There's no local filesystem, so you have to figure out other solutions. Some dependencies use the filesystem as an implementation detail or try to shell out; the program will build but then error at runtime.
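When the assets are known at build time, go:embed has been the easiest workaround for me, since reads then work identically in the browser and natively (a sketch; the paths are just examples):

```go
package main

import (
	"embed"
	"fmt"
)

// assets/ is an example directory; everything under it gets compiled
// into the wasm binary, so reads never touch a real filesystem.
//
//go:embed assets
var assets embed.FS

func main() {
	data, err := assets.ReadFile("assets/config.json")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println(len(data), "bytes")
}
```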
That said, it is pretty sweet when it works. You can make WASM games with ebitengine [1], which emits instructions for a WebGPU renderer; it's very efficient, and many interactivity concerns are handled for you. The NTCharts demo page [2] combines Zig (Ghostty), WASM+Typescript+GLSL (Ghostty Web), and Golang (booba/ntcharts). The WASM binaries for the demos there are ~5MB each.
My goal is to make tools for terminal remoting and simplify bringing TUIs to the browser.
[1] https://ebitengine.org
Really fun project! Dude, I spent the last week implementing the Kitty Graphics and Clipboard protocols in ghostty-web in the Canvas renderer.
Then I added WebGL and WebGPU renderers [1], including support for Kitty.
Then I see this project on a Monday morning... so now I have to implement the Ratty Graphics Protocol?!?! [2]
ETA: I looked into this; Ghostty would need to be patched to support Ratty since Ghostty-Web now defers APC handling there. It would also require pulling in a 3D engine like three.js, or otherwise implementing file parsing, lighting, etc. Finally, since local filenames are part of the protocol, a browser would need some file resolver helper, either to get the data over the APC channel or via a URL.
Glyph rendering in three.js, with fully instanced, addressable, positionable glyphs. It handles tens of millions. The sample app loads full GitHub repositories in the browser in a few seconds.
That's pretty handy, thanks for the links. IDE is slick!
Given the structure, I think one could make a three.js backend for ghostty-web. Makes sense if one will pull in more of three.js anyway. I'm adding it to my backlog to explore.
Second: I would love to offer any assistance during your perusal. Happy to share ideas, what I tried, point out parts of the code that are rough and tumble, whatever helps. I'm in a place where any outside feedback and prodding is precious, so thanks very much for taking a look and keeping it in mind!
While I felt this in 2025, I do not feel this in 2026. I use Claude and the rest with BubbleTea all the time.
But I will say... you have to know Golang. You have to have at least tried to make a BubbleTea app yourself and tried to understand the Elm architecture. You have to look at the code and iterate with it.
It makes total sense for OP to switch to Rust and Ratatui if they don't know Golang well. But I don't think it's a better language for it. [Ratatui has brought me great inspiration though!]
Independent of framework, the LLMs get the spatial relationships. I say things like "the upper right panel's content is not wrapping inside and the panel's right edge should extend to the terminal edge" and the LLM will fix it. They can see the resultant text; I'm copy-pasting it all the time.
TUI code is finicky; one mis-rendered component mucks everything up. The LLMs will decide on their own to make little, temporary BubbleTea fixtures to help themselves understand when things aren't right.
The only real problem with LLMs and BubbleTea is that, upon first prompt, they insist on using BubbleTea v1 versus BubbleTea v2, released in December 2025. But then you just point them to the V2_UPGRADE.md and they get back on track. That will improve as training cutoffs expand.
I vibe-coded this TUI for Mom's last night. I actually started with Grok (who started with v1) and then moved into Claude Code after some iteration:
I've been heavily advocating this approach since January for non-coding use. The important property is an editable, understandable (by LLMs and humans), and renderable source of truth that can be incrementally modified.
I talk to laypeople about their AI work -- I am constantly doing this, inserting myself into AI conversations on the street like an anthropologist when I encounter them...
HTML artifacts are the new browser URL bar, in the sense that some users' mental model is that the bar is actually Google.
Many people now talk about their "spreadsheet" or their "presentation" or "marketing tear sheet", or "slide show", "competitive analysis", "hvac system diagram" or whatever the thing they were working on and how lame it was working with ChatGPT or Claude Web.... and how miraculous Claude Code or OpenClaw is with creating these new documents...
I will ask them what the documents actually are and what the difference in experience was. It takes a lot of teasing (because they don't have the computing vocabulary yet) or having them show me, and it will always come down to that the artifact is HTML.
Their pleasant experience comes from iterating on an HTML file (+CSS +images) living on a filesystem, with high-quality instant rendering; plus the model can sprinkle in JavaScript when it needs to. It might even revision-control the work without them knowing if there's a git system. [I suggest they checkpoint their work if they don't; revision control is the next stage of learning for the laypeople?]
Whereas the web-embedded experience is stabbing repeatedly at a DOCX/PPTX/XLSX lingering in a context window, with a vague notion of local storage (rendered as HTML anyway in a sidebar), etc. The HTML workflow also allows other media to be integrated much more easily.
So really all this presentation work is Vibe-Coding by the masses; they don't need to know about all the turtles underneath them. But if they are willing, they could crack it open and see and edit it; or easily hand it off to another agent.
Go figure that the system created for collaborative multimedia communication ends up being useful for the machine intelligence to help us communicate.
One non-obvious reason is that an important aspect of their community is shepherding new contributors [1]. LLMs crushing everything would undermine that. More obvious is all the toil for maintainers dealing with LLM PRs (broadly, it's an issue). The Zig maintainers prefer to put their energy into improving people and fostering those relationships.
It's important that developers have an accurate mental model of how things work, are structured and why.
LLMs promote a decoupling of mental models and the actual codebase.
As much as some may want to believe otherwise, just reviewing what the LLM outputs is not equivalent to thinking through the implementation details, the motivations, exactly how and why things work the way they do, and then writing it yourself. The process itself is what instills that knowledge in you.
Exactly. This is what many ai-sloppers ignore. Mental models are crucial. Nothing substitutes for having the program itself in your brain and being able to "mentally debug" it when something breaks.
Well said! I don't think either party is really at fault here, but if Anthropic wanted to contribute non-negligible amounts of code over time then it's an absolute dealbreaker.
Sucks for people who were invested in contributing to Bun and don't like working with AI tools to be sure, but I think the writing was on the wall for them pretty much immediately post-acquisition. You must admit, it's hard to predict that 100% of source lines will be written by AI if you're not walking the walk!
That's a solid reason to keep LLMs away from the kind of tasks that help with onboarding. But a patch series from a competent team that changes 3000 lines should probably be evaluated on its own merits. Or at least, the collaboration-based reasons to reject AI don't apply and the real reason would be something else.
(Though I don't know if this particular patch series would get accepted on its own merits.)
The recent article explained that the Bun patch would have been refused on technical merits: it's intrinsically incorrect, and to work properly it would have required some language changes.
I don't understand your suggestion. If you take an ugly patch series that changes 3000 lines and organize it into small quality changes, it's still a patch series that changes 3000 lines.
There's no reason to assume my generic statement was talking about the ugly version rather than the nicely organized version.
Yeah, I remember when the lazy bastards started writing programs using compilers instead of learning assembly language. Now I don’t have a single colleague who can write assembly. There’s whole generations now who can’t code assembly. Most don’t even know what a register is. Hope Zig holds against this latest attempt to make everyone stupid.
To add to the other commenters, loads of people don’t know assembly, which speaks to the quality of the average developer. The ones that still understand assembly to this day tend to be better developers, writing faster and more efficient code.
I'd be very surprised if the "average" developer across the board was in fact not just a JavaScript / TypeScript only developer. I have no expectations or really even hope that the average developer I work with has ever written a line of assembly.
>The ones that still understand assembly to this day tend to be better developers, writing faster and more efficient code.
That is, if you use something like C, C++, Java, .NET, or Go. With JavaScript and Python I don't think knowing assembly would make any difference, because it's hard to optimize code in those languages for how the CPU and memory work.
Knowing assembly in this day and age is the result of being curious and wanting to understand how computers work, which means knowledge of algorithms, data structures, etc.
The same applies to vibe coding: the best "vibe coder" will paradoxically be the person with enough knowledge and curiosity to understand programming, how computers work, and the subject at hand; one who could write the whole thing from scratch and so has enough judgement to review generated code.
Of course the vast majority will be mediocre vibe coders, and even worse programmers; at least that's the direction we're going.
> wanting to understand how computers work, which means knowledge of algorithms, data structures, etc.
It's possible to know in general terms, how computers work, and what assembly is without "knowing assembly" in the sense of being familiar with using/debugging it as a programming language.
Knowing assembly doesn’t mean you would spend your time writing assembly (i.e. being familiar with opcodes and architecture optimizations). But in the process, you get familiar with the workings of the computer hardware and the OS that sits on top of it. That is always useful knowledge, especially when dealing with binary formats and protocols, or FFI.
Then it's sufficient to know assembly, but not necessary.
This is compatible with "[developers] that still understand assembly to this day tend to be better developers", but not with "[on developers who] don’t know assembly, which speaks to [their] quality".
The JavaScript developers are checking in JavaScript code that they ostensibly understand. That is not the same as prompting an LLM to generate Zig that they don't understand, and expecting someone to merge it.
ah, i see what you're saying. fair point! though the argument was that LLMs essentially are a yet higher level programming language (or, rather, let you write in a higher level language).
They do let you write in a higher-level language, but it's not really analogous to a higher-level programming language. The ambiguity and lack of determinism makes prompting fundamentally different from using a high level programming language.
That’s funny because it’s exactly, literally the same. The difference is it’s not deterministic. That may be a problem but it’s still a higher level language, just a much higher level language than anything before.
I assume you're some sort of programmer and I genuinely wonder how in the world can someone in good faith downplay non-determinism and ambiguity when talking about a programming language.
High-level languages can certainly yield inefficient code when compiled, or maybe different code among different compilers, but they're always meant to allow their users to know exactly what to expect from what they put together in their programs. I've always considered this a hard fact, I simply cannot wrap my head around working in a way that forces me to abandon this basic assumption.
The language specs may be, but an implementation is never ambiguous. When you encounter undefined behavior in the spec, that’s when you look at your compiler/interpreter docs.
So by your logic all the PMs, managers and customers are programmers, right? After all, there’s a human compiler that takes their input and produces a program?
They are programmers when they write a prompt and get runnable code as a result, yes… but not if they ask a human to write the code, because with an intermediate, manual step between the text and the running code you don’t have an automated process; it’s no longer even an application, let alone a “compiler”.
Why does it matter if a human or a machine is responsible for turning the prompt into code?
If there's a black box which I can send C code into one side of and get faithful machine code out the other, I'd call that box a "compiler". I wouldn't rename it if I later find out that there are little elves inside doing the translation.
They've been back and we're all taking them further than ever before!
For the past few weeks I've been wrapping up Booba [1], which is developer tooling to combine BubbleTea and Ghostty in WASM deployments (using ghostty-web).
It enables some interesting deployment patterns: locally, over the network, and embedded in a web page. It's intended to be very easy to adopt; at the simplest, one just changes `tea.NewProgram` to `booba.NewProgram`.
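Roughly like this (a sketch; the booba import path is a placeholder, the model is a stub, and I'm assuming the returned program is otherwise a drop-in for BubbleTea's):

```go
package main

import (
	"fmt"
	"os"

	tea "github.com/charmbracelet/bubbletea"
	booba "github.com/example/booba" // placeholder import path
)

// model is a throwaway Bubble Tea model just to show the swap.
type model struct{}

func (m model) Init() tea.Cmd                           { return nil }
func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) { return m, nil }
func (m model) View() string                            { return "hello from wasm" }

func main() {
	// Before: p := tea.NewProgram(model{})
	p := booba.NewProgram(model{})
	if _, err := p.Run(); err != nil {
		fmt.Println("error:", err)
		os.Exit(1)
	}
}
```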
I used Booba to make a demo page [2] for our NTCharts TUI library published to GitHub Pages. The repo READMEs have GIFs... this page is all embedded WASM.
There are also new Kitty-Graphics-backed widgets in there (picture, chartpicture); I updated Booba and Ghostty-Web to support them. Still getting the kinks out.
https://github.com/hybridgroup/yzma
And thank you antirez for using your rep and quality output to push this line of evangelism; it is even more important than the software itself.