Hacker News | m-schuetz's comments

Depending on how bad it is for you, I can recommend thinking about a turbinectomy. I had it done due to chronic, allergy-related swelling, and it was life-changing.

do you still have any symptoms? I had a turbinate reduction and septum surgery last year, it's helped but I still need sprays in the morning and night and pills for the allergies.

TIL! I have dust mite allergies, how will this help?

It basically means surgically removing parts of your chronically enlarged/swollen turbinates so that your airways are free again. Along with that I had a nasal spur removed (it slightly blocked the airways), a septoplasty (I had a slightly deviated septum that also inhibited airflow a bit), and while at it the ENT also recommended and did FESS (opening/widening some paths to the sinuses).

It was a pretty life changing surgery that finally allowed me to properly sleep again, and do exercises/run while breathing through the nose. For some people, the turbinates may become enlarged again after a while, but for me it's been great for two years already.


Have you considered just doing hyposensitization therapy? No reason to go the way of surgery before trying that. Worked wonders for me and my array of allergies, dust mites among them.

(one who recommended surgery here)

I tried hyposensitization therapy, and while it worked for seasonal birch pollen issues, it didn't work for dust mites, oral allergies, and chronically stuffed sinuses.


Evolution just needs people to survive long enough to reproduce. If they get sick afterwards, it doesn't care.

Evolution happens on both sides - for you and for the virus/bacteria trying to live off you.

One of the risks of an always-on response is that if something evolves to evade it, you have nowhere to go.

It's why taking an antibiotic at breakfast every day is not a good idea.


> It's why taking an antibiotic at breakfast every day is not a good idea.

Eh, the main downside in the short run is that you are killing your gut flora.

> One of the risks of an always on response, is if something evolves to evade it - you have nowhere to go.

Evolution can't look into the future. (And eg bats are pretty much always on with their immune system.)


Unless they are contributing to the survival of their offspring.

Which is one theory why grandmothers (post-menopausal women) are a thing

It can work the other way, too. Your offspring may be more likely to survive if you stop consuming resources once they become viable.

Are you sure that availability of resources was a limiting factor during a large part of human evolution?

ie what has driven human population growth - a fundamental change in availability of natural resources or a fundamental change in how humans exploited them?

I'd argue it's the latter, and that's driven by accumulated knowledge - and before writing, the key repository of that knowledge was old people.


Humans have selective adaptations to reduce resource competition between older and younger members of populations - examples are menopause and testosterone levels.

Part of the reason it benefited us that some, but not all, people grow old is that people require more attention during two phases of their lives. Our biological evolution has prioritized care for the very young over the very old with respect to a limit on resources (like attention), effectively until the modern age. In some cultures, for instance, those with teeth pre-chew food for those without; others expected members to engage in ritual suicide at a certain age.


I think it's a (common) mistake to view any organism at a point in time as perfectly adapted.

It's like saying car pistons are designed to wear out - because they do, and since the car is perfectly designed (the mistake), it must be for a reason.

Also take menopause - it so happens that a female already has all the oocytes (eggs) she will ever have at birth. Menopause happens when they run out.

What you are arguing is that the number at birth is optimized by a very indirect feedback loop - as opposed to the very direct one of how many resources to set aside for eggs, in terms of maximizing the number of direct children versus resources used. Occam's razor suggests the latter is going to be stronger.

If what you say were true - think about it - old people wouldn't gradually crumble due to wear and tear; they would have evolved some much more efficient death switch. I.e. women don't suddenly die post-menopause.


The vast majority of human evolution happened in non-humans

Sure - though the tuned behaviour around turning the innate immune system up and down is probably dominated by the more recent part of that long history.

Well, given that the biggest killer of humans throughout most of our history was starvation, I think there's a good chance that's true.

How much accumulated knowledge do hunter-gatherers have?


Except humans are a social species, and the bands of humans who survived were the ones with behaviors that kept elders around, because of their benefits to our capacity for social learning.

I'm pretty allergic most of the time (lots of birch cross allergies and dust mites), but sometimes when I'm sick the allergic reactions appear to go down. Allergies can be pretty weird.

Well yes, as allergies mean the immune system is acting weird and sees harmless things as a threat.

That's weird, I switched away from ChatGPT because I mostly got superior results from Gemini and Claude.

Give 5.4 a shot - it's strange but surprisingly good for once. Speaking as a daily Opus user.

Used Codex CLI (5.4) for the first time (had never used Codex or GPT for coding before; was using Opus 4.5 for everything), and it seems quite good. One thing I like is that it's very focused on tests. It will just start setting up unit tests for specs without you asking (whereas Opus would never do that unless you asked); I like that and think it's generally good. One thing I don't like about GPT, though, is that it pauses too much between tasks, even when the immediate plan and the broader plan are all extremely well defined in agents.md. It stops to say "the next logical task is X", and I say yeah, go ahead, instead of it just proceeding to the next task, which I'd rather it do. I suppose that is a preference that should be put in some document? (agents.md?)

Well, I have a running model (ha!) in my head about the frontier providers that's roughly like this:

- chatgpt is kinda autistic and must follow procedures no matter what, and writes in a bland, soulless, but kinda correct style. great at research, horrible at creativity, slow at getting things done but at least getting there. good architect, mid builder, horrible designer/writer.

- claude is the sensitive diva that is able to really produce elegant code but has to be reminded of correctness checks and quality gates repeatedly, so it arrives at something good very fast (sometimes oneshot) but then loses time for correction loops and "those details". great overall balance, but permanent helicoptering needed or else it derails into weird loops.

- grok is the maker, super fast and on target, but doesn't think as deeply as the others; it's entirely goal/achievement focused and does just enough to get there. uniquely, it doesn't argue or self-monologue constantly about doubts or safety or ethics, but drives forward where others struggle, and faster than others. cannot concentrate for too long, but delivers fast. tons of quick edits? grok it is. "experimental" stuff that is not safe talking about... definitely grok.

- gemini is whatever you quickly need in your GSuite, plus looking at what others are doing and helping out with a sometimes different perspective, but beyond that it's worse than all the others.

- kimi: currently using it on the side; not bad at all so far, but nothing distinct has crystallized in my head yet.


Tried using 5.4 xhigh/codex yesterday with very narrow direction to write Bazel rules for something. This is a pretty boilerplate-y task with specific requirements. All it had to do was produce a normal rule set s.t. one could write declarative statements to use them just like any other language integration. It gave back a dumpster fire, just shoehorning specific imperative build scripts into Starlark. Asked Opus 4.6 and got a normal, sane ruleset.

5.4 seems terrible at anything that's even somewhat out-of-distribution.


I got it to build a stereoscopic Metal raytracing renderer of a tesseract for the Vision Pro in less than half a day.

It surprisingly went at it progressively, starting with a basic CPU renderer, all the way to a basic special-purpose Metal shader. Now it's cutting its teeth on adding passthrough support. YMMV.


I liked using singletons back in the day, but now I simply make a struct with static members which serves the same purpose with less verbose code. Initialization order doesn't matter if you add one explicit (and also static) init function, or a lazy initialization check.

Yeah, I feel singletons are mostly a result of people learning globals are bad and wanting to pretend their global isn't a global.

A bit like how java people insisted on making naive getFoo() and setFoo() to pretend that was different from making foo public


> A bit like how java people insisted on making naive getFoo() and setFoo() to pretend that was different from making foo public

But it's absolutely different and sometimes it really matters.

I primarily work with C# which has the "property" member type which is essentially a first-class language feature for having a get and set method for a field on a type. What's nice about C# properties is that you don't have to manually create the backing field and implement the logic to get/set it, but you still have the option to do it at a later time if you want.

When you compile C# code (I expect Java is essentially the same) which accesses the member of another class, the generated IL/bytecode is different depending on whether you're accessing a field, property, or method.

This means that if you later find it would be useful to intercept gets or updates to a field and add some additional logic for some reason (e.g. you want to now do lazy initialization), if you naively change the field to a method/property (even with the same name), existing code compiled against your original class will now fail at runtime with something like a "member not found" exception. Consumers of your library will be forced to recompile their code against your latest version for things to work again.

By having getters and setters, you have the option of changing things without breaking existing consumers of your code. For certain libraries or platforms, this is the practical difference between being stuck with certain (now undesirable) behaviour forever or trivially being able to change it.


Adding lots of code for the common case to support consumers of the code not recompiling for some uncommon potential future corner-cases seems like a bad deal.

Recompiling isn't that hard usually.


In a product world where customers are building on your platform, requiring that they schedule time with their own developers to recompile everything in order to move to the latest version of your product is an opportunity to lose one or more of those paying customers.

These customers would also be quite rightfully annoyed when their devs report back to them that the extra work could have been entirely avoided if your own devs had done the industry norm of using setters/getters.

Maybe you're not a product, but there are various other teams at your organization which use your library; now, in order to go live, you need to coordinate with those teams so they also update their code and things don't break. These teams will report to their PMs how this could have all been avoided if only you had used getters and setters, like the entire industry recommends.

Unless you're in a company with a single development team building a small system whose code would never be touched by anyone else, it's a good idea to do the setters/getters. And even then, what's true today might not be true years from now.

It's generally good practice for a reason.


This is way more cumbersome than mmap if you need to out-of-core process the file in non-sequential patterns. Way way more cumbersome, since you need to deal with intermediate staging buffers, and reuse them if you actually want to be fast. mmap, on the other hand, is absolutely trivial to use, like any regular buffer pointer. And at least on windows, the mmap counterpart can be faster when processing the file with multiple threads, compared to fread.

But I agree that it's a bizarre article, since mmap is not part of the C standard and relies on platform-dependent operating system APIs.


> Isn't it time to throw the browser away, stop abusing HTML to make applications, and design something fit for purpose?

Not going to happen until gui frameworks are as comfortable and easy to set up and use as html. Entry barrier and ergonomics are among the biggest deciding factors of winning technologies.


Man, you never used Delphi or Lazarus then. That was comfortable and easy. Web by comparison is just a jarring mess of unfounded complexity.


There are cross platform concerns as well. If the option is to build 3-4 separate apps in different languages and with different UI toolkits to support all the major devices and operating systems, or use the web and be 80% there in terms of basic functionality, and also have better branding, I think the choice is not surprising.


In line with "the web was a mistake" I think the idea that you can create cross platform software is an equally big mistake.

You can do the core functionality of your product as cross platform, to some extent, but once you hit the interaction with the OS, and especially the UI libraries of the OS, I think you'd get better software if you just accept that you'll need to write multiple applications.

We see this on mobile: there are really just two target platforms, yet companies don't even want to do that.

The choice isn't surprising in a world where companies are more concerned with saving money and branding than with creating good products.


>You can do the core functionality of your product as cross platform, to some extent, but once you hit the interaction with the OS, and especially the UI libraries of the OS, I think you'd get better software if you just accept that you'll need to write multiple applications.

Or you can use a VM, which is essentially what a modern browser is anyway. I wrote and maintained a Java app for many years with seamless cross platform development. The browser is the right architecture. It's the implementation that's painful, mostly for historical reasons.


But using a browser (or a VM) buys into the fallacy that your customers across different platforms (Windows, Mac, etc) want the same product. They’re already distinguished by choosing a different platform! They have different aesthetics, different usability expectations, different priorities around accessibility and discoverability. You can produce an application (or web app) that is mediocre for all of them, but to provide a good product requires taking advantage of these distinctions — a good application will be different for different platforms, whether or not the toolkit is different.

I've only done single-platform GUI work (Python), but I'd guess this is ripe for transpiling, since a lot of GUI code is just reusing the same boilerplate everyone uses to get the same UI patterns everyone uses. Like if I make something in tkinter, it seems like it should be pretty straightforward to write a tool that translates all my function calls, as I've structured them, into a chunk of Swift that would draw the same size window, same buttons, etc.


We get into transpiling and we essentially start to rebuild yet another cross platform framework. Starts with "read this filetype and turn it into this layout" and it ends up with "we'll make sure this can deploy on X,Y,Z,W..."

It'd be nice if companies could just play nice and agree on a standard interface. That's the one good thing the web managed to do. It's just stuck with what's ultimately 3 decades of tech debt from a prototype document reader made in a few weeks.


>It'd be nice if companies could just play nice and agree on a standard interface

They basically do, though. In every cross-platform native-ported app I've used, the GUI is the same layout. Well, except that on macOS the menu bar is on the top bar, while on Windows it has its own menu layer in the application window. But that is it. All these frameworks already have feature parity with one another. It is expected that they have these same functions and UI paradigms. Here's your button function. Here is where you specify window dimensions. This function opens a file browser. This one takes in user input from the textbox. I mean, it is all pretty standardized, and what you can expect to do in a UI is already limited.


There is a lot of stuff you can get done with the standard library alone of various languages that play nice on all major platforms. People tend to reach for whatever stack of dependencies is popular at the time, however.


I am not sure; it seems that cross-platform applications are possible using something like Python 3/GTK/Qt, etc.


Cross-platform GUI libraries suck. Ever used a GTK app under Windows? It looks terrible, renders terribly, and doesn't support HiDPI. Qt Widgets still has weird bugs where, when you connect or disconnect displays, it rerenders UIs at twice the size. None of those kinds of bugs exist for apps written in Microsoft's UI frameworks and browsers.

The problem with cross-platform UI is that it is antithetical to the very reason an OS-native UI exists. Cross platform tries to unify the UX while native UI tries to differentiate the UX. Native UI wants unique, incompatible behavior.

So the cross-platform UI frameworks that try to use the actual OS components always end up with terrible visual bugs due to unifying things that don't want to be unified. Or worse, many "cross platform" UI frameworks try to mimic their developer's favorite OS. I have seen way too many Android apps whose "cross platform" frameworks draw iOS UI elements.

The best way to do cross-platform applications with a GUI (I specifically avoid saying cross-platform UI) is defining yet another platform above a very basic common layer. This is what the Web has done. What a browser asks from an OS is a rectangle (a graphics buffer) and the fonts to draw a webpage. Nothing else. The entire drawing functionality and behavior is redefined from scratch. This is the advantage of the Web, and this is why Electron works so well for applications deployed on multiple OSes.


> Ever used a GTK app under Windows?

I have created and used them. They didn't look terrible on windows.

>What a browser asks from an OS is a rectangle (a graphics buffer) and the fonts to draw a webpage. Nothing else. Entire drawing functionality and the behavior is redefined from scratch. This is the advantage of Web..

I think that is exactly what GTK does (and maybe even Qt) too.

I think it is just that there is not much funding going to those projects. The Web, on the other hand, being an ad-delivery platform, the sellers really want your browsers to work and look good...


There's loads of funding. But the ones funding Qt and GTK aren't parties interested in things like cohesion or design standards. They just needed a way to deliver their product to the user faster than maintaining 2-3 per-OS apps. Wanting that shipping velocity by its nature sacrifices the above elements.

The remnants of the dotcom era definitely helped shape the web in a more design-conscious way, in comparison. Those standards are created and pushed a few layers above where cross-platform UIs operate.


Here is BleachBit, a GTK3-based disk cleanup utility. It is a blurry mess, and GTK3 window headers are completely inconsistent with Windows in style and behavior.

https://imgur.com/a/ruTGUaF#ntnfeCJ

https://imgur.com/yGhgkz2 -> Comparison with another open source app Notepad3 under Windows.

> I think that is exactly what Gtk does (and may be even Qt also) too..

The problem is they half-ass it. Qt only does it with QML; Qt Widgets is half-and-half, and it is a mess.

Overall these do not invalidate my point though. If you want a truly cross-platform application GUI, you need to rewrite the GUI for each OS. Or you give up and write one GUI that's running on its own platform.

> I think it is just there there is not much funding going to those projects. Web on the other hand, being an ad-delivery platform, the sellers really want your browsers to work and look good...

Indeed, Google employs some of the smartest software developers, including ones with really niche skills, like Behdad Esfahbod, who created the best or second-best font rendering library out there. However, Qt has a company behind it (a very, very incompetent one - not just at the library, but at operating a business). I have seen many commercial libraries too; they are all various shades of terrible.


What you see as the difference here is the GNOME 3 look. You can create classical GUIs just fine with GTK. Sure, it's still not Win32, but they often look and feel more native than modern Microsoft apps.

I see your point. Thanks for the screenshots.

Visual Basic solved that. The web is in many ways a regression.


Visual Basic (and other 90s visual GUI builders) were great simple options for making GUI apps, but those GUIs were rather static and limited by today's standards. People have now gotten used to responsive GUIs that resize to any window size, easy dynamic hiding of controls, and dynamic lists in any part of the GUI; you won't get them to come back to a platform where their best bet at dynamic layout is `OnResize()` and `SubmitButton.Enabled = False`.


> Visual Basic (and other 90s visual GUI builders) were great simple options for making GUI apps

Yes, they were comfortable and easy to set up (and use), particularly when compared to web development.

> a platform where their best bet at dynamic layout is `OnResize()` and `SubmitButton.Enabled = False`

This is a great description of what web coding looked like for a very long time, _especially_ when it started replacing RAD tools like VB and Delphi. In fact, it still looks like this in many ways, except now you have a JSX property and React state for disabling the button, and a mess of complex tooling, setup and node modules just to get to that base level.

The web won not because of programmer convenience, but because it offered ease of distribution. Turns out everything else was secondary.


> This is a great description of what web coding looked like for a very long time

React is over a decade old, and as far as I remember, desktop apps using embedded browsers (Electron) started becoming dominant after it came out.

The ease-of-distribution advantage is huge, but web technologies are big outside the Web too, where it doesn't apply.

(Besides my main point, idiomatic web UIs don't implement resize handlers for positioning each element manually, but instead use CSS to declaratively create layouts. Modern GUI libraries with visual builders can also do this, but it was decidedly not the norm in the 90s. Also, modern dynamic GUIs generally don't use a static layout with disabled parts, but hide or add parts outright. That kind of dynamicity is hard to even conceptualise with a GUI builder.)


Microsoft invented AJAX when building Outlook for the web back in 2000. Gmail was released in 2004 and Google Docs in 2006. Around this time, even enterprise giants like SAP started offering web UIs. This is the shift from RAD to web I'm talking about.

The current idiomatic way of doing web layouts was, back then, almost entirely theoretical. The reality was a cross-browser hell filled with onResize listeners, in turn calling code filled with browser-specific if statements. Entire JavaScript libraries were devoted to correctly identifying browsers in order for developers to take appropriate measures when writing UI code. Separate machines specifically devoted to running old versions of Internet Explorer had to be used during testing and development, in order to ensure end user compatibility.

In short: The web was not in any way, shape or form more convenient for developers than the RAD tools it replaced. But it was instant access multi-platform distribution which readily allowed for Cloud/SaaS subscription models.

Electron happened more as an afterthought, when the ease of distribution had already made web UIs, and hence web UI developers, hegemonic. Heck, even MS Office for the web predates React, Electron, and something as arcane as Internet Explorer 9.

Things have gotten much better, but we're still having to reinvent things that just existed natively in VB6 (DataGrid, anyone?) - and at the cost of increasingly complex toolchains and dependencies.


Are they not? GUI libraries are like button(function=myFunction). This isn't rocket surgery stuff, at least with the GUI tooling I've used.


Pretty much any non-web GUI framework I tried so far has either been terrible to set up, or terrible to deploy. Or both. Electron is stupidly simple.

ImGUI is the single exception that has been simple to set up, trivial to deploy (there is nothing to deploy; including it is all that's needed), and nice to use.


Tkinter is easy too.


Except ImGUI’s missing what I consider essential features for macOS: proper multitouch support (two finger panning, pinch to zoom).

Specifically for panning and zooming, doesn't the OS translate those inputs to mouse events, like Windows does by default? Otherwise it is simply a matter of performing this translation at the backend level.

I feel that Flutter is the first right step for this; it felt like a breath of fresh air to work with compared to the web stack.

Some other words that are sorely missing from dictionaries: "Warm water", "hot water", "cold water", "dirty water"


As an idiomatic expression, "Hot water" = "trouble".

Are there idiomatic expressions for warm/cold/dirty water, which mean something other than a literal adjective describing the temperature or condition of water?


> hot water - n. a difficult or dangerous situation

https://www.merriam-webster.com/dictionary/hot%20water

> warm water - n. an ocean or sea not in the arctic or antarctic regions

https://www.merriam-webster.com/dictionary/warm%20water

> cold water - n. depreciation of something as being ill-advised, unwarranted, or worthless. e.g. threw cold water on our hopes

https://www.merriam-webster.com/dictionary/cold%20water

Seems that what makes sense to be in dictionaries is already there.


> dirty water

Depending on the context you got sewage, slush, runoff, murk, waste etc.


Not just recently. Google image search was semi-useless for years due to being spammed by login-walled pinterest images.


Also, Gemini works absolutely fantastically right now. I find it provides better results for coding tasks compared to ChatGPT.


Don't want to sound rude, but anytime anyone says this I assume they haven't tried using agentic coding tools and are still copy pasting coding questions into a web input box

I would be really curious to know what tools you've tried and are using where gemini feels better to use


It's good enough if you don't go wild and allow LLMs to produce 5k+ lines in one session.

In a lot of industries, you can't afford this anyway, since all code has to be carefully reviewed. A lot of models are great when you do isolated changes with 100-1000 lines.

Sometimes it's okay to ship a lot of code from LLMs, especially for the frontend. But, there are a lot of companies and tasks where backend bugs cost a lot, either in big customers or direct money. No model will allow you to go wild in this case.


My experience is that on large codebases that get tricky problems, you eventually get an answer quicker if you can send _all_ the context to a relevant large model to crunch on it for a long period of time.

Last night I was happily coding away with Codex after writing off Gemini CLI yet again due to weirdness in the CLI tooling.

I ran into a very tedious problem that all of the agents failed to diagnose and were confidently patching random things as solutions back and forth (Claude Code - Opus 4.6, GPT-5.3 Codex, Gemini 3 Pro CLI).

I took a step back, used python script to extract all of the relevant codebase, and popped open the browser and had Gemini-3-Pro set to Pro (highest) reasoning, and GPT-5.2 Pro crunch on it.

They took a good while thinking.

But, they narrowed the problem down to a complex interaction between texture origins, polygon rotations, and a mirroring implementation that was causing issues for one single "player model" running through a scene and not every other model in the scene. You'd think the "spot the difference" would make the problem easier. It did not.

I then took Gemini's proposal and passed it to GPT-5.3-Codex to implement. It actually pushed back and said "I want to do some research because I think there's a better code solution to this". Wait a bit. It solved the problem in the most elegant and compatible way possible.

So, that's a long winded way to say that there _is_ a use for a very smart model that only works in the browser or via API tooling, so long as it has a large context and can think for ages.


You need to stick Gemini in a straightjacket; I've been using https://github.com/ClavixDev/Clavix. When using something like that, even something like Gemini 3 Flash becomes usable. If not, it more often than not just loses the plot.


Conversely, I have yet to see agentic coding tools produce anything I’d be willing to ship.


Every time I've tried to use agentic coding tools it's failed so hard I'm convinced the entire concept is a bamboozle to get customers to spend more tokens.


Gemini is a generalist model and works better than all existing models at generalist problems.

Coding has been vastly improved in 3.0 and 3.1, but Google won't give us the full juice as Google usually does.


My guess is that Google has teams working on catching up with Claude Code, and I wouldn't be surprised if they manage to close the gap significantly or even surpass it.

Google has the datasets, the expertise, and the motivation.


I've had the same experience with editing shaders. ChatGPT has absolutely no clue what's going on and it seems like it randomly edits shader code. It's never given me anything remotely usable. Gemini has been able to edit shaders and get me a result that's not perfect, but fairly close to what I want.


have you compared it with Claude Code at all? Is there a similar subscription model for Gemini as Claude? Does it have an agent like Claude Code or ChatGPT Codex? what are you using it for? How does it do with large contexts? (Claude AI Code has a 1 million token context).


I tried Claude Opus but at least for my tasks, Gemini provided better results. Both were way better than ChatGPT. Haven't done any agents yet, waiting on that until they mature a bit more.


- yes, pretty close to opus performance

- yes

- yes (not quite as good as CC/Codex but you can swap the API instead of using gemini-cli)

- same stuff as them

- better than others; Google got long (1M) context right before anyone else and doesn't charge two kidneys, an arm, and a leg like Anthropic


thanks for these answers.


it's nowhere near claude opus

but claude and claude code are different things


My take has been...

Gemini 3.1 (and Gemini 3) are a lot smarter than Claude Opus 4.6

But...

Gemini 3 series are both mediocre at best in agentic coding.

Single shot question(s) about a code problem vs "build this feature autonomously".

Gemini's CLI harness is just not very good, and Gemini's approach to agentic coding leaves a lot to be desired. It doesn't perform the double-checking that Codex does, it's slower than Claude, and it runs off and does things without asking and without clearly explaining why.


(Claude Code now runs claude opus, so they're not so different.)

>it's [Gemini] nowhere near claude opus

Could you be a bit more specific, because your sibling reply says "pretty close to opus performance" so it would help if you gave additional information about how you use it and how you feel the two compare. Thanks.


ChatGPT isn't even meant for coding anymore, nor is Gemini. It's OpenAI Codex vs Claude Code. Gemini doesn't even have an offering.


https://antigravity.google/

On top of every version of Gemini, you also get both Claude models and GPT-OSS 120B. If you're doing webdev, it'll even launch a (self-contained) Chrome to "see" the result of its changes.

I haven't played around Codex, but it blows Claude Code's finicky terminal interface out of the water in my experience.


opencode + Gemini works pretty nicely


And yet I got better results with Gemini than with Claude Opus.

