You're correct, Gemini's chat limits are a joke at its cheapest paid tier compared to both Claude and GPT. That's especially crazy when you consider that Gemini 3 Pro costs less than half as much as Opus 4.6 on the API. It's hard to run into pure chat limits on Claude even if you only use Opus on the cheapest tier, whereas with Gemini they're easy to hit.
Not sure about coding usage; given how Google handles these things, I could see that quota being separate.
I'm not sure what A/B test you're part of, but on Claude Code Pro I hit every single one of my quotas without exception. If you analyze or process images it's even worse: I hit rate limits first, and if I use separate sessions I hit my quotas too. I burn through so many tokens that Jensen should hire me.
I pray the benchmark figures are true so I can stop paying Anthropic, after they screwed me over this quarter by dumbing down their models, making usage quotas ridiculously small, and demanding KYC paperwork.
Absolutely. The thing is, I'd actually rather have a worse model than Anthropic's, so long as it's consistent. A model that does well on 80% of tasks is much better than one that manages 90% some days and 60% on others.
When you have a consistent model, you can incorporate fixes and prompts into your workflow to make it behave better. But always having to guess whether Anthropic has quantised the model today wastes so much time and effort.
This should be so easy to prove if it were true. Yet there's no evidence of it, just vibes.
Still, your other two points are completely valid. The opaqueness of usage quotas is a scam: within a single month, for a single model, limits can differ by more than 2x. And that has indeed been proven.
Criticism of Israel and its agents will be outlawed by all means necessary and anybody who questions it will be black bagged. That is the end goal. This is total war.
You failed to establish the link between giving up, getting bad grades, and hanging out with the funny kids, or how any of that is even remotely caused by stereotyping.