Hacker News | georgemcbay's comments

From what I understand they have, or will have, two data centers: the original one ("Colossus 1", the one that is already poisoning Memphis, TN) is the one Anthropic will be using, but they are also building a new one, Colossus 2, for Grok's own use.

All that said, I wasn't aware Grok really had any ground to lose; the only times I ever hear about it are Twitter memes and when it's getting in trouble for calling itself "MechaHitler" or serving sexualized content.


> it's been clear for a long time that the people running AI companies--Altman et al.--are not acting either ethically or sincerely.

One could argue the same is true of many religious leaders so I'm not sure what to do with the news of this meeting other than shrug.

(this is not meant as any kind of defense of the AI company executives, who should continually be called out for their unethical behavior)


> The history of the last 250 years is inventing new professions as old ones are automated away.

Even if this still holds true ("past performance is no guarantee of future results") the part about it that people handwave away without thinking about or addressing is how awful the transitional period can be.

The industrial revolution worked out well for the human labor force in the long term, but there were multiple generations of people who suffered through a horrendous transition (one that was only alleviated by the rise of a strong labor movement that may not be replicable in the age of AI, given how it is likely to shift the leverage of labor vs. capital).

If you want to lean on history as an indication that massive sudden productivity changes will make things better for humanity in the long run, then fine, but then you have to acknowledge that (based on that same history) the transition could still be absolutely chaotic and awful for the lifespan of anyone who is currently alive.


> lawmakers don't typically understand them in depth and get a lot of their explanations from utility lobbyists or the regulatory agency itself, if they even get involved or pay attention.

I think you are being far too charitable here and in most cases it is weaponized ignorance at best.

Why dig into the minutiae of the actual rules when the people donating money to you, who benefit from you not really fixing anything, can just tell you what you should do...?


I was being diplomatic for sure, but the regulators are often also pretty much working for the utility companies, sometimes quite illegally: look at the HB6 scandal in Ohio, where the head of the PUCO, Sam Randazzo, took massive bribes. He never faced those charges in court because he took his own life.

> It's basically proof how well AI works these days. Give it a few months so they can scale and it'll get better. Remember Twitter fail whale? Growth pains that can and will be solved.

GitHub's problems can technically be solved, but that doesn't mean they can be solved in a way where the economics still work out.

If AI use is 10x-ing the amount of infrastructure costs for GitHub but not 10x-ing the amount of money Microsoft brings in from GitHub then there is certainly no guarantee they will bother to solve these issues adequately.

And I'd be shocked if the revenue side of things isn't lagging way behind the extra usage in the post-AI era, both because a lot of the new use is probably on the GitHub free tier, and because even on the paid tier most usage (other than CI/Actions, AFAIK) is covered by a fixed subscription cost per user regardless of how hard you are slamming their servers. It is unclear how much they can raise that price without current enterprise users fleeing.

Twitter had a clearer goal that aligned with the financials... support more people stably, show more ads. Things are less clear with GitHub's business model where the free tier is a loss leader for the paid tier but the expansion in usage is likely to balloon the free tier usage at a far faster rate than the paid tier usage.

Also (and this part is admittedly far more speculative) if AI labs are to be believed this is still early days for AI usage and we'll still see massive usage growth over the next few years. If GitHub is already having existential trouble at the beginning of the curve, what hope do they have to scale up with their current business model if AI usage actually does ramp up exponentially?


> And I'd be shocked if the revenue side of things isn't lagging way behind the extra usage in the post-AI era, both because a lot of the new use is probably on the GitHub free tier, and because even on the paid tier most usage (other than CI/Actions, AFAIK) is covered by a fixed subscription cost per user regardless of how hard you are slamming their servers. It is unclear how much they can raise that price without current enterprise users fleeing.

I'd guess most of the costs incurred to GitHub outside of Actions as part of the enterprise flat-rate tier are a fraction of what enterprises are paying for AI in order to incur those costs in the first place.

If a company has to pay $5 extra to GitHub for every $100 of extra AI spend due to that AI use creating disproportionate load, I have a hard time imagining that GitHub will be the thing they flee from.

As far as the free tier goes, it seems like there should be a path to making prohibitively-cost-incurring usage models high-friction. (e.g. limit the free Actions minutes that you get to a certain number per month.) As long as the limits are roughly proportional to the actual costs incurred, there's not too much risk of people fleeing to a competing service, because the only way a competing service would be able to undercut the costs is by taking steep losses themselves, which isn't much of a business model in order to attract people's code repositories.
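One way to picture "limits roughly proportional to the actual costs incurred" is a quota derived from a dollar budget. This is a hypothetical sketch, not GitHub's actual billing logic, and the cost and budget figures are made-up illustrative numbers:

```python
# Hypothetical sketch (not GitHub's real billing): a free-tier quota
# whose monthly allowance is derived from the real cost a unit of
# usage incurs, so heavy free usage hits friction proportionally.
# Integer "mills" (tenths of a cent) avoid float rounding issues.

COST_PER_MINUTE_MILLS = 8        # assumed infra cost per CI minute (0.8 cents)
FREE_MONTHLY_BUDGET_MILLS = 4000  # assumed monthly subsidy for free accounts ($4)

def remaining_free_minutes(minutes_used: int) -> int:
    """Minutes left this month before the free tier cuts off."""
    allowance = FREE_MONTHLY_BUDGET_MILLS // COST_PER_MINUTE_MILLS  # 500 minutes
    return max(0, allowance - minutes_used)
```

If the assumed per-minute cost rises, the allowance shrinks automatically, which is the "proportional to actual costs" property the comment describes.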


Yeah, the monetization bit is challenging. I'll ask my agent to click some of the ads GitHub serves it ;-)

But getting this infrastructure right is crucial for a future where most of the code is AI-generated. GitHub puts Microsoft in a good position to experiment and learn how to optimize GitHub (Enterprise) for that future.

Nate B. Jones on YouTube (https://youtu.be/FDkvRl1RlT0?si=AEYlUchm_oalMSzf) argues that Atlassian might be an interesting acquisition for Anthropic, as it provides most of the context AI at enterprises needs. Executed well, GitHub Enterprise can offer Microsoft the same value: the context AI will need in the future.


> But getting this infrastructure right is crucial for a future where most of the code is AI generated.

That's not the problem. The revenue model they have is based on a certain amount of usage from the people who do not pay (you, for example), and a certain amount of usage from the people who do pay (enterprises).

If you 100x your usage, then they need 100x the infra, which means they need 100x the revenue.

At that sort of usage enterprises would rather self-host, and github would be left with only the free users, who are almost all like you now - hammering their servers but not paying for it.

If you self-host, for $5/month you can have your own VPS, but that doesn't really solve the problem as much as you'd think: those are all shared vCPUs, so you can't hammer them all the time either, because then the provider has to increase their infra as well so that fewer accounts share a single CPU.

Either way, if you want to generate code with AI at the speed that an agent can, you'll have to pay for it one way or another.


Also, one thing the numbers they published show is that the bits that are growing 10x YoY (and which they expect to get “worse”) are all the things you get “unlimited” mileage out of (even if you're a paying customer): repos, commits, PRs.

Things that have usage-based billing (like Actions minutes) grow closer to 2x YoY.

When there's a dollar amount attached, people don't 10x, because it's not worth it. They splurge when it's cheap, and unlimited.


Well either Microsoft finds a way, or Anthropic will. I'm sure they'd love to host all these projects with all the source and context. Maybe they should buy GitLab or Atlassian.

> Well either Microsoft finds a way, or Anthropic will.

Just what sort of nonsense is this? Neither of them are going to operate at a loss.

Why are you so convinced that they'd be happy to continue spending money on you and getting none in return?


> But getting this infrastructure right is crucial for a future where most of the code is AI generated.

If that is the future, then source code hosting will be the least of our worries. The entire industry will collapse because the software will stop working.


> Got an idea that you'd need assembly language for - now you can do it instead of.....

Nobody actually needs a web server built in assembly language, it serves no practical purpose. And I say that as someone who learned to program 6502 assembly language in 1983 and has sporadically used assembly of various architectures since.

The absurdity of building it would have been the curiosity draw pre-LLMs, but when its existence is just a series of prompts away, it really loses all of its meaning.

But yeah... hooray for AI. Can't wait until we learn to harness it to supercharge the most important and valuable thing we do as a human society in modern times: stuff increasingly intrusive ads in front of everyone at all times.


> Can't wait until we learn to harness it to supercharge the most important and valuable thing we do as a human society in modern times: stuff increasingly intrusive ads in front of everyone at all times.

Wasn’t it used for that before anything else? Google invented transformers and had LLMs internally before ChatGPT got released. Presumably they were using them for ads, because their public demos were insane things like talking to the Moon.


> Wasn’t it used for that before anything else? Google invented transformers and had LLMs internally before ChatGPT got released.

According to friends who worked at Google (no direct knowledge myself, so don't know exactly how true it is), they mostly sat on the tech. Google News had internal prototypes of using them to expand/contract/summarise and/or add details/context to news articles and translate them to different languages, but it was never fully productised.

Then after ChatGPT got popular, sudden panic to start using them in products company-wide.


> The best PR is not being an asshole. I wonder if he's thought about it.

There are a lot of people in the world who lack basic human empathy to such an extent that it is nearly impossible for them to just not be an asshole.

I don't know for sure if this applies to Mark Zuckerberg, but based on all the second-hand anecdotal information I've heard about him, "empathy" as he understands it is a product-branding feature rather than a human emotion.


Hard to do anything about when it's in your genetics. It's a form of neurodivergence just like any other, and to deny it is just furthering the stigma against people with high cognitive empathy and low affective empathy.

Then perhaps people like that shouldn't be in charge of a company like Meta.

I share the author's love of coding and thus don't use AI for my own personal for-fun projects.

When it comes to employment and other people paying you to code, though, not using AI is increasingly a non-starter for most of us.


Related fun-fact:

This real announcement (with some edited visuals to make it look like he was delivering it inside the White House press room) was used in the movie Contact to make it seem connected to the more extraordinary discovery of alien intelligence portrayed in that film.

https://www.youtube.com/watch?v=obrBARvWtiA

The White House objected to this use at the time but never took any sort of legal action to have it removed, AFAIK.


> If you have a UUID collide, your chance of winning the lottery is exactly the same as it was before the UUID collision.

True, but only if you were already going to play the lottery anyway.

If you don't normally play the lottery and the UUID collision combined with superstition is what enticed you to play, then the UUID collision will have raised your chances of winning the lottery from 0% to slightly higher than 0%.
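For scale, here is a back-of-envelope check (a standard birthday-bound estimate, not something from the thread) of just how unlikely a random UUID collision is in the first place:

```python
# Birthday-bound approximation for random UUIDv4 collisions.
# A version-4 UUID carries 122 random bits, so among n of them the
# probability of at least one collision is roughly n^2 / 2^123.

def uuid4_collision_probability(n: int) -> float:
    """Approximate chance of >= 1 collision among n random v4 UUIDs."""
    return n * n / 2.0 ** 123

# Even a billion UUIDs collide with probability on the order of 1e-19,
# many orders of magnitude below typical lottery-jackpot odds (~1e-8).
prob = uuid4_collision_probability(10**9)
```

So anyone superstitious enough to buy a ticket over a UUID collision witnessed something far rarer than the jackpot itself.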


Colloquially, when I say "your chance of winning the lottery" what I mean is "your chance of winning the lottery given that you enter." And I think you probably know this. But I've updated my post to be clear.
