> I assume anything Apple is cooking is using Nvidia in the server room already
I wouldn't be so quick to assume this. Apple already ships ML-capable chips in consumer products, and they've designed and built revolutionary CPUs in recent years. I'm of course not sure about it, but I have a feeling they're gonna introduce something that kicks things up a notch on the ML side sooner or later; the foundation for doing something like that is already in place.
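To be concrete about the "foundation" part: the developer tooling for targeting those chips already exists today. Here's a rough sketch (coremltools and the toy PyTorch model are just my illustrative picks, not anything Apple has announced) of converting a model so Core ML can schedule it onto the Neural Engine in devices Apple already ships:

```python
# Sketch: convert a toy PyTorch model to Core ML so it can run on the
# CPU/GPU/Neural Engine of shipping Apple devices. Model is illustrative.
import torch
import coremltools as ct

class TinyClassifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(128, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, 10),
        )

    def forward(self, x):
        return self.net(x)

# Core ML conversion wants a traced (or scripted) model plus input shapes.
example = torch.randn(1, 128)
traced = torch.jit.trace(TinyClassifier().eval(), example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=example.shape)],
    convert_to="mlprogram",
    compute_units=ct.ComputeUnit.ALL,  # let Core ML pick CPU, GPU, or ANE
)
mlmodel.save("TinyClassifier.mlpackage")
```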
> Apple already ships ML-capable chips in consumer products, and they've designed and built revolutionary CPUs in recent years.
Has Nvidia not done that too? They shipped ML-capable consumer hardware before Apple, and have revolutionary SoCs of their own. On top of that, they have a working relationship with the server/datacenter market (something Apple burned) and a team of researchers that basically wrote the rulebook on modern text and image generation. Then factor in CUDA's ubiquity - it runs in cars, on your desktop, in your server, in your Nintendo Switch - and Nvidia is terrifying right now.
If the rest of your argument is a feeling that Apple will turn the tables, I'm not sure I can entertain that polemic. Apple straight-up doesn't compete in the same market segment as Nvidia anymore. They cannot release something that seriously threatens Nvidia's bottom line.
> They cannot release something that seriously threatens Nvidia's bottom line.
If they manage to move a significant part of ML compute from datacenter to on-device, and if others follow, that might hurt Nvidia's bottom line. Big if at this point, but not unthinkable.
There are a lot of problems here, though. The first is that inference isn't hard to do - iPhones could run LLMs before LLaMA came out, even before that workload was hardware-accelerated. Anyone can run inference on a model if they have enough memory, and I think Nvidia is banking on that.
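To make that concrete, here's a minimal CPU-only sketch using Hugging Face transformers (the library and model name are my own illustrative choices, nothing Apple- or Nvidia-specific). The only hard requirement is enough memory to hold the weights:

```python
# Sketch: plain CPU inference of a small causal LM. No GPU, no CUDA -
# if the weights fit in memory, generation works (just slowly).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative; any small causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("On-device inference is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```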
Then there's the issue of model size. You can fit some pruned models on an iPhone, but it's safe to say the majority of research and development is going to happen on easily provisionable hardware running something standard like Linux or FreeBSD.
And all this is ignoring the little things, too: training will still happen server-side, and the CDN capacity required to distribute these models to a hundred million iPhone users is not priced attractively. I stand by what I said - Apple forced themselves into a different lane, and now Nvidia is taking advantage of it. Unless Apple intends to reverse its stance on FOSS and patch up its burned bridges with the community, it will get booted out of the datacenter like it did with Xserve.
I'm not against a decent Nvidia competitor (AMD is amazing) but the game is on lock right now. It would take a fundamental shift in computing to unseat them, and AI is the shift Nvidia's prepared for.
Why wouldn't they build a relatively small cluster for training tasks using Nvidia hardware? It's simply the industry standard: every researcher is familiar with it, and writing a custom back-end for PyTorch that scales to hundreds of nodes is no small task (see the sketch below for what the standard path looks like).
I doubt Apple cares about spending a few hundred million dollars on A100s as long as they make sure the resulting models run on billions of Apple silicon chips.
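For context on what "industry standard" means in practice: the default multi-GPU training path is PyTorch's DistributedDataParallel over NCCL, Nvidia's collective-communication library, so it assumes Nvidia hardware out of the box. A minimal sketch (the model and training loop are placeholders I made up):

```python
# Sketch: the off-the-shelf multi-GPU training loop researchers reach for.
# The NCCL backend is Nvidia's collective-communication library, which is a
# big part of why "just buy Nvidia" is the default - no custom backend needed.
# Launch with e.g.: torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")             # Nvidia-specific collectives
    local_rank = int(os.environ.get("LOCAL_RANK", 0))   # set by torchrun
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(512, 512).cuda())       # placeholder model
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(100):                                 # placeholder training loop
        x = torch.randn(32, 512, device="cuda")
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()                                  # grads all-reduced via NCCL
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```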
Apple has no present experience in building big servers (they had experience at one point, but all those people surely moved on)
Mac Minis don't count
Sure, they are super rich and could just buy their way into the space... but so far they are really far behind in all things AI, with Siri being a punchline at this point
if anything, Apple proves that money alone isn't enough
I'm no Apple fanboy at all (closer to the opposite), so it pains me a bit to say it, but they have a proven track record of entering an industry with zero experience and then releasing something really good.
The iPhone was their first phone, and it really kicked the smartphone race into high gear. Same for the Apple Silicon processors. And those are just two relatively recent examples.
To be fair, Apple released their iPhone after building iPods for 6 years. So, it's not like they had zero experience with handheld devices at the time.
Also, while Apple did create their first chip (at least of their current families) in 2007, they did acquire 150 or so engineers when they bought PA Semi in 2008. So, that gave them a leg up compared to building a chip team completely from scratch.
Right, and the reason they have a habit of releasing industry-redefining first products in a category is that they do the hard work to make it happen. It’s not by accident.
The iPhone had a lot of prehistory at Apple, from the Newton to the iPod. Apple Silicon also has a long history, starting with its humble beginnings as the Apple A4 in 2010, which relied on Samsung's Hummingbird for the CPU and PowerVR for the GPU (plus they acquired PA Semi in 2008).
So both are not very good examples, because they built up experience over long periods.
> So both are not very good examples, because they built up experience over long periods.
They are examples of something Apple could do similarly for the Apple Neural Engine, just at a bigger scale, in the future. They have experience deploying it at a smaller scale and in different versions; they would just have to apply it at a bigger scale to be able to compete with NVIDIA.
Apple's money can buy the relationships to get its ducks in a row, but true genius is willing to work for $1 a year and be rewarded with the upside when it comes - see Steve Jobs.