I really appreciate this analysis and the straightforward top-line number of 134e6/mm^2. The usual "node" figure is utterly meaningless; an electrical engineer couldn't care less about "feature" size (whatever that means). What is the count of discrete components in a given area? There are 40 billion 5nm (the supposed "node" of these chips) squares in a square millimeter. That's roughly 300x, well over two orders of magnitude, denser than 134 million.
The meaningful achievement is how many discrete electrical components are composed into a given area. Not some arbitrary dimension of some cherry picked subset of these components.
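To put numbers on that, here's a rough back-of-the-envelope in Python (treating "5 nm" as a literal tile size, which of course it isn't in any real layout):

```python
# Back-of-the-envelope: how many 5 nm x 5 nm squares tile one square millimeter,
# compared to the ~134 million transistors per mm^2 reported in the article.
nm_per_mm = 1_000_000
squares_per_mm2 = nm_per_mm**2 / 5**2     # 1e12 nm^2 / 25 nm^2 = 4e10 (40 billion)
transistors_per_mm2 = 134e6

print(f"{squares_per_mm2:.1e} 5nm squares per mm^2")
print(f"{squares_per_mm2 / transistors_per_mm2:.0f}x the reported transistor density")
# -> 4.0e+10 squares, ~299x the actual density
```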
> The meaningful achievement is how many discrete electrical components are composed into a given area. Not some arbitrary dimension of some cherry picked subset of these components.
I disagree. The meaningful achievement is how power-efficient, fast, and cheap you can make a given chip. (Secondarily, how small and how durable wrt cosmic rays; but for most purposes these are not super important.)
If that follows as a result of many discrete electrical components being packed into a small area, great; but the latter isn't intrinsically interesting.
The issue with power efficiency, speed, and price is that they're even harder to measure than transistor density. Furthermore, I think metrics that track the technological progress of silicon manufacturing are a useful tool for enriching comparisons of chips. Yes, numbers like cache size (or transistor density) aren't what people ultimately need or want (they want speed), but such numbers still help them compare chips. Improvements in the underlying process alone can lead to improvements in power efficiency, speed, and price.
> Efficiency, performance and cost are strongly related to density. Price less so; that's a function of supply and demand.
They are related, of course, but the important stuff can be measured more directly by looking at how well programs work on a given computer. I think it's just a little odd to praise one cherry-picked, arbitrary metric for being less game-able than another cherry-picked, arbitrary metric. Especially when we have metrics to hand that are a lot closer to what real people care about in a computer. I certainly have never shopped for a CPU based on the number of transistors, but I have made purchasing decisions based on things like cinebench and passmark scores, which try to get at what ultimately matters to me (i.e. how many FPS a CPU will drive in the games I play).
That magazines, sites and fans started using density for bragging rights, just as they used to do with MHz, certainly isn’t fully the fault of manufacturers.
Manufacturers use “x nm”, yield (a metric correlated with price, but also completely uninteresting for consumers), etc. because they tell chip designers what they need to know.
They avoid benchmark scores because those bring the CPU design into the picture as a variable.
It's a story from a site that focuses on fabrication technology and the semiconductor marketplace. The topic is device density. This is of interest, even if it doesn't interest you.
These things aren't _really_ two-dimensional. They're not really three-dimensional either, but they are objects built out of layers of two-dimensional things. When you measure the number of transistors per unit area, you will inevitably see something more dense than the "number of 5nm squares in a square millimeter".
It's the silicon equivalent of measuring one's BMI.
Video games used to call their pseudo-3D display 2.5D, or if they were feeling fancy, isometric.
There is a little freedom in the Z axis, but not very much. Still, if the speed of light matters to performance, then a chip design that builds up in the Z axis decreases the Euclidean distance between any two gates, which should (or at least could) matter to performance, right?
It barely matters. Gate delays and thermal limits outweigh distance by a huge factor. If you need to cover a longer distance, you can wait one cycle and span a relatively huge length.
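For a sense of scale, a quick sketch assuming vacuum speed of light (real on-chip wires propagate signals much more slowly than this, so the true figures are smaller, but the point stands):

```python
# Rough upper bound: how far a signal could travel in one clock cycle,
# assuming vacuum speed of light (on-chip propagation is considerably slower).
C_M_PER_S = 299_792_458

for clock_ghz in (3.0, 5.0):
    period_s = 1.0 / (clock_ghz * 1e9)
    distance_mm = C_M_PER_S * period_s * 1000.0
    print(f"{clock_ghz:.0f} GHz: ~{distance_mm:.0f} mm per cycle")
# -> ~100 mm at 3 GHz and ~60 mm at 5 GHz, both huge next to a die
#    that is on the order of 10 mm across.
```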
A cycle or two ago I asked this same question. Some people seemed to think that such a number would be gamed, but if that's a valid barrier to entry then we will never accomplish anything again, really.
As long as the measure is in mm^2, not mm^3, I think (hope?) that would avoid any perverse incentives against breakthroughs that allow you to add more layers to a chip and still maintain yields.
> The meaningful achievement is how many discrete electrical components are composed into a given area.
Is the average over an area the meaningful achievement, or is it the smallest individual gate length? Neither is super useful without additional context.
Let's say I made a tiny transistor, but to avoid leakage, interference, or melting the transistor, I had to surround it with a bunch of empty silicon. There are lots and lots of cases where this could not be called a success.
I hesitate to say there are none, because I'm sure some task that is highly sensitive to latency might potentially benefit, but if you build the whole chip that way, it would actually be a regression.
Sure, but despite the extra space around any individual transistor, you're still shoving 49% more transistors into the same overall amount of space. Then everything benefits and no one cares about the node name.
> you're still shoving 49% more transistors into the same overall amount of space
What I fear is that we've hit the point where this is no longer a safe assumption. That we'll have people chasing feature-size numbers that don't actually result in a proportional increase in chip density.
Transistors per square millimeter is closer to a measure we actually care about (speed of light and clock speed) than a bench number that doesn't measure anything except perhaps instructions per watt.
What from the article makes you think that's no longer a safe assumption? The 49% gain is exactly what happened going to 5nm.
I don't think anyone actually cares about transistors/mm^2 at all, I think what we care about is perf or perf/watt for our specific workloads. I don't care if the chip is built with vacuum tubes if it is fast, efficient (per dollar and watt), and physically fits in the device I want it in.
First, because TSMC was aiming for 1.8x and Apple only saw 1.49, and I expect that not to improve going forward.
Second, one of us is reading that number wrong. They said 1.49x, not 49%.
In any other conversation, 2x is reducing feature size by 50%. 3x is 1/3 of the original, or reduced by 2/3. That means 1.8 is 45% smaller, and 1.49 is 32.9%.
Similarly, if you cut the pitch of a circuit in half you should see 4x as many transistors. If you could keep shrinking the space between transistors while shrinking the transistor, then going from "7" to "5" node should have been a 1.96x factor for areal density, not the 1.8x they claim, or the 1.49x Apple achieved.
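Spelling out that arithmetic, using the 1.8x and 1.49x figures quoted above and taking the node names at face value purely for the sake of the calculation:

```python
# Sketch of the scaling arithmetic above (node names taken literally, which,
# as noted, they no longer deserve).
ideal_areal_gain = (7 / 5) ** 2   # 1.96x density if everything really scaled 7 -> 5
claimed_gain = 1.8                # TSMC's stated density gain for the new node
achieved_gain = 1.49              # density gain reported for Apple's chip

# Reading each x-factor as a linear reduction, as in the comment above:
for gain in (2.0, claimed_gain, achieved_gain):
    print(f"{gain:.2f}x -> {1 - 1 / gain:.1%} smaller")
# -> 50.0%, 44.4%, 32.9%

print(f"ideal 7->5 areal gain: {ideal_areal_gain:.2f}x "
      f"(vs. {claimed_gain}x claimed, {achieved_gain}x achieved)")
```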
I'm not saying they screwed up. As soon as nodes stopped measuring literal transistor size, it wouldn't take long for the names to become aspirational instead of descriptive. It's something they can name the project after early on, when the set of potential tech has been selected and some estimates have been made. For building a team it's fine. But I'm not on that team; I'm a customer (and current or former shareholder).
I think the consumer cares about the transistors per mm^2 (after the voltage and the instructions per second), not the node number. Especially when each foundry uses the same number to describe different densities. I shouldn't have to keep remembering that TSMC-7 = INTC-10. Numbers that are actual numbers, please.
"As of 2019, the highest transistor count in any IC chip is Samsung's 1 TB eUFS (3D-stacked) V-NAND flash memory chip, with 2 trillion floating-gate MOSFETs (4 bits per transistor)." [1]
The funny thing is, this isn't really Apple's achievement. It's TSMC's achievement. It's also Intel's failure, in that they failed to get a similar process working on schedule.
I think you are using black and white thinking: both deserve praise.
Delivering a CPU on a new node process is not a trivial achievement, along with the significant design changes required to achieve the potential gains of the node. Neither is Apple’s business ability to lock in x months of exclusive use of that node. Apple have done an amazing job here, hand in hand with TSMC.
And the problem there is? All of those companies are long-term TSMC customers, and I have no doubt they have solid working relationships with TSMC. They would have had a similar opportunity to book that fab capacity too.
The problem is the assertion that "It's Apple's engineering and design". Engineering & design don't buy you the entire world's supply of the best in class fab capacity for some months: money does.
It seems to me Apple's marketing and product strategy has more to do with bringing in that money than actual engineering and design, contradicting the original assertion.
Apple's engineering and design success is a huge part of why they have the resources to likely fund some of the 5 nm fab deployment costs and capacity. Money comes from success which came from great engineering and design coupled with great marketing.
Marketing might get you there for a short time, but maintaining that growth and long term high customer satisfaction doesn't come without great engineering and design.
Apple wouldn't have shelled out the money to have first crack at 5nm if their design team couldn't commit to having a CPU design done in time. It's a huge R&D achievement to have such a complex design ready to go on a new node as the node launches. It speaks to extremely tight timelines and extremely close collaboration between TSMC and Apple.
I'm not saying Apple's engineering isn't good, just that it doesn't seem to be that exceptional. If the other companies couldn't have a design ready for 5nm, then why would Apple pay for the exclusivity rights?
It is exceptional according to this reasoning.
They pay because there isn't enough 5nm fab capacity to go around. Business doesn't go against engineering, that's all.
At the risk of sounding cheeky, Apple's engineering and design are the sole source of the money involved.
Apple's involvement with TSMC also provides TSMC with an opportunity to learn from Apple. Nobody books an entire pure-play foundry without taking the time to figure out what they want to do with that manufacturing capacity in the first place.
Yeah, I'm tempted to say something like: it takes a while in your career before you figure out the enormous difference between companies who do engineering and design and companies who do design and engineering.
I see it as: Apple paid to get to the front of the line and get 5nm built out faster, so they'll have the capacity for now. But other companies will benefit in the coming years as they can use the 5nm process sooner, with more bugs ironed out and at a lower cost. Like what's happened with the older 32nm fabs.
Comments like these and the ensuing arguments are funny. Especially considering the supposed audience.
What makes you think coordinating something like this is an easy task for anyone? As if you could ever distil this down to the efforts of a single organisation.