
This is where I believe the author goes astray: "Despite TSMC claiming a 1.8x shrink for N5, Apple only achieves a 1.49x shrink."

TSMC does NOT claim a 1.8x shrink for N5 across the whole chip; they claim that for LOGIC, while SRAM and analog only shrink by about 1.35x and 1.2x. Combine those for a "typical SoC", which they also discuss (and one presumes that Apple makes typical SoCs), and the "theoretical" shrink works out to about 1.57x.

The challenge is that at one time the node size was that of a "gate" (which could be 4 transistors), but in the marketing race for smaller numbers fabs started emphasizing "feature" size instead.

Because of this change, the "theoretical shrink" is a function of what kinds of circuits you're putting down. Pure logic? You get one number. Two gates connected together for a flip-flop? You get another number. A voltage regulator or an ADC filter? Yet another number.

So the analysis the author claims to do can ONLY be done if you know what percentage of the part you are making on the new process is which kind of circuit. I am under the impression that they missed that.
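
To make that concrete, here is a rough back-of-the-envelope sketch of the area-weighted math. The 60/30/10 logic/SRAM/analog mix is purely an assumption for illustration, not a figure from TSMC or the article; the per-circuit shrink factors are the N5 numbers quoted above.

    # hypothetical area mix of the chip on the OLD node (assumed, for illustration)
    mix    = {"logic": 0.60, "sram": 0.30, "analog": 0.10}
    # per-circuit shrink factors TSMC quotes for N5
    shrink = {"logic": 1.8,  "sram": 1.35, "analog": 1.2}

    # each block's area divides by its own shrink factor, so the effective
    # whole-chip shrink is the reciprocal of the area-weighted sum
    new_area = sum(mix[k] / shrink[k] for k in mix)
    print(f"effective shrink: {1 / new_area:.2f}x")   # -> about 1.57x

Shift that mix toward SRAM or analog and the effective number drops quickly, which is the whole point: the headline figure only holds for a chip made of pure logic.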



Wait, when node sizes were based on a gate, did that mean a logic gate? I always thought it was the transistor gate width.


My experience with the marketing speak around process nodes:

1) In the way-back times (think Intel 8080A), the complexity of the chips was advertised in "logic gates". More gates => more impressive chip.

2) But logic gates weren't equivalent from one process to another, so the metric switched from "logic gates" to "transistors". More transistors => more impressive chip. (This is when I left Intel for Sun Microsystems.)

3) But not all transistors are created equal, and there were things (like copper metal layers) that made chips better even if it meant you couldn't fit as many transistors, so "line size" became what was important. Smaller line size => more impressive chip.

4) But then transistors were redesigned so they could be packed more densely, and the limiting factor became how much silicon you needed for the gate (NMOS/CMOS). Since that wasn't a whole transistor, just a "feature" of one, "feature size" became the new marketing term. Feature size was measured in nanometers, and smaller nanometers implied more features per unit area.

It has all evolved over time to the point where it is harder and harder for any sort of comparative analysis between processes to make sense at all these days.

These days, much like the TSMC presentation excerpted in the original article, semiconductor fabs rely on comparative measures like "the same stuff would be size <x> on this process vs size <y> on the previous process." All the really interesting parameters to me are things like how the process affects leakage (thus idle power) and voltage thresholds (thus idle power and maximum frequencies).
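
For a sense of why threshold voltage matters so much for idle power, here is a minimal sketch using the textbook first-order relations (nothing here comes from TSMC or the article; the constants are generic assumptions):

    import math

    # dynamic (switching) power: activity * capacitance * Vdd^2 * frequency
    def dynamic_power(alpha, c_farads, vdd_volts, freq_hz):
        return alpha * c_farads * vdd_volts**2 * freq_hz

    # subthreshold leakage grows roughly as exp(-Vth / (n * kT/q)), so lowering
    # Vth by delta_vth multiplies leakage current by this factor
    def leakage_multiplier(delta_vth_volts, n=1.4, thermal_v=0.026):
        return math.exp(delta_vth_volts / (n * thermal_v))

    print(f"{leakage_multiplier(0.10):.0f}x")  # ~16x more leakage for a 100 mV lower threshold

Lowering the threshold buys you frequency, but the leakage (and thus idle power) climbs exponentially, which is the trade-off being alluded to.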

I'd love it if there was some sort of SI unit you could demand which would give you a better comparison metric but I don't think we'll see that. Everybody wants to be "the best" and that is most easily achieved when you can dynamically define the metric for "best."


Thank you for this elucidating comment.




