
AMD proved that Apple skipping 4nm chips isn't a big deal

AMD is pushing into a 4nm process for its chips, temporarily leapfrogging Apple's use of 5nm.

AMD said at CES that its use of a 4nm process helps its new chips beat two-year-old Apple Silicon, but in doing so, it revealed that a jump to 4nm was not a big deal for Apple's future chip releases.

On January 5, AMD used its CES presentation to promote its own line of processors and chips. As expected of such a promotional opportunity, the chip maker wheeled out announcements for its inbound chip releases, and took the time to hype them as better than rival silicon makers' offerings.

This year, much of the attention was on the AMD Ryzen 7040HS, a series of chips intended for use in "ultrathin PC laptops" and other mobile devices. AMD declared them powerful in both CPU and GPU terms, and better-performing than other similar chips.

With Apple Silicon being one of the major mobile chip families on the market, AMD decided to compare its top-end Ryzen 9 7940HS against it.

AMD's CES 2023 presentation boasted about its 4nm prowess.

In on-stage slides, AMD proclaimed that the chip was up to 34% faster in multithreaded performance than the M1 Pro, and that it could run at "up to +20% on AI processing" against Apple's M2.

There's a lot you can read into the presentation, such as AMD's decision to pit its chip against the M1 Pro rather than an M1 Max or M1 Ultra. Cherry-picking benchmarks is nothing new in the industry, but Apple ships chips that are more powerful than the one AMD chose to examine.

However, the main point from the presentation that needs addressing is AMD's claim that this advantage stems from its use of a 4-nanometer production process. By using 4nm, AMD says it can achieve "massive horsepower" from its chip designs.

(Small) size matters

Die shrinks, or moves to a chip production process that works at a smaller scale than previous generations, are a great marketing point for chip makers.

By shrinking the die, chip producers can fit more transistors and more intricate designs onto the same area of silicon. More circuitry in a smaller space equals more performance with minimal space concerns.

An example of Intel's die shrinking progress over a four-year period.

Going smaller provides other benefits as well, specifically relating to power and heat. With smaller components in a circuit, less power may be needed to accomplish a task than with the same circuit built on a physically larger process.

The smaller scale also helps with heat, since the reduced power draw means the chip isn't generating as much of it in the first place, and cooling systems can dissipate what remains at a better rate.

Naturally, going down to smaller scales can introduce obstacles that need to be overcome, but for those who manage it, there are considerable rewards on offer in chip design and performance.

4nm not for Apple?

Cherry-picking notwithstanding, AMD's use of a 4nm process for its chips does give the company a material advantage over Apple's 5nm Mac chips, and over others using 5nm and larger processes.

For Apple's more immediate chip launches, AMD will still most likely be able to crow about its process usage. Given the M2 is made on a 5nm process, it is almost certain that Apple will keep using it for the other chips in the generation, such as the M2 Pro, M2 Max, and M2 Ultra.

Apple is more likely to make these sorts of major changes between generations, not mid-generation. Feasibly, that would mean the M3 generation could use 4nm, or an even smaller process.

Apple could've used 4nm for the initial A16 Bionic prototypes.

Apple was predicted to use TSMC's 4nm process for its A16 Bionic SoC, and indeed Apple markets the A16 as a 4nm chip, though the N4 process it uses is an enhanced member of TSMC's 5nm family rather than a true die shrink. The A16 was supposed to offer a major generational leap, but early prototypes reportedly drew more power than predicted.

Battery life would have suffered, as would the iPhone's temperature, with the handset potentially running hotter than deemed acceptable.

We don't know for sure if the prototype version was using a 4nm process, but whatever it was, Apple didn't think it was good enough in that state.

Leapfrogging to 3nm

As the march of progress continues, so does the race to make chips even smaller and more power-efficient. Naturally, that means Apple chip partner TSMC has been working on exactly that.

Reports going back to mid-2022 have Apple signing up to use TSMC's 3-nanometer process. The natural candidates to use it are the M3 generation of Mac chips, despite occasional insistence that a mid-generation production process change could happen.

In September, anonymous sources claimed Apple would be the first to use the technology. By December, TSMC had apparently started mass production of 3-nanometer chips for the Mac and iPhone.

A move to 3nm reportedly gives Apple a 15% boost in speed at the same power level as a 5nm counterpart, or a 35% reduction in power consumption at comparable performance.

Remember, that's all from changing from 5nm to 3nm. It doesn't take into account any other changes Apple could bring in to increase clock speeds, core counts, or whatever other technical voodoo it comes up with in the future.

That 15% speed boost could easily become 20%, or even 30%, if Apple plays its cards right.

With such a performance improvement on offer, it makes more sense for Apple to concentrate on making a good 3nm chip than to increase its workload by shifting to 4nm and then to 3nm. Skipping the intermediate step reduces R&D effort, as well as time spent on a short-lived 4nm design.

Given the supposed woes of the A16's prototype, it could well be that Apple saw there was a problem and decided that skipping was the best course of action overall.

That it could give Apple another considerable performance lead down the road is also a nice benefit to have.

AMD's posturing shouldn't worry Apple

Going back to AMD's decision to compare against the M1 Pro, the whole thing seems a bit unusual, especially when you're going up against the 800-pound gorilla that is Apple.

Poking the beast by claiming you're using 4nm instead of 5nm, then benchmarking against a chip released over a year ago, isn't going to win plaudits when there are more powerful comparisons to be made.

Sure, M1 Pro is powerful, but AMD could've easily tried to take on the M1 Max or M1 Ultra if it were being serious.

When third-party benchmarks of AMD's chip finally surface, its performance may not look so massively improved next to the rest of Apple's arsenal, especially with the prospect of 3nm chips on the horizon.

One argument is that AMD is only going on stage and smack-talking Apple's M1 Pro because it's the most popular or approachable option for consumers looking to buy a MacBook Pro. Except that Apple's not really AMD's target.

By showing it can be in the same ballpark as Apple, AMD has laid down the challenge to Intel, threatening to take PC customers away with promises of performance and battery life. After all, if Apple can worry Intel, and AMD is vaguely in the same area, it stands to reason that AMD should be able to worry Intel too.

If AMD can eat Intel's lunch by posturing against Apple, then that's a win in AMD's book against its long-standing rival.

AMD is a good chip designer. It's kept Intel on its toes for years.

We've said it before. Intel, Apple, AMD: pick one, and competition from the other two has made it stronger.

Operationally, Apple won't care about AMD's claims, especially with a loyal customer base and the prospect of 3nm chips on the horizon. It can easily hold back and speak softly for the moment because it knows a metaphorical big stick is on the way.

By 2025, that could turn into 2nm chip production. Less stick, more tree trunk.



9 Comments

lkrupp 19 Years · 10521 comments

How often does it have to be repeated that ‘cherrypicking’ and ‘crowing’ is for spec monkeys and benchmark junkies that populate tech blogs. But interesting that AMD chose Apple Silicon instead of Intel to attempt to smear, a la Samsung. At least we know Apple Silicon has everyone’s attention now, don’t we.

tenthousandthings 17 Years · 1060 comments

I realize the forums are a tiny aspect of Apple Insider, and it's a giant pain in the ass to moderate them, but this article/editorial contains two factual errors that have already been addressed repeatedly in the comments on earlier articles that made the same mistake—if the author had read those comments, he could have made a stronger argument here.

First and foremost, the A16 is 4nm. Apple stated that outright. It is the N4 process. The linked The Information article AI covered on December 23 may have some elements of truth in it. After all, chipmaking is hard, or everyone would be doing it. Especially high-end graphics. The quote from Ian Cutress therein says all that needs to be said. But it's Apple Insider who makes the leap there to say that the A16 stayed on 5nm and didn't go to 4nm because of these challenges. That is just wrong, wrong, wrong. The A16 did go to 4nm (N4). Apple touted this in its presentation. I find this insistence otherwise, in multiple articles by two different members of the AI staff (Wesley twice and now Malcolm), to be just inexplicable.

The second error is more of a detail, but it's an important one if you're going to be editorializing about Apple and chipmaking. TSMC's so-called "4nm" is the third generation of its 5nm (N5) FinFET platform. It's not a "die shrink," to reference Malcolm's 2019 article where he laid out some of the factors driving Apple's A-series chip production. It uses the same design library. N4 is a second "Tock" not a "Tick," to use the same terms Malcolm used in 2019. That's why Apple could easily revert to the A15 graphics designs for A16 (while staying with N4 for A16, instead of N5P used for the A15), as rumored/leaked in the aforementioned The Information article.

TSMC provides a definitive English-language source of information about how these “process technologies” relate to one another: https://www.tsmc.com/english/dedicatedFoundry/technology/logic

The N5 FinFET platform is comprised of N5, N5P, N4, N4P, and N4X processes. The next platform is N3, and it is more "flexible." The first two generations of it are N3 and N3E. See: https://n3.tsmc.com/english/dedicatedFoundry/technology/N3.htm

N2 (due in 2025) was recently announced, but it's not clear how it is related to N3—I'll guess that means the relationship between N3 and N2 is similar to the relationship between N5 and N4, that is, N2 will use the N3 FinFlex design library.

netling 21 Years · 77 comments

@tenthousandthings 

Great reply, assuming you're right, and I have referenced your sources. Have you considered writing for AI, as they are hiring writers?




baka-dubbs 13 Years · 179 comments

lkrupp said:
How often does it have to be repeated that ‘cherrypicking' and ‘crowing’ is for spec monkeys and benchmark junkies that populate tech blogs. But interesting that AMD chose Apple Silicon instead of Intel to attempt to smear, a la Samsung. At least we know Apple Silicon has everyone’s attention now, don’t we.

They compared themselves to both Intel and Apple, specifically the 1280P from Intel, plus the M1 Pro in Cinebench and the M2 for AI engine performance. And yeah, I think it's a sign of respect for Apple Silicon that they are being compared; Apple has a big head start in leveraging AI in desktop hardware.

Every company cherry-picks at their presentations to highlight their performance advantages. Apple, Intel, Qualcomm, and AMD all do this. The article mentions (strangely) that they don't compare themselves to the M1 Max or the M1 Ultra. This is because the 7040 is not in the same pricing category (and the Ultra is a desktop chip...). The 7040 is not their high-end chip, it's a thin-and-light chip. They have a separate Dragon Range chip for high-performance laptops that is a 16-core Zen 4 chip running at a much higher TDP. And obviously, the M1 Ultra should more rightfully be compared to the desktop 7950X or a Threadripper chip, especially when factoring in costs.