
Apple Silicon might get used for AI chips in server farms


A new rumor claims that Apple will use TSMC's 3nm technology for an AI server processor that it is designing alongside its iPhone and Mac chips.

It's already known that TSMC has been developing a 3-nanometer process, and it's believed that Apple has bought out the entire production capacity for it. The presumption was that Apple would use the process for the latest versions of its A-series iPhone chips and M-series Mac ones.

Now, however, a leaker named "Phone Chip Expert" on the Chinese social media site Weibo claims that Apple is designing a bespoke artificial intelligence server processor. As first spotted by MacRumors, the leaker says it will go into mass production in the second half of 2025.

If correct, this suggests that Apple is not focusing exclusively on AI that runs locally on device. It's been presumed that Apple prefers on-device AI for privacy reasons, but so far AI has required more processing capacity and storage than is always available on device.

To that end, Apple has recently acquired firms in both Canada and France that work on shrinking the resource requirements of AI models. Apple has also published a research paper specifically about how it can tackle "the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters on flash memory but bringing them on demand to DRAM."
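The idea described in that paper can be illustrated with a toy sketch: keep model parameters in a file standing in for flash storage, then memory-map it so each layer's weights are only paged into DRAM when that layer actually runs. Everything here (the shapes, the layer math, the file layout, and all function names) is invented for illustration and is not Apple's implementation.

```python
import os
import tempfile

import numpy as np

DIM = 4  # toy layer width


def save_layer_weights(path, n_layers):
    # Write per-layer weight matrices contiguously to "flash" (a file).
    weights = np.random.rand(n_layers, DIM, DIM).astype(np.float32)
    weights.tofile(path)
    return weights


def run_layers_from_flash(path, n_layers, x):
    # Memory-map the file: nothing is read into DRAM until it is sliced.
    mm = np.memmap(path, dtype=np.float32, mode="r",
                   shape=(n_layers, DIM, DIM))
    for i in range(n_layers):
        w = np.asarray(mm[i])  # page in just this layer's weights
        x = np.tanh(w @ x)     # toy "layer" computation
    return x


with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "weights.bin")
    ref = save_layer_weights(path, n_layers=3)

    x = np.ones(DIM, dtype=np.float32)
    out = run_layers_from_flash(path, 3, x)

    # Sanity check: same result as keeping all weights in memory at once.
    y = np.ones(DIM, dtype=np.float32)
    for i in range(3):
        y = np.tanh(ref[i] @ y)
    assert np.allclose(out, y)
```

The point of the sketch is that peak DRAM use is one layer's weights rather than the whole model; the paper's actual contribution is in deciding which parameters to fetch and when, which this toy loop does not attempt.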

Even if, as previously expected, the iPhone will perform most of its AI on device, there may yet be more intensive processing that needs to be offloaded to a data center. It makes sense for Apple to use TSMC and its technologies to make that as fast and efficient as possible.

However, it's also the case that TSMC is an existing partner that already produces iPhone, iPad, and Mac processors. If Apple were to work with another manufacturer, it would necessarily have to reveal its plans to a new partner to get the best optimization and coordination.

The "Phone Chip Expert" does not have a current Apple rumor track record, but they do have an accurate one from about five years ago. The claim that Apple needs AI server farms and is designing the processors with TSMC is plausible, and a logical assumption based on what the company will need.

Apple is expected to announce AI features as part of its iOS 18 launch at WWDC in June. At least one analyst, though, predicts that hardware-assisted AI won't be in the iPhone until 2025's iPhone 17.



18 Comments

tht 23 Years · 5654 comments

This has been one of those obvious moves for a long while now.

Upcoming Nvidia server hardware now runs at up to 1000 W, requires liquid cooling, and costs a whole lot of money. If Apple can produce equivalent LLM performance at 250 to 500 W, with a lower memory footprint, it may actually make LLM chatbots, searches, and services profitable. And they may get a two-for if that hardware can run in a Mac Pro.

I have not seen whether MS Copilot subscriptions make any profit yet. The non-MS LLM services? I think they are all operating at a loss. It's still too early to tell whether Google's LLM services will be profitable.

Apple? It's all part of their ecosystem play, and the cost can be amortized across multiple products, especially if their LLM services sell hardware.

avon b7 20 Years · 8046 comments

tht said:
This has been one of those obvious moves for a long while now.

Upcoming Nvidia server hardware now runs at up to 1000 W, requires liquid cooling, and costs a whole lot of money. If Apple can produce equivalent LLM performance at 250 to 500 W, with a lower memory footprint, it may actually make LLM chatbots, searches, and services profitable. And they may get a two-for if that hardware can run in a Mac Pro.

I have not seen whether MS Copilot subscriptions make any profit yet. The non-MS LLM services? I think they are all operating at a loss. It's still too early to tell whether Google's LLM services will be profitable.

Apple? It's all part of their ecosystem play, and the cost can be amortized across multiple products, especially if their LLM services sell hardware.

Yes. If true it's been a long time coming but a welcome addition. 

The only doubt I have is that processors themselves aren't enough. They would need a full stack solution and a lot of interconnect and software frameworks to make it all work.

A good strategic move if they do go down that route. 

9secondkox2 8 Years · 3148 comments

Hope not. Apple should go all-in on on-device. 

Even if it takes longer, that’s fine. 

About as long as it takes to run a Photoshop action. 

No problem. 

avon b7 20 Years · 8046 comments

Hope not. Apple should go all-in on on-device. 
Even if it takes longer, that’s fine. 

About as long as it takes to run a Photoshop action. 

No problem. 

Solutions need to be 'trained' and then deployed as models.

It is possible that these servers are for some kind of training but either way, not everything can (or needs to) be done locally. 

blastdoor 15 Years · 3594 comments

Hope not. Apple should go all-in on on-device. 
Even if it takes longer, that’s fine. 

About as long as it takes to run a Photoshop action. 

No problem. 

You can’t train an LLM on device.