
Apple wants faster RAM in the iPhone to help speed up Apple Intelligence

Apple Intelligence on an iPhone will benefit from faster memory


Apple is working with Samsung to change how RAM is packaged for the iPhone, with the aim of increasing memory bandwidth to help with AI tasks.

While most smartphones have their RAM on the same package as the processor, Apple is looking to go against the norm. In order to allow for more RAM and also speed up memory access for Apple Intelligence tasks, it now wants to put the RAM and processor on separate chips.

According to The Elec, Apple has asked Samsung to begin researching how best to package the DRAM used in iPhones. The issue is that being on the same package as the processor is only quicker up to a point.

To make faster RAM, Apple wants a larger DRAM package. But there are only so many connectors on the processor side, and consequently only so much RAM that can sit in the same package.

Samsung has therefore been tasked with working out how to create a larger DRAM package and connect it back to the processor in the fastest way. A separate package will also help with heat, as on-device AI is an intensive workload that makes the chips run hot.

Reportedly, Apple could instead have chosen the kind of high-bandwidth memory (HBM) often used in servers. But this appears to have been rejected, as there are difficulties in getting it small enough to fit a phone, and in making it sufficiently low-power to run from a phone's battery.

The physical constraints of an iPhone are also an issue when using a separate, or discrete, package for the RAM. Apple may have to reduce the size of its System on a Chip (SoC) processor, and possibly even the battery, to fit the separate RAM in.

While Samsung is said to have only just begun its research, Apple is believed to be planning to use the new method in 2026's iPhone 18 range.

The Elec is a decent source of information from within Apple's supply chain. It is less accurate in the predictions it makes about what Apple will do.



5 Comments

y2an 16 Years · 231 comments

This would be surprising, considering the space constraints inside an iPhone. I'm more inclined to think that it's somehow related to the switch to Apple modems, which brings other design considerations that we don't yet understand.

1 Like · 0 Dislikes
tht 24 Years · 5671 comments

While most smartphones have their RAM on the same package as the processor, Apple is looking to go against the norm. In order to allow for more RAM and also speed up memory access for Apple Intelligence tasks, it now wants to put the RAM and processor on separate chips.
The article reads like a jumbled mess.

The LPDDR RAM and the logic chip in an iPhone SoC package are separate chips. It's a package-on-package architecture where the LPDDR chip package is stacked on top of the logic chip (e.g., the A18 silicon chip). The whole thing is then encased in a ceramic or resin compound and appears as "one" package, with many media folks calling it a "chip".

Apple already has multiple ways to increase RAM quantities and RAM bandwidth. In the M4 (or M3, M2, M1, A12X, etc.), they don't stack the RAM on top of the logic chip. They place it adjacent to the logic chip. There are 2 RAM packages in the M4 constituting 2 LPDDR interfaces, which doubles the bandwidth compared with the A18 package. The M4 Pro quadruples it, and the M4 Max octuples it. Obviously, there are space constraints with the bigger chips and higher memory bandwidths, and they are not going into a phone.
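Rough numbers for how interface count scales peak bandwidth, assuming LPDDR5X-7500 on a 64-bit bus per interface (my assumption for illustration; shipping Pro/Max parts reportedly use faster memory grades, so their real figures run a bit higher):

```python
# Back-of-envelope peak bandwidth by LPDDR interface count.
# Assumed: LPDDR5X-7500 on a 64-bit bus per interface.
def lpddr_bandwidth_gbs(transfer_rate_mts, bus_width_bits, interfaces=1):
    """Peak bandwidth in GB/s: transfers/s x bytes per transfer x interface count."""
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) * interfaces / 1e9

print(lpddr_bandwidth_gbs(7500, 64, 1))  # ~60 GB/s, A18-class single interface
print(lpddr_bandwidth_gbs(7500, 64, 2))  # ~120 GB/s, M4-style (2x)
print(lpddr_bandwidth_gbs(7500, 64, 4))  # ~240 GB/s, M4 Pro-style (4x)
print(lpddr_bandwidth_gbs(7500, 64, 8))  # ~480 GB/s, M4 Max-style (8x)
```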

Dollars to donuts, Apple wants to use commodity LPDDR memory. It's hard to believe they would go with a custom memory for something like the iPhone, so the rumor score really should be something like unlikely, poor, or low possibility. For the A19 and A20, they are just going to use PoP with 8, 12, or 16 GByte memory packages and call it a day. Memory bandwidth probably goes up to something like 75 GByte/s, or perhaps whatever LPDDR6 delivers.
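For what it's worth, a ~75 GByte/s figure lines up with a single 64-bit interface at LPDDR5X-9600 rates (an assumed memory grade, not something the report states):

```python
# One 64-bit bus at 9600 MT/s: 9600e6 transfers/s x 8 bytes per transfer.
print(9600e6 * (64 / 8) / 1e9)   # 76.8 GB/s peak
```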

Apple did work with SK Hynix to create a low-latency, wide I/O RAM interface for the Vision Pro's R1 co-processor. It's a 128 MByte chip with a 512-bit memory interface, about double the bandwidth of LPDDR5, and supposedly better latency. There are 8x as many pinouts in the R1 memory interface, with less bandwidth per pin, compared to LPDDR. Would they do something like this with 8, 12, or 16 GByte of memory? Only if it costs about as much as using LPDDR after x millions of units made, or perhaps if it is more power efficient and offers a marketable feature or improvement.
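A rough per-pin comparison implied by those numbers, assuming LPDDR5-6400 on a 64-bit bus versus a 512-bit wide-I/O interface at roughly twice the total bandwidth (both assumptions for illustration):

```python
lpddr5_total = 6400e6 * (64 / 8) / 1e9      # 51.2 GB/s on a 64-bit LPDDR5-6400 bus
lpddr5_per_pin = lpddr5_total * 8 / 64      # 6.4 Gb/s per data pin

wide_io_total = 2 * lpddr5_total            # ~102 GB/s, roughly double LPDDR5
wide_io_per_pin = wide_io_total * 8 / 512   # ~1.6 Gb/s per data pin

# 8x the pins, each running ~4x slower, nets ~2x the bandwidth; slower pins
# are generally easier on power and latency.
print(lpddr5_per_pin, wide_io_per_pin)
```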

3 Likes · 0 Dislikes
cpsro 15 Years · 3243 comments

Maybe the processor/RAM won't overheat as much, too. I'm rather tired of that happening, especially when shooting video.

1 Like · 0 Dislikes
command_f 15 Years · 429 comments

tht said:
While most smartphones have their RAM on the same package as the processor, Apple is looking to go against the norm. In order to allow for more RAM and also speed up memory access for Apple Intelligence tasks, it now wants to put the RAM and processor on separate chips.
...
Apple already has multiple ways to increase RAM quantities and RAM bandwidth. In the M4 (or M3, M2, M1, A12X, etc), they don't stack the RAM on top of the logic chip. They placed them adjacent to the logic chip. ...

Thanks for that explanation. My first thought was of the M-series (Mac/iPad) packages: the bus interface (electrical) seems the obvious way to go, and all that work has already been done. Putting the devices adjacent rather than stacked also has an obvious advantage on thermals in a phone-shaped device (and presumably a larger Pro phone can more easily tolerate the increased footprint).

[Edited for layout]

1 Like · 0 Dislikes
tht 24 Years · 5671 comments

command_f said:
tht said:
While most smartphones have their RAM on the same package as the processor, Apple is looking to go against the norm. In order to allow for more RAM and also speed up memory access for Apple Intelligence tasks, it now wants to put the RAM and processor on separate chips.
...
Apple already has multiple ways to increase RAM quantities and RAM bandwidth. In the M4 (or M3, M2, M1, A12X, etc), they don't stack the RAM on top of the logic chip. They placed them adjacent to the logic chip. ...

Thanks for that explanation. My first thought was of the M-series (Mac/iPad) packages: the bus interface (electrical) seems the obvious way to go, and all that work has already been done. Putting the devices adjacent rather than stacked also has an obvious advantage on thermals in a phone-shaped device (and presumably a larger Pro phone can more easily tolerate the increased footprint).

[Edited for layout]

There isn't room in a phone to put RAM chips adjacent to the logic chip. Stacking is the way to go. The only question is the memory bus architecture. It will be commodity LPDDR, and if Apple is feeling sporty and it proves beneficial, they may go with a custom memory bus architecture, but I'm skeptical of that too.

The article's reasoning for needing more memory bandwidth (AI features) seems like pure bunk. Apple isn't stressing memory component supplies, and they don't want to be on the bleeding edge either. They have designed all of their memory architectures around commonly available LPDDR. They barely even use bleeding-edge high-density LPDDR. AI features won't change that.

Even for a realtime AI feature that constantly monitors what is happening on the display, remembers what's happened over the past minutes to hours, and offers actions and help to the user along the way, I'm not so sure it needs anything more than LPDDR. It needs to be power efficient, and may run on an entirely separate and private processor, but I don't think that is memory-bandwidth constrained by LPDDR.
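A rough sanity check in that direction, assuming a ~3B-parameter on-device model quantized to about 4 bits per weight (close to what Apple has described for its on-device foundation model) and a 30 tokens/s decode target, which is my own assumption:

```python
params = 3e9                                  # ~3B parameters (assumed model size)
weight_bytes = params * 4 / 8                 # ~1.5 GB at ~4 bits per weight
tokens_per_s = 30                             # assumed decode rate

needed = weight_bytes * tokens_per_s / 1e9    # ~45 GB/s to stream weights each token
lpddr5x = 7500e6 * (64 / 8) / 1e9             # ~60 GB/s on one 64-bit LPDDR5X-7500 bus

print(needed, lpddr5x)   # decode stays within a single LPDDR5X interface
```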

1 Like · 0 Dislikes