
AMD to unveil Radeon RX 6000 GPU family on Oct. 28

Credit: AMD


AMD is set to announce its new line of Radeon RX 6000 graphics chips at an event on Oct. 28, the chipmaker said Thursday.

The new GPUs will be based on AMD's new RDNA 2 architecture, which AMD says could deliver a 50% performance-per-watt improvement over the first-generation RDNA design, along with "uncompromising 4K gaming."

Exact details and specifications for the upcoming chips are still sparse. However, AMD said that the Oct. 28 event will let users learn more about RDNA 2, Radeon RX 6000 chips, and the company's "deep collaboration with game developers and ecosystem partners."

Along with increased performance and power efficiency, the GPUs will also feature ray-tracing capabilities and variable-rate shading, bringing AMD's chips more in line with those of main rival Nvidia. Rumors from earlier in 2020 also suggest that the RDNA 2 cards could come equipped with up to 16GB of GDDR6 video memory, a 256-bit memory bus, and more fans for additional cooling.

Leaker @coreteks has also indicated that AMD's goal may be to undercut Nvidia's pricing, though it isn't clear how much the Radeon RX 6000 series will retail for.

Apple presently has drivers in macOS for the Radeon RX 5700, Radeon VII, Vega 64, Vega 56, and most of the 400 and 500 series PCI-E cards. While Mac drivers don't typically arrive day and date with the cards' releases, they do arrive within a few months of unveiling in a macOS update.
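For readers curious which GPU their Mac currently reports, here is a minimal sketch that queries system_profiler for the installed graphics hardware. The JSON field names ("sppci_model", "_name") are assumptions that may vary across macOS versions:

```python
# A minimal sketch of checking which GPU macOS reports, via system_profiler's
# JSON output (available on recent macOS versions; field names may vary).
import json
import subprocess

def installed_gpus() -> list[str]:
    """Return the GPU model names macOS reports for this machine."""
    out = subprocess.check_output(
        ["system_profiler", "SPDisplaysDataType", "-json"], text=True
    )
    gpus = json.loads(out).get("SPDisplaysDataType", [])
    # "sppci_model" typically holds the marketing name, e.g. "Radeon Pro 580X"
    return [g.get("sppci_model", g.get("_name", "unknown")) for g in gpus]

if __name__ == "__main__":
    print(installed_gpus())
```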

AMD cards are currently the only PCI-E graphics cards that work in a Mac Pro or in a Mac-connected Thunderbolt 3 eGPU enclosure. Apple has ditched Nvidia GPU support in favor of Radeon cards, and there are no signs of it returning any time soon.

The AMD announcement event will kick off at 12 p.m. Eastern Time (9 a.m. Pacific) on Wednesday, Oct. 28.



30 Comments

bsbeamer 16 Years · 77 comments

Many are ASSUMING Apple will eventually support the next AMD series/line, but with the ARM switch it is not a guarantee. Apple only recently moved to "fully" adopt the 5XXX series across its machines, and could easily drag those on through the EOL of Intel processors, as long as the lingering driver issues are addressed.

tht 23 Years · 5654 comments

I'm betting that the A14 GPUs will have hardware support for raytracing. So, the iPhone will be the first Apple device to have hardware raytracing support, not Macs with AMD GPUs. ;)

Anyways, anyone want to clarify AMD GPU codenames? Both the chip and graphics API support? Getting confusing out there. Navi, Navi 14, Navi 12, RDNA, RDNA2, so on and so forth. 

zimmie 9 Years · 651 comments

tht said:
I'm betting that the A14 GPUs will have hardware support for raytracing. So, the iPhone will be the first Apple device to have hardware raytracing support, not Macs with AMD GPUs. ;)

Anyways, anyone want to clarify AMD GPU codenames? Both the chip and graphics API support? Getting confusing out there. Navi, Navi 14, Navi 12, RDNA, RDNA2, so on and so forth. 

AMD's internal codenames for things are pretty rough, yeah. They have a set of codenames for the instruction family, a different set of codenames for the chip family, and a third set of codenames for the individual chips.

The Graphics Core Next (GCN) 4 instruction family is implemented by the Polaris chip family, and the individual chips take their codenames from the "Arctic Islands":

Polaris 10 - RX 470, RX 480
Polaris 11 - RX 460
Polaris 12 - RX 540, RX 550

Polaris 20 - RX 570, RX 580
Polaris 21 - RX 560
Polaris 22 - RX Vega M GH, RX Vega M L

Polaris 30 - RX 590

After that came the GCN 5 instruction set and the Vega chip family:

Vega 10 - RX Vega 56, RX Vega 64
Vega 12 - Pro Vega 16, Pro Vega 20
Vega 20 - Pro Vega II, Radeon VII

After GCN 5 came the RDNA 1 instruction set and the Navi chip family:

Navi 10 - RX 5600, RX 5700
Navi 14 - RX 5300, RX 5500

Yes, it's all gratuitously confusing. The higher the number within a given chip family, the lower the performance.
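Restating those layers as a lookup table may help keep them straight. A sketch in Python, using only the chips listed above (the helper function is purely illustrative):

```python
# zimmie's mapping, restated: instruction set -> chip codename -> retail cards.
AMD_GPU_FAMILIES = {
    "GCN 4 (Polaris)": {
        "Polaris 10": ["RX 470", "RX 480"],
        "Polaris 11": ["RX 460"],
        "Polaris 12": ["RX 540", "RX 550"],
        "Polaris 20": ["RX 570", "RX 580"],
        "Polaris 21": ["RX 560"],
        "Polaris 22": ["RX Vega M GH", "RX Vega M L"],
        "Polaris 30": ["RX 590"],
    },
    "GCN 5 (Vega)": {
        "Vega 10": ["RX Vega 56", "RX Vega 64"],
        "Vega 12": ["Pro Vega 16", "Pro Vega 20"],
        "Vega 20": ["Pro Vega II", "Radeon VII"],
    },
    "RDNA 1 (Navi)": {
        "Navi 10": ["RX 5600", "RX 5700"],
        "Navi 14": ["RX 5300", "RX 5500"],
    },
}

def family_of(card: str) -> str:
    """Look up which chip and instruction-set family a retail card belongs to."""
    for family, chips in AMD_GPU_FAMILIES.items():
        for chip, cards in chips.items():
            if card in cards:
                return f"{card}: {chip}, {family}"
    return f"{card}: unknown"

print(family_of("RX 5700"))  # RX 5700: Navi 10, RDNA 1 (Navi)
```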

melgross 20 Years · 33622 comments

tht said:
I'm betting that the A14 GPUs will have hardware support for raytracing. So, the iPhone will be the first Apple device to have hardware raytracing support, not Macs with AMD GPUs. ;)

Anyways, anyone want to clarify AMD GPU codenames? Both the chip and graphics API support? Getting confusing out there. Navi, Navi 14, Navi 12, RDNA, RDNA2, so on and so forth. 

Having support doesn’t mean having effective support. Nvidia has had support for ray tracing in its top GPUs for two generations, but despite having fast memory, and a lot of it, they weren’t really usable for that. Their new generation is supposed to have, for the first time, enough oomph to make it useful.

How Apple could duplicate that in an integrated-graphics SoC is something that I doubt right now, even assuming it’s something they’re looking at. Ray tracing is one of the most difficult things to do in real time. The number of calculations is immense. Apple also uses a small amount of shared RAM, which isn’t considered the best way to power graphics hardware, so we’ll see.

tht 23 Years · 5654 comments

melgross said:
tht said:
I'm betting that the A14 GPUs will have hardware support for raytracing. So, the iPhone will be the first Apple device to have hardware raytracing support, not Macs with AMD GPUs. ;)

Anyways, anyone want to clarify AMD GPU codenames? Both the chip and graphics API support? Getting confusing out there. Navi, Navi 14, Navi 12, RDNA, RDNA2, so on and so forth. 
Having support doesn’t mean having effective support. Nvidia has had support for ray tracing in its top GPUs for two generations, but despite having fast memory, and a lot of it, they weren’t really usable for that. Their new generation is supposed to have, for the first time, enough oomph to make it useful.

How Apple could duplicate that in an integrated-graphics SoC is something that I doubt right now, even assuming it’s something they’re looking at. Ray tracing is one of the most difficult things to do in real time. The number of calculations is immense. Apple also uses a small amount of shared RAM, which isn’t considered the best way to power graphics hardware, so we’ll see.

Yeah, they may not have enough transistors to do it. If AR is going to be a thing, though, having some properly rendered shadows for virtual objects will be really nice to see! And they appear to be investing rather heavily in VR and in AR glasses that will be powered by iPhones, seemingly. So, I think there is a driving need for it in some of their upcoming products. Hence, ray-tracing hardware will be in the SoCs sooner or later.

As far as iGPUs versus dGPUs go, the common distinctions between the two are gradually blurring. If you look at the Xbox Series X SoC, it's basically a 12 TFLOP GPU with a vestigial CPU (an 8-core Zen 2) in terms of the die area the GPU and CPU occupy. And that's fabbed on TSMC 7nm; TSMC 5nm yields 70% more transistors, so I can see a TSMC 5nm SoC with ray-tracing hardware in it. Intel's Tiger Lake is getting closer to this, with its iGPU taking about 40% of the chip area. This will drive dGPUs into higher performance niches, and commensurately higher wattage tiers. It's going to be interesting to see how the dGPU market shakes out. Maybe server GPUs will account for the vast majority of dGPU sales in a few years, as iGPUs take over even more of the midrange gaming GPU market.
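To put a rough number on the 5nm point, here's a back-of-envelope taking the 70% density figure at face value. The ~15.3 billion transistor count for the Xbox Series X SoC is an outside assumption, not something from this thread:

```python
# Back-of-envelope: how many transistors would an Xbox-Series-X-sized die
# hold on TSMC 5nm, if 5nm really does yield 70% more than 7nm?
XSX_TRANSISTORS_7NM = 15.3e9   # commonly cited figure; an assumption here
DENSITY_GAIN_5NM = 1.70        # "70% more transistors", per the comment

budget_5nm = XSX_TRANSISTORS_7NM * DENSITY_GAIN_5NM
print(f"Same-area 5nm budget: {budget_5nm / 1e9:.1f}B transistors")
# -> Same-area 5nm budget: 26.0B transistors, which would leave room for
#    ray-tracing units alongside a 12-TFLOP-class iGPU.
```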

It's still a very big question how much power Apple is willing to burn on the GPU, and how they get more perf/Watt out of the GPU than competitors. The obvious way is to run the iGPU at low clocks, have a lot of GPU cores, and have a high-bandwidth memory subsystem to feed it. That requires a lot of chip area and transistors, a luxury Apple may be able to afford, since they don't have to make a 50% margin on their chips and they're on the densest fab.
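That "wide and slow" trade-off can be sketched with the usual first-order dynamic-power model, where power scales with cores × V² × f and throughput with cores × f. The core counts, clocks, and voltages below are illustrative assumptions, not real chip data:

```python
# Dynamic power scales roughly with cores * capacitance * V^2 * f, while
# throughput scales with cores * f. Doubling cores while halving clocks
# (which also permits a lower voltage) keeps performance but cuts power.
def perf(cores, freq_ghz):
    return cores * freq_ghz                     # arbitrary throughput units

def power(cores, freq_ghz, volts, cap=1.0):
    return cores * cap * volts**2 * freq_ghz    # arbitrary power units

# Illustrative numbers only, not real silicon data:
narrow = dict(cores=8,  freq_ghz=2.0, volts=1.00)   # few cores, high clock
wide   = dict(cores=16, freq_ghz=1.0, volts=0.80)   # many cores, low clock

for name, cfg in [("narrow/fast", narrow), ("wide/slow", wide)]:
    p, w = perf(cfg["cores"], cfg["freq_ghz"]), power(**cfg)
    print(f"{name}: perf={p:.0f}, power={w:.2f}, perf/W={p / w:.2f}")
# wide/slow delivers the same perf at ~36% less power (higher perf/W),
# at the cost of extra die area for the additional cores.
```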