Info Big Navi - Radeon 5950 XT specs leak.

Hans de Vries

Senior member
May 2, 2008
www.chip-architect.com
Seems this might be a valid leak from inside partner SK Hynix.

Twitter

Big Navi - Radeon 5950XT: Twice the compute units as Navi 10.

Shading units: 5120
TMUs: 320
Compute units: 80
ROPs: 96
L2 cache: 12MB
Memory: 24GB (4 x HBM2e, 3 die)
Memory bus: 4096 bits
Bandwidth: 2048 GB/s


All these Big Navi numbers are perfectly consistent. The 96 ROPs are tiny pieces of logic at the edge of the memory tiles of the 12MB L2 cache, which explains the factor of 3 in these numbers. An old example here: https://bjorn3d.com/2010/01/nvidia-gf100-fermi-gpu/

SK Hynix will make 6GB HBM2e stacks with 3 dies per stack for consumer applications on request. SK Hynix recently announced HBM2e at 512GB/s per stack at ISSCC 2020. Samsung went a step further with 640GB/s HBM2e (5 Gb/s/pin).
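Not from the leak itself, just my own back-of-the-envelope check: taking 4 stacks of the 6GB / 512GB/s HBM2e described above, the leaked capacity, bus width, and bandwidth all fall out of a few multiplications.

```python
# Sanity check of the leaked numbers (my own arithmetic, not from the slide).
STACKS = 4
GB_PER_STACK = 6            # 3-Hi stack of 16 Gb (2 GB) dies
GBPS_PER_STACK = 512        # SK Hynix ISSCC 2020 figure
BUS_BITS_PER_STACK = 1024   # per the HBM2 standard

total_capacity = STACKS * GB_PER_STACK     # 24 GB, matches the leak
total_bandwidth = STACKS * GBPS_PER_STACK  # 2048 GB/s, matches the leak
total_bus = STACKS * BUS_BITS_PER_STACK    # 4096 bits, matches the leak
pin_rate = total_bandwidth * 8 / total_bus # 4.0 Gb/s per pin

print(total_capacity, total_bandwidth, total_bus, pin_rate)
```

The implied 4.0 Gb/s/pin sits between SK Hynix's 512GB/s (4.0) and Samsung's 640GB/s (5.0) announcements, so at least the bandwidth figure is internally consistent.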


uzzi38

Platinum Member
Oct 16, 2019
Seems this might be a valid leak from inside partner SK Hynix.

Twitter

Big Navi - Radeon 5950XT: Twice the compute units as Navi 10.

Shading units: 5120
TMUs: 320
Compute units: 80
ROPs: 96
L2 cache: 12MB
Memory: 24GB (4 x HBM2e, 3 die)
Memory bus: 4096 bits
Bandwidth: 2048 GB/s


All these Big Navi numbers are perfectly consistent. The 96 ROPs are tiny pieces of logic at the edge of the memory tiles of the 12MB L2 cache, which explains the factor of 3 in these numbers. An old example here: https://bjorn3d.com/2010/01/nvidia-gf100-fermi-gpu/

SK Hynix will make 6GB HBM2e stacks with 3 dies per stack for consumer applications on request. SK Hynix recently announced HBM2e at 512GB/s per stack at ISSCC 2020. Samsung went a step further with 640GB/s HBM2e (5 Gb/s/pin).

Fake. Though we've discussed it over Twitter already :p

I mean, if nothing else, I don't think any consumer-facing Navi dies will be using HBM2 of any kind. Especially not 4 stacks of 6GB HBM2, which doesn't actually exist - meaning AMD would have to request it specifically.

AMD. The same company that is ordering as few N7 wafers as possible to prevent oversupply.
 

Hans de Vries

Senior member
May 2, 2008
www.chip-architect.com
Fake. Though we've discussed it over Twitter already :p

I mean, if nothing else, I don't think any consumer-facing Navi dies will be using HBM2 of any kind. Especially not 4 stacks of 6GB HBM2, which doesn't actually exist - meaning AMD would have to request it specifically.

AMD. The same company that is ordering as few N7 wafers as possible to prevent oversupply.

Well, you are entitled to your own gut-feeling.

AMD has always used HBM for its top-end consumer graphics cards over the last 5 years: Fiji and Vega 64/56.

I can't see anything wrong with it, but I can't guarantee anything either, so there's no way to tell whether this is fake or not.
 

Glo.

Diamond Member
Apr 25, 2015
AMD used HBM2 for top-end cards because they were professional/HPC chips that were repurposed for the consumer market.

Vega II, RDNA, and RDNA2-based GPUs have different purposes. There will not be a large chip for the consumer market with HBM2, unless the coronavirus has suddenly made HBM2 packaging viable for the consumer market from a manufacturing point of view.

The only thing that is correct in that slide is the CU count, at least as far as the large RDNA2-based next-gen GPU goes.

Everything else is pretty much wrong.
 

NostaSeronx

Diamond Member
Sep 18, 2011
6 RBE + 10 WGP => L1 => 3 MB L2 partition
6 RBE + 10 WGP => L1 => 3 MB L2 partition
6 RBE + 10 WGP => L1 => 3 MB L2 partition
6 RBE + 10 WGP => L1 => 3 MB L2 partition
(6*4)*4 = 96 ROPs
(10*2)*4 => 80 CUs

Compared to Navi 10;
4 RBE + 5 WGP => 128 KB L1 => 1 MB L2 partition
4 RBE + 5 WGP => 128 KB L1 => 1 MB L2 partition
4 RBE + 5 WGP => 128 KB L1 => 1 MB L2 partition
4 RBE + 5 WGP => 128 KB L1 => 1 MB L2 partition
(4*4)*4 = 64 ROPs
(5*2)*4 => 40 CUs
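The partition math above can be restated in a few lines (a sketch; the 4 ROPs per RBE and 2 CUs per WGP factors are the RDNA1 organization, which I'm assuming carries over):

```python
# Totals for a GPU built from identical L2 partitions, each with some
# number of RBEs (4 ROPs each) and WGPs (2 CUs each).
def totals(rbe_per_partition, wgp_per_partition, partitions=4):
    rops = rbe_per_partition * 4 * partitions
    cus = wgp_per_partition * 2 * partitions
    return rops, cus

print(totals(6, 10))  # rumored big chip: (96, 80)
print(totals(4, 5))   # Navi 10:         (64, 40)
```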
 

lifeblood

Senior member
Oct 17, 2001
The CU specs are consistent with past rumors, as is the use of HBM2 memory, but the 24GB of VRAM just sounds excessive. 16GB is absolutely reasonable and is actually what I'm expecting. I really can't see a card with 24GB, at least not this generation. Of course, AMD may have decided to go big or go home, so who knows...
 

Krteq

Senior member
May 22, 2015
Does SK Hynix have 3-Hi HBM2E stacks in its portfolio?

24GB over a 4096-bit bus would mean 4x 3-Hi (6GB) 1024-bit stacks.

They previously announced that they can go up to 12-Hi (24GB) stacks:
Not to be left behind, SK Hynix is now also readying their own HBM2E memory. In terms of performance, SK Hynix says that their memory will be able to clock at up to 3.6 Gbps/pin, which would give a full 1024-pin stack a total of 460GB/sec of memory bandwidth, and in the process the lead for HBM2E memory speeds. And for more advanced devices which employ multiple stacks (e.g. server GPUs), this means a 4-stack processor could be paired up with as much as 1.84TB/sec of memory bandwidth, a massive amount by any measure. Meanwhile their capacity is doubling, from 8 Gb/layer to 16 Gb/layer, allowing a full 8-Hi stack to reach a total of 16GB. It’s worth noting that the revised HBM2 standard actually allows for 12-Hi stacks, for a total of 24GB/stack, however we’ve yet to see anyone announce memory quite that dense.
AnandTech - SK Hynix Announces 3.6 Gbps HBM2E Memory For 2020: 1.8 TB/sec For Next-Gen Accelerators
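The figures in that quote check out arithmetically (my own arithmetic, not AnandTech's):

```python
# 3.6 Gb/s/pin over a 1024-pin HBM2E stack, scaled to 4 stacks.
pin_rate_gbps = 3.6
pins = 1024
per_stack = pin_rate_gbps * pins / 8  # 460.8 GB/s, quoted as "460GB/sec"
four_stacks = per_stack * 4 / 1000    # 1.8432 TB/s, quoted as "1.8 TB/sec"
stack_capacity = 16 * 8 / 8           # 16 Gb/layer x 8-Hi = 16 GB per stack
print(per_stack, four_stacks, stack_capacity)
```

Note that even this 3.6 Gb/s grade falls short of the 4.0 Gb/s/pin the leaked 2048 GB/s figure would require, which is why the leak points at the newer 512GB/s ISSCC announcement.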
 

eek2121

Platinum Member
Aug 2, 2005
It's fake. AMD isn't going to roll out a new high-end offering for the 5000 series. This year is RDNA 2, which would be the RX 6000 series. AMD also doesn't use xx50 for current-gen Radeon stuff; they use xx00 and xx00 XT. A web page is pretty easily manipulated - I could take a picture of AMD's website listing the exact same specs.
 

Guru

Senior member
May 5, 2017
All of the specs are legit except the 24GB HBM2e. It's just giant overkill on a consumer-level GPU. Maybe this is a professional GPU, MI200?
 

Bouowmx

Golden Member
Nov 13, 2016
With graphics memory, when a manufacturer "announces" a certain speed grade, don't expect it to be utilized immediately - maybe in two years. An example is HBM2 at 2.4 GT/s: announced in January 2018, but featured in products (NVIDIA Tesla V100S, Intel Nervana NNP-T) only in November 2019.
 

Hans Gruber

Platinum Member
Dec 23, 2006
AMD's biggest problem is the next Nvidia cards on 7nm. It would have made sense for AMD to get Big Navi out early; that way they could briefly take the top performance crown from Nvidia. Look at the benefits AMD has already seen from 7nm in Ryzen and Navi.
 

Det0x

Golden Member
Sep 11, 2014
All of the specs are legit except the 24GB HBM2e. It's just giant overkill on a consumer-level GPU. Maybe this is a professional GPU, MI200?

Pretty sure this is the Radeon Instinct MI100 - the specs pretty much match up, other than the memory amount.

VideoCardz said:
Today TechPowerUP confirmed that they have the first BIOS in their database for a yet unreleased Radeon MI100 from the Instinct series... The MI100 is believed to be a 32GB graphics card featuring HBM2 memory, TPU confirms it supports both Samsung (KHA884901X) and Hynix memory (H5VR64ESA8H)... Lastly, the Instinct MI100 is likely not based on the exact same Big Navi we expect in Radeon RX series

Techpowerup said:
Arcturus' debut as a Radeon Instinct product follows the pattern of AMD debuting new big GPUs as low-volume/high-margin AI-ML accelerators first, followed by Radeon Pro and finally Radeon client graphics products. Arcturus is not "big Navi," rather it seems to be much closer to Vega than to Navi, which makes perfect sense given its target market. AMD's Linux sources mention "It's because Arcturus has not 3D engine", which could hint at what AMD did with this chip: take Vega and remove all 3D raster graphics ability, which shaves a few billion transistors off the silicon, freeing up space for more CUs. For gamers, AMD is planning a new line of Navi 20-series chips leveraging 7 nm EUV for launch throughout 2020.

I also think Arcturus is based on the "Vega 2" architecture and not meant for gaming.

I'm happy to be surprised :)

*edit* Added some more links
 

DiogoDX

Senior member
Oct 11, 2012
I think this may be true just because of the 96 ROPs. One would expect 128 ROPs for 80 CUs based on the 40 CU / 64 ROP 5700 XT, but AMD in recent years has been stingy with ROP increases.

The 290X had 64 ROPs back in 2013, and that's still the most they've ever done.
 

beginner99

Diamond Member
Jun 2, 2009
I also think Arcturus is based on the "Vega 2" architecture and not meant for gaming.

Arcturus sounds like an interesting take, as it means AMD would split its gaming and professional lines as well. However, that raises the question of how many of these they can actually sell. Why would companies buy these cards? If I were doing ML (AI), I would simply go with NV because of CUDA - no time to tinker to get the stuff working on AMD.