[Rumor, Tweaktown] AMD to launch next-gen Navi graphics cards at E3

Status
Not open for further replies.

Glo.

Diamond Member
Apr 25, 2015
5,710
4,553
136
Hmm, older article but:

http://monitorinsider.com/GDDR6.html

"Up to now, the storage capacity of all memory chips has been a nice, clean power fo two, if you exclude error detection or correction bits.

GDDR6 breaks with that tradition and offers in-between options. The standard allows a capacity of 8 to 32 Gbit, but 12 Gb and 24 Gb are possible as well. This will probably make GPU makers happy since it will increase the ability to segment the market based on the amount of memory.

Today, a GPU with a 256-bit bus can only cleanly support 4GB, 8GB or 16GB. With GDDR6, they will also be able to support 12GB, while still maintaining a full balanced load with identical sized memories connected to each controller."

6GB should also be possible then using the right memory chip size. Of course if the memory manufacturers never bothered to make these then it's all moot.
Why don't you ask about what is actually in production?

ONLY 1 GB and 2 GB GDDR6 chips with a 32-bit bus. There might be 4 GB ones in the future.
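
To make the capacity math concrete, a quick Python sketch of how per-chip density and bus width combine (illustrative only; assumes one chip per 32-bit channel and identical chips on every channel, no clamshell mode):

```python
# Total VRAM for a given bus width and GDDR6 chip density.
# GDDR6 chips use a 32-bit interface, so channels = bus width / 32.

def total_vram_gb(bus_width_bits: int, chip_capacity_gb: float) -> float:
    channels = bus_width_bits // 32
    return channels * chip_capacity_gb

# 8, 12, 16 and 24 Gbit densities on a 256-bit bus
for chip_gb in (1.0, 1.5, 2.0, 3.0):
    print(f"256-bit bus, {chip_gb} GB chips -> {total_vram_gb(256, chip_gb)} GB")
# -> 8.0, 12.0, 16.0, 24.0 GB

# The 6 GB case mentioned above: 12 Gbit chips on a 128-bit bus
print(f"128-bit bus, 1.5 GB chips -> {total_vram_gb(128, 1.5)} GB")  # -> 6.0 GB
```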
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,813
7,169
136
If there was a big Navi chip with 80 CUs, it might actually be able to take the performance crown. That would probably require a 512-bit memory bus, but if they do grab the crown they could charge well over $1,000.

If AMD could afford it they would probably want to do a flagship chip with 80 CUs/512-bit bus and a lesser enthusiast chip with 60 CUs/384-bit bus. The latter would clearly beat TU104 but would fall short of TU102.

-Assuming AMD has a 300W cap, that gives them (roughly speaking) 33% power headroom to work with.

This translates to (again, this is napkin math ne plus ultra here):

- ~330 mm^2 die size
- ~50 CU configuration
- Performance roughly between the 2080 (120% on TPU) and 2080 Ti (146% on TPU) at 1440p, assuming perfect 33% scaling.

Now if PCIe 4.0 ups the power cap from 300W, or AMD pulls the ole razzle-dazzle where they use best-case-scenario power figures, then who knows...
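
As a sanity check, here's that napkin math in a quick Python sketch (my own illustrative numbers; assumes performance scales linearly with the extra power budget, which it almost certainly won't):

```python
# Napkin math: where a scaled-up Navi might land if performance scaled
# perfectly with the extra power budget (optimistic, hypothetical assumption).

navi10_power_w = 225        # RX 5700 XT board power
power_cap_w = 300           # assumed cap without exotic connectors
rx5700xt_rel_perf = 100     # TPU-style relative 1440p performance index
rtx2080_rel_perf = 120      # ~120% per the post
rtx2080ti_rel_perf = 146    # ~146% per the post

headroom = power_cap_w / navi10_power_w       # ~1.33x power budget
scaled_perf = rx5700xt_rel_perf * headroom    # ~133% with perfect scaling

print(f"Power headroom: {headroom - 1:.0%}")
print(f"Projected perf: {scaled_perf:.0f}% "
      f"(2080 = {rtx2080_rel_perf}%, 2080 Ti = {rtx2080ti_rel_perf}%)")
```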
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
With lower clocks they can have a chip much bigger than 50CU.

Make the 5700 your comparison baseline, not the 5700 XT. 60 CUs (3840 ALUs) at RX 5700 speeds with 96 ROPs and a 384-bit bus will fit within 300W. With lower clocks than that, they can go even bigger than 60 CUs. There's a reason the 2080 Ti clocks much lower than the smaller chips.

If AMD's 2304-ALU card can have lower power consumption than Nvidia's 2304-ALU card, then AMD can get a 4352-ALU (68 CU) card below the 2080 Ti's power consumption if they clock it low enough.
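
Here's a rough Python sketch of that wide-but-slow argument (the baseline figures and the cube-law power assumption are mine and purely illustrative, not measurements):

```python
# Wider-but-slower: more ALUs at lower clock can raise throughput per watt,
# because power rises much faster than linearly with clock (via voltage).

def tflops(alus: int, clock_ghz: float) -> float:
    # Peak FP32: 2 FLOPs per ALU per cycle (fused multiply-add)
    return alus * 2 * clock_ghz / 1000

def est_power_w(alus: int, clock_ghz: float,
                base_alus: int = 2304, base_clock_ghz: float = 1.7,
                base_power_w: float = 180.0) -> float:
    # Crude model: power scales linearly with ALU count and roughly with
    # the cube of clock (assumption, not a measured relationship).
    return base_power_w * (alus / base_alus) * (clock_ghz / base_clock_ghz) ** 3

# RX 5700-like baseline: 36 CUs (2304 ALUs) around 1.7 GHz, ~180 W
print(f"{tflops(2304, 1.7):.1f} TFLOPS @ ~{est_power_w(2304, 1.7):.0f} W")
# Hypothetical 68 CU (4352 ALU) part clocked down to ~1.5 GHz
print(f"{tflops(4352, 1.5):.1f} TFLOPS @ ~{est_power_w(4352, 1.5):.0f} W")
# -> ~7.8 TFLOPS @ 180 W vs ~13.1 TFLOPS @ ~234 W
```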
 

mohit9206

Golden Member
Jul 2, 2013
1,381
511
136
Are people expecting AMD to give us a 2080 Ti competitor for $700? Because that won't happen. It will cost only slightly less than Nvidia's.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,813
7,169
136
Are people expecting AMD to give us a 2080 Ti competitor for $700? Because that won't happen. It will cost only slightly less than Nvidia's.

-I don't think anyone in their right mind expects anything from AMD on the GPU front, but speculation is fun nonetheless.

A wide but "slow" Navi from AMD @ roughly 400mm^2 would be a thing to behold.

Anyone know when AT will publish a deep dive article on the Navi arch? Or are we looking at another GTX 960 review situation?
 

mohit9206

Golden Member
Jul 2, 2013
1,381
511
136
-I don't think anyone in their right mind expects anything from AMD on the GPU front, but speculation is fun nonetheless.

A wide but "slow" Navi from AMD @ roughly 400mm^2 would be a thing to behold.

Anyone know when AT will publish a deep dive article on the Navi arch? Or are we looking at another GTX 960 review situation?
Ya, I'm still waiting for the 960 review...
 

Glo.

Diamond Member
Apr 25, 2015
5,710
4,553
136
Are people expecting AMD to give us a 2080 Ti competitor for $700? Because that won't happen. It will cost only slightly less than Nvidia's.
I think people should expect something between RTX 2080 and 2080 Ti with RDNA2 architecture, for the price of RTX 2080.

That has always been AMD's philosophy: the sweet spot between manufacturing cost, performance, and volume. And the market for dGPUs is shrinking, remember this.
 

maddie

Diamond Member
Jul 18, 2010
4,740
4,674
136
I think people should expect something between RTX 2080 and 2080 Ti with RDNA2 architecture, for the price of RTX 2080.

That has always been AMD's philosophy: the sweet spot between manufacturing cost, performance, and volume. And the market for dGPUs is shrinking, remember this.
I don't understand the shrinking market reference. As far as AMD is concerned, there is a whole lot of marketshare to capture, even with a stagnant to slowly shrinking TAM.
 
  • Like
Reactions: KompuKare

Glo.

Diamond Member
Apr 25, 2015
5,710
4,553
136
I don't understand the shrinking market reference. As far as AMD is concerned, there is a whole lot of marketshare to capture, even with a stagnant to slowly shrinking TAM.
There is no growth in dGPUs for consumers. And they will grow marketshare with Navi products.

But this is not the point I was making, at all.
 
  • Like
Reactions: guachi

exquisitechar

Senior member
Apr 18, 2017
657
871
136
I think people should expect something between RTX 2080 and 2080 Ti with RDNA2 architecture, for the price of RTX 2080.

That has always been AMD's philosophy: the sweet spot between manufacturing cost, performance, and volume. And the market for dGPUs is shrinking, remember this.
Hasn't David Wang said something like "AMD wants to compete with the best Nvidia has to offer"? Now, they have a very good chance of doing just that. Then again, what you're saying makes sense too.
 

maddie

Diamond Member
Jul 18, 2010
4,740
4,674
136
There is no growth in dGPUs for consumers. And they will grow marketshare with Navi products.

But this is not the point I was making, at all.
That's what I was asking. What is the point? Unclear.
 

mattiasnyc

Senior member
Mar 30, 2017
356
337
136
Wouldn't AMD eventually make RDNA the new architecture in the more compute-focused lines like Instinct? If so then it makes sense that it would "trickle down" from there just like VII did.

Even if it's still mostly a matter of boosting memory bandwidth and using more memory on the card that's still a very valid and reasonable 'upgrade' over the current cards for actual work.
 

Glo.

Diamond Member
Apr 25, 2015
5,710
4,553
136
That's what I was asking. What is the point? Unclear.
I wish I could find the logic in believing that large, powerful, expensive-to-make GPUs that are bought by a niche will increase MARKETSHARE ;).

Marketshare is increased by GPU lines like small and mid-range Navi cards. The one everybody talks about is large, expensive to make, and bought by a niche in a market with no growth, which will soon see even more competition with a third player in it. And we are talking about a company that VERY rarely, if ever, goes for that niche marketshare. From a business perspective, it's a completely and utterly stupid idea to release a product to beat Nvidia just for bragging rights' sake, and not because it is a viable product from the technological, manufacturing, and marketing side.

Big GPUs would have to be good at compute to justify designing them. And the RDNA architecture is... mediocre, to say the least, at compute.
Hasn't David Wang said something like "AMD wants to compete with the best Nvidia has to offer"? Now, they have a very good chance of doing just that.
Aren't they doing it right now, in a specific price/performance bracket? Which is most likely what David Wang had in mind...
 

insertcarehere

Senior member
Jan 17, 2013
639
607
136
I wish I could find the logic in believing that large, powerful, expensive-to-make GPUs that are bought by a niche will increase MARKETSHARE ;).

Marketshare is increased by GPU lines like small and mid-range Navi cards. The one everybody talks about is large, expensive to make, and bought by a niche in a market with no growth, which will soon see even more competition with a third player in it. And we are talking about a company that VERY rarely, if ever, goes for that niche marketshare. From a business perspective, it's a completely and utterly stupid idea to release a product to beat Nvidia just for bragging rights' sake, and not because it is a viable product from the technological, manufacturing, and marketing side.

"Small and mid-range" Navi cards are part of that exact same market (Consumer dGPU) with no growth prospects and will be far more affected by streaming services due to the nature of their customer base. Large, powerful gpus with absolute performance as a priority are the segment most resistant by streaming in the near term. Given that AMD currently has ~0% share in that segment now, any gain would be an improvement, not to mention trickle down to increased sales in the mid-range sectors.
 

maddie

Diamond Member
Jul 18, 2010
4,740
4,674
136
I wish I could find the logic in believing that large, powerful, expensive-to-make GPUs that are bought by a niche will increase MARKETSHARE ;).

Marketshare is increased by GPU lines like small and mid-range Navi cards. The one everybody talks about is large, expensive to make, and bought by a niche in a market with no growth, which will soon see even more competition with a third player in it. And we are talking about a company that VERY rarely, if ever, goes for that niche marketshare. From a business perspective, it's a completely and utterly stupid idea to release a product to beat Nvidia just for bragging rights' sake, and not because it is a viable product from the technological, manufacturing, and marketing side.

Big GPUs would have to be good at compute to justify designing them. And the RDNA architecture is... mediocre, to say the least, at compute.

Aren't they doing it right now, in a specific price/performance bracket? Which is most likely what David Wang had in mind...
You mean like what they did with Polaris 10, 11 & 12? Sure didn't work that great.

With regards to Wang's statement, I'll just say this. That is a very unique interpretation of what he said.
 

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,575
96
Ya, I'm still waiting for the 960 review...

I got a 960 review for you, considering I am using one as a hold-me-over until it goes into my spare computer when my 2070 arrives.

Pros of my 960:

Does 4K/60Hz via HDMI or DisplayPort
4GB VRAM
$40 used, but in like-new condition
Handles 1080p well
Did I mention it was cheap?
Handles Overwatch at 4K if you adjust a few settings (nothing major) and pulls a nice 60+

Cons of my 960:

Barely handles 1440p even in titles as old as BF4
Forget any and all 4K aside from Overwatch or some esports titles
Inefficient compared to a GTX 1650, for example

I hope you enjoyed my review and have a nice day!
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,813
7,169
136
I got a 960 review for you, considering I am using one as a hold-me-over until it goes into my spare computer when my 2070 arrives.

Pros of my 960:

Does 4K/60Hz via HDMI or DisplayPort
4GB VRAM
$40 used, but in like-new condition
Handles 1080p well
Did I mention it was cheap?
Handles Overwatch at 4K if you adjust a few settings (nothing major) and pulls a nice 60+

Cons of my 960:

Barely handles 1440p even in titles as old as BF4
Forget any and all 4K aside from Overwatch or some esports titles
Inefficient compared to a GTX 1650, for example

I hope you enjoyed my review and have a nice day!


- And you got it to us before AnandTech did!
 

Glo.

Diamond Member
Apr 25, 2015
5,710
4,553
136
"Small and mid-range" Navi cards are part of that exact same market (Consumer dGPU) with no growth prospects and will be far more affected by streaming services due to the nature of their customer base. Large, powerful gpus with absolute performance as a priority are the segment most resistant by streaming in the near term. Given that AMD currently has ~0% share in that segment now, any gain would be an improvement, not to mention trickle down to increased sales in the mid-range sectors.
Do you think, considering manufacturing and design costs, it will be viable for AMD to put out such a large GPU just for the sake of bragging rights?
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
I wish I could find the logic in believing that large, powerful, expensive-to-make GPUs that are bought by a niche will increase MARKETSHARE ;).

Marketing actually has very little to do with logic. It's all about emotion.

The fact is that if AMD takes the performance crown, that will bring much bigger mindshare and will (irrationally) drive sales of lower-range GPUs in the same line, even if those lower-range GPUs are not nearly as good, or even worse than competing products. We've seen this happen in the past; the GTX 960 was a pretty terrible card (unless you really needed hardware HEVC decode), but people bought it because of the halo effect from the 980 Ti/Titan X.
 
  • Like
Reactions: MangoX and crisium

insertcarehere

Senior member
Jan 17, 2013
639
607
136
Do you think, considering manufacturing and design costs, it will be viable for AMD to put out such a large GPU just for the sake of bragging rights?

Maybe, maybe not.

But it's probably better than ignoring the high-end/enthusiast market completely, focusing hard on performance/$, and then acting all surprised when marketshare and profits stall for the 100th time. After all, that's basically what AMD has been doing in GPUs for at least the last half-decade and look where that's gotten it.
 

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,575
96
- And you got it to us before AnandTech did!

To be honest, it's the closest thing to a 960 review from here you will get. It's the start of a collection of reviews I will do that I like to call "The who gives a damn reviews". The reviews will be about older hardware that no one uses, recommends, or really even gives a damn about anymore. Stay tuned for my FX-8350 review!
 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
To be honest, it's the closest thing to a 960 review from here you will get. It's the start of a collection of reviews I will do that I like to call "The who gives a damn reviews". The reviews will be about older hardware that no one uses, recommends, or really even gives a damn about anymore. Stay tuned for my FX-8350 review!
Can you review the ASUS AC55-BT wifi+bluetooth PCI card?
 

tajoh111

Senior member
Mar 28, 2005
298
312
136
-I don't think anyone in their right mind expects anything from AMD on the GPU front, but speculation is fun nonetheless.

A wide but "slow" Navi from AMD @ roughly 400mm^2 would be a thing to behold.

Anyone know when AT will publish a deep dive article on the Navi arch? Or are we looking at another GTX 960 review situation?

The thing with the large GPU market is that, with 7nm development costs, it needs a piggyback market of some sort to make the cost of developing the GPU worth it.

This means adding back the compute. From what I have seen in Luxmark and other programs that test compute, Navi seems to be two steps forward as a gaming architecture but one or two steps back as far as compute goes.

This makes sense because Navi seems to be a pure gaming architecture designed with the consoles in mind, and it now has IPC similar to Nvidia's Turing, which is very impressive. It seems to be Maxwell + async in the behavior of its shaders, which makes it no surprise that its IPC is similar to Turing now. However, it appears to have taken a step back as far as compute ability goes:

https://www.hardwareluxx.de/index.p...0-und-radeon-rx-5700-xt-im-test.html?start=11

Making a ~400mm^2, 96 ROP, 4096 shader Navi that is pure gaming is possible, but it wouldn't have the compute to be competitive with Nvidia's next-gen 7nm products and would mostly be more in line with Nvidia's Volta, or less if we look beyond pure TFLOPS (double precision). This would reduce its TAM in a very fast-growing market, which is less desirable for AMD in the long run.

I think AMD's next enthusiast card will have the compute added back on, and AMD will try to replicate what Nvidia does, with compute-specific cards on one side of the fence and gaming cards on the other. I didn't think this was possible before with AMD's budget, but with the shifting of labor to China for GPU development, some reverse engineering (Navi performs too similarly to Maxwell and Turing in terms of IPC in the same games), and the piggybacking of R&D from the PS5 and next Xbox, it looks like AMD managed to do it.

Looking beyond that, AMD will need to look not at the present but at the future for the design of its next-gen card. Looking at Nvidia's financials, half of their revenue now comes from professional and business markets, which means Nvidia's professional cards are going to be strong.

I could see Nvidia's first compute product to succeed Volta being a 600-700mm^2, 6144 shader card clocked at a conservative (by 7nm standards) 1800MHz. This translates into a 22.1 TFLOPS card. This is probably Nvidia's target, since GV100 already does 14.7 TFLOPS, so roughly a 50% increase is expected with a new node. This will be followed by a 104 series with 0.666 of the specs (this pattern has been repeated since Maxwell), so in this case 4096 shaders, with a 102 series being the same 6144 shaders but at higher clocks. For those higher clocks I am guessing 2.2GHz for the 104 and 2.1GHz for the 102, since these clocks can be achieved on air/water today on Turing, and typically new nodes improve on these clocks and make them the new default. Assuming no changes in architecture and simply a move of Turing to a new node, AMD will need something with more than 4096 shaders to compete with Nvidia's next gen.

So what specs will AMD need to compete with an 18 TFLOPS card (104) or a 25.8 TFLOPS card (100/102), using the above specs?

Assuming AMD's clocks don't improve (and the chance of that is not good, since the wider configuration will eat into the clocks, along with compute and the possibility of double precision), AMD would need something with 5120 shaders to be competitive with Nvidia and perhaps beat the 104.

5120 * 2 * 1.8GHz = 18.4 TFLOPS. This becomes tricky for AMD because such a card is a mammoth in terms of die size, considering the space requirements of 2560 shaders at 251mm^2. Add in the compute ability that likely needs to come back, and AMD is looking at a 500mm^2-plus card. With Nvidia, we are likely going to see die sizes grow compared to Pascal because of how big Turing cards are today; I am estimating GA104 at 350-400mm^2, GA102 at 525-600mm^2, and GA100 at 600-700mm^2.

This playing around with die sizes shows that although RDNA was a good step in the right direction, AMD's competitiveness comes mostly from the move to a new node versus Nvidia. AMD's problem with RDNA is scalability and power consumption, and this is why Nvidia is not in panic mode.

As it currently stands, power consumption reveals problems with scalability.

Nvidia's next-gen GA106 should be around 2036 shaders (looking at trends with Maxwell, Pascal and Turing). At 2.25GHz (these cards are clocked a tad higher because of the small die and less leakage), we are looking at a 9.16 TFLOPS card. Things look good when the 5700 XT produces 9 TFLOPS, so it initially looks like a competitive race. That is, until you look at power consumption. There's a good chance GA106 will consume 120 watts, like the GTX 960, 1060 and 1660 Ti before it. It also shows the scalability of the architecture: if 9.1 TFLOPS can be produced for 120 watts, there is the possibility of 18 TFLOPS at 200-225 watts and 25.8 TFLOPS at 300 watts.

With RDNA there are concerns when 9 TFLOPS already takes 225 watts. Where is the scalability to get to 18 or 25.8 TFLOPS? The 5700's 7.6 TFLOPS at 180 watts isn't much better.

If Navi were a 150-watt card at 9 TFLOPS (don't point to a YouTube video where someone undervolts; you can't guarantee stability across hundreds of thousands of samples and all use cases), AMD would be in a much more competitive position for scaling, but at 225 watts the outlook does not look good for scaling upward.
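
For reference, the throughput figures above all come from the same peak-FP32 formula; here is a quick Python sketch that reproduces them (every configuration is a speculative guess about unreleased parts taken from the reasoning above, not a real spec):

```python
# Peak FP32 TFLOPS = shaders * 2 FLOPs per clock (FMA) * clock (GHz) / 1000.
# All entries are speculative projections, not announced products.

def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

projections = {
    "Guessed NV 7nm 100 (compute)": (6144, 1.8),   # ~22.1 TFLOPS
    "Guessed NV 7nm 102":           (6144, 2.1),   # ~25.8 TFLOPS
    "Guessed NV 7nm 104":           (4096, 2.2),   # ~18.0 TFLOPS
    "Guessed NV 7nm 106":           (2036, 2.25),  # ~9.2 TFLOPS
    "Big Navi guess":               (5120, 1.8),   # ~18.4 TFLOPS
}
for name, (shaders, clock_ghz) in projections.items():
    print(f"{name:30s} {fp32_tflops(shaders, clock_ghz):5.1f} TFLOPS")
```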
 
Last edited:

Det0x

Golden Member
Sep 11, 2014
1,030
2,957
136
The thing with the large GPU market is that, with 7nm development costs, it needs a piggyback market of some sort to make the cost of developing the GPU worth it.

This means adding back the compute. From what I have seen in Luxmark and other programs that test compute, Navi seems to be two steps forward as a gaming architecture but one or two steps back as far as compute goes.

This makes sense because Navi seems to be a pure gaming architecture designed with the consoles in mind, and it now has IPC similar to Nvidia's Turing, which is very impressive. It seems to be Maxwell + async in the behavior of its shaders, which makes it no surprise that its IPC is similar to Turing now. However, it appears to have taken a step back as far as compute ability goes:

https://www.hardwareluxx.de/index.p...0-und-radeon-rx-5700-xt-im-test.html?start=11

Making a ~400mm^2, 96 ROP, 4096 shader Navi that is pure gaming is possible, but it wouldn't have the compute to be competitive with Nvidia's next-gen 7nm products and would mostly be more in line with Nvidia's Volta, or less if we look beyond pure TFLOPS (double precision). This would reduce its TAM in a very fast-growing market, which is less desirable for AMD in the long run.

I think AMD's next enthusiast card will have the compute added back on, and AMD will try to replicate what Nvidia does, with compute-specific cards on one side of the fence and gaming cards on the other. I didn't think this was possible before with AMD's budget, but with the shifting of labor to China for GPU development, some reverse engineering (Navi performs too similarly to Maxwell and Turing in terms of IPC in the same games), and the piggybacking of R&D from the PS5 and next Xbox, it looks like AMD managed to do it.

Looking beyond that, AMD will need to look not at the present but at the future for the design of its next-gen card. Looking at Nvidia's financials, half of their revenue now comes from professional and business markets, which means Nvidia's professional cards are going to be strong.

I could see Nvidia's first compute product to succeed Volta being a 600-700mm^2, 6144 shader card clocked at a conservative (by 7nm standards) 1800MHz. This translates into a 22.1 TFLOPS card. This is probably Nvidia's target, since GV100 already does 14.7 TFLOPS, so roughly a 50% increase is expected with a new node. This will be followed by a 104 series with 0.666 of the specs (this pattern has been repeated since Maxwell), so in this case 4096 shaders, with a 102 series being the same 6144 shaders but at higher clocks. For those higher clocks I am guessing 2.2GHz for the 104 and 2.1GHz for the 102, since these clocks can be achieved on air/water today on Turing, and typically new nodes improve on these clocks and make them the new default. Assuming no changes in architecture and simply a move of Turing to a new node, AMD will need something with more than 4096 shaders to compete with Nvidia's next gen.

So what specs will AMD need to compete with an 18 TFLOPS card (104) or a 25.8 TFLOPS card (100/102), using the above specs?

Assuming AMD's clocks don't improve (and the chance of that is not good, since the wider configuration will eat into the clocks, along with compute and the possibility of double precision), AMD would need something with 5120 shaders to be competitive with Nvidia and perhaps beat the 104.

5120 * 2 * 1.8GHz = 18.4 TFLOPS. This becomes tricky for AMD because such a card is a mammoth in terms of die size, considering the space requirements of 2560 shaders at 251mm^2. Add in the compute ability that likely needs to come back, and AMD is looking at a 500mm^2-plus card. With Nvidia, we are likely going to see die sizes grow compared to Pascal because of how big Turing cards are today; I am estimating GA104 at 350-400mm^2, GA102 at 525-600mm^2, and GA100 at 600-700mm^2.

This playing around with die sizes shows that although RDNA was a good step in the right direction, AMD's competitiveness comes mostly from the move to a new node versus Nvidia. AMD's problem with RDNA is scalability and power consumption, and this is why Nvidia is not in panic mode.

As it currently stands, power consumption reveals problems with scalability.

Nvidia's next-gen GA106 should be around 2036 shaders (looking at trends with Maxwell, Pascal and Turing). At 2.25GHz (these cards are clocked a tad higher because of the small die and less leakage), we are looking at a 9.16 TFLOPS card. Things look good when the 5700 XT produces 9 TFLOPS, so it initially looks like a competitive race. That is, until you look at power consumption. There's a good chance GA106 will consume 120 watts, like the GTX 960, 1060 and 1660 Ti before it. It also shows the scalability of the architecture: if 9.1 TFLOPS can be produced for 120 watts, there is the possibility of 18 TFLOPS at 200-225 watts and 25.8 TFLOPS at 300 watts.

With RDNA there are concerns when 9 TFLOPS already takes 225 watts. Where is the scalability to get to 18 or 25.8 TFLOPS? The 5700's 7.6 TFLOPS at 180 watts isn't much better.

If Navi were a 150-watt card at 9 TFLOPS (don't point to a YouTube video where someone undervolts; you can't guarantee stability across hundreds of thousands of samples and all use cases), AMD would be in a much more competitive position for scaling, but at 225 watts the outlook does not look good for scaling upward.


Tom's Hardware: AMD Arcturus Is Probably a Vega-Based Professional GPU

AMD will separate their graphics-card architectures, with Arcturus for compute and RDNA (2) for gaming.

Vega 20 is already doing very well in comparison with Turing in pure compute benchmarks, and with RDNA they are finally matching Nvidia's gaming "IPC" (shaders/clocks), all with a smaller die size* thanks to 7nm.
I would say it's been many years, if ever, since the future has looked this bright for AMD in both the CPU and GPU departments. :eek:

* = This is also pretty interesting with regard to Turing's die size for pure gaming

ToTTenTranz said:
While it seemed interesting at first (as a super advanced upscaler better than checkerboard), people have now been looking at DLSS for what it actually is:

- Nvidia's attempt at justifying Turing's tensor cores as a relevant feature for games.

Turns out Nvidia's engineers can't, for the life of them, use the tensor cores for anything in games other than performing regular non-matrix FP16 calculations.
DLSS takes months to train for each game, and they must do so for every resolution, because ultrawide takes another handful of months to implement on top of the previous ones, and apparently they aren't even using the tensor cores for that.

And now that AMD's Contrast Adaptive Sharpening is better, open source and has already been ported to GPU-agnostic tools, I'm guessing DLSS will die very soon.
I wouldn't be surprised if DLSS doesn't actually get implemented in more than the 6 already released games, since its implementation is such a waste of resources.
 
Last edited:
  • Like
Reactions: DisEnchantment