[Rumor, Tweaktown] AMD to launch next-gen Navi graphics cards at E3


Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
Unfortunately, whilst my hope is 48 CUs, my thinking is we are looking at a 40 CU part that is clocked way past its efficiency window to compete with Turing.
I think Turing caught AMD by surprise, just like Pascal did in performance.
I think Vega was more of a gaming turkey than they envisaged, and I also suspect N7 didn't hit the clocks that they were expecting... indeed, AMD revised the performance benefit of 7nm down from 35% to 25%, if I remember correctly.

Navi was designed under Raja Koduri; we know he didn't execute very well, and we also know Navi was delayed. I suspect Lisa Su pulled it in for a redesign, pulling in some features from next year's uarch and thereby enabling AMD to market a clean break from GCN.
I think next year will be the true next gen uarch from top to bottom, whilst Navi will take some features from it.

If this was a 48 CU part on the new uarch and 7nm, I feel we would be getting better than 2070 performance for 200+ W.
Seems like they are having to clock the daylights out of it.
You guys must stop dreaming about good prices, performance increases, or brand new architectures on schedule, on smaller nodes.

Because of the process, the economics of it, how saturated the market is, and how it will become even more saturated when Intel comes late to the party, the companies are not willing to make a lot of designs on smaller nodes. There is a good reason why Nvidia went with the 12 nm process instead of jumping to 7 nm.

There is no growth in client GPUs anymore. Where is the incentive for them to build hardware for the DIY market? To design a lot of chips, considering design costs, excluding manufacturing, are north of $250 million?

Growth in this market was the one thing that was still allowing AMD and Nvidia to price their hardware at sane levels. Because there is no growth, and design costs are as high as they are, there is really little viability left for companies to make GPUs.
 

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
It's a clean sheet design! You can't even imagine what's in it! It's all new! ;)
Could you imagine what was in the Ryzen architecture before its release?

Also, about the leaks on Navi: why hasn't the name RDNA leaked before? Why haven't the SKUs leaked before, even though some claimed to know a thing or two about Navi? Maybe because no one knew **** about it?
 
  • Like
Reactions: lightmanek

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
The RX 480 didn't use 100W more than the GTX 1060. It used about 50W more, depending on which versions of each card were being compared, and which BIOS the 480 was using. I agree that Vega was a huge power sucker.

Stock GTX 1060 used about 120W under maximum load, give or take. At the time of release, it was roughly 10% faster than RX 480, which was supposed to fit in a 150W power envelope but violated that by about 15W.

If AMD catches up to Nvidia architecturally with Navi (a big if), they should be able to substantially beat them in performance-per-watt, since they are on 7nm while Nvidia is still on 12nm. If they can't substantially beat RTX 2070 in perf/watt even with that node advantage, it's a sign that the architecture is still lacking.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Stock GTX 1060 used about 120W under maximum load, give or take. At the time of release, it was roughly 10% faster than RX 480, which was supposed to fit in a 150W power envelope but violated that by about 15W.

If AMD catches up to Nvidia architecturally with Navi (a big if), they should be able to substantially beat them in performance-per-watt, since they are on 7nm while Nvidia is still on 12nm. If they can't substantially beat RTX 2070 in perf/watt even with that node advantage, it's a sign that the architecture is still lacking.

https://www.techpowerup.com/reviews/NVIDIA/GeForce_RTX_2070_Founders_Edition/33.html

AMD will have to DOUBLE the perf/w of the entire board, which means the GPU itself will need to have more than a 2x increase in efficiency before factoring in the components of the PCB and vram. I don't think that has ever happened in the modern era of tracking perf/w. Not even anything close to that.
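Rough back-of-envelope to illustrate the board-vs-GPU point; the split between GPU power and fixed board overhead below is purely an assumption for the arithmetic, not a measured figure:

```python
# Why doubling *board* perf/W at the same performance implies an even
# bigger efficiency jump for the GPU silicon itself. The GPU/overhead
# split is assumed for illustration only.
board_power = 225.0   # W, hypothetical Navi-class board
fixed_power = 45.0    # W, assumed VRAM + VRM losses + fan
gpu_power = board_power - fixed_power          # 180 W for the GPU itself

target_board = board_power / 2.0               # 2x board perf/W, same perf
target_gpu = target_board - fixed_power        # the overhead doesn't shrink
print(f"GPU: {gpu_power:.0f} W -> {target_gpu:.0f} W "
      f"({gpu_power / target_gpu:.2f}x GPU-level efficiency needed)")
# With these assumed numbers the GPU itself needs roughly a 2.7x gain
# to deliver a 2x gain at the board level.
```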
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
https://www.techpowerup.com/reviews/NVIDIA/GeForce_RTX_2070_Founders_Edition/33.html

AMD will have to DOUBLE the perf/w of the entire board, which means the GPU itself will need to have more than a 2x increase in efficiency before factoring in the components of the PCB and vram. I don't think that has ever happened in the modern era of tracking perf/w. Not even anything close to that.
I'm pretty sure someone did the math and Navi is 20% better performance/watt than Vega. I believe Turing is 40% better than Vega.
Turing's 12nm architecture is more efficient than Navi's 7nm one.
A 2070 at 180 watts vs. Navi at 225 watts in the same performance envelope is what I'm hearing.
If Nvidia uses faster memory and raises the clocks on their cards, they could match Navi's performance per watt and be even faster. This is what they will announce in the coming weeks.
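For what it's worth, a quick sanity check of those rumored figures, treating all of them as given rather than as measurements:

```python
# Do the "20% / 40% better than Vega" rumors line up with the
# "180 W vs 225 W at equal performance" rumor?
navi_vs_vega = 1.20     # rumored Navi perf/W vs Vega
turing_vs_vega = 1.40   # rumored Turing perf/W vs Vega
print(f"Implied Turing vs Navi perf/W: {turing_vs_vega / navi_vs_vega:.2f}x")  # ~1.17x

# Equal performance at 180 W vs 225 W implies:
print(f"Power-based Turing vs Navi perf/W: {225 / 180:.2f}x")                  # 1.25x
# Same ballpark (~1.2x either way), so the two rumors are at least consistent.
```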
 
Last edited:

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
AMD will have to DOUBLE the perf/w of the entire board, which means the GPU itself will need to have more than a 2x increase in efficiency before factoring in the components of the PCB and vram. I don't think that has ever happened in the modern era of tracking perf/w. Not even anything close to that.

As I noted before, this was pretty much what happened if you compare Pitcairn (R9 270X) with Polaris 10 (RX 480). Performance nearly doubled at the same power consumption level. With a new architecture plus a full node shrink, we should expect this level of performance gain and it would be disappointing if RTG didn't deliver.

RTX 2070 maxes out at about 200W. This is on a "12nm" process (actually refined 16nm). This means that even if Ampere is basically a die shrink with few architectural improvements, we should see that performance level at 120W-150W from Nvidia in 2021. AMD needs to be prepared to match this, since there isn't going to be another die-shrink to keep them ahead.
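One hedged way to land in that 120W-150W window is to scale the ~200W Turing figure by the kind of iso-performance power reduction quoted for 7nm-class nodes (the ~40% number that also comes up later in this thread, plus a more conservative 30%):

```python
# Scale the ~200 W Turing figure by an assumed node-shrink power saving
# at the same performance. The 30-40% range is a foundry-style claim,
# not a measured figure for any future Nvidia part.
turing_power = 200.0                      # W, RTX 2070 under load (approx.)
for reduction in (0.40, 0.30):            # optimistic vs conservative shrink
    print(f"{reduction:.0%} power cut -> {turing_power * (1 - reduction):.0f} W")
# ~120 W and ~140 W - roughly the 120W-150W window estimated above.
```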
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
As I noted before, this was pretty much what happened if you compare Pitcairn (R9 270X) with Polaris 10 (RX 480). Performance nearly doubled at the same power consumption level. With a new architecture plus a full node shrink, we should expect this level of performance gain and it would be disappointing if RTG didn't deliver.

RTX 2070 maxes out at about 200W. This is on a "12nm" process (actually refined 16nm). This means that even if Ampere is basically a die shrink with few architectural improvements, we should see that performance level at 120W-150W from Nvidia in 2021. AMD needs to be prepared to match this, since there isn't going to be another die-shrink to keep them ahead.

AMD did essentially double performance between Pitcairn and Polaris, but they did NOT double perf/W. Polaris was less than 50% more efficient than Pitcairn. How high is AMD going to raise the TDP on a 250mm² "mid-range" chip?!
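The algebra behind that distinction, plugging in the rough figures claimed in this exchange (inputs, not measurements):

```python
# perf/W = performance / power, so the efficiency gain is the performance
# gain divided by the power gain. Using the numbers claimed above:
perf_ratio = 2.0    # "performance nearly doubled" Pitcairn -> Polaris 10
eff_ratio = 1.5     # "less than 50% more efficient" (upper bound)

power_ratio = perf_ratio / eff_ratio
print(f"Implied board-power increase: {power_ratio:.2f}x")   # ~1.33x
# Doubling performance only doubles perf/W if power stays flat; if power
# rose by roughly a third, the efficiency gain shrinks to ~1.5x.
```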
 

french toast

Senior member
Feb 22, 2017
988
825
136
You guys must stop dreaming about good prices, performance increases, or brand new architectures on schedule, on smaller nodes.

Because of the process, the economics of it, how saturated the market is, and how it will become even more saturated when Intel comes late to the party, the companies are not willing to make a lot of designs on smaller nodes. There is a good reason why Nvidia went with the 12 nm process instead of jumping to 7 nm.

There is no growth in client GPUs anymore. Where is the incentive for them to build hardware for the DIY market? To design a lot of chips, considering design costs, excluding manufacturing, are north of $250 million?

Growth in this market was the one thing that was still allowing AMD and Nvidia to price their hardware at sane levels. Because there is no growth, and design costs are as high as they are, there is really little viability left for companies to make GPUs.
Nice post, but you are missing the crux of what I am saying: loads of different designs isn't relevant here. They have gone with one or two Navi designs for the middle of the market. It is about getting the design choice right, and then suddenly there are far more sales, and therefore profit, in GPUs for AMD.
Some of us think maybe AMD got the CU balance wrong for Navi 10 and will have to clock it out of its comfort zone to make ends meet, just like Polaris and Vega.
I'm sorry, but a 'clean sheet' design on a more mature 7nm at ~250-280mm² should be able to duke it out with a 2080 on perf/watt in rasterization, and win.
Stock GTX 1060 used about 120W under maximum load, give or take. At the time of release, it was roughly 10% faster than RX 480, which was supposed to fit in a 150W power envelope but violated that by about 15W.

If AMD catches up to Nvidia architecturally with Navi (a big if), they should be able to substantially beat them in performance-per-watt, since they are on 7nm while Nvidia is still on 12nm. If they can't substantially beat RTX 2070 in perf/watt even with that node advantage, it's a sign that the architecture is still lacking.
They should be able to beat Nvidia in performance per watt, yes, but 1) AMD doesn't bin their chips quite like Nvidia does, and 2) AMD has probably had Navi in design for 12-18 months longer than planned, probably with some major additions/revisions since the Vega debacle and Raja leaving. As such, they probably never planned for it to go up against 2070-class Turing performance, so they probably under-equipped it on CUs and were going to rely on clocks and geometry improvements to carry Navi through.
The delays/feature bump/redesign now mean they have to push it way past its intended efficiency point to make ends meet.

Almost certainly we are not going to see the best of Navi's potential here. I think if they could have seen the current landscape and the delays when they designed Navi, they probably would have laid down more CUs for the job and beaten the 2080 by 5-10% for $549, and likewise the 2070 for $399.

I think this whole RDNA thing was a late addition; I think Lisa Su pulled Navi back in and brought some features in from the next uarch, allowing them to move away from GCN much sooner and market this RDNA branding. I believe next year will be the proper new uarch from top to bottom.
 
  • Like
Reactions: prtskg

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
As @alcoholbob pointed out, Strange Brigade is VERY AMD friendly. On TPU's latest Turing review, dated April 14th, Vega 7 is a whopping 25% faster than the RTX 2080 in Strange Brigade at 4K, and an RX 590 matches a GTX 1070 in older reviews. Given that Vega 7 is about 5-10% slower than an RTX 2080 overall in very recent reviews by both TPU and TechSpot, WE ARE LOOKING AT RTX 2060 PERFORMANCE FOR BIG NAVI.


Correction: SB is friendly to GCN because it utilizes many of the architecture's capabilities. Obviously all bets are off for the new RDNA architecture; it's just as true to say performance could be as good or better across all games. Luckily they are going to reveal a lot of details at E3.
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
As a guess, this is the next gen shown on AMD's roadmap, and they branded it as Navi while renaming the original Navi to the Radeon VII. 'Next Gen' is a 2019 part on the roadmap, and they have multiple design teams, so it's kind of surprising that it was such a surprise.
As for performance, AMD basically already told us: a 1.5x perf/watt increase. BTW, she said in her keynote that it isn't up to 1.5x, it is at least 1.5x.
 
  • Like
Reactions: french toast

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
Nice post, but you are missing the crux of what I am saying: loads of different designs isn't relevant here. They have gone with one or two Navi designs for the middle of the market. It is about getting the design choice right, and then suddenly there are far more sales, and therefore profit, in GPUs for AMD.
Some of us think maybe AMD got the CU balance wrong for Navi 10 and will have to clock it out of its comfort zone to make ends meet, just like Polaris and Vega.
I'm sorry, but a 'clean sheet' design on a more mature 7nm at ~250-280mm² should be able to duke it out with a 2080 on perf/watt in rasterization, and win.
All you guys have to say is "it should be able to". That is exactly the problem. There is no "should".

You guys had better stop dreaming about efficient architectures down the road. There is a very good reason why AMD said that TSMC's process turned out worse than they expected on efficiency, offering only a 40% reduction in power consumption for the same design and clocks. There is a very good reason why Nvidia did not use the 7 nm process for Turing.

And down the road, as we scale to smaller nodes, things will only get worse. There is a reason why everybody is coming to terms with the reality that, at some point, we might be forced to use 1 kW(!) chips. What is worse: GPUs have only 3-4 nodes left.

Don't expect miracles in power efficiency from Nvidia either. Unless they once again bloat the hell out of their die sizes, we won't see miracles from them.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
As a guess, this is the next gen shown on AMD's roadmap, and they branded it as Navi while renaming the original Navi to the Radeon VII. 'Next Gen' is a 2019 part on the roadmap, and they have multiple design teams, so it's kind of surprising that it was such a surprise.
As for performance, AMD basically already told us: a 1.5x perf/watt increase. BTW, she said in her keynote that it isn't up to 1.5x, it is at least 1.5x.

No, Radeon VII is definitely a die shrunk Vega intended for heavy compute work loads. Navi was never intended for that use case.
 

crazzy.heartz

Member
Sep 13, 2010
183
26
81
All you guys have to say is "it should be able to". That is exactly the problem. There is no "should".

You guys had better stop dreaming about efficient architectures down the road. There is a very good reason why AMD said that TSMC's process turned out worse than they expected on efficiency, offering only a 40% reduction in power consumption for the same design and clocks. There is a very good reason why Nvidia did not use the 7 nm process for Turing.

And down the road, as we scale to smaller nodes, things will only get worse. There is a reason why everybody is coming to terms with the reality that, at some point, we might be forced to use 1 kW(!) chips. What is worse: GPUs have only 3-4 nodes left.

Don't expect miracles in power efficiency from Nvidia either. Unless they once again bloat the hell out of their die sizes, we won't see miracles from them.

If not for Nvidia's overzealous attempt to include dedicated RT hardware in the RTX chips, they would have left team RTG in the dust.

Look at the 1660 Ti's power efficiency. Now imagine scaled-up GTX 1700 and 1800 parts (just regular Turing shaders, sans RT hardware), similar to the 1660 Ti. They would have been absolute beasts with perfectly tuned clock/TBP ratios. That's as power-efficient an architecture as one could hope for right now.

What 7nm does is give AMD an opportunity to correct GCN's imbalances (which is in line with the rumors we've heard so far) and to include more CUs/shaders (it being a smaller node) at increased clocks, while keeping these chips at sane TDP levels. AMD won't be able to catch up to Nvidia's efficiency with a single uarch jump, but they appear to be making strides in the right direction.

Smaller nodes and increased density present cooling challenges, but those can be addressed with 2-3 fan setups. Challenges inspire innovation. They're engineers, remember; they just LOVE solving problems :)
 
  • Like
Reactions: french toast

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
If not for Nvidia's overzealous attempt to include dedicated RT hardware in the RTX chips, they would have left team RTG in the dust.

Look at the 1660 Ti's power efficiency. Now imagine scaled-up GTX 1700 and 1800 parts (just regular Turing shaders, sans RT hardware), similar to the 1660 Ti. They would have been absolute beasts with perfectly tuned clock/TBP ratios. That's as power-efficient an architecture as one could hope for right now.

What 7nm does is give AMD an opportunity to correct GCN's imbalances (which is in line with the rumors we've heard so far) and to include more CUs/shaders (it being a smaller node) at increased clocks, while keeping these chips at sane TDP levels. AMD won't be able to catch up to Nvidia's efficiency with a single uarch jump, but they appear to be making strides in the right direction.

Smaller nodes and increased density present cooling challenges, but those can be addressed with 2-3 fan setups. Challenges inspire innovation. They're engineers, remember; they just LOVE solving problems :)
Every physical design stands on three pillars: power, performance, and die area.

You can sacrifice one of them for the other two. With Turing, and previously with Volta, Nvidia sacrificed die area.

AMD did not. One of the reasons why is that 7 nm did not turn out as good as everyone thought and hoped. Navi is a relatively small GPU. In order to increase clocks and pack in as many architectural changes as possible, AMD burned a lot of transistors. AGAIN. Going bigger would have required a redesign and would have resulted in much higher manufacturing costs for Navi, which is not a good idea because of what I wrote in one of my previous posts: there is no growth in client GPUs. Client GPUs are doomed, and you had better get used to this. Go back to the posts in which I talked about the reality of process nodes (high design costs, no growth, etc.). It directly relates to this.

When the dies came back from the fab and AMD found out that the 7 nm process was not as good as everyone had hoped, they went back and sacrificed power for performance, and for high margins on GPUs. This is the reality that we will face in the future.

I am even excluding the ridiculous expectations people have had (a 150W RTX 2070 competitor for $250, otherwise it's a fail!!!11oneonetwo!), which actually show what brand perception AMD has, and what brand perception Nvidia has (one of you here posted that the RTX 2070 is a 180W GPU; apart from the fact that it is actually a 215W GPU, so that's that).

If what you say about process nodes were true, why hasn't Intel solved their 10 nm issues? Why will we have Ice Lake only on niche products, and no real volume parts? Why hasn't Nvidia used the 7 nm process for Turing? The writing is on the wall already, guys.
 

crazzy.heartz

Member
Sep 13, 2010
183
26
81
The RX 480 didn't use 100W more than the GTX 1060. It used about 50W more, depending on which versions of each card were being compared, and which BIOS the 480 was using. I agree that Vega was a huge power sucker.

We will have to wait and see what Navi uses. It's very possible the boards displayed were development boards, which almost always have 2x8-pin connectors on them, Nvidia's included.

It started out as a mere 10-20 watt difference in the very beginning, but it only grew with each newer variation of Polaris and reached cataclysmic proportions with the 590.

Yes, those preview cards are running on Polaris chips. In any case, 180 watts for an RTX 2060 counter and 225 watts for an RTX 2070 competitor is excellent. That lower SKU might launch with a 6-pin + 8-pin, though.


They have gone with one or two Navi designs for the middle of the market. It is about getting the design choice right, and then suddenly there are far more sales, and therefore profit, in GPUs for AMD.
Some of us think maybe AMD got the CU balance wrong for Navi 10 and will have to clock it out of its comfort zone to make ends meet, just like Polaris and Vega.
I'm sorry, but a 'clean sheet' design on a more mature 7nm at ~250-280mm² should be able to duke it out with a 2080 on perf/watt in rasterization, and win.

The delays/feature bump/redesign now mean they have to push it way past its intended efficiency point to make ends meet.

On the contrary, I'm thinking AMD is having a GTX 680 moment right now and has bumped this chip up the hierarchical ladder. As Navi development started a looong time ago, they would have planned it in line with Nvidia's usual strategy, which is adding more cores with each new uarch for performance bumps in the respective tiers. The last couple of jumps were spectacular in terms of performance, and AMD would have anticipated something similar with the GTX 1060/1070 successors and included a corresponding number of shaders in their counter cards.

We know how things turned out with the RTX Turings, which might give AMD an opportunity to bump up their x70-class chip to take on the corresponding x70 series from Nvidia. This might make them tune up the clocks a little more, outside the power/performance sweet spot; compared to the RX 590, though, it would pale in comparison.

Hence, I am skeptical about this chip having just 40 CUs. That sounds like too few. Or maybe they've made quite a breakthrough with these gaming-specific adjustments in their RebalancedDNA.

It's in line with Navi being at a lower density compared to Vega, to enable higher clocks and accommodate the rest of the features. Maybe that's why a ~256mm² chip would house only 40 CUs, as against 48 CUs on a similarly sized 7nm part.
 
  • Like
Reactions: french toast

crazzy.heartz

Member
Sep 13, 2010
183
26
81
Every physical design stands on three pillars: power, performance, and die area.

You can sacrifice one of them for the other two. With Turing, and previously with Volta, Nvidia sacrificed die area.

AMD did not. One of the reasons why is that 7 nm did not turn out as good as everyone thought and hoped. Navi is a relatively small GPU. In order to increase clocks and pack in as many architectural changes as possible, AMD burned a lot of transistors. AGAIN. Going bigger would have required a redesign and would have resulted in much higher manufacturing costs for Navi, which is not a good idea because of what I wrote in one of my previous posts: there is no growth in client GPUs. Client GPUs are doomed, and you had better get used to this. Go back to the posts in which I talked about the reality of process nodes (high design costs, no growth, etc.). It directly relates to this.

I agree with you that client GPUs and node jumps in general are experiencing stagnation; however, I am quite positive about AMD having a better handle on how Navi should turn out, as they had another (bigger) 7nm part in production for at least 6+ months. Maybe TSMC let them down and they were unable to meet performance targets with this particular variation of Navi (which now appears to have happened, as they've made significant changes to the chip design / a different uarch).

However, the X-factor for AMD is consoles. Sony bore the brunt of the Navi development cost, and with both console manufacturers in the bag, in addition to what appears to be the cloud front as well (MS Azure and Stadia), most of their R&D for chip design is already done. All they have to do is port the base design onto consumer parts. Since both Sony and MS, in order to get a leg up on the other, opt for chips in different performance tiers, AMD gets free R&D for their PC parts. The last couple of chips for Sony have been near replicas of their PC parts.

If AMD was able to release consumer chips back when they were cash-strapped and layoffs were happening left and right, they are certainly in a better position to do so now. Nvidia won't ever stop producing desktop chips as long as there are consoles (and they're not the ones powering them).

When the dies came back from the fab and AMD found out that the 7 nm process was not as good as everyone had hoped, they went back and sacrificed power for performance, and for high margins on GPUs. This is the reality that we will face in the future.

I am even excluding the ridiculous expectations people have had (a 150W RTX 2070 competitor for $250, otherwise it's a fail!!!11oneonetwo!), which actually show what brand perception AMD has, and what brand perception Nvidia has (one of you here posted that the RTX 2070 is a 180W GPU; apart from the fact that it is actually a 215W GPU, so that's that).

AMD needs to prove themselves again on the GPU front, as they've done with CPUs. I remember how let down people were to find out that the multi-core, super-high-GHz Bulldozer was actually slow compared to Intel parts with half the cores and lower clocks. As a result, even now, when a relatively uninformed buyer goes to market and someone suggests that the AMD part has more cores and speed and actually performs better, people still buy Intel.

It's a similar ditch they've dug for GPUs by overvolting and giving us 250/300 watt GPUs for the last couple of generations (that don't beat their counterparts).

If what you say about process nodes were true, why hasn't Intel solved their 10 nm issues? Why will we have Ice Lake only on niche products, and no real volume parts? Why hasn't Nvidia used the 7 nm process for Turing? The writing is on the wall already, guys.

I used to follow the whole 10nm debacle up until a few years back (with some semi-accurate information that turned out to be true). However, the moment I saw the Ryzen benchmarks and understood their future strategy, Intel became irrelevant, fast.

Also, these are business decisions. How they turn out depends on the respective companies and their strategy, which isn't always correct, as we've seen with AMD's power hogs, Intel's eternal pursuit of 10nm, and Nvidia's attempt to revolutionize the market with RT goodness.

However, we will have ultra-powerful consoles pretty soon, which will necessitate faster desktop hardware, and I'm pretty sure there will be capable products we can purchase from either camp. Whether we will need to tune them ourselves for lower power consumption remains to be seen. The masses vote with their wallets, and when they don't pay, companies change strategies... or ownership.
 

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
I agree with you that client GPUs and node jumps in general are experiencing stagnation; however, I am quite positive about AMD having a better handle on how Navi should turn out, as they had another (bigger) 7nm part in production for at least 6+ months. Maybe TSMC let them down and they were unable to meet performance targets with this particular variation of Navi (which now appears to have happened, as they've made significant changes to the chip design / a different uarch).
I can't read your whole post right now, but this jumped out at me.

On what in the world do you base this idea? Navi is made on beta silicon. Alpha silicon was great, and everyone thought beta silicon would be even better, but it wasn't much better. Radeon VII is made on alpha silicon.

Navi 20 is based on different silicon and a different process, possibly with EUV. How could it compare to N7? And how do we know it is so much better (it isn't)?

And why does it have to be a bloody competition for companies "to win"?
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
No, Radeon VII is definitely a die shrunk Vega intended for heavy compute work loads. Navi was never intended for that use case.

I'm not completely convinced; it appears to have some changes, which is in line with normal architecture iterations. It has fewer CUs and more performance, which the clock increase alone doesn't account for. I don't know of any deep dive on the architecture, but I'm willing to bet it isn't a direct die shrink. I think it would make a lot of sense for them to have approached it that way.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
I'm not completely convinced; it appears to have some changes, which is in line with normal architecture iterations. It has fewer CUs and more performance, which the clock increase alone doesn't account for. I don't know of any deep dive on the architecture, but I'm willing to bet it isn't a direct die shrink. I think it would make a lot of sense for them to have approached it that way.

Radeon VII is a consumer Instinct MI50, which uses a cut-down Vega 20. Radeon VII also has double the memory bandwidth of Vega 64.

AnandTech has an overview of its architecture here, so I don't need to repeat it: https://www.anandtech.com/show/13923/the-amd-radeon-vii-review
 

DrMrLordX

Lifer
Apr 27, 2000
21,637
10,855
136
I am even excluding the ridiculous expectations people have had (a 150W RTX 2070 competitor for $250, otherwise it's a fail!!!11oneonetwo!), which actually show what brand perception AMD has, and what brand perception Nvidia has (one of you here posted that the RTX 2070 is a 180W GPU; apart from the fact that it is actually a 215W GPU, so that's that).

Do you consider an RTX 2060 competitor, maybe +5-15%, at $279 to be ridiculous? That's what AMD should have brought to the table.

In light of some other revelations from other posters, it certainly appears possible that AMD is attempting to produce a bunch of 56CU dice, and they're selling harvested 40CU "failures" as a 2070 competitor this year. So if every die on Navi10 (at least) is really a 56CU die, then that might explain the cost side of things, a bit. And if that's true, then we'll be seeing 56CU Navi next year . . . not on 7nm+.

No idea how they are cutting Navi14 out of the wafers, mind you, or what the BoM is on those cards. I'd love to see an analysis of that (people did a pretty good analysis of the BoM for Radeon VII, mostly concluding that the HBM2 alone ran over $200).
 

Glo.

Diamond Member
Apr 25, 2015
5,711
4,559
136
Do you consider an RTX 2060 competitor, maybe +5-15%, at $279 to be ridiculous? That's what AMD should have brought to the table.

In light of some other revelations from other posters, it certainly appears possible that AMD is attempting to produce a bunch of 56CU dice, and they're selling harvested 40CU "failures" as a 2070 competitor this year. So if every die on Navi10 (at least) is really a 56CU die, then that might explain the cost side of things, a bit. And if that's true, then we'll be seeing 56CU Navi next year . . . not on 7nm+.

No idea how they are cutting Navi14 out of the wafers, mind you, or what the BoM is on those cards. I'd love to see an analysis of that (people did a pretty good analysis of the BoM for Radeon VII, mostly concluding that the HBM2 alone ran over $200).
Navi 10 is a 40 CU chip. Navi 14 is a 20 CU chip. THAT'S IT. A lot of die area has been thrown at the new CUs, the new GPU layout, caches, and... higher GPU clocks. We are talking about a GPU that clocks in the 2-2.2 GHz range.
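For scale, a rough paper-spec FP32 estimate at those clocks, assuming the usual 64 shaders per CU and 2 FLOPs per shader per clock (FMA); these are theoretical numbers, not benchmarks:

```python
# Theoretical FP32 throughput for a 40 CU part at the rumored clocks.
cus, shaders_per_cu = 40, 64
for clock_ghz in (2.0, 2.2):
    tflops = cus * shaders_per_cu * 2 * clock_ghz / 1000
    print(f"{clock_ghz} GHz -> {tflops:.1f} TFLOPS FP32")
# ~10.2-11.3 TFLOPS, i.e. roughly Vega 56-class raw throughput (or a bit
# more) out of far fewer CUs, carried almost entirely by the clock speed.
```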
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
No idea how they are cutting Navi14 out of the wafers, mind you, or what the BoM is on those cards. I'd love to see an analysis of that (people did a pretty good analysis of the BoM for Radeon VII, mostly concluding that the HBM2 alone ran over $200).

The BoM pricing for the HBM2, however, was taken from a rumor site that just doubled the price of the memory used on Vega 64, which is an entirely inaccurate way to determine prices.
 