[Rumor, Tweaktown] AMD to launch next-gen Navi graphics cards at E3

Status
Not open for further replies.

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
In all of those analyses about manufacturing costs, not once have I heard anything about GDDR6 costs.

What if it is around $10 per GDDR6 chip? What if it is part of the price creep of Nvidia GPUs?

A good question, worth investigating.

However, let's take a look at what's on the market. Nvidia launched a 200mm2 GTX 1060 for $250 with 6GB of GDDR5. They then launched a 284mm2 GTX 1660 Ti for $280 with 6GB of GDDR6. Perhaps they are willing to sacrifice margins to win the sub-$300 market. But I think it demonstrates that the memory price increase isn't too significant.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
A good question, worth investigating.

However, let's take a look at what's on the market. Nvidia launched a 200mm2 GTX 1060 for $250 with 6GB of GDDR5. They then launched a 284mm2 GTX 1660 Ti for $280 with 6GB of GDDR6. Perhaps they are willing to sacrifice margins to win the sub-$300 market. But I think it demonstrates that the memory price increase isn't too significant.
12 nm wafers are mature enough to yield at over 90%, possibly at a 95% rate, and are cheap enough to offer a buyout option at $4,500 per wafer. Those are pennies compared to 7 nm design and manufacturing costs.
 
  • Like
Reactions: DarthKyrie

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
I'd wait until a comprehensive round of undervolting tests has been run on Navi before making that claim.

Depending on how far 1750/1905 is up the bell curve for the chip, they could be leaving a lot of performance per watt on the table in order to match the RTX 2070 with that single-die package.

Although it's extremely unlikely that AMD would come out with a dual-GPU package (dual-GPU package, not dual-GPU card) this generation, I wouldn't put it past them to have it waiting, even if it only benefits mGPU-enabled games and compute (hint: RT is a great parallel-scaling compute workload).

Undervolting applies to NVIDIA as well. Most Pascal and Turing cards I've played with have at least a 125 MHz buffer at any point on the voltage curve.
 
  • Like
Reactions: tviceman

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
12 nm wafers are mature enough to yield at over 90%, possibly at a 95% rate, and are cheap enough to offer a buyout option at $4,500 per wafer. Those are pennies compared to 7 nm design and manufacturing costs.

Let's make sure we are talking about both node price point and memory price point.

AdoredTV calculated that 7nm adds ~$35 to the 5700 XT manufacturing cost (compared to Polaris) using a Silicon Cost Calculator with a worst-case scenario (go to 27:30 in the video). They then rounded it to $50 with an (albeit arbitrary) addition for GDDR6 costs.

The GTX 1660 Ti is much larger than the GTX 1060. Admittedly, I don't know if 12nm is cheaper than 16nm (very possible, to be sure). But I'm unsure how you are fusing die size and memory cost in your analysis. The 1660 Ti's die is 42% larger than the GTX 1060's, and despite having more expensive GDDR6, the card is only 12% more expensive. If one were to make an analysis, and even assume 12nm is cheaper than 16nm without any sources, they'd still be hard pressed to state that the GDDR6 price increase is significant.
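For readers following along, the back-of-the-envelope math behind these die-cost claims can be sketched as below. All of the numbers (wafer prices, defect density) are illustrative assumptions on my part, not sourced figures; the yield model is the standard Poisson approximation, which is roughly what calculators like the one linked later in the thread use.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Rough gross dies per wafer (ignores scribe lines and edge exclusion)."""
    r = wafer_diameter_mm / 2
    # Standard approximation: wafer area / die area, minus an edge-loss term.
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_rate(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield model: Y = exp(-A * D0), with A in cm^2."""
    return math.exp(-(die_area_mm2 / 100) * defect_density_per_cm2)

def cost_per_good_die(wafer_cost: float, die_area_mm2: float,
                      defect_density_per_cm2: float) -> float:
    good = dies_per_wafer(die_area_mm2) * yield_rate(die_area_mm2,
                                                     defect_density_per_cm2)
    return wafer_cost / good

# Illustrative only: assumed wafer prices and a 0.2/cm^2 defect density.
polaris = cost_per_good_die(6000, 232, 0.2)   # 14nm-class wafer, 232 mm^2
navi = cost_per_good_die(12000, 251, 0.2)     # 7nm at ~2x wafer cost, 251 mm^2
print(f"Polaris-class die: ~${polaris:.0f}, Navi-class die: ~${navi:.0f}")
```

With these assumed inputs the per-die delta lands in the same ballpark as the ~$35-$50 figures being argued over, though the result is quite sensitive to the defect density chosen.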
 

soresu

Diamond Member
Dec 19, 2014
4,252
3,756
136
I live in a metro area with ~2 million people. I pay $120/month for ~250 down, 10 up, and a 1TB data cap. It's the best available here. Stadia simply isn't viable in many places and likely won't be for quite a long time, and that's before discussing the latency issues.
Are data caps still a thing for landlines??!!!
In the UK they've been little more than a talking point for some time, the only caps they really care about now are mobile/cell broadband data caps - the ones where they price gouge for every GB over a tiny limit.
 
  • Like
Reactions: lightmanek

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
Are data caps still a thing for landlines??!!!
In the UK they've been little more than a talking point for some time, the only caps they really care about now are mobile/cell broadband data caps - the ones where they price gouge for every GB over a tiny limit.

Yup. Extremely common for cable internet like Comcast and Cox in the U.S. The U.S. has horrible internet...
 
  • Like
Reactions: DarthKyrie

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Let's make sure we are talking about both node price point and memory price point.

AdoredTV calculated the 7nm adds ~$35 to the 5700XT manufacturing cost (compared to Polaris) using a Silicon Cost Calculator with worst case scenario (go to 27:30 in the video). They then rounded it to $50 with a (albeit random) addition for GDDR6 costs.

The GTX 1660 Ti is much larger than the GTX 1060. Admittedly I don't know if 12nm is cheaper than 16nm (very possible to be sure). But I'm unsure how you are fusing die size and memory cost in your analysis. 1660 Ti is 42% larger die than the GTX 1060 and despite having more expensive GDDR6, it's only 12% more expensive than the GTX 1060. If one were to make an analysis, and even assume 12nm is cheaper than 16nm without any sources, they'd still be hard pressed to state that GDDR6 price increase would be significant.
Yes, because some random guy on the Internet used basic armchair logic to calculate yields and a worst-case scenario for the 7 nm process, it must be true!
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
AdoredTV (not a random guy) used a calculator for the die size: https://caly-technologies.com/die-yield-calculator/

I'm certain it's not perfect. Are you certain it's outright dismissible, as you appear to be arguing?

Perhaps it is wrong. But he used a 100% price increase over 14nm, which, by the way, is worse than what AMD themselves calculated; see 27:20. As AdoredTV said, it's basically a worst-case scenario based on the calculator.

The GDDR6 figure was arbitrary, as there are no sources available that anyone (AdoredTV, you, or I) could find. But looking at the GTX 1660 Ti and GTX 1060 (as already discussed), I doubt they are priced significantly differently.

You criticize a source without offering your own?

What makes you think 284mm2 on 12nm is cheaper than 200mm2 on 16nm with equal amounts of memory? If you're correct, you're keeping your cards close to your chest for some reason. Let's see.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
AdoredTV (not a random guy) used a calculator for the die size: https://caly-technologies.com/die-yield-calculator/

I'm certain it's not perfect. Are you certain it's outright dismissible, as you appear to be arguing?

Perhaps it is wrong. But he used a 100% price increase over 14nm, which, by the way, is worse than what AMD themselves calculated; see 27:20. As AdoredTV said, it's basically a worst-case scenario based on the calculator.

The GDDR6 figure was arbitrary, as there are no sources available that anyone (AdoredTV, you, or I) could find. But looking at the GTX 1660 Ti and GTX 1060 (as already discussed), I doubt they are priced significantly differently.

You criticize a source without offering your own?

What makes you think 284mm2 on 12nm is cheaper than 200mm2 on 16nm with equal amounts of memory? If you're correct, you're keeping your cards close to your chest for some reason. Let's see.
I'm not talking about the bloody GTX 1660 Ti, but about Navi's manufacturing costs.

A 100% price increase does not account for:
A) a bigger die size than Polaris.
B) a process that is not yet mature, like N7, which affects yields.
C) almost 3 times higher wafer costs.

All of this together makes it a real possibility that manufacturing costs alone increased not by 100% but by more. Much more.

And this does not even account for the price of GDDR6. How much do those chips cost? GN said last year: at least 20% more than GDDR5.
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
A) 232 vs 251 is negligible. Furthermore, AMD's slides indicate the cost difference is less than 100%. AdoredTV used 100%, which makes up for the minor die size difference.
B) The estimate was 70% yields, including the defective dies salvaged for the cut-down variant. I have more reason to believe that than your estimate of nothing.
C) 3x? AMD says less than 100% more for 7nm versus 16nm/14nm on 250mm2 dies, so how do you figure?

"I'm not talking about the bloody GTX 1660 Ti, but about Navi's manufacturing costs."

You say that, but then you say:

"And this does not even account for the price of GDDR6. How much do those chips cost? GN said last year: at least 20% more than GDDR5."

Do I need to repeat again how the GTX 1660 Ti is very, very relevant to GDDR6 prices when compared to the 1060 and GDDR5? Again, unless you think 284mm2 on 12nm is significantly cheaper than 200mm2 on 16nm, which seems incredible to believe.

You don't have to fervently defend $450. Even I was initially OK with it on performance-per-dollar grounds, until I saw the die size. It's OK to be critical of both companies.
 
  • Like
Reactions: itsmydamnation

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
A) 232 vs 251 is negligible. Furthermore, AMD's slides indicate the cost difference is less than 100%.
AMD does not say MANUFACTURING COSTS, but DESIGN COSTS. Those have jumped from $150 million per design on 14 nm to $270 million per design on 7 nm.

Which is below a 100% increase.

But it has NOTHING TO DO WITH MANUFACTURING COSTS.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136

Greater than 2x is less than 100% of less than 4x. It clearly says "yields", not design.

That's an AMD-provided slide.
Which, on the other hand, pretty much suggests that manufacturing costs for 16 nm and 14 nm were not as low as people thought they were.

Why? Because of the reasons I mentioned: a new, not-yet-mature process, large die sizes, and $12,500 per wafer. If it is only a 2x increase over 14 nm, it means the manufacturing costs of Nvidia GPUs are pretty steep even on an old, mature process.
 

soresu

Diamond Member
Dec 19, 2014
4,252
3,756
136
Guys, guys, you are arguing costs based on announced prices alone, before Nvidia even has its answer in the market, let alone accounting for the previous-gen Vega and Polaris products in the channel, which AMD is probably trying to shift by making the new cards seem greater value by comparison. Once the RX 5700 cards hit the market, I would expect significant price drops for both product lines, if they haven't started already.

Give it 6 months and the price landscape will shift quite a bit, I would imagine.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
8,520
9,954
136
It's weird that "AMD wants to make money" is somehow a contestable statement.

It isn't criminal or amoral to charge what someone is willing to pay, and early adopters of any technology are willing to pay more and put up with more than the average consumer (in this case the tech enthusiasts and the #BetterRed crowd). Why let that cash go? Once those folks are tapped out, then it's time to start bringing prices down (and getting issues resolved) so the average Joe gets something that's affordable and "just works".

It's basically the entire MO of the Titan vs xx80 Ti for the last several cycles.
 

DrMrLordX

Lifer
Apr 27, 2000
23,227
13,308
136
Stadia supports 4K streaming. However, if you use it, you can consume 1 TERABYTE of data in just 3 days, meaning data caps will be a major issue.

True. I am fortunate enough to not have a data cap.
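As a rough sanity check on that 1 TB figure, a sustained 4K bitrate of about 31 Mbps (my assumption, not a Stadia spec) does indeed chew through a terabyte in about three days:

```python
# How fast does 4K streaming eat a 1 TB cap? (illustrative bitrate assumption)
bitrate_mbps = 31        # assumed sustained 4K stream, megabits per second
seconds = 3 * 24 * 3600  # three days of continuous streaming
tb_used = bitrate_mbps * 1e6 * seconds / 8 / 1e12  # bits -> bytes -> TB
print(f"{tb_used:.2f} TB in 3 days")
```

That is continuous streaming, of course; lighter use stretches the cap proportionally.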

Yes, because some random guy on the Internet used basic armchair logic to calculate yields and a worst-case scenario for the 7 nm process, it must be true!

Okay, provide us with better data. Don't take that as a jab. Seriously, we need more information.

I'd heard the US was bad, but that's pretty terrible in today's age of online video consumption.

The cable/telco duopolies are getting worse in this regard. They know mobile data caps are a thing, so they're following suit with some of their cheaper landline connections.

It's weird that "AMD wants to make money" is somehow a contestable statement.

What's contestable is whether pricing their product at $450 makes them more money overall than pricing the same one at $300-$350. If the higher price kills demand for the product initially (which I argue it will), it damages AMD's rep (poor price/performance ratio) and forces them to lower prices quickly to move stock (again damaging their rep; the retreat is seen as desperation).
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
The difference in costs between process generations drops as the new process matures. As AMD makes more 7nm GPUs, the cost difference per wafer will fall from the 2x shown above to a lower level, perhaps even to where it looks like a historical gain rather than something amazing. The biggest contributing factor is high-volume shipment.

I think over the long term the differences will shrink enough that upfront costs start dominating.

Nvidia may be smart to let others like AMD shoulder the burden before moving to 7nm. Since they have such a large architectural advantage, they can afford to do this, while AMD moves to 7nm quickly to try to close that gap.
 
  • Like
Reactions: VirtualLarry

soresu

Diamond Member
Dec 19, 2014
4,252
3,756
136
The cable/telco duopolies are getting worse in this regard. They know mobile data caps are a thing, so they're following suit with some of their cheaper landline connections.
Unsurprising; the current FCC head Ajit Pai is a telco lobbyist. The moment he was confirmed, I knew Net Neutrality was dead for sure.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Okay, provide us with better data. Don't take that as a jab. Seriously, we need more information.
I learned my lesson not to speculate about yields. And that is exactly what AdoredTV is doing: he is speculating about them.
 

maddie

Diamond Member
Jul 18, 2010
5,204
5,614
136
True. I am fortunate enough to not have a data cap.



Okay, provide us with better data. Don't take that as a jab. Seriously, we need more information.



The cable/telco duopolies are getting worse in this regard. They know mobile data caps are a thing, so they're following suit with some of their cheaper landline connections.



What's contestable is whether pricing their product at $450 makes them more money overall than pricing the same one at $300-$350. If the higher price kills demand for the product initially (which I argue it will), it damages AMD's rep (poor price/performance ratio) and forces them to lower prices quickly to move stock (again damaging their rep; the retreat is seen as desperation).
The slide posted by Crisium provided some good data to infer further.

https://forums.anandtech.com/thread...vi-graphics-cards-at-e3.2564009/post-39850637

If yielded costs are 2X from 14/16nm to 7nm, and yield should (must?) be lower on 7nm, then the increase in wafer cost is less than 2X, maybe a lot less (33% less?).

RTX 2060 445mm^2 ~= RX 5700 251mm^2 in cost to fab?
Nvidia uses a custom TSMC 12nm process (safe to assume it is not cheaper than the generic 16nm previously used).
RTX 2060 = 6GB vs RX 5700 = 8GB.

This seems to be the main difference in cost between the two cards. It could indicate that AMD is getting as high a margin as Nvidia on those two competing products, or at least very close.
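That parity argument can be made concrete with a quick (and deliberately crude) sketch: ignoring yield differences entirely and assuming a hypothetical $6,000 12nm wafer versus a $12,000 7nm wafer (the "~2x" reading of the slide), the raw silicon cost per die comes out roughly equal for the two chips.

```python
import math

def gross_dies(area_mm2: float, wafer_mm: float = 300) -> int:
    """Gross dies per 300 mm wafer (standard edge-loss approximation)."""
    return int(math.pi * (wafer_mm / 2)**2 / area_mm2
               - math.pi * wafer_mm / math.sqrt(2 * area_mm2))

# Assumed wafer prices consistent with a "7nm costs ~2x" reading of the slide.
cost_12nm_wafer, cost_7nm_wafer = 6000, 12000

tu106 = cost_12nm_wafer / gross_dies(445)   # RTX 2060 die, 445 mm^2 on 12nm
navi10 = cost_7nm_wafer / gross_dies(251)   # RX 5700 die, 251 mm^2 on 7nm
print(f"per-die silicon cost: TU106 ~${tu106:.0f}, Navi 10 ~${navi10:.0f}")
```

The two per-die figures land within a few dollars of each other under these assumptions, leaving memory capacity (6GB vs 8GB) as the main bill-of-materials difference, as the post above argues.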
 
  • Like
Reactions: xpea

soresu

Diamond Member
Dec 19, 2014
4,252
3,756
136
Okay, provide us with better data. Don't take that as a jab. Seriously, we need more information.
I seem to remember a time when companies did talk about this, and I'm sure that TSMC and Samsung would not keep such information from their shareholders. Are there not TSMC/Samsung financial analyst / investor days that deal with these questions of yield and such?
 

Shivansps

Diamond Member
Sep 11, 2013
3,918
1,570
136
What are the prices of Nvidia's midrange GPUs, the RTX 2070 and 2080? How much did Nvidia want for the previous-gen midrange GPUs, the GTX 1070, 1070 Ti, and 1080? How much did Nvidia charge for the generation before that, the GTX 980?

This is what you guys do not get. It is not a mainstream GPU in performance, but midrange. And it is priced according to what is on the market.

Vega was a midrange GPU in performance. And priced accordingly. Period.

Vega 56 and 64 were the high-end cards from AMD that performed below expectations. Where do you get that Vega was midrange? That's just not true; Vega was the Fury replacement.

I've already told you that Nvidia did this, I believe, two times to increase prices; AMD has done it once, and now twice. I'm comparing AMD to AMD here.

You have no proof that Navi 10 is actually the Vega replacement. All evidence points to Navi 10 being a midrange GPU intended to replace Polaris 10/11, and they just doubled the price because they match RTX 2070 performance. Even after WE KNOW that the higher-performance card was to be called the RX 690, using the same naming scheme as a direct replacement for what Polaris uses now, you keep insisting it is not. They changed the naming at the last moment to make this less obvious and to have people like you defend this price increase and call it a "Vega replacement".

Please accept that you have no evidence to sustain what you are saying. I'm telling you this because it has happened before, several times, and I know the results. Next time, don't complain when an entry-level card costs $1,000. The GPU market is in such a state now that only Intel could save it, so we are pretty much doomed.
 
Last edited:
  • Like
Reactions: DooKey and crisium

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Vega 56 and 64 were the high-end cards from AMD that performed below expectations. Where do you get that Vega was midrange? That's just not true; Vega was the Fury replacement.

I've already told you that Nvidia did this, I believe, two times to increase prices; AMD has done it once, and now twice. I'm comparing AMD to AMD here.

You have no proof that Navi 10 is actually the Vega replacement. All evidence points to Navi 10 being a midrange GPU intended to replace Polaris 10/11, and they just doubled the price because they match RTX 2070 performance. Even after WE KNOW that the higher-performance card was to be called the RX 690, using the same naming scheme as a direct replacement for what Polaris uses now, you keep insisting it is not. They changed the naming at the last moment to make this less obvious and to have people like you defend this price increase and call it a "Vega replacement".

Please accept that you have no evidence to sustain what you are saying. I'm telling you this because it has happened before, several times, and I know the results. Next time, don't complain when an entry-level card costs $1,000. The GPU market is in such a state now that only Intel could save it, so we are pretty much doomed.
The only one who has zero evidence for his claims is you. You have assumptions, not reality.

Stop spinning reality into what you'd like it to be. Vega was midrange, just as the GTX 1080 was; it was priced directly to compete with it ($499) at release. Currently AMD has a midrange GPU in performance and prices it accordingly, whether you like it or not. Period. They put it against Vega and the RTX 2070 in their promotional material. What more do you want as proof that this is a midrange GPU?

There is no point in arguing about it.
 