[Rumor, Tweaktown] AMD to launch next-gen Navi graphics cards at E3


DrMrLordX

Lifer
Apr 27, 2000
23,226
13,305
136
They have no need for big market share in the desktop GPU space; they would rather have higher margins.
They can probably lower their prices by 40% on these chips and still make decent profits on them. The RX 5700 XT chip is barely larger than the Polaris RX 480/580 chip.

So are you personally happy about that? Why would any consumer agree to purchase that product at launch price when AMD is gouging just as badly as JHH? It isn't hard for a "regular consumer" to see that there's something wrong with those prices. There are limits to how understanding any of us should be of a given corporation's desire for profit. Their incentive for profit has to be balanced against our desire for affordable products.

Also, if I were a major AMD shareholder and found out that they were missing an opportunity to pick up marketshare from nVidia, I would be outraged.

With a very small die like Zen 2, a 70% yield is... terrible news for a large die like Navi 10.

Potentially, yes. Though it's better than Intel's 10nm yields, and AMD is apparently ready to jump into that buzzsaw for Navi dies on the PS5/Xbox 2, which will ship in the tens of millions next year. I think the yield situation may have improved somewhat since then anyway... remember that AMD has been producing Vega 20 on TSMC 7nm since last year.
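For anyone wondering why a yield figure from a tiny Zen 2 chiplet translates so badly to a bigger die, here's a minimal sketch using the textbook Poisson yield model, Y = exp(-D0 * A). The die areas are approximate public figures and the 70% number is just the one quoted above, so treat the output as illustrative, not as actual TSMC data:

```python
import math

# Approximate die areas (public estimates, not official figures)
zen2_area_cm2 = 0.74    # ~74 mm^2 Zen 2 chiplet
navi10_area_cm2 = 2.51  # ~251 mm^2 Navi 10

zen2_yield = 0.70  # the yield figure quoted earlier in the thread

# Back out the implied defect density from the small die (Poisson model):
# Y = exp(-D0 * A)  =>  D0 = -ln(Y) / A
d0 = -math.log(zen2_yield) / zen2_area_cm2  # defects per cm^2

# Apply the same defect density to the larger die
navi10_yield = math.exp(-d0 * navi10_area_cm2)

print(f"Implied defect density: {d0:.2f} defects/cm^2")
print(f"Estimated Navi 10 yield: {navi10_yield:.0%}")  # roughly 30%
```

Under those assumptions, a 70% yield on Zen 2 would imply something like 30% on Navi 10, which is why the same headline number reads so differently for the two chips.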
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
Any reason why AMD never does CPU+GPU bundles?

Maybe it wasn't worth it in the past? I believe they mentioned some product(s) will be available to purchase on amd.com when officially launched. Selling bundles on amd.com would be the easiest way to control the discounts.
 
  • Like
Reactions: DarthKyrie

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
So are you personally happy about that? Why would any consumer agree to purchase that product at launch price when AMD is gouging just as badly as JHH? It isn't hard for a "regular consumer" to see that there's something wrong with those prices. There are limits to how understanding any of us should be of a given corporation's desire for profit. Their incentive for profit has to be balanced against our desire for affordable products.

Also, if I were a major AMD shareholder and found out that they were missing an opportunity to pick up marketshare from nVidia, I would be outraged.



Potentially, yes. Though it's better than Intel's 10nm yields, and AMD is apparently ready to jump into that buzzsaw for Navi dies on the PS5/Xbox 2, which will ship in the tens of millions next year. I think the yield situation may have improved somewhat since then anyway... remember that AMD has been producing Vega 20 on TSMC 7nm since last year.

Sure you would. And if they were picking up marketshare, you would be outraged that they weren't picking up higher margins. Or somebody would. And round and round it goes...
 
  • Like
Reactions: psolord

DrMrLordX

Lifer
Apr 27, 2000
23,226
13,305
136
AMD sold Vega64 bundled with monitors.

Sure you would.

Of course I would. AMD needs to develop marketshare first and margin second. Plus, pushing for margin in a contracting market may hasten the decline of the market.
 
Last edited:

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
Even the 294mm2 GTX 680 beat the 550mm2 GTX 580 by 25-30%.

Navi is replacing Vega 64 in price only.

Pretty much. It's a slight improvement in performance with a much greater improvement in efficiency.

To be fair, the GeForce 580 was part of NVIDIA's Fermi family, which was notoriously poorly optimized and extremely inefficient in both die space and power consumption. The move from 40nm to 28nm was also a huge upgrade. So yes, when they moved to Kepler there was a huge increase in performance/die size.
 

beginner99

Diamond Member
Jun 2, 2009
5,320
1,768
136
They will recoup their R&D money from the PS5 and Xbox 4,

Navi 10 is its own die, which requires its own mask set, completely decoupled from the consoles. Masks on 7nm alone cost double-digit millions of dollars. Total design cost is estimated at $300 million per chip. OK, some basic R&D can be spread across products, but Navi 10 will still have cost them close to $100 million at least.
 

DXDiag

Member
Nov 12, 2017
165
121
116
Jensen is already pissed by what Vega is doing to their sales in the cloud and server space ;).

Not to mention HPC wins will be completely dominated by AMD in the near future.

Everybody here is looking at Nvidia's revenue from past years, but they forget that it doesn't matter for the future. Nvidia does not have CPUs to pair their GPUs with. Intel and AMD both have them, and they will both kick Nvidia out of the OEM space. Nvidia will get very little share there, because they will have to compete with both Intel and AMD on pricing.
The reality here is that all of NVIDIA's competitors still have a long way to go if they even hope to catch up to NVIDIA; the latest cloud marketshare numbers paint a terrifying picture of NVIDIA's dominance there.

[Attached charts: cloud GPU accelerator market share]
https://www.forbes.com/sites/paulte...ccelerators-more-than-you-think/#7b816d795edb
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
To be fair, the GeForce 580 was part of NVIDIA's Fermi family, which was notoriously poorly optimized and extremely inefficient in both die space and power consumption. The move from 40nm to 28nm was also a huge upgrade. So yes, when they moved to Kepler there was a huge increase in performance/die size.

That has more to do with the uarch changes than the node, same as now. AMD also went from 40nm to 28nm; they lost the efficiency crown, raised prices almost 50%, and had to sacrifice pretty much everything to remain competitive.

I hope AMD does better this time, though. I don't have high hopes.
 

Pohemi

Lifer
Oct 2, 2004
10,953
17,125
146
Sure you would. And if they were picking up marketshare, you would be outraged that they weren't picking up higher margins. Or somebody would. And round and round it goes...
Of course I would. AMD needs to develop marketshare first and margin second. Plus, pushing for margin in a contracting market may hasten the decline of the market.

I'm not an investing expert, but logic tells me that prioritizing margins over marketshare in this particular industry (and for AMD) would be foolish and short-sighted: hoping for short-term returns rather than long-term stability and staying power. Would you prefer individual units sell at slightly higher margins overall, or for the company to increase total marketshare by 5 or 10%? Which would net you more dividends in the long run?
 

DrMrLordX

Lifer
Apr 27, 2000
23,226
13,305
136
Which would net you more dividends in the long run?

Part of the idea is that there might not be a long run. Consumer dGPU sales are sagging along with the PC market in general. AMD and nVidia (and apparently Intel) still want to serve this market, so having marketshare is going to be a good thing for at least another 5 years or so... maybe longer. It's hard to say when the dGPU market will finally go bust. But if you want to kill it quickly, you just raise prices and drive away buyers. Gaining marketshare with aggressively priced products will drag things out a bit.
 
  • Like
Reactions: Head1985

beginner99

Diamond Member
Jun 2, 2009
5,320
1,768
136
But if you want to kill it quickly, you just raise prices and drive away buyers.

Exactly. It's a self-fulfilling prophecy. First come some greedy price increases. Then they say volume is going down and costs are up, so we need to increase prices even more; volume goes down even more... This can only be stopped by ditching the initial greedy price increases.

If Navi 10 doesn't have huge margins, I wonder why they even went 7nm...
 
Last edited:
  • Like
Reactions: Head1985

Head1985

Golden Member
Jul 8, 2014
1,867
699
136
Exactly. It's a self-fulfilling prophecy. First come some greedy price increases. Then they say volume is going down and costs are up, so we need to increase prices even more; volume goes down even more... This can only be stopped by ditching the initial greedy price increases.

If Navi 10 doesn't have huge margins, I wonder why they even went 7nm...
Yeah, prices in the last 10 years have tripled or more (the GTX 580, the top-tier GPU of its day, was $500; now the TITAN RTX, the top-tier GPU, is $2500). If you triple the price, you can't expect people to buy the same number of cards. Are NV/AMD really that stupid? Midrange is now $450, and they expect to sell the same number of cards as if they cost $250?
Looks like both want the dGPU market dead, because they both do things to kill it very fast.
 
Last edited:
  • Like
Reactions: crisium and psolord

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Are NV/AMD really that stupid? Midrange is now $450, and they expect to sell the same number of cards as if they cost $250?

A bit of an increase in line with inflation is actually OK. The problem is that the increase has been much greater than that.
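To put a rough number on that gap, here's a quick sketch; the ~2%/year average inflation rate is my assumption for illustration, not a quoted CPI figure:

```python
# Rough inflation check: what would the GTX 580's 2010 launch price be in
# 2019 dollars? The ~2%/year average is an assumption, not official CPI data.
gtx580_price_2010 = 500
years = 9            # 2010 -> 2019
inflation = 0.02     # assumed average annual inflation

adjusted = gtx580_price_2010 * (1 + inflation) ** years
print(f"GTX 580's $500 in 2019 dollars: ~${adjusted:.0f}")  # ~$598

titan_rtx = 2500
print(f"TITAN RTX premium over inflation: {titan_rtx / adjusted:.1f}x")  # ~4.2x
```

So even granting inflation its full due, the top-tier price is roughly 4x what inflation alone would justify.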

If Navi 10 doesn't have huge margins, I wonder why they even went 7nm...

I want to put this in numbers. Let's assume the die size increases by the same ratio as the difference between Radeon VII and Vega 64 (i.e., what Navi 10 would look like on 14/12nm). AMD also said that of the 50% perf/watt gain, 15% is due to the process.

It goes from
-251mm2
-~RTX 2070 performance

to

-375mm2
-15% less than RTX 2070, which may be barely better than RTX 2060
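A quick sanity check of that scaling, using the commonly cited die sizes (Vega 10 at ~495mm2 on 14nm, Vega 20/Radeon VII at ~331mm2 on 7nm), reproduces the ~375mm2 figure:

```python
# Apply the 14nm -> 7nm area ratio observed between Vega 64 and Radeon VII
# to Navi 10, to estimate a hypothetical 14nm Navi 10. Die sizes are the
# commonly cited figures, so this is a back-of-the-envelope estimate only.
vega64_mm2 = 495      # Vega 10, 14nm
radeon_vii_mm2 = 331  # Vega 20, 7nm
navi10_mm2 = 251      # Navi 10, 7nm

node_scale = vega64_mm2 / radeon_vii_mm2  # ~1.5x area going 7nm -> 14nm
navi10_on_14nm = navi10_mm2 * node_scale

print(f"Area scale factor: {node_scale:.2f}x")
print(f"Hypothetical 14/12nm Navi 10: ~{navi10_on_14nm:.0f} mm^2")  # ~375 mm^2
```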
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Looks like both want the dGPU market dead, because they both do things to kill it very fast.
Yeah, as if the OEM market is dying at all.

The DIY consumer market is dying because there is no growth in it. dGPUs are fine, and will be fine. But the DIY consumer market is destined to die.

I want to put this in numbers. Let's assume the die size increases by the same ratio as the difference between Radeon VII and Vega 64 (i.e., what Navi 10 would look like on 14/12nm). AMD also said that of the 50% perf/watt gain, 15% is due to the process.

It goes from
-251mm2
-~RTX 2070 performance

to

-375mm2
-15% less than RTX 2070, which may be barely better than RTX 2060
Have you actually checked the die size of a GPU with a SIMILAR transistor count to Navi 10 on a 14/16nm process?

A GPU with over 10 billion transistors will be over 410mm2 in die size.
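For what it's worth, a rough cross-check using the widely reported transistor counts backs this up: Navi 10 is around 10.3 billion transistors, and at the density of known 14/16nm GPUs that budget lands north of 400mm2 (the figures below are the commonly cited ones, not official measurements):

```python
# Estimate how big a Navi 10-sized transistor budget would be at the
# transistor density of known 14/16nm GPUs. All figures are the widely
# reported ones, so treat the result as an estimate.
known_14_16nm_gpus = {
    "Vega 10 (14nm)": (12_500, 495),  # (million transistors, die mm^2)
    "GP102 (16nm)":   (11_800, 471),
}

navi10_mtr = 10_300  # ~10.3 billion transistors in Navi 10

for name, (mtr, area) in known_14_16nm_gpus.items():
    density = mtr / area                 # million transistors per mm^2
    implied_area = navi10_mtr / density  # Navi 10's budget at this density
    print(f"{name}: {density:.1f} MTr/mm^2 -> ~{implied_area:.0f} mm^2")
# Both come out around 408-411 mm^2, consistent with the claim above.
```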
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Because Navi would be even worse without the benefits a process shrink brings.

Even worse? Do you know things that none of the rest of us do? We do not know how good or bad Navi is because NOBODY HAS TESTED IT.

People need to stop talking in the past tense as if the card had been out for ages and we already knew everything about it.
 

lobz

Platinum Member
Feb 10, 2017
2,057
2,856
136
Even worse? Do you know things that none of the rest of us do? We do not know how good or bad Navi is because NOBODY HAS TESTED IT.

People need to stop talking in the past tense as if the card had been out for ages and we already knew everything about it.
This. I do not have high hopes either, but after all this, we can wait two more weeks before we declare it DOA, right?
 
  • Like
Reactions: Krteq and Pohemi

beginner99

Diamond Member
Jun 2, 2009
5,320
1,768
136
Even worse? Do you know things that none of the rest of us do? We do not know how good or bad Navi is because NOBODY HAS TESTED IT.

AMD tested it and showed us results, and those will be the best-case scenario. We also know the TDP, and AMD didn't mention power use at all, meaning it certainly isn't great. From these figures we can guess its performance/watt is around the RTX 2070's, despite a one-node advantage. That's not great at all. And then there is the price...
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
If Navi 10 doesn't have huge margins, I wonder why they even went 7nm...

Because Navi would be even worse without the benefits a process shrink brings.

The decision to go 7nm with Navi saved them millions in R&D: millions of dollars they would have had to spend in order to get the extra performance they now got for free from the fab process.

So if they had gone with 14/12nm, they would have had to spend more on R&D (upfront) to reach the same performance they now reached with less R&D but a more expensive fab process. You always have to compromise: spend more on R&D upfront and use a cheaper process, or spend way less on R&D but use a more expensive fab process.
 
  • Like
Reactions: DarthKyrie

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
AMD tested it and showed us results, and those will be the best-case scenario. We also know the TDP, and AMD didn't mention power use at all, meaning it certainly isn't great. From these figures we can guess its performance/watt is around the RTX 2070's, despite a one-node advantage. That's not great at all. And then there is the price...
What makes you believe they showed the best-case scenario? ;)
 
  • Like
Reactions: DarthKyrie

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
AMD tested it and showed us results, and those will be the best-case scenario. We also know the TDP, and AMD didn't mention power use at all, meaning it certainly isn't great. From these figures we can guess its performance/watt is around the RTX 2070's, despite a one-node advantage. That's not great at all. And then there is the price...

Perf:Watt has nothing to do with actual performance, though. Nobody even cared about it until nVidia added it to their marketing push for Maxwell. Now nVidia has mostly stopped talking about Perf:Watt; I don't recall them even mentioning it when Turing launched. They also completely stopped talking about frame times, which actually did impact performance. AMD potentially matching 2070 performance and power consumption with a half-node advantage (12 to 7 isn't a full node) doesn't mean the chip is bad. It just means they drew a line for how much money to spend on increasing efficiency.
 