[WCCF] AMD Radeon R9 390X Pictured

Status
Not open for further replies.
Present day doesn't matter for old products because they are obsolete. Their purpose was to play games back THEN; it's only a brain exercise to compare old stuff, and it's irrelevant because nobody buys them NOW to play current games.
 

jpiniero

Lifer
Oct 1, 2010
1. 380 is an OEM card, which tells me little about the retail 380 series. Many times retail and OEM cards don't even match on the NV/AMD side.

nVidia has done this but not AMD. They would jump straight to the 4xx series instead if they had something new other than the 390/X. Considering how broke AMD is, there is no reason for them to do a new die unless it had something which would make them more competitive power consumption wise versus nVidia. Since they aren't doing 20 nm, that means HBM. And since HBM is expensive, it would only be possible at the 390/X price points.

I am BTW still highly skeptical that AMD would do a big die like what it would take to do a 4096 core single die. Either it's 2x2048, or it's far fewer cores and perhaps more of a competitor to the 980.

2. I've seen several people state that 380X is likely a 285X (i.e., 2048 SP, 32 ROP, 128 TMU Tonga) but I don't see how that card would be even remotely competitive with after-market R9 290/290X cards.

That's because the 290/X will be discontinued. Rebrand is still possible of course.
 

jpiniero

Lifer
Oct 1, 2010
The big die was the most confirmed thing?

http://cdn.videocardz.com/1/2014/07/Synapse-Design-500mm-AMD-GPU.jpg

Then there was the sushiwarrior post. Or perhaps you think we've gone back to the time of putting two dual core dies together to make a quadcore?

A 4096 core Tonga would be over 600mm2 if it's on 28 nm. It'd at least explain the delays, since they would have to get the drivers in order. It could very well be something like 3328 or 3072 cores, although that wouldn't be enough to compete with the Titan X.
 

RussianSensation

Elite Member
Sep 5, 2003
A 4096 core Tonga would be over 600mm2 if it's on 28 nm.

No, it wouldn't.

1) You wouldn't use Tonga's memory controller but HBM.
2) You don't linearly increase the die size when you scale the chip (i.e., 4096 SPs is not the same as taking 2x Tonga's 359mm2 die and doubling it).

R9 290X vs. HD7970Ghz

2816 SPs vs. 2048 SPs (+37.5%)
176 TMUs vs. 128 TMUs (+37.5%)
64 ROPs vs. 32 ROPs (+100%)
512-bit memory controller vs. 384-bit memory controller (+50%)

Die size only increased 24.4% (352mm2 --> 438mm2).

That means AMD can increase shaders & TMUs 37.5% and double the ROPs much the same way from 438mm2 --> 545mm2 die size.

^ That already gets to this point with a 512-bit memory controller:

3872 Shaders, 242 TMUs, 128 ROPs.

If they cut down 20-30mm2 with the HBM1 memory controller, there you go: 4096 SPs and 256 TMUs in a 550mm2 design, no problem. But don't forget that Tonga's 32 ROPs (285) >>>>> Hawaii's 64 ROPs (R9 290). That means AMD doesn't need 96-128 ROPs in the 390X. They can just stick 64 Tonga ROPs inside the R9 390X and that's equivalent to > 128 Hawaii XT ROPs. That saves space, which means a 4096 shader / 256 TMU design is actually possible inside a 550mm2 die.
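The scaling arithmetic above can be sketched in a few lines of Python. Reusing the Tahiti-to-Hawaii growth factor for the next chip is my extrapolation from the post's numbers, not AMD data:

```python
# Hawaii (R9 290X) added 37.5% more shaders/TMUs and double the ROPs
# over Tahiti (HD 7970) for only a 24.4% larger die.
tahiti_mm2, hawaii_mm2 = 352, 438
growth = hawaii_mm2 / tahiti_mm2 - 1                # ~24.4%
print(f"Tahiti -> Hawaii die growth: {growth:.1%}")

# Applying the same growth factor again (an assumption, not a spec)
projected_mm2 = hawaii_mm2 * (1 + growth)           # ~545 mm2
print(f"Projected next big die: {projected_mm2:.0f} mm2")

# Shaders/TMUs scaled by the same +37.5%
shaders = round(2816 * 1.375)                       # 3872 shaders
tmus = round(176 * 1.375)                           # 242 TMUs
print(f"Scaled config: {shaders} shaders, {tmus} TMUs")
```

Which reproduces the 545mm2 / 3872-shader / 242-TMU figures quoted in the post.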

Also, AMD can remove double precision from R9 390X as the last resort.

3) Your theory of 2x2048 Tongas makes no sense. It was already discussed in this thread that AMD would not replace a 295X2 with a slower card, not to mention that when CF doesn't work, this $500-600 card would lose to a $300 290X/970. This is the one theory that never made sense. Also, not one site ever released such a rumour, and it was never corroborated anywhere.
 

Serandur

Member
Apr 8, 2015
There's an idea that's been floating around in my head that has me concerned, but I haven't seen it mentioned elsewhere.

Regarding that "dual-link interposer" to allow 8GB 390Xs, might it not suggest that half the VRAM would be competing with the other half for bandwidth? It sounds like some type of configuration where the connections between the GPU and HBM aren't increased, but an additional stack is added onto each connection in which case they would be fighting for bandwidth and effective bandwidth per stack would be halved (perhaps, suitably, to 320GB/s total; which could be plenty considering it's in the realm of GM200's bandwidth and Fiji will also have lossless color compression). That, or perhaps the GPU can only access half the stacks at any given time, which would be a problem as well?

Am I missing something that makes this unlikely?
 

JDG1980

Golden Member
Jul 18, 2013
Considering how broke AMD is, there is no reason for them to do a new die unless it had something which would make them more competitive power consumption wise versus nVidia. Since they aren't doing 20 nm, that means HBM. And since HBM is expensive, it would only be possible at the 390/X price points.

It is not true that HBM is the only way AMD can be competitive on perf/watt. The good Tonga chips are quite competitive already on this metric, as can be seen in the Retina iMac. Even the FirePro W7100 at 150W TDP isn't bad; we're talking about performance similar to the GTX 960, and most 960s actually on the market top out at about 150W, even though that is above the official TDP figure for the reference design. In my opinion, the R9 285 was never really intended as a fully viable product in its own right, but was just a dumping ground for trash silicon from the Tonga wafers ordered for Apple products. Judging perf/watt on that basis is a mistake.

Then consider that AMD may well move some of their products to Global Foundries 28nm instead of TSMC. This won't, of course, give the efficiency of a full node shrink, but it still has the potential to provide some gains. When AMD moved the cat cores from TSMC to GloFo, they got a 38% reduction in core leakage on the integrated GPU. If the same can be done with full-size discrete GPUs, that gives them the opportunity to increase clockspeeds, reduce power consumption, or do some combination of both. Imagine full Tonga running at 1150 MHz instead of the 918 MHz that the R9 285 runs at - and doing so at 150W TDP (up from about 125W for the Retina iMac's R9 M295X). Such a card could potentially come close to rivaling the GTX 970 in both raw performance and perf/watt.

I am BTW still highly skeptical that AMD would do a big die like what it would take to do a 4096 core single die. Either it's 2x2048, or it's far fewer cores and perhaps more of a competitor to the 980.

You keep making claims like this, but there's absolutely nothing backing them up. No data, no reliable sources, nothing at all - other than your apparent belief that AMD can't do anything right.

That's because the 290/X will be discontinued. Rebrand is still possible of course.

I don't think we will see a straight rebrand of Hawaii, not with the existing cards being blown out at fire-sale prices like they are now. Rather, I think that there's a good chance we will see a redesigned chip on GloFo 28nm SHP that is similar to Hawaii (same or similar number of shaders) but with GCN 1.2, a correspondingly smaller memory bus, FP64 performance cut down, and updated UVD block.

I don't buy the arguments that R9 380X will just be full Tonga and R9 390/390X will be Fiji. AMD isn't going to release a Fiji chip that can't at least come close to the Titan X, and that would leave too big a gap between the 380X and 390.
 

twjr

Senior member
Jul 5, 2006
And since HBM is expensive, it would only be possible at the 390/X price points.

Do you know how much HBM costs? It is repeatedly stated that it is expensive and so can only be high-end, but does anyone have anything factual to back it up?
 

JDG1980

Golden Member
Jul 18, 2013
Also, AMD can remove double precision from R9 390X as the last resort.

I don't think this is going to happen. AMD's Financial Analyst Day slides focus a lot on the professional GPU market, perhaps more so than on the desktop side. AMD sees this as a major growth area. This means that, if anything, Fiji would have been designed as a compute-first chip. With Nvidia having sacrificed Double Precision in Maxwell, this offers AMD an opportunity to flat-out beat the Green Team, and by a substantial margin.

3) Your theory of 2x2048 Tongas makes no sense. It was already discussed in this thread that AMD would not replace a 295X2 with a slower card, not to mention that when CF doesn't work, this $500-600 card would lose to a $300 290X/970. This is the one theory that never made sense. Also, not one site ever released such a rumour, and it was never corroborated anywhere.

As I have said before, the only way this would make any sense would be if AMD found a way to make two GPUs work together seamlessly, addressed by the system as one big chip. Anything requiring Crossfire or any kind of support from third-party software is an automatic fail, since (as you note) it would be flat-out inferior to the already existing R9 295 X2. The rumors all point to a big-die chip with HBM, and that's what AMD's diagram from Financial Analyst Day seems to show. Therefore, I expect to see Fiji as one large GPU with HBM, not some kind of MCM setup.
 

Elfear

Diamond Member
May 30, 2004
nVidia has done this but not AMD. They would jump straight to the 4xx series instead if they had something new other than the 390/X. Considering how broke AMD is, there is no reason for them to do a new die unless it had something which would make them more competitive power consumption wise versus nVidia. Since they aren't doing 20 nm, that means HBM. And since HBM is expensive, it would only be possible at the 390/X price points.

I am BTW still highly skeptical that AMD would do a big die like what it would take to do a 4096 core single die. Either it's 2x2048, or it's far fewer cores and perhaps more of a competitor to the 980.

Do you believe that AMD is so incompetent that in nearly 2 years they will only increase performance by 10%?
 

jpiniero

Lifer
Oct 1, 2010
You keep making claims like this, but there's absolutely nothing backing them up. No data, no reliable sources, nothing at all - other than your apparent belief that AMD can't do anything right.

Yields on something that big, even on a process as mature as 28 nm is at this point, are going to be bad. nVidia can get away with it because they have a legion of fanboys who will pay $1K each, but AMD has no such luxury. Given the 4 GB limitation of HBM, it might be a better approach.

Do you believe that AMD is so incompetent that in nearly 2 years they will only increase performance by 10%?

The main purpose is to get the power consumption down rather than raw performance. Maybe if they felt it was necessary they could use the WCE edition as an additional model with an extremely high default clock.
 

crashtech

Lifer
Jan 4, 2013
Is it really a foregone conclusion that gamers won't make an informed decision to pay well for an AMD GPU? The info is going to be out there and easy to find once these puppies are released. I paid pretty dearly for my Sapphire 290 when it was new, and it has stood the test of time fairly well.
 

RussianSensation

Elite Member
Sep 5, 2003
Yields on something that big, even on a process as mature as 28 nm is at this point, are going to be bad. nVidia can get away with it because they have a legion of fanboys who will pay $1K each, but AMD has no such luxury. Given the 4 GB limitation of HBM, it might be a better approach.

Let's assume a 552mm2 die = 23mm x 24mm

I get 94 dies per 300mm wafer. I've seen estimates that a 28nm wafer costs about $5000 USD. That means at 100% yield, it costs $53.19 to manufacture a 552mm2 die at TSMC/GloFo. Let's apply a yield of only 40%, since yields on a large die aren't 100%, and we get ~$133 per die. Coincidentally, this is actually more expensive than the cost to manufacture the 520mm2 GTX 580. Wafer prices between very new 40nm then and very mature 28nm now shouldn't be much different, so $5K per wafer sounds reasonable.
http://anysilicon.com/die-per-wafer-formula-free-calculators/

[attached image: die-per-wafer calculation screenshot]


Add
$35 for the heatsink
$15 for the power/VRMs
$30 for PCB/logics/passives/DisplayPort/HDMI controllers/outputs
$80 for 8GB HBM1 (1.5 years ago it cost $88 for PS4's 8GB GDDR5)
-----------------------
$160
+
$133 die
-----------------------
$293 USD

Apply 35% margin AMD might desire

$293 x 1.35 = $396 USD (but I don't know if AMD sells just the die or the entire package I specified above to the AIBs. It could very well be that AMD's 35% profit margin is on the $133 die only, not the entire $293 card cost. In that case, the card costs AIBs $340 ($133x1.35 + $160).) I am sure I made plenty of mistakes, but this is a rough back-of-the-envelope calculation.

Price this at $649. That leaves retailers/OEMs with at least $250 of revenue, which gives them plenty of profit after logistics/marketing/packaging/returns. That's a ton of $ for 3rd parties to make off each AMD card.
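For anyone who wants to poke at the numbers, the same back-of-the-envelope model in Python; the wafer price, yield, and BOM figures are all the post's rumored estimates, not confirmed costs:

```python
# Back-of-the-envelope GPU cost model using the post's assumed inputs.
wafer_cost = 5000        # USD per 300mm 28nm wafer (estimate)
dies_per_wafer = 94      # for a ~552mm2 die (anysilicon calculator)
yield_rate = 0.40        # deliberately pessimistic assumption

die_cost = wafer_cost / (dies_per_wafer * yield_rate)
print(f"Cost per good die: ${die_cost:.0f}")        # ~$133

bom = {
    "heatsink": 35,
    "power/VRMs": 15,
    "PCB/logic/outputs": 30,
    "8GB HBM1": 80,
}
card_cost = die_cost + sum(bom.values())
print(f"Total card cost: ${card_cost:.0f}")         # ~$293

# 35% applied as markup on cost (margin vs. markup is debated later
# in the thread -- a 35% true margin would give a higher price)
print(f"Price to AIBs: ${card_cost * 1.35:.0f}")    # ~$396
```

Swapping in better yields or a cheaper wafer moves the die cost a lot, which is exactly the dispute in the replies below.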

The main purpose is to get the power consumption down rather than raw performance. Maybe if they felt it was necessary they could use the WCE edition as an additional model with an extremely high default clock.

How do you know this? How do you know AMD didn't reduce power consumption in order to increase performance 40-60% at the same TDP? What looks more impressive: a card 30-40% faster than a 980 with 290W power usage for $550-650, or a card that uses 180W of power and is only 10% faster than a 290X/ties a 980? Most high-end gamers would pick the card that's 30-40% faster with 290W power usage.

This is the first generation ever where people think AMD won't improve performance even 10% in 1.5 years, that AMD's engineers are completely incompetent, and that Maxwell is somehow totally untouchable. The Titan X may or may not be beaten by the 390X, but the 980 is going down for sure.

All it would take is a 1.05GHz, 3584 shader 390X with 64 Tonga ROPs, Tonga's doubled geometry performance, 224 TMUs and 512GB/sec of HBM1 bandwidth, and AMD's next card is already 30% faster than a 290X.
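As a rough sanity check on that claim, theoretical peak shader throughput of that hypothetical config versus a stock 290X (ignoring ROP and bandwidth differences) works out to about the stated gain:

```python
# Relative peak shader throughput: hypothetical 3584 SP @ 1.05 GHz
# vs. R9 290X's 2816 SP @ 1.0 GHz. A crude proxy for performance.
r290x = 2816 * 1.00
hypothetical = 3584 * 1.05
gain = hypothetical / r290x - 1
print(f"Theoretical throughput gain over 290X: {gain:.0%}")   # ~34%
```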

Is it really a foregone conclusion that gamers won't make an informed decision to pay well for an AMD GPU? The info is going to be out there and easy to find once these puppies are released. I paid pretty dearly for my Sapphire 290 when it was new, and it has stood the test of time fairly well.

Well apparently if a $550 AMD card doesn't beat a $1000 Titan X/GM200 in every metric, it's a failure. So AMD is basically doomed unless they price a card 95% as fast as Titan X for $299 like the 290X.
 

gamervivek

Senior member
Jan 17, 2011
A 4096 core Tonga would be over 600mm2 if it's on 28 nm. It'd at least explain the delays, since they would have to get the drivers in order. It could very well be something like 3328 or 3072 cores, although that wouldn't be enough to compete with the Titan X.

I think that 200mm2 over Tonga would easily get AMD to a doubling of shaders/ROPs/TMUs. Not everything needs to be doubled; I doubt the tessellation hardware or ACEs would be increased. That holds even if you discount any density improvements AMD might have included with Fiji. But there might be a few more transistors for dx12_1 compatibility, if AMD is going for that right now.

Interestingly sushiwarrior hinted at an even larger die, though I'm not sure if it includes the interposer or something else.

http://forums.anandtech.com/showpost.php?p=36254878&postcount=119

There's an idea that's been floating around in my head that has me concerned, but I haven't seen it mentioned elsewhere.

Regarding that "dual-link interposer" to allow 8GB 390Xs, might it not suggest that half the VRAM would be competing with the other half for bandwidth? It sounds like some type of configuration where the connections between the GPU and HBM aren't increased, but an additional stack is added onto each connection in which case they would be fighting for bandwidth and effective bandwidth per stack would be halved (perhaps, suitably, to 320GB/s total; which could be plenty considering it's in the realm of GM200's bandwidth and Fiji will also have lossless color compression). That, or perhaps the GPU can only access half the stacks at any given time, which would be a problem as well?

Am I missing something that makes this unlikely?

It sounds like the clamshell mode of GDDR5. It should mean lower effective bandwidth, but I'm not sure whether it leads to a perfect halving of bandwidth or something better or worse.
 

JDG1980

Golden Member
Jul 18, 2013
Yields on something that big, even on a process as mature as 28 nm is at this point, are going to be bad. nVidia can get away with it because they have a legion of fanboys who will pay $1K each, but AMD has no such luxury. Given the 4 GB limitation of HBM, it might be a better approach.

First of all, I'm unaware of any public information indicating what yields might be like on a ~550 mm^2 die at 28nm. Do you know of any source specifying this, or is your above statement just pure speculation?

Secondly, while AMD can't charge $999 for a gaming card, they probably can charge $2,999 for a professional graphics card that matches or beats Quadro M6000 at similar power levels - especially since Double Precision performance will likely be much better.

Finally, as far as I can determine, the alleged "4GB limitation of HBM" is just a forum rumor that got out of hand. The official Hynix resources indicate that there is a limit of 4 dice per stack, but one full stack of four chips only gets you to 1GB (4 chips at 2 gigabits per chip = 8 gigabits = 1 GB). This means you'd need four stacks to get to 4GB - and no one has ever suggested that the Fiji card will have less than that. And I have never seen any documentation indicating that four stacks is any kind of hard limitation. Why not use eight stacks for 8GB if that's what is needed to be competitive?
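The capacity arithmetic from the Hynix figures cited above, spelled out (the stack count per card is the open question; as noted, four stacks is not a documented hard limit):

```python
# HBM1 per the cited Hynix figures: 2 Gb per DRAM die, up to 4 dice
# per stack, so one full 4-Hi stack holds 1 GB.
GBIT_PER_DIE = 2
DICE_PER_STACK = 4
gb_per_stack = GBIT_PER_DIE * DICE_PER_STACK / 8    # 1.0 GB per stack
stacks_for_4gb = 4 / gb_per_stack                    # 4 stacks
stacks_for_8gb = 8 / gb_per_stack                    # 8 stacks
print(gb_per_stack, stacks_for_4gb, stacks_for_8gb)
```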

The main purpose is to get the power consumption down rather than raw performance. Maybe if they felt it was necessary they could use the WCE edition as an additional model with an extremely high default clock.

AMD wants to improve the perf/watt, but they also need to move the needle on raw performance in order to be competitive. They are not going to release a product that can't even beat their own prior generation in some benchmarks. The last time they did that (Bulldozer) it nearly destroyed the company and they still haven't fully recovered. They are not going to make that mistake again.
 

AtenRa

Lifer
Feb 2, 2009
Let's assume a 552mm2 die = 23mm x 24mm

I get 94 dies per 300mm wafer. I've seen estimates that a 28nm wafer costs about $5000 USD. That means at 100% yield, it costs $53.19 to manufacture a 552mm2 die at TSMC/GloFo. Let's apply a yield of only 40%, since yields on a large die aren't 100%, and we get ~$133 per die. Coincidentally, this is actually more expensive than the cost to manufacture the 520mm2 GTX 580. Wafer prices between very new 40nm then and very mature 28nm now shouldn't be much different, so $5K per wafer sounds reasonable.
http://anysilicon.com/die-per-wafer-formula-free-calculators/

[attached image: die-per-wafer calculation screenshot]


Add
$35 for the heatsink
$15 for the power/VRMs
$30 for PCB/logics/passives/DisplayPort/HDMI controllers/outputs
$80 for 8GB HBM1 (1.5 years ago it cost $88 for PS4's 8GB GDDR5)
-----------------------
$160
+
$133 die
-----------------------
$293 USD

Apply 35% margin AMD might desire

$293 x 1.35 = $396 USD (but I don't know if AMD sells just the die or the entire package I specified above to the AIBs. It could very well be that AMD's 35% profit margin is on the $133 die only, not the entire $293 card cost. In that case, the card costs AIBs $340 ($133x1.35 + $160).) I am sure I made plenty of mistakes, but this is a rough back-of-the-envelope calculation.

Price this at $649. That leaves retailers/OEMs with at least $250 of revenue, which gives them plenty of profit after logistics/marketing/packaging/returns. That's a ton of $ for 3rd parties to make off each AMD card.

Two or three things:

A TSMC 28nm wafer should be in the range of $3-4K as of now.

40% yields are too low; try 80%+.

If COGS is $293, with a 35% margin the selling price will be ~$450.
$396 would be the price at a 35% markup.

I would take $3500 for the wafer and 80% yields (it should be higher, but let's use that).

So with 94 dies per wafer and 80% yields you get 75 good dies.

$3500 per wafer / 75 dies = $46.66 per die.

Add
$35 for the heatsink
$15 for the power/VRMs
$30 for PCB/logics/passives/DisplayPort/HDMI controllers/outputs
$80 for 8GB HBM1

= $160

$160 + $46.66 = $206.66

With a 35% margin you would sell at ~$318.
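The margin-vs-markup distinction being drawn here, in code form (using the $293 COGS figure from the quoted post):

```python
# Markup adds a percentage of cost; margin is a percentage of the
# selling price. At 35% the two give quite different prices.
cost = 293.0
markup_price = cost * (1 + 0.35)     # cost + 35% of cost    -> ~$396
margin_price = cost / (1 - 0.35)     # profit = 35% of price -> ~$451
print(f"35% markup: ${markup_price:.0f}")
print(f"35% margin: ${margin_price:.0f}")
```

That ~$55 gap is the whole disagreement between the two cost estimates in this thread.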
 

raghu78

Diamond Member
Aug 23, 2012
At launch though, I just don't see how 380X would beat 290X by 20-30%, because that would beat the 980 easily. Considering the 380X is probably a $349-399 card, this sounds WAY too good to be true. I mean, think about it: if a $349-399 R9 380X is 15-20% faster than the 980 per Raghu78, what the heck is the 980 worth, $299-329 overnight?

The same as the R9 290X was worth (at least in public and tech press perception) when the GTX 970 launched. You have such a short memory. :whistle:

Anyway I am done arguing with you as you have a certain attitude about underestimating AMD and overestimating Nvidia. I remember you arguing for months together on these very same forums that AMD could not beat the GTX Titan in 2013. Heck you even said beating GTX 780 would be difficult. We saw how that turned out. I am predicting AMD will again prove you wrong.

btw, we now know that the R9 290X aged better and played better with its 4GB VRAM compared with the 780 Ti, and the same goes for the HD 7970/R9 280X against the GTX 680/GTX 770. I predict a similar fate in the next generation, with the R9 390X growing its lead over time as more demanding games place higher stress on the GPU and the R9 390X's memory bandwidth helps it pull further ahead. I also predict AMD's DX12 performance on the R9 390X will be even better than its relative performance wrt the Titan X (which I expect to be higher by 10%).

I have made my opinions clear and we will see in a month's time. I expect AMD to return the favour back to Nvidia with regards to shaking up the pricing of their GPU stack once R9 3xx GPU stack launches. adios. :cool:
 

Kenmitch

Diamond Member
Oct 10, 1999
The same as the R9 290X was worth (at least in public and tech press perception) when the GTX 970 launched. You have such a short memory. :whistle:

Anyway I am done arguing with you as you have a certain attitude about underestimating AMD and overestimating Nvidia. I remember you arguing for months together on these very same forums that AMD could not beat the GTX Titan in 2013. Heck you even said beating GTX 780 would be difficult. We saw how that turned out. I am predicting AMD will again prove you wrong.

btw, we now know that the R9 290X aged better and played better with its 4GB VRAM compared with the 780 Ti, and the same goes for the HD 7970/R9 280X against the GTX 680/GTX 770. I predict a similar fate in the next generation, with the R9 390X growing its lead over time as more demanding games place higher stress on the GPU and the R9 390X's memory bandwidth helps it pull further ahead. I also predict AMD's DX12 performance on the R9 390X will be even better than its relative performance wrt the Titan X (which I expect to be higher by 10%).

I have made my opinions clear and we will see in a month's time. I expect AMD to return the favour back to Nvidia with regards to shaking up the pricing of their GPU stack once R9 3xx GPU stack launches. adios. :cool:

As for the steady increases in performance... speculating makes me think the mole didn't make the layoff cut.

We'll see, not soon enough, how this launch goes. I'm thinking AMD's internal performance goals aren't going to be influenced by forum speculation.

Seems like NVIDIA one-upping them at the last minute is the going trend. I'll be surprised if AMD doesn't anticipate it.

NVIDIA's raw performance isn't AMD's biggest concern at the moment... I'll speculate the delay is because the GameWorks buster/hack isn't quite ready yet.
 

Zstream

Diamond Member
Oct 24, 2005
Let's assume a 552mm2 die = 23mm x 24mm

I get 94 dies per 300mm wafer. I've seen estimates that a 28nm wafer costs about $5000 USD. That means at 100% yield, it costs $53.19 to manufacture a 552mm2 die at TSMC/GloFo. Let's apply a yield of only 40%, since yields on a large die aren't 100%, and we get ~$133 per die. Coincidentally, this is actually more expensive than the cost to manufacture the 520mm2 GTX 580. Wafer prices between very new 40nm then and very mature 28nm now shouldn't be much different, so $5K per wafer sounds reasonable.
http://anysilicon.com/die-per-wafer-formula-free-calculators/

[attached image: die-per-wafer calculation screenshot]


Add
$35 for the heatsink
$15 for the power/VRMs
$30 for PCB/logics/passives/DisplayPort/HDMI controllers/outputs
$80 for 8GB HBM1 (1.5 years ago it cost $88 for PS4's 8GB GDDR5)
-----------------------
$160
+
$133 die
-----------------------
$293 USD

Apply 35% margin AMD might desire

$293 x 1.35 = $396 USD (but I don't know if AMD sells just the die or the entire package I specified above to the AIBs. It could very well be that AMD's 35% profit margin is on the $133 die only, not the entire $293 card cost. In that case, the card costs AIBs $340 ($133x1.35 + $160).) I am sure I made plenty of mistakes, but this is a rough back-of-the-envelope calculation.

Margin is not markup... (facepalm)
 

DeathReborn

Platinum Member
Oct 11, 2005
Two or three things:

A TSMC 28nm wafer should be in the range of $3-4K as of now.

40% yields are too low; try 80%+.

If COGS is $293, with a 35% margin the selling price will be ~$450.
$396 would be the price at a 35% markup.

I would take $3500 for the wafer and 80% yields (it should be higher, but let's use that).

So with 94 dies per wafer and 80% yields you get 75 good dies.

$3500 per wafer / 75 dies = $46.66 per die.

Add
$35 for the heatsink
$15 for the power/VRMs
$30 for PCB/logics/passives/DisplayPort/HDMI controllers/outputs
$80 for 8GB HBM1

= $160

$160 + $46.66 = $206.66

With a 35% margin you would sell at ~$318.

Don't forget to pay for Dual Link Interposer, R&D, Shipping, Marketing, Driver development & (if any) license fees.
 

raghu78

Diamond Member
Aug 23, 2012
I don't think this is going to happen. AMD's Financial Analyst Day slides focus a lot on the professional GPU market, perhaps more so than on the desktop side. AMD sees this as a major growth area. This means that, if anything, Fiji would have been designed as a compute-first chip. With Nvidia having sacrificed Double Precision in Maxwell, this offers AMD an opportunity to flat-out beat the Green Team, and by a substantial margin.

Agreed. Fiji, or whatever the flagship chip is named, is going to be an fp64-enabled compute monster with close to 3.5-4 TFLOPS of fp64 performance. Pro graphics has been a growth area for AMD; they have gone up to 25% from their traditional 10-15% market share.

As for GM200, it's useless for fp64 with only ~0.2 TFLOPS. :thumbsdown: So there is no contest there. AMD has a huge opportunity to make some serious market share gains in HPC.
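The peak-FLOPS arithmetic behind those figures, assuming Fiji keeps a 1:2 fp64 rate like FirePro Hawaii (an assumption; Fiji's actual rate is not confirmed) and GM200's 1:32 rate, at a nominal 1 GHz:

```python
# Peak fp64 = shaders x 2 FLOPs/clock x clock (GHz) x fp64 rate.
def fp64_tflops(shaders, clock_ghz, rate):
    return shaders * 2 * clock_ghz * rate / 1000

fiji = fp64_tflops(4096, 1.00, 1 / 2)     # ~4.1 TFLOPS (assumed 1:2 rate)
gm200 = fp64_tflops(3072, 1.00, 1 / 32)   # ~0.19 TFLOPS (1:32 rate)
print(f"Rumored Fiji fp64: {fiji:.2f} TFLOPS")
print(f"GM200 fp64:        {gm200:.2f} TFLOPS")
```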
 
Don't forget to pay for Dual Link Interposer, R&D, Shipping, Marketing, Driver development & (if any) license fees.

R&D, marketing, and driver development (this is R&D) are fixed costs and don't fall under COGS. The discussion here is about gross profit margin.
 

RussianSensation

Elite Member
Sep 5, 2003
Margin is not markup... (facepalm)

Your response tells me nothing about the errors in my calculation. AMD does not sell the cards/components to the market, but to AIBs. I have already accounted for AMD's margin; from that point the AIBs add their own markup and we get the MSRP.

Also, it looks like you didn't look at historical GPU prices for AMD/NV. As others already pointed out, my calculation assumes horrible 40% yields, and no die harvesting for 2nd or 3rd tier 390 cards. Once those are taken into account, it's possible AMD could finally afford to manufacture a 500-550mm2 die.

The point of my analysis wasn't to get exact profit margins for AMD, but to have a ballpark idea of whether it's possible for them to afford a 500-550mm2 GPU. I think it is. If you disagree, provide an explanation instead of a "facepalm".
 