Yields on something that big, even on a node as mature as 28 nm is at this point, are going to be bad. Nvidia can get away with it because they have a legion of fanboys who will pay $1K each, but AMD has no such luxury. Given the 4 GB limitation of HBM, it might be a better approach.
Let's assume a 552mm2 die = 23mm x 24mm die.
I get 94 dies per 300mm wafer. I've seen estimates that a 28nm wafer costs about $5000 USD. That means at 100% yield, it costs $53.19 to manufacture a 552mm2 die at TSMC/GloFo. Apply a yield of only 40%, since yields on dies this large aren't anywhere near 100%, and we get ~$133 per die. Coincidentally, that's actually more expensive than the cost to manufacture a 520mm2 GTX 580. Wafer prices between very new 40nm back then and very mature 28nm now shouldn't be much different, so $5K per wafer sounds reasonable.
http://anysilicon.com/die-per-wafer-formula-free-calculators/
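For anyone who wants to sanity-check the numbers, here's a minimal Python sketch of the same math. The wafer price, yield, and die count are the guesses above, not real foundry data, and the simple gross-die formula lands a bit higher than the 94 the calculator linked above reports (presumably because the calculator handles edge exclusion more carefully):

```python
import math

def gross_dies_per_wafer(die_w_mm, die_h_mm, wafer_d_mm=300):
    # Common approximation: wafer area / die area, minus an edge-loss term.
    die_area = die_w_mm * die_h_mm
    wafer_area = math.pi * (wafer_d_mm / 2) ** 2
    edge_loss = math.pi * wafer_d_mm / math.sqrt(2 * die_area)
    return int(wafer_area / die_area - edge_loss)

def cost_per_good_die(wafer_cost_usd, dies_per_wafer, yield_rate):
    # Spread the whole wafer cost across only the dies that actually work.
    return wafer_cost_usd / (dies_per_wafer * yield_rate)

print(gross_dies_per_wafer(23, 24))                  # ~99 by this formula; the calculator says 94
print(round(cost_per_good_die(5000, 94, 1.00), 2))   # $53.19 per die at 100% yield
print(round(cost_per_good_die(5000, 94, 0.40), 2))   # ~$133 per die at 40% yield
```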
Add:
$35 for the heatsink
$15 for the power/VRMs
$30 for PCB/logic/passives/DisplayPort/HDMI controllers/outputs
$80 for 8GB of HBM1 (1.5 years ago, the PS4's 8GB of GDDR5 cost $88)
-----------------------
$160
+ $133 die
-----------------------
$293 USD
Apply the 35% margin AMD might desire:
$293 x 1.35 = $396 USD. But I don't know whether AMD sells AIBs just the die or the entire package I specified above. It could very well be that AMD's 35% margin applies only to the $133 die, not the entire $293 card cost; in that case the card costs AIBs about $340 ($133 x 1.35 + $160). I'm sure I made plenty of mistakes, but this is a rough back-of-the-envelope calculation.
Price this at $649. That leaves retailers/OEMs with at least $250 of revenue per card, which is plenty of profit after logistics/marketing/packaging/returns. That's a ton of money for 3rd parties to make off each AMD card.
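Putting both margin scenarios in one place, here's a quick sketch; again, every dollar figure is my guess from this post, and the split between AMD's cut and the AIBs' cut is an assumption:

```python
# Rough cost model for the two margin scenarios above (all figures are guesses).
die_cost = 133                      # ~552mm2 die at 40% yield, from the estimate above
bom = {
    "heatsink": 35,
    "power/VRMs": 15,
    "PCB/logic/passives/outputs": 30,
    "8GB HBM1": 80,
}
non_die = sum(bom.values())         # $160
card_cost = die_cost + non_die      # $293

margin = 1.35
scenario_a = card_cost * margin             # AMD margins the whole kit: ~$396 to AIBs
scenario_b = die_cost * margin + non_die    # AMD margins only the die:  ~$340 to AIBs

msrp = 649
for name, cost in (("whole-kit margin", scenario_a), ("die-only margin", scenario_b)):
    print(f"{name}: card costs AIBs ${cost:.0f}, leaving ~${msrp - cost:.0f} for AIBs/retail")
```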
The main purpose is to get power consumption down rather than chase raw performance. Maybe, if they felt it was necessary, they could offer the WCE edition as an additional model with an extremely high default clock.
How do you know this? How do you know AMD didn't reduce power consumption precisely so it could increase performance 40-60% at the same TDP? Which looks more impressive: a card 30-40% faster than a 980 at 290W for $550-650, or a card that uses 180W but is only 10% faster than a 290X and merely ties a 980? Most high-end gamers would pick the card that's 30-40% faster at 290W.
This is the first generation ever where people think AMD won't improve performance even 10% in 1.5 years, that AMD's engineers are completely incompetent, and that Maxwell is somehow totally untouchable. The Titan X may or may not be beaten by the 390X, but the 980 is going down for sure.
All it would take is a 1.05GHz, 3584-shader 390X with 64 Tonga ROPs, Tonga's doubled geometry performance, 224 TMUs, and 512GB/sec of HBM1, and AMD's next card is already ~30% faster than a 290X.
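As a rough sanity check on that ~30% figure, here's a naive scaling sketch. It assumes performance tracks shader count times clock, which real games rarely follow 1:1, and the 390X line is the rumored/hypothetical spec from above:

```python
# Naive scaling check on the "~30% faster than a 290X" claim.
r9_290x  = {"shaders": 2816, "clock_ghz": 1.00, "bandwidth_gbs": 320}
hyp_390x = {"shaders": 3584, "clock_ghz": 1.05, "bandwidth_gbs": 512}  # rumored/hypothetical

shader_gain = (hyp_390x["shaders"] * hyp_390x["clock_ghz"]) / (r9_290x["shaders"] * r9_290x["clock_ghz"])
bw_gain = hyp_390x["bandwidth_gbs"] / r9_290x["bandwidth_gbs"]

print(f"raw shader throughput: +{(shader_gain - 1) * 100:.0f}%")   # ~+34%
print(f"memory bandwidth:      +{(bw_gain - 1) * 100:.0f}%")       # ~+60%
```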
Is it really a foregone conclusion that gamers won't make an informed decision to pay well for an AMD GPU? The info is going to be out there and easy to find once these puppies are released. I paid pretty dearly for my Sapphire 290 when it was new, and it has stood the test of time fairly well.
Well, apparently if a $550 AMD card doesn't beat a $1000 Titan X/GM200 in every metric, it's a failure. So AMD is basically doomed unless they price a card that's 95% as fast as a Titan X at $299, like the 290X.