AMD Pirate Islands: R9 300 Series Alleged Specifications Detailed (via WCCFTech)


HurleyBird · Platinum Member · Joined Apr 22, 2003
If the specs are right, Nvidia will lose the single-GPU crown

If (and this is a big if) both the alleged GTX 880 and 390X specs are correct, then the 390X will blow the GTX 880 completely out of the water.

And if that ends up being the case it doesn't require too much brain power to figure out what Nvidia will do. GM104 will get pushed down to mid-range status, where it arguably should be in the first place, and a (most likely cut down) consumer version of GM110 will come out earlier than it otherwise would have. If history is any indication, Nvidia will be just fine even if GM110 comes out six months later than Bermuda (assuming that is the actual code name) and slightly outperforms it despite consuming truckloads more power -- or even slightly underperforms it but meaningfully wins out on power and acoustics. If, God forbid, GM110 is faster and draws less power, any ground AMD is able to gain as undisputed top dog for half a year will be instantly erased.

And that's how much of an uphill battle this is for AMD. In order to gain any significant ground on Nvidia, Bermuda needs not only to blow GM104 out of the water, but also to beat GM110 in a fashion that leaves little doubt as to the victor (or, alternatively, AMD must come out with a new die somewhat close to GM110's launch that accomplishes the same). Anything less will result in small gains, treading water, or continued atrophy.
 
Joined Feb 19, 2009
If (and this is a big if) both the alleged GTX 880 and 390X specs are correct, then the 390X will blow the GTX 880 completely out of the water.

And if that ends up being the case it doesn't require too much brain power to figure out what Nvidia will do. GM104 will get pushed down to mid-range status, where it arguably should be in the first place, and a (most likely cut down) consumer version of GM110 will come out earlier than it otherwise would have. If history is any indication, Nvidia will be just fine even if GM110 comes out six months later than Bermuda (assuming that is the actual code name) and slightly outperforms it despite consuming truckloads more power -- or even slightly underperforms it but meaningfully wins out on power and acoustics. If, God forbid, GM110 is faster despite lower power and acoustics, any ground AMD gained in that half a year will be instantly erased.

Well, imagine you are JHH and your Teslas utterly dominate the HPC sector, 95% market share and all that jazz; massive institutions have huge backlogged orders for big Maxwell Teslas and are willing to pay $6,000 each or more (how about $10,000?)...

Due to TSMC's inability to mass-produce a big die with good yields on a new node (not surprising, really), you don't have that many big Maxwell dies to sell.

Would you want to throw them into a consumer card that goes for $1,000 to $2,000 (yes, this insane price could happen and won't surprise me either, not after Titan and Titan-Z), or do you want them in Teslas that can fetch much more?
 

HurleyBird · Platinum Member · Joined Apr 22, 2003
Well, imagine you are JHH and your Teslas utterly dominate the HPC sector, 95% market share and all that jazz; massive institutions have huge backlogged orders for big Maxwell Teslas and are willing to pay $6,000 each or more (how about $10,000?)...

Due to TSMC's inability to mass-produce a big die with good yields on a new node (not surprising, really), you don't have that many big Maxwell dies to sell.

Would you want to throw them into a consumer card that goes for $1,000 to $2,000 (yes, this insane price could happen and won't surprise me either, not after Titan and Titan-Z), or do you want them in Teslas that can fetch much more?

You do both. If GM104 is thoroughly defeated you need to ship at least a few consumer GM110 cards to maintain the halo effect. With luck, GM110 is a good enough product that you can beat out AMD with lower-quality dies that don't make the cut for your HPC products. In the end, you run a cost-benefit analysis to determine the binning cutoff and the professional/consumer die distribution. Denying AMD the chance to cement a graphics cash cow that could increase their future R&D spending relative to yours is, of course, part of this analysis.
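That cost-benefit analysis can be sketched as a toy model. Every number here (prices, demand cap, the yield distribution) is invented purely for illustration, not an actual NV figure:

```python
# Toy model of the binning cut-off decision described above: dies above
# the cut-off ship as HPC parts at a high price, the rest become
# consumer cards. All figures are hypothetical.
import random

random.seed(0)
HPC_PRICE = 6000        # hypothetical Tesla-class price per die
CONSUMER_PRICE = 1500   # hypothetical halo consumer price per die
HPC_DEMAND = 3000       # hypothetical cap on HPC orders
qualities = [random.random() for _ in range(10_000)]  # one score per die

def revenue(cutoff):
    """Revenue if dies with quality >= cutoff are binned as HPC parts."""
    hpc_capable = sum(1 for q in qualities if q >= cutoff)
    hpc_sold = min(hpc_capable, HPC_DEMAND)      # HPC demand is finite
    consumer_sold = len(qualities) - hpc_sold    # leftovers go to consumer cards
    return hpc_sold * HPC_PRICE + consumer_sold * CONSUMER_PRICE

# Sweep cut-offs; a real analysis would also weigh the halo effect and
# the strategic value of denying AMD an uncontested top spot.
best_cutoff = max(range(11), key=lambda c: revenue(c / 10)) / 10
print(best_cutoff, revenue(best_cutoff))
```

Crude as it is, it captures the point: as long as HPC demand is saturated, the leftover dies earn more in a halo consumer card than sitting on a shelf.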
 
Joined Feb 19, 2009
Binning occurs for Teslas and Quadros too; saving those precious big Maxwell dies for a 2nd- or 3rd-tier HPC product still earns you bucketloads more. ;)

A paper launch of big Maxwell would work, but as soon as it's sold out and out of stock with no ETA, I don't know how well that will be received.

There was months of backlash over the low supply and high cost of the R9 290/290X, which pushed lots of gamers to buy 780s instead.
 

Gloomy · Golden Member · Joined Oct 12, 2010
Was the GTX 680 spec'd equally with a 7970?

Yeah, it was. Other than a curious lack of shaders and memory bus width, it was the same. The 680's clocks are typically higher, so I'm not sure there's really a deficit to speak of there.

Still, there's no denying the two GPUs are pretty much equal in general:

[image: GTX 680 vs. HD 7970 performance chart]


I think AMD was waiting for really shader heavy games to pop up after the 7970's release, but those are few and far between ;)
 

TreVader · Platinum Member · Joined Oct 28, 2013
If (and this is a big if) both the alleged GTX 880 and 390X specs are correct, then the 390X will blow the GTX 880 completely out of the water.

And if that ends up being the case it doesn't require too much brain power to figure out what Nvidia will do. GM104 will get pushed down to mid-range status, where it arguably should be in the first place, and a (most likely cut down) consumer version of GM110 will come out earlier than it otherwise would have. If history is any indication, Nvidia will be just fine even if GM110 comes out six months later than Bermuda (assuming that is the actual code name) and slightly outperforms it despite consuming truckloads more power -- or even slightly underperforms it but meaningfully wins out on power and acoustics. If, God forbid, GM110 is faster and draws less power, any ground AMD is able to gain as undisputed top dog for half a year will be instantly erased.

And that's how much of an uphill battle this is for AMD. In order to gain any significant ground on Nvidia, Bermuda needs not only to blow GM104 out of the water, but also to beat GM110 in a fashion that leaves little doubt as to the victor (or, alternatively, AMD must come out with a new die somewhat close to GM110's launch that accomplishes the same). Anything less will result in small gains, treading water, or continued atrophy.

I think you're right: Nvidia will release GM110 and possibly take the crown back, but even six months as gaming GPU king would help AMD a lot. Their GPU division (ATI) is really kicking ass, while their CPU division could use some work.


If we pretend that these specs are accurate, I would expect the 880 to perform really well at lower (<1600p) resolutions due to its shader power, perhaps even better than the 780 Ti in benchmarks, but overall it should fall between the 780 and the Titan Black.

I think the 370X would fall substantially below the 880, probably somewhere near the 7950 (as somebody already mentioned). The 390X looks like a total beast, and I'd expect it to lead the 290X by some 20-35%.

If big maxwell has ~4200 SP and 60-80 ROPs (my guess is 64), it should do pretty well against the 390X. But then again, big maxwell isn't even on the list right now.
 

RussianSensation · Elite Member · Joined Sep 5, 2003
WCCF *shrug*

This news is nearly 1 month old. WCCF just finds information on the web and posts it as their own without crediting the original source.

http://extremespec.net/amd-pirate-islands-announced-summer/

[image: AMD-pirate-islands.png (leaked spec table)]


With Witcher 3 delayed to Feb 2015 and no single 4K monitor worth buying yet imo (no TN or 30 fps, please), I don't mind waiting until 2015 to upgrade. The specs don't seem unrealistic for a 20nm shrink. The R9 380X is faster than the R9 290X, suggesting that the mid-range will beat the previous flagship. This is not out of line with AMD's/NV's history:

Mid-range 7870 28nm > 6970 last gen flagship 40nm (node shrink)
Mid-range R9 280X > 7970 last gen flagship

For NV as well: GTX 460 1GB > GTX 285, GTX 680 > 580, and the 880 GM204 looks like it will beat the GTX 780 Ti. Too bad if NV follows the same model again of releasing a mid-range GM204 chip and holding back GM100/110. I was hoping we'd get flagship 20nm from NV/AMD in late Q4 2014 or Q1 2015 at the latest. I suppose with the added difficulty of shrinking transistors to lower nodes and wafer costs rising, it shouldn't be that surprising that delays happen more frequently.

[image: 450mm-wafercosts.jpg (wafer cost trend chart)]

http://www.extremetech.com/computin...y-450mm-wafers-halted-and-no-path-beyond-14nm

The previous leaks got R9 290X and 7970 specs wrong. I would wait for a more credible source. Sushiwarrior is more credible than 90% of these sites posting leaks. ;)
 

Keysplayr · Elite Member · Joined Jan 16, 2003
Yeah, it was. Other than a curious lack of shaders and memory bus width, it was the same. The 680's clocks are typically higher, so I'm not sure there's really a deficit to speak of there.

Still, there's no denying the two GPUs are pretty much equal in general:

[image: GTX 680 vs. HD 7970 performance chart]


I think AMD was waiting for really shader heavy games to pop up after the 7970's release, but those are few and far between ;)
You say "other than" as if those two differences were minor. Those differences are huge: 75% of the 7970's shader count and 2/3 of the 7970's memory bandwidth.
 

Gloomy · Golden Member · Joined Oct 12, 2010
You say "other than" as if those two differences were minor. Those differences are huge: 75% of the 7970's shader count and 2/3 of the 7970's memory bandwidth.

Yeah, but they're clocked much, much higher with Boost, totally negating the difference in shader power.

As for the bandwidth, yeah, that's true, but it only seems to affect performance scaling at higher resolutions. And even then, not that much -- they have the same number of ROPs, after all...
 

RussianSensation · Elite Member · Joined Sep 5, 2003
Still, there's no denying the two GPUs are pretty much equal in general:

[image: GTX 680 vs. HD 7970 performance chart]


I think AMD was waiting for really shader heavy games to pop up after the 7970's release, but those are few and far between ;)

The 7970GHz is about 10% faster than the 680 and continued to be so for a long while, certainly through Nov 2013. Even on release it beat the 680. Right now at 1600p, the 770 is the competitor to the 7970GHz, not the 680.
http://www.computerbase.de/2014-04/amd-radeon-r9-295x2-benchmark-test/5/

Not trying to derail the thread, as 7970GHz/R9 280X/770/680 has been beaten to death. We can conclude that it's pointless to compare NV vs. AMD specs such as CUDA cores and SPs on paper. Trying to project the 880's or the R9 390X's performance from specs is pointless, since both will have higher IPC courtesy of the Maxwell and GCN 2.0 architectures.

We need to know prices too. What if the 880 is slower than the R9 390X but costs $499, while the R9 390X costs $649 and is only 10% faster? All of this is just speculation. We can't say how powerful the 880 will be, since the 750 Ti kicks the 650 Ti's ass in the real world even though, on paper, its specs aren't much better.
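Taking the hypothetical $499/$649 scenario above at face value, the value math is easy to check (the prices and the 10% figure are the post's speculation, not real numbers):

```python
# Perf-per-dollar for the hypothetical pricing scenario above.
cards = {
    "GTX 880": {"price": 499, "perf": 1.00},   # baseline performance
    "R9 390X": {"price": 649, "perf": 1.10},   # 10% faster, per the scenario
}

for name, c in cards.items():
    # Normalise to "performance units per $1000" for readability.
    print(f"{name}: {c['perf'] / c['price'] * 1000:.2f} perf per $1000")
```

In that scenario the slower $499 card actually wins on perf per dollar (1.00/499 > 1.10/649), which is the point: raw performance alone doesn't settle the round.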

I would give NV the edge next round, since Maxwell has already shown a 2x increase in performance/watt and a 35% increase in IPC, and that was only on 28nm! I would never underestimate NV's Maxwell on 20nm. What pisses me off more is the notion of NV launching GM204 as the 880 and trying to pass that off as a flagship again. :mad:

You say "other than" as if those two differences were minor. Those differences are huge and 75% shader count compared to 7970 and 2/3 of the bandwidth of the 7970.

You cannot accurately compare NV vs. AMD on specs alone when both will debut with new or significantly revised architectures on a newer node. We can't even compare the 750 Ti vs. the 660 on paper specs, but you are trying to do that for the R9 390X vs. the 880?

On paper the 660 has 50% more CUDA cores, 100% more TMUs, 50% more ROPs, and 67% more memory bandwidth (144 vs. 86 GB/s) than the 750 Ti, yet it is only 20% faster:
http://www.computerbase.de/2014-02/nvidia-geforce-gtx-750-ti-maxwell-test/5/

Therefore, you are just wasting your time trying to extrapolate the 880's performance relative to the 780 Ti, and especially its performance relative to the R9 390X. Comparing the 7970 vs. the 680 actually proves that comparing NV vs. AMD specs on paper is generally a shot in the dark. Other bottlenecks exist in the GPU, such as rasterization/geometry throughput and real-world efficiency, which make comparing 2048 Stream Processors to 1536 CUDA cores a wasted effort. For example, the R9 290X has 2816 SPs vs. 2880 CUDA cores for the 780 Ti and their performance is fairly close, and yet the 680 is very close to the 7970 despite far inferior paper specs.
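To make the paper-specs point concrete, here is the 660-vs-750 Ti ratio math next to the ~20% measured gap (spec figures as commonly listed for the reference cards; treat them as approximate):

```python
# Paper-spec leads for GTX 660 over GTX 750 Ti, contrasted with the
# ~20% real-world gap cited above -- paper specs overstate the difference.
gtx_660    = {"cuda_cores": 960, "tmus": 80, "rops": 24, "bandwidth_gbs": 144}
gtx_750_ti = {"cuda_cores": 640, "tmus": 40, "rops": 16, "bandwidth_gbs": 86}

for key in gtx_660:
    lead = gtx_660[key] / gtx_750_ti[key] - 1
    print(f"{key}: 660 ahead on paper by {lead:.0%}")

MEASURED_GAP = 0.20  # per the ComputerBase review linked above
print(f"measured performance gap: {MEASURED_GAP:.0%}")
```

Every paper metric favors the 660 by 50-100%, yet the measured gap is a fraction of that, which is exactly why extrapolating the 880 or the 390X from a spec sheet is a shot in the dark.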
 

Bubbleawsome · Diamond Member · Joined Apr 14, 2013
Remember the Tahiti rumors? How about the Kepler rumors? The "if it's true then slap my buttocks, praise Helix, and the GPU market is back to ~2008 prices!" kind.
Even if it is true, Nvidia will come out with big Maxwell to spoil the 390X, and AMD will either be unable to counter, or we will get a 395X or 390X Pro or X390X or whatever.
Nvidia knows they need to release the big die either later, after GM104, or not until the 900 series; it's become a profit area and gives them a cushion against AMD, along with possible yield issues. If there is no competition from the red team, they will not release GM110 until much later.
This two-step release is going to be permanent from at least one side each cycle now.
 

Gloomy · Golden Member · Joined Oct 12, 2010
OC 7970 vs. OC 770. I own them both, and it isn't even close in some games. There is no 10% lead over the 680, either.

[image: perfrel.gif (relative performance summary)]

So, lol, a 770 boosting to a minimum of 1150 MHz and an average of 1224 MHz is 10% faster than a 7970 at 1000 MHz.

You're really not making a good case for that GPU, you know ;)
 

krumme · Diamond Member · Joined Oct 9, 2009
We have to add the possibility of AMD's GF WSA (wafer supply agreement) having an impact. GPUs are huge dies and could absorb a lot of the WSA burden. The 20nm cost is high and perf is low, so we could see some AMD GPUs on a GF process, perhaps even 28nm. Gah. But what's your guess here?

The 680 was a far more balanced GPU than the 7970. Maxwell looks promising. But there is no reason to believe AMD will not improve much here too. GCN is not new anymore and the 290 is already more balanced.

If NV's chart about 20nm costing more per transistor than 28nm is true -- and I assume it is -- then unfortunately, whoever comes out with the biggest card first, the cost will stay very high and cancel the benefit. We don't get the same benefits as the 40nm-to-28nm HKMG transition. Cost goes up, not down. Perf stays the same. IPC is not going to make up for that.

Better to keep expectations low. It might be two years before we have to invest in good 4K screens :)
 

RussianSensation · Elite Member · Joined Sep 5, 2003
RS, he even gave you a pretty picture in the post you quoted. Try looking at it.

Not interested in discussing 7970GHz vs. 680 vs. 770. If people still care to review their standings, there are hundreds of reviews online showing the 7970GHz/R9 280X competing with the 770, with the 680 slightly behind both. Linking a Lightning 770 is also way off base, since the discussion was about reference cards. Also, the 770 came out at the end of May 2013, nearly 1.5 years after the 7970's launch, while 4GB versions cost $450, the price of a 7970GHz in July 2012! The fact that anyone is even discussing 770 vs. 7970GHz shows how awesome the 7970 was and how overpriced the 770 was. Anyway, I really don't want to derail the main thread. For all intents and purposes, the 680/770/7970GHz can be considered the same tier, as people will upgrade from them to something 50-75% faster on 20nm.

The point still stands: even if we were to entertain that the 880's and R9 390X's specs are accurate, it's way too difficult to predict the 880's performance, since putting the 660 and the 750 Ti side by side shows the 750 Ti performing far above its paper specs. Also, we don't know what IPC increase GCN 2.0 will bring over 1.1, nor how much better 20nm Maxwell will be than 28nm. If Maxwell has 2x the performance/watt on 28nm, then it might have more than 2x on 20nm.

Also, I like shiny hardware, but without 4K 60Hz IPS at reasonable prices and with next-gen PC games yet to launch, I might as well wait for the big-daddy GM110. I am probably going to do that, as I really don't like the idea of paying $500-550 for a mid-range Maxwell. I've had AMD for way too long now and want to get NV next round, which is why I am patiently waiting for the 500mm2 Maxwell.
 
Joined Feb 19, 2009
I dunno, the Samsung TN 4K got a really good review: amazing colors for a TN (!!) and insane response time. It's only around $700 here, which is great for such a monitor.

4K gaming is already affordable now, in another year it may well be approaching a new standard. Also fully agree, cannot compare paper specs for new architectures.
 

dangerman1337 · Senior member · Joined Sep 16, 2010
The point still stands: even if we were to entertain that the 880's and R9 390X's specs are accurate, it's way too difficult to predict the 880's performance, since putting the 660 and the 750 Ti side by side shows the 750 Ti performing far above its paper specs. Also, we don't know what IPC increase GCN 2.0 will bring over 1.1, nor how much better 20nm Maxwell will be than 28nm. If Maxwell has 2x the performance/watt on 28nm, then it might have more than 2x on 20nm.

Also, I like shiny hardware, but without 4K 60Hz IPS at reasonable prices and with next-gen PC games yet to launch, I might as well wait for the big-daddy GM110. I am probably going to do that, as I really don't like the idea of paying $500-550 for a mid-range Maxwell. I've had AMD for way too long now and want to get NV next round, which is why I am patiently waiting for the 500mm2 Maxwell.
I don't think there'll be a 20nm Maxwell at all; 20nm SoC does not offer much of a cost-effective improvement over 28nm HP, if any, and not much of a performance jump either, as there is no 20nm HP. I think even on 28nm there could be a very noticeable jump, but I would not expect a 384-bit GM204 or a 512-bit GM200 (btw, there is no GM110), as it just takes up too much die space, judging by the ~25% die size increase from GK107 to GM107. A poster at Beyond3D says there won't ever be a 20nm Maxwell, and that a 28nm GM204 will be out by late Q3 or early Q4, having already taped out in late March: http://forum.beyond3d.com/showpost.php?p=1838346&postcount=1507, http://forum.beyond3d.com/showpost.php?p=1837191&postcount=1478. I think "second-generation Maxwell" refers to further architectural improvements/changes which will give 28nm GM20x a bigger jump than expected or bring something new to the table.

I notice only places like WCCFTech or Fudzilla claiming 20nm, but it does not make any sense considering 20nm costs and the lack of a good jump. I do expect we'll see Pascal on 16FF in 2016 and Volta on 10nm FF in 2018. That makes fiscal sense, as those two processes would have been out for a while, which would lead to lower costs as the smartphone companies take the brunt of them, and it allows a tighter release schedule, unlike Kepler, where the big chip released after a year and then only as an expensive compute-based GeForce.

EDIT:

I dunno, the Samsung TN 4K got a really good review: amazing colors for a TN (!!) and insane response time. It's only around $700 here, which is great for such a monitor.

4K gaming is already affordable now, in another year it may well be approaching a new standard. Also fully agree, cannot compare paper specs for new architectures.
While that 4K Samsung is decent for the price, 4K gaming is not exactly affordable, unless you mean spending 600-800 USD and playing on a mish-mash of high and medium settings to get a 60 FPS average in today's games, let alone this year's or next's. We've got games like AC: Unity, which, from the look of the PC version, you won't easily run at 2560x1440 on a single-GPU setup, let alone at 4K on respectable settings, and the Witcher 3 and similar games likewise.

I can see affordable 4K gaming in 2018 with Volta's stacked DRAM (Pascal seems to just be 3D Memory unless that's the same thing).
 

VulgarDisplay · Diamond Member · Joined Apr 3, 2009
OC 7970 vs. OC 770. I own them both, and it isn't even close in some games. There is no 10% lead over the 680, either.

[image: perfrel.gif (relative performance summary)]

The all-resolutions average definitely helps the Nvidia cards in that chart. Nvidia usually has higher fps in more CPU-limited situations, but they seem to slow down more under heavier loads. Kinda like GeForces are sports cars and Radeons are trucks: load a few tons of weight on a sports car and it struggles, while the truck just keeps trucking.

At least that was always my perception.
 
Joined Feb 19, 2009
While that 4K Samsung is decent for the price, 4K gaming is not exactly affordable, unless you mean spending 600-800 USD and playing on a mish-mash of high and medium settings to get a 60 FPS average in today's games, let alone this year's or next's. We've got games like AC: Unity, which, from the look of the PC version, you won't easily run at 2560x1440 on a single-GPU setup, let alone at 4K on respectable settings, and the Witcher 3 and similar games likewise.

I can see affordable 4K gaming in 2018 with Volta's stacked DRAM (Pascal seems to just be 3D Memory unless that's the same thing).

I hope you realize most games have one or two graphics settings that destroy performance for very minimal image-quality gains. You can disable those and get very high performance AND enjoy the benefits of higher clarity and more pixels.

I have never felt the need to max games just because, even when I have the hardware that can do it; features such as DoF, HDAO, and soft shadows are the first on my list to be disabled. It also means my power consumption is significantly lower while maintaining a constant 60 fps.
 

raghu78 · Diamond Member · Joined Aug 23, 2012
Do not spoil the thread by arguing over GTX 770 / R9 280X. With the R9 390X, AMD is likely to make the transition to HBM.

http://electroiq.com/blog/2013/12/amd-and-hynix-announce-joint-development-of-hbm-memory-stacks/

Bryan Black, Sr. Fellow and 3D program manager at AMD, noted that while die stacking has caught on in FPGAs and image sensors "..there is nothing yet in mainstream computing CPUs, GPUs or APUs" but that "HBM (high bandwidth memory) will change this." Black continued, "Getting 3D going will take a BOLD move and AMD is ready to make that move." Black announced that AMD is co-developing HBM with SK Hynix, which is currently sampling the HBM memory stacks, and that AMD "…is ready to work with customers

A single HBM stack has a bandwidth of 128 GB/s. Four of these should provide a massive 512 GB/s of bandwidth. That should be enough for the high-end flagship R9 390X.
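A quick sanity check of that arithmetic, using first-gen HBM's published per-stack interface (1024 bits wide at 1 Gb/s per pin):

```python
# First-generation HBM bandwidth arithmetic: a 1024-bit interface per
# stack at 1 Gb/s per pin gives 128 GB/s per stack, 512 GB/s for four.
PINS_PER_STACK = 1024   # interface width in bits
GBITS_PER_PIN = 1.0     # first-gen HBM signalling rate, Gb/s

per_stack_gb_s = PINS_PER_STACK * GBITS_PER_PIN / 8  # bits -> bytes
total_gb_s = 4 * per_stack_gb_s                      # four stacks

print(per_stack_gb_s, total_gb_s)  # → 128.0 512.0
```

For comparison, that four-stack total would be roughly 60% more than the R9 290X's 320 GB/s from a 512-bit GDDR5 bus.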

http://sites.amd.com/us/Documents/TFE2011_006HYN.pdf
http://www.i-micronews.com/news/SK-...ed-memory-commercialization-closer,10000.html

As for the number of stream processors, it remains to be seen how aggressive AMD is. Given that 20nm is a full node shrink in terms of transistor density, a 50% increase in SP count is easily possible. AMD is also likely to have improved perf per SP with an enhanced GCN 2.0 architecture.
 