[VC][TT] - [Rumor] Radeon Rx 300: Bermuda, Fiji, Grenada, Tonga and Trinidad


rgallant

Golden Member
Apr 14, 2007
1,361
11
81
Pointless arguing is fun to read at times.

Will be nice to see some performance leaks here and there to offset the bickering.

Not really sure why all the hate towards AIO coolers. About the only real scenario I can come up with is that the GREEN team doesn't offer one for the devoted.

Looking forward to the anti-water fear mongering "It'll leak and destroy your whole rig" smear campaign! Oops, can't forget: once the water leaks out, flames will shoot out and ignite your whole house.

I predict being 30% faster than the 980 will be washed away by the AIO fear campaign. A larger performance gap will just step up the smear campaign. I can picture the devoted sacrificing a couple of AMD's offerings like lambs. Nice visual sob story about how it destroyed their whole rig. Of course they won't show the pinholes they modded the cooler with.

On another note, GTX 970 #2 is being yanked and returned today. The grass is greener on the NVIDIA side... Too bad it's artificial.

so true


AMD Fiji XT R9 390X Coming With Cooler Master Liquid Cooler

We’ve reported earlier on Asetek’s largest design win with an “undisclosed OEM” for desktop graphics products that will begin shipping in the first half of 2015. The design win is estimated to result in 2-4 million dollars in revenue for Asetek, which would translate to selling between 50 and 100 thousand units. It was clear from the get-go that this “undisclosed OEM” was AMD. Soon afterwards the R9 390X cooling shroud that would accommodate the Asetek liquid cooling design was leaked.


at least he used was instead of has

anything about water cooling should not have leaked anywhere on the same page.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
so true


AMD Fiji XT R9 390X Coming With Cooler Master Liquid Cooler

We’ve reported earlier on Asetek’s largest design win with an “undisclosed OEM” for desktop graphics products that will begin shipping in the first half of 2015. The design win is estimated to result in 2-4 million dollars in revenue for Asetek, which would translate to selling between 50 and 100 thousand units. It was clear from the get-go that this “undisclosed OEM” was AMD. Soon afterwards the R9 390X cooling shroud that would accommodate the Asetek liquid cooling design was leaked.


at least he used was instead of has

anything about water cooling should not have leaked anywhere on the same page.

I do hope they release purely air-cooled variants too. I don't want to pay the premium for the extra cooling since all I will do is strip it off right away. I will look for the best overclocking core/voltage setup and purchase based on that.
 

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
Sweclockers are now essentially confirming what we had already suspected: Fiji will be the only new arch in the 300-series. So that means 390X and 390. It is GCN 1.3. The rest we already know: 4 GB VRAM with 640 GB/s of bandwidth, 4000+ SPs, etc.

380X/380 will be Hawaii, and the rest will be either Tonga (although the full Tonga, the cancelled 285X, so I guess you could say it is semi-new) or the oldie Bonaire.

The release date is the same as the one Sweclockers broke some weeks ago: around Computex. Expect actual shipping parts in Q3 in many parts of the world, which would put it head to head with GM200.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Sweclockers are now essentially confirming what we had already suspected: Fiji will be the only new arch in the 300-series. So that means 390X and 390. It is GCN 1.3. The rest we already know: 4 GB VRAM with 640 GB/s of bandwidth, 4000+ SPs, etc.

380X/380 will be Hawaii, and the rest will be either Tonga (although the full Tonga, the cancelled 285X, so I guess you could say it is semi-new) or the oldie Bonaire.

The release date is the same as the one Sweclockers broke some weeks ago: around Computex. Expect actual shipping parts in Q3 in many parts of the world, which would put it head to head with GM200.

I hope that's not the case...was really hoping for a mid-2Q release rather than pushing 3Q. :/
 

B-Riz

Golden Member
Feb 15, 2011
1,595
765
136
It would be really nice if we all moved away from Watts and looked at Amps.

Consider the following:

980 - 18 max amps @ 12 volts

290x - 24.25 max amps @ 12 volts

295X2 - 30.5 max amps @ 12 volts

(Based on 12 volt supplies from the PCIe slot and the required card power connectors.)

So look at the max amps a card could draw and adjust your PSU requirements accordingly, including the rest of the components.

I really do not see a next gen single GPU going beyond a 6-pin (6.25 amps) plus 8-pin (12.5 amps) power connector setup.
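
For anyone wondering where those amp figures come from, here's a quick sketch of the arithmetic. The per-source ratings are my assumption of the usual 12 V limits (roughly 5.5 A / 66 W from the slot's 12 V rail, 6.25 A / 75 W per 6-pin, 12.5 A / 150 W per 8-pin), not numbers from any official spec sheet:

```cpp
#include <cstdio>

// Assumed 12 V ratings (my numbers, not from an official spec sheet):
// PCIe x16 slot 12 V rail ~5.5 A (66 W), 6-pin ~6.25 A (75 W), 8-pin ~12.5 A (150 W).
constexpr double kSlotAmps     = 5.5;
constexpr double kSixPinAmps   = 6.25;
constexpr double kEightPinAmps = 12.5;

// Worst-case in-spec 12 V current for a card with the given connectors.
double maxAmps(int sixPins, int eightPins) {
    return kSlotAmps + sixPins * kSixPinAmps + eightPins * kEightPinAmps;
}

int main() {
    std::printf("GTX 980  (2x 6-pin): %.2f A\n", maxAmps(2, 0)); // 18.00 A
    std::printf("R9 290X  (6+8-pin):  %.2f A\n", maxAmps(1, 1)); // 24.25 A
    std::printf("R9 295X2 (2x 8-pin): %.2f A\n", maxAmps(0, 2)); // 30.50 A
    return 0;
}
```

Add up whatever connectors a given card has and you get the worst-case in-spec 12 V draw, which is where the numbers above appear to come from.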
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
https://twitter.com/draginol/status/567428591630426112

This indicates there could be a huge difference between just having GTX 970/980 "DX12 support" and having next gen close to metal DX12/Mantle hardware acceleration!

I don't think what you are saying was the intent of the statement you linked to. The tweet was to highlight the differences in performance on the same CPU/GPU using DX11 vs. DX12.

On a related note, Maxwell IS fully DX12 compatible (not just 'ready' or anything like that; it is fully DX12 programmable).
http://blogs.nvidia.com/blog/2015/01/21/windows-10-nvidia-dx12/

Same with all GCN AMD products...

Anyways, getting more and more off-topic here....
 

garagisti

Senior member
Aug 7, 2007
592
7
81
I don't think what you are saying was the intent of the statement you linked to. The tweet was to highlight the differences in performance on the same CPU/GPU using DX11 vs. DX12.

On a related note, Maxwell IS fully DX12 compatible (not just 'ready' or anything like that; it is fully DX12 programmable).
http://blogs.nvidia.com/blog/2015/01/21/windows-10-nvidia-dx12/

Same with all GCN AMD products...

Anyways, getting more and more off-topic here....
Nvidia will support key features of DX12 but not in its entirety, and this has been known since last year. It didn't fully support DX11 either, or DX10 for that matter. I have not seen any new information post-launch suggesting that this has changed, only some claims here and there on the forums.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
I don't think what you are saying was the intent of the statement you linked to. The tweet was to highlight the differences in performance on the same CPU/GPU using DX11 vs. DX12.

On a related note, Maxwell IS fully DX12 compatible (not just 'ready' or anything like that; it is fully DX12 programmable).
http://blogs.nvidia.com/blog/2015/01/21/windows-10-nvidia-dx12/

Same with all GCN AMD products...

Anyways, getting more and more off-topic here....

Maxwell and GCN 1.1 may or may not be able to take 100% advantage of DX12.
So far, all we know is that they claim they can. The technical details of the actual Direct3D 12 feature set have not been released. Those, plus low-overhead optimization, are apparently the main hallmarks of DX12. That does not mean they are the ONLY features, just possibly the big ones that may be most apparent. We'll find out in a month, IIRC.

I hope they are FULLY compatible with everything that is--and becomes--DX12. DX12.1, when it comes, is almost surely not going to be supported.
It remains a possibility that the current architectures will support DX12, but only in a way that enables the low overhead of 12 plus all of the 11.3 features, while missing support for other features of 12. I feel it is odd that everything but ONE specific feature is available outside of 12, and so far, all we know is that 11.3 is everything 12 is, without the low overhead. Seems odd. Might very well be the case, and I'd gladly accept that. :)
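
For what it's worth, once the final SDK is out the feature question should be answerable directly in code rather than by press release. A minimal sketch, using the D3D12 names as they later shipped in the Windows 10 SDK (the API was still in preview when this was posted, so treat the exact identifiers as an assumption):

```cpp
// Windows-only sketch; compile with: cl /EHsc dx12caps.cpp d3d12.lib
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

int main() {
    ID3D12Device* device = nullptr;
    // Create a device on the default adapter at the DX11-class feature level.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 __uuidof(ID3D12Device), (void**)&device))) {
        std::printf("No D3D12-capable device/driver found.\n");
        return 1;
    }

    // Query the optional-feature caps the driver actually reports.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    std::printf("Resource binding tier:           %d\n", opts.ResourceBindingTier);
    std::printf("Tiled resources tier:            %d\n", opts.TiledResourcesTier);
    std::printf("Conservative rasterization tier: %d\n", opts.ConservativeRasterizationTier);
    std::printf("Rasterizer-ordered views:        %d\n", opts.ROVsSupported);

    device->Release();
    return 0;
}
```

Run something like that on Maxwell, GCN 1.1, etc. and you see exactly which tiers each architecture exposes, instead of arguing over what "full support" means.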
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Nvidia will support key features of DX12 but not in its entirety, and this has been known since last year. It didn't fully support DX11 either, or DX10 for that matter. I have not seen any new information post-launch suggesting that this has changed, only some claims here and there on the forums.

Link?

NV, AMD and Microsoft have stated on numerous occasions that full DX12 support is available. Not trying to be obtuse, but there are a lot of links stating that full DX12 support extends from Fermi on up on the NV side (just as an example). They don't say partial or conditional. NV and AMD have been working with MS for almost 5 years on DX12 compatibility, so it would be a pretty huge fail if that were not the case.

http://wccftech.com/microsofts-directx-12-api-supports-amd-gcn-nvidia-fermi-kepler-maxwell-intels-iris-graphics-cores/
 

garagisti

Senior member
Aug 7, 2007
592
7
81
Link?

NV, AMD and Microsoft have stated on numerous occasions that full DX12 support is available. Not trying to be obtuse, but there are a lot of links stating that full DX12 support extends from Fermi on up on the NV side (just as an example). They don't say partial or conditional. NV and AMD have been working with MS for almost 5 years on DX12 compatibility, so it would be a pretty huge fail if that were not the case.

http://wccftech.com/microsofts-directx-12-api-supports-amd-gcn-nvidia-fermi-kepler-maxwell-intels-iris-graphics-cores/
http://forums.guru3d.com/showpost.php?p=4998548&postcount=15
This above is not from too long ago. Older links/items are more than 6 months old, and searching for them becomes very, very hard. I unfortunately don't remember where it was, and I'm also feeling particularly lazy right now. Some specific features may be supported as before, and some may not, and newer cards will support most of the features. When people here on the forums were bickering about the last generation of cards, the suggestion was that all the 'relevant' bits were supported. Surely someone with a better memory can chip in and add information.

Speaking of AMD and GCN, I'll be surprised if they meet DX12 requirements fully. It may not be such a surprise after all, if there's truth to the suggestions that Mantle had something to do with DX12. However, I'll still wait and see what comes, rather than boldly proclaim anything just yet about cards already out.
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
http://forums.guru3d.com/showpost.php?p=4998548&postcount=15
This above is not from too long ago. Older links/items are more than 6 months old, and searching for them becomes very, very hard. I unfortunately don't remember where it was, and I'm also feeling particularly lazy right now. Some specific features may be supported as before, and some may not, and newer cards will support most of the features. When people here on the forums were bickering about the last generation of cards, the suggestion was that all the 'relevant' bits were supported. Surely someone with a better memory can chip in and add information.

Speaking of AMD and GCN, I'll be surprised if they meet DX12 requirements fully. It may not be such a surprise after all, if there's truth to the suggestions that Mantle had something to do with DX12. However, I'll still wait and see what comes, rather than boldly proclaim anything just yet about cards already out.

I really hope the last few gens do fully support DX12, otherwise I fear it will be years until most games ship with native support. On the plus side, Xbox One will have DX12 support, so that might help quite a bit, especially with the console's lack of CPU power.

Here is to hoping. :)
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
https://twitter.com/draginol/status/567428591630426112

This indicates there could be a huge difference between just having GTX 970/980 "DX12 support" and having next gen close to metal DX12/Mantle hardware acceleration!

I wouldn't draw any conclusions about AMD's DX12 vs. NV's DX12 results from that twitter post. However, looking strictly at AMD's DX11 vs. DX12 performance, it's possible that under some extreme scenario a flagship next gen card (whether dual or single chip) will become 100% CPU limited under DX11.

There is a 4.5-5X performance difference with the 290X already when running DX12/Mantle. The developer of that Star Swarm demo already said the demo only took 2 months to make, and their next demo at GDC showcasing DX12 will put the Star Swarm benchmark to shame. If we have a situation where AMD spent 0 time optimizing DX11 drivers for this GDC DX12 demo and instead optimized the DX12/Mantle drivers, then with the drawcall bottleneck it's possible that the R9 390X would be 8-10X faster under DX12 than under a completely unoptimized and 100% draw-call CPU limited DX11 scenario. However, I wouldn't draw any negative/positive correlation about NV's "full" DX12 support based on testing of some random card in a demo, not even a real game.


Also, by the time real DX12 games drop, we'll be past the R9 390X/GM200 generation. I don't expect any true DX12 games until 2016, when we'll already have Pascal. If developers free up the drawcall bottleneck and actually get extra performance, they could implement more advanced AI and more NPC units, but all these extra units in the game will need more graphics rendering, which would make the game more demanding. For that reason I am not overly concerned whether R9 390X/GM200 will be fast enough for next gen DX12 games. Based on the history of each new DX generation, by the time true games for that API arrive, the 1st and even 2nd generation cards of that generation are too slow; i.e., 1st generation DX9 and DX11 cards weren't good enough for the DX9/11 games that started to come out years later.
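
To illustrate why freeing the draw-call bottleneck can produce those multi-X swings without touching the GPU, here's a toy model; the per-call costs are made-up illustrative numbers, not measurements from Star Swarm or any real driver:

```cpp
#include <algorithm>
#include <cstdio>

// Toy model: the CPU must submit every draw call and the GPU must render it;
// the frame takes as long as the slower of the two. Per-call costs are made up.
struct FrameResult { double ms; const char* bottleneck; };

FrameResult frameTime(int drawCalls, double cpuUsPerCall, double gpuUsPerCall) {
    double cpuMs = drawCalls * cpuUsPerCall / 1000.0;
    double gpuMs = drawCalls * gpuUsPerCall / 1000.0;
    return { std::max(cpuMs, gpuMs), cpuMs > gpuMs ? "CPU-limited" : "GPU-limited" };
}

int main() {
    const int calls = 20000; // deliberately draw-call-heavy, Star Swarm style

    FrameResult dx11 = frameTime(calls, 5.0, 1.0);  // "DX11-like" submission overhead
    FrameResult dx12 = frameTime(calls, 0.5, 1.0);  // "DX12/Mantle-like" overhead

    std::printf("DX11-like: %.1f ms per frame (%s)\n", dx11.ms, dx11.bottleneck); // 100.0 ms -> 10 fps
    std::printf("DX12-like: %.1f ms per frame (%s)\n", dx12.ms, dx12.bottleneck); //  20.0 ms -> 50 fps
    return 0;
}
```

In this made-up case the GPU side never changes, yet frame time drops 5X simply because the CPU stops being the ceiling; crank the DX11 per-call cost higher or add more units and the gap widens toward the 8-10X scenario described above.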

---

Seiki announced that they should have DP 1.3 monitors by Q2 2015. I think there is a slight chance that R9 390 cards might have HDMI 2.0 + DP 1.2a/1.3.
 

boozzer

Golden Member
Jan 12, 2012
1,549
18
81
I do hope they release purely air-cooled variants too. I don't want to pay the premium for the extra cooling since all I will do is strip it off right away. I will look for the best overclocking core/voltage setup and purchase based on that.
I can't believe there is doubt about open air coolers. It is a given unless AMD forgoes 3rd party resellers. Does anyone here see that happening?

AMD should launch with 3rd party cards at the same time as the reference one. That would be great.

Can't wait for the reviews to suddenly use 3rd party air coolers now that the reference one is the best one :D :sneaky: :eek:
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Air coolers are almost a certainty given that the professional version (FirePro) pretty much requires an air-cooled variant.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Can anyone tell me when 390X is expected to launch and whether it is on 20nm or still on 28nm?
 

SimianR

Senior member
Mar 10, 2011
609
16
81
There was an AMD post on Facebook saying that they are "putting the final touches on the 300 series cards," and some more rumors that followed said 4-6 weeks from now, so an April release doesn't seem that unlikely. I'm not sure if it will be 28nm or 20nm.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
There was an AMD post on Facebook saying that they are "putting the final touches on the 300 series cards," and some more rumors that followed said 4-6 weeks from now, so an April release doesn't seem that unlikely. I'm not sure if it will be 28nm or 20nm.

Everything points to 28nm

OK thanks.

So... 4-6 weeks and STILL not sure if 20nm or 28nm? I've seen more sources claim 28nm than 20nm... that in fact NV and AMD are both skipping 20nm and going straight to 14/16nm.

If it's still 28nm, as so many are saying, then I'm going to be disappointed.
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
OK thanks.

So... 4-6 weeks and STILL not sure if 20nm or 28nm? I've seen more sources claim 28nm than 20nm... that in fact NV and AMD are both skipping 20nm and going straight to 14/16nm.

If it's still 28nm, as so many are saying, then I'm going to be disappointed.

Yeah, all the 20nm processes that exist today are low power, low performance, and optimized for ARM SoCs. I expect 28nm until late next year. 5 years on the same node.....so sad.....
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
OK thanks.

So... 4-6 weeks and STILL not sure if 20nm or 28nm? I've seen more sources claim 28nm than 20nm... that in fact NV and AMD are both skipping 20nm and going straight to 14/16nm.

If it's still 28nm, as so many are saying, then I'm going to be disappointed.

There are some rumours that AMD may unveil more info on R9 300 series around GDC or Computex. However, based on comments from Gibbo and various sources, it seems the card would only launch June-July 2015, with wider availability expected for Q3. None of this information has been proven or disproven. AMD has been very secretive it seems. As far as 28nm vs. 20nm goes, the data doesn't align. For starters, it's difficult to believe AMD built a 550mm2 20nm chip but was limited to just 438mm2 on less expensive and higher yielding 28nm. Secondly, it seems 20nm is for low power devices. If AMD went with a 20nm power efficient design, I doubt it would have a 300W TDP, or the performance would be off the charts. 20nm and HBM -> the performance should be much more than 40-45% faster than 290X. Therefore, at least on the surface, the current leaks don't align with 20nm+HBM. This is just my opinion and I could be wrong.

780Ti was more or less ~2X faster than a 580 after a new architecture + node shrink. I do not think either GM200 or a 390X will be ~2X faster than the 290X/780Ti. This is a compromise year for GPUs where NV and AMD are forced to use techniques other than a node shrink to improve absolute performance and perf/watt. NV has done a great job, so AMD should also show some improvements over the 290X. Chances are the jump to a 14nm flagship in 2017 will be much greater than the jump GM200/390X make over the 290X/780Ti, but it would also mean waiting ~2 years, I would say. However, both cards should bring a much more substantial boost for 290 series and 780 series owners who decided to skip the 980.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Yeah, all the 20nm processes that exist today are low power, low performance, and optimized for ARM SoCs. I expect 28nm until late next year. 5 years on the same node.....so sad.....

It's not really the fault of AMD or Nvidia on this one.

When it comes to large-die, high transistor density, it's pretty much only Intel around the 20nm mark, with their 22nm node (and now 14nm).

Outside of low-power SoCs, all the players in the CPU and GPU industry have had serious trouble producing a viable 20nm node. Most foundries were actually developing two node processes with overlapping timelines. Since most of them were struggling so much at 20nm and losing significant cash, they decided to junk that node and continue developing either 16 or 14nm.

Both AMD and Nvidia have been developing and designing for 20nm or lower, but have had no partners capable of producing wafers for them at that size.
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
It's not really the fault of AMD or Nvidia on this one.

When it comes to large-die, high transistor density, it's pretty much only Intel around the 20nm mark, with their 22nm node (and now 14nm).

Outside of low-power SoCs, all the players in the CPU and GPU industry have had serious trouble producing a viable 20nm node. Most foundries were actually developing two node processes with overlapping timelines. Since most of them were struggling so much at 20nm and losing significant cash, they decided to junk that node and continue developing either 16 or 14nm.

Both AMD and Nvidia have been developing and designing for 20nm or lower, but have had no partners capable of producing wafers for them at that size.

I fully realize it's the foundries that are holding us back. It's still sad regardless of whose fault it is. If there weren't so many tens of millions of ARM chips to produce for Apple and Qualcomm, maybe Nvidia and AMD's GPU needs would be a higher priority for TSMC.