[WCCF] AMD Radeon R9 390X Pictured


IEC

Elite Member
Super Moderator
Jun 10, 2004
14,608
6,094
136
Handicapping AMD's Fiji GPU with David Kanter

^ This interview summarizes the entire thread in just 13 minutes.
- Why water cooling on a small PCB?
- Why HBM?
- Why 28nm and not 20nm?
- Why is Fiji an evolution of Tonga?

Thanks for the link. Great discussion about the potential for the card.

I personally like CLC cooling on graphics cards (I've been running an NZXT bracket with a CLC on my R9 290X), so I would be very happy if they had a CLC-cooled version out of the box. I dump the heat right out the back exhaust vent of my case, and it keeps my CPU a few degrees cooler to boot.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Thanks for the link. Great discussion about the potential for the card.

I personally like CLC cooling on graphics cards (I've been running an NZXT bracket with a CLC on my R9 290X), so I would be very happy if they had a CLC-cooled version out of the box. I dump the heat right out the back exhaust vent of my case, and it keeps my CPU a few degrees cooler to boot.

If AMD provides us with options (4GB and 8GB, a water-cooled mini-PCB card and a larger air-cooled card), gamers will get to choose what they want.

Kanter brought up an interesting point: if AMD/NV want to make smaller GPUs in the future, there isn't viable space for an air-cooled solution to cool a 250W TDP flagship.

Look at Nvidia's Pascal presentation of what they intend to build with HBM2: the PCB is tiny compared to the Titan X/980 or 290X.
[Image: NVIDIA GTC 2014, Jen-Hsun Huang presenting the Pascal board]


Even if they add another inch for the power connectors and PCIe, there is no room to drop a massive open-air or blower heatsink with a fan on such a short PCB.
 

alcoholbob

Diamond Member
May 24, 2005
6,390
470
126
The Titan X here is OC'ed, both in the gaming performance and in the power consumption tests.

And it's OC'ed over 18% from the stock clocks. So where was it mistaken?

But let's not get dragged away from the topic. It's not about the clocks of the Titan X and ChipHell; it's about the R9 390X.

The leaks don't make sense: the Fire Strike Extreme score requires a large overclock, whereas the gaming benchmarks are in line with a stock card. So the numbers don't jibe.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
The leaks don't make sense: the Fire Strike Extreme score requires a large overclock, whereas the gaming benchmarks are in line with a stock card. So the numbers don't jibe.

Maybe, maybe not. It could be early drivers, or those games not having all the necessary patches. It could be fake numbers. What we do know is that every single leak from ChipHell on the HD 7900 series, GTX 600 series, GTX 700 series, R9 200 series and GTX 980 series has been spot-on. That's 5 series in a row that ChipHell got right. Are you willing to bet against them this time?

ChipHell literally showed an HD 7950 OC > HD 7970 and both of those cards trashing a GTX 580; they were spot-on far too many times for their charts to simply be ignored, even down to 10-20W differences in power usage. Also, how can you be so certain the gaming benchmarks don't look right? They averaged them out, so it's impossible to compare individual games.

Do you realize how accurate ChipHell's prediction of the Titan X's performance was? It's roughly 98% accurate, and they had this data last year, well before we even knew the Titan X's specs.

ChipHell - Titan X with 1189MHz boost (in games the Titan X boosts to about 1150-1215MHz)

146 Titan X
106 980
100 R9 290X
96 780Ti
88 Titan



vs.

Actual performance

148 Titan X
106 980
100 R9 290X
95 780Ti
85 Titan


http://www.sweclockers.com/recension/20193-nvidia-geforce-gtx-titan-x/18#content

The major discrepancy in ChipHell's data is the 285/960 showing performance that is too high. That could be a red flag, but the top cards match up almost perfectly so far.
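
To put a number on "98% accurate", here is a quick back-of-envelope script comparing the two index lists above (values copied verbatim from them):

[CODE]
# Percent error of ChipHell's leaked index vs. the SweClockers numbers,
# both normalized to R9 290X = 100 (values from the two lists above).
predicted = {"Titan X": 146, "GTX 980": 106, "R9 290X": 100,
             "GTX 780 Ti": 96, "Titan": 88}
actual = {"Titan X": 148, "GTX 980": 106, "R9 290X": 100,
          "GTX 780 Ti": 95, "Titan": 85}

for card, p in predicted.items():
    err = 100 * (p - actual[card]) / actual[card]
    print(f"{card}: predicted {p}, actual {actual[card]}, error {err:+.1f}%")
# Largest miss is the original Titan at about +3.5%; the Titan X is within ~1.4%.
[/CODE]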
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Handicapping AMD's Fiji GPU with David Kanter

^ This interview summarizes the entire thread in just 13 minutes.
- Why water cooling on a small PCB?
- Why HBM?
- Why 28nm and not 20nm?
- Why is Fiji an evolution of Tonga?

Thanks for the link, that was interesting. He mentioned he's held and seen a Fiji-based card. I wonder how much of that speculation was knowledge being passed off as speculation :p

I really want to see how Fiji does and if it manages to put the boots to Titan X.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
I think some people in this thread will be literally stunned if the R9 390X beats the Titan X by even 1%, given how conservative some of you guys are.

And even more so if it's > 10%. :cool: When was the last time an AMD or ATI GPU was > 500 sq mm? Never. This is the most ambitious GPU ever designed by ATI or AMD. IMO AMD is going for the GPU crown with serious intent.

With the R9 390X (and the entire R9 3xx series) I am confident the underlying shader core has been improved for better perf/sp, perf/sq mm and perf/watt. The incremental changes up to Tonga, plus further architectural improvements, will contribute to a GPU which is >50% faster than the R9 290X. My expectation is 55-65% faster than the R9 290X. I have said for a long time that I expect the R9 380 to be on par with or slightly faster than the GTX 980. We will see in mid-to-late June whether I got it right or wrong. :whistle:
 
Feb 19, 2009
10,457
10
76
I was expecting, well before launch, that the Titan X would be ~35% above the 980 at ~780 Ti power levels (235-240W).

I'm expecting the HBM Fiji XT WCE to be ~15% above the Titan X at ~R9 290X power levels.

Journalists with first-hand experience of it came away very impressed.

@Grooveriding
David Kanter in that video thinks it's 8GB of HBM: "16GB would be hard, but 8GB is well within specs from Hynix."
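
For what it's worth, a rough sketch of the capacity math behind that quote, assuming first-gen Hynix HBM (2Gb dies in 4-Hi stacks, four stacks on the interposer); the doubling routes are speculation:

[CODE]
# First-gen HBM capacity math (assumes Hynix HBM1: 2Gb dies, 4-Hi stacks,
# four stacks on the interposer, one per 1024-bit channel group).
GBIT_PER_DIE = 2
DIES_PER_STACK = 4          # 4-Hi stack
STACKS = 4

capacity_gb = GBIT_PER_DIE * DIES_PER_STACK * STACKS / 8   # gigabits -> GB
print(f"Base config: {capacity_gb:.0f} GB")                # 4 GB

# 8 GB would need double-density stacks (8-Hi) or twice the stacks
# (the "dual-link interposer" idea floated later in this thread).
print(f"Doubled:     {capacity_gb * 2:.0f} GB")            # 8 GB
[/CODE]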
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
I was expecting, well before launch, that the Titan X would be ~35% above the 980 at ~780 Ti power levels (235-240W).

I'm expecting the HBM Fiji XT WCE to be ~15% above the Titan X at ~R9 290X power levels.

Journalists with first-hand experience of it came away very impressed.

@Grooveriding
David Kanter in that video thinks it's 8GB of HBM: "16GB would be hard, but 8GB is well within specs from Hynix."

Such a significant lead would most certainly force Nvidia to launch a full GM200 with AIO cooling and a 275W TDP. June 24th can't come soon enough. :cool:
 
Feb 19, 2009
10,457
10
76
Such a significant lead would most certainly force Nvidia to launch a full GM200 with AIO cooling and a 275W TDP. June 24th can't come soon enough. :cool:

The funny thing is that, this close to launch, nobody publicly knows why it's delayed. The word from AIBs is that volume is low and that's the cause of the delay... so why is volume low?

A) Apple's new Mac Pro taking all the GPUs? They took all the good Tonga dies.
B) HBM yields low
C) Interposer stacking at GloFo is bad

Anyway, looking forward to it. A good upgrade until 14nm goes prime time (IMO 2017 for a big-die GPU on that process).
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
The funny thing is that, this close to launch, nobody publicly knows why it's delayed. The word from AIBs is that volume is low and that's the cause of the delay... so why is volume low?

A) Apple's new Mac Pro taking all the GPUs? They took all the good Tonga dies.
B) HBM yields low
C) Interposer stacking at GloFo is bad

Anyway, looking forward to it. A good upgrade until 14nm goes prime time (IMO 2017 for a big-die GPU on that process).

I think people are underestimating the magnitude of this technology transition and forming wrong opinions based on a lack of knowledge of what 2.5D stacking and HBM entail. The transition to HBM is not like the move from GDDR3 to GDDR5; it's fundamentally revolutionary tech, and AMD is the first to adopt it in high volume. The impact of HBM on GPU form factors is going to be nothing short of revolutionary, and it will be even greater for notebook form factors in 2017, when Zen-based APUs with HBM ship.

GF and Hynix have been working on 2.5D stacking and HBM, along with AMD and Amkor, for more than 5 years. This tech is not easy: other than Xilinx, who adopted it for very-low-volume, very-high-margin FPGAs in 2012, no one has been able to do 2.5D stacking with HBM in high volume and with good yields. GF has even developed a custom bump termination type for 28nm chip-package interaction (CPI) called CRTM for a specific customer, for logic die sizes > 500 sq mm and large interposers (see the video from 5:00 - 6:30). It's easy to see that GF did this for AMD and the R9 390X. The effort and experience in designing such technology should help GF/AMD in the future with 14nm CPI and support for large 14nm logic dies on large interposers. GF expects 14nm CPI to be qualified for volume production in H2 2015. So, again, do not be surprised if AMD has a smoother transition to 14nm and HBM2 than Nvidia.

https://www.youtube.com/watch?v=po29B53bpic

GF and Hynix had their respective timelines for volume production of 2.5D stacking and HBM. Both expected to start in late 2014/early 2015, and that's exactly what happened. The lead times for 2.5D-stacked GPUs are also longer than for traditional GPUs, in that shipping a 2.5D-stacked product takes more steps and more weeks of coordination with OSAT partners like Amkor.

Ramping production and building decent volume takes time, and none of this can be wished away. But make no mistake: AMD's experience in developing HBM and implementing it first is a significant competitive edge going forward. Do not forget that AMD has made the transition to HBM and 2.5D stacking on a very mature, high-yielding 28nm node. Nvidia will attempt to make the transition on a 16/14nm FinFET process, which will be immature, with yield struggles and volume issues.

You might not have forgotten how much time it took Nvidia to design a high-end GDDR5 product: the GTX 480 was 21 months behind the HD 4870. I expect Nvidia's first FinFET flagship GPU to be 15-18 months behind AMD's R9 390X. I also expect AMD's experience with HBM and 2.5D stacking to help them transition more smoothly to 16/14nm FinFET with HBM2. It also helps to keep in mind that AMD is Hynix's development partner and co-inventor of HBM, so AMD will get priority over Nvidia in HBM and even HBM2 volume allocation. Do not expect Nvidia to have a smooth ride with the transition to HBM2 and 2.5D stacking: Nvidia has literally zero experience with HBM/HBM2 and 2.5D stacking, and is attempting to make that transition on the most difficult process node transition ever (for both Intel and the foundries). Frankly, that's asking for trouble.

Nvidia could never match AMD's GDDR5 memory controller until Kepler. The first-gen Nvidia GDDR5 controller ran at 900-925 MHz, just as the first-gen AMD GDDR5 controller found in the HD 4870 ran at 900 MHz; but the HD 5870, with a 2nd-gen GDDR5 memory controller, ran at 1200 MHz. It took the GTX 680 for Nvidia to deliver a GDDR5 memory controller that clocked as well as AMD's. So anybody expecting Nvidia's first-gen HBM2 controller to be as good as or better than AMD's is in for a surprise.

In summary, I would say Nvidia's 9-month time-to-market lead with Maxwell will quickly be forgotten when we see the ramifications of AMD's 15-18 month HBM time-to-market lead. Oh, by the way, I expect HBM-based R9 380/R9 380X cards which will also double as the flagship notebook GPUs. AMD stands to gain the most from HBM in notebooks, where they have very little presence at the high end due to Nvidia's superior perf/watt for 2 consecutive generations with GK104 and GM204. This is where I expect the HBM time-to-market advantage to provide a huge competitive edge and a good opportunity for AMD to win back high-end, high-margin notebook GPU market share. :whistle:
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I think people are underestimating the magnitude of this technology transition and forming wrong opinions based on a lack of knowledge of what 2.5D stacking and HBM entail. The transition to HBM is not like the move from GDDR3 to GDDR5; it's fundamentally revolutionary tech, and AMD is the first to adopt it in high volume. The impact of HBM on GPU form factors is going to be nothing short of revolutionary, and it will be even greater for notebook form factors in 2017, when Zen-based APUs with HBM ship.

GF and Hynix have been working on 2.5D stacking and HBM, along with AMD and Amkor, for more than 5 years. This tech is not easy: other than Xilinx, who adopted it for very-low-volume, very-high-margin FPGAs in 2012, no one has been able to do 2.5D stacking with HBM in high volume and with good yields. GF has even developed a custom bump termination type for 28nm chip-package interaction (CPI) called CRTM for a specific customer, for logic die sizes > 500 sq mm and large interposers (see the video from 5:00 - 6:30). It's easy to see that GF did this for AMD and the R9 390X. The effort and experience in designing such technology should help GF/AMD in the future with 14nm CPI and support for large 14nm logic dies on large interposers. GF expects 14nm CPI to be qualified for volume production in H2 2015. So, again, do not be surprised if AMD has a smoother transition to 14nm and HBM2 than Nvidia.

https://www.youtube.com/watch?v=po29B53bpic

GF and Hynix had their respective timelines for volume production of 2.5D stacking and HBM. Both expected to start in late 2014/early 2015, and that's exactly what happened. The lead times for 2.5D-stacked GPUs are also longer than for traditional GPUs, in that shipping a 2.5D-stacked product takes more steps and more weeks of coordination with OSAT partners like Amkor.

Ramping production and building decent volume takes time, and none of this can be wished away. But make no mistake: AMD's experience in developing HBM and implementing it first is a significant competitive edge going forward. Do not forget that AMD has made the transition to HBM and 2.5D stacking on a very mature, high-yielding 28nm node. Nvidia will attempt to make the transition on a 16/14nm FinFET process, which will be immature, with yield struggles and volume issues.

You might not have forgotten how much time it took Nvidia to design a high-end GDDR5 product: the GTX 480 was 21 months behind the HD 4870. I expect Nvidia's first FinFET flagship GPU to be 15-18 months behind AMD's R9 390X. I also expect AMD's experience with HBM and 2.5D stacking to help them transition more smoothly to 16/14nm FinFET with HBM2. It also helps to keep in mind that AMD is Hynix's development partner and co-inventor of HBM, so AMD will get priority over Nvidia in HBM and even HBM2 volume allocation. Do not expect Nvidia to have a smooth ride with the transition to HBM2 and 2.5D stacking: Nvidia has literally zero experience with HBM/HBM2 and 2.5D stacking, and is attempting to make that transition on the most difficult process node transition ever (for both Intel and the foundries). Frankly, that's asking for trouble.

Nvidia could never match AMD's GDDR5 memory controller until Kepler. The first-gen Nvidia GDDR5 controller ran at 900-925 MHz, just as the first-gen AMD GDDR5 controller found in the HD 4870 ran at 900 MHz; but the HD 5870, with a 2nd-gen GDDR5 memory controller, ran at 1200 MHz. It took the GTX 680 for Nvidia to deliver a GDDR5 memory controller that clocked as well as AMD's. So anybody expecting Nvidia's first-gen HBM2 controller to be as good as or better than AMD's is in for a surprise.

In summary, I would say Nvidia's 9-month time-to-market lead with Maxwell will quickly be forgotten when we see the ramifications of AMD's 15-18 month HBM time-to-market lead. Oh, by the way, I expect HBM-based R9 380/R9 380X cards which will also double as the flagship notebook GPUs. AMD stands to gain the most from HBM in notebooks, where they have very little presence at the high end due to Nvidia's superior perf/watt for 2 consecutive generations with GK104 and GM204. This is where I expect the HBM time-to-market advantage to provide a huge competitive edge and a good opportunity for AMD to win back high-end, high-margin notebook GPU market share. :whistle:

Sounds expensive. ;)
 
Feb 19, 2009
10,457
10
76
It's not far-fetched to say this: HBM is indeed what AMD has been waiting for all this time to let them excel in notebooks. Instead of a 35W CPU + 50-100W dGPU combo, with the form factor that goes with it, we can get a high-performance gaming APU @ 100W, with HBM providing ample bandwidth to keep the GCN cores fed, in a smaller footprint. It's a revolutionary product, and IF the Zen cores are as good as claimed, I can see myself buying an AMD notebook, 100%.

I say this as someone who has purchased Intel for many years on notebooks & desktops with no hesitation.

On the dGPU front, the uarch has to take advantage of all that extra bandwidth; otherwise its only benefit would be efficiency gains, which is good in itself but not wow.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I'm expecting the HBM Fiji XT WCE to be ~15% above the Titan X at ~R9 290X power levels.

Let's not get carried away here. There is no way HBM1 benefits that much. They are still on the 28nm node and the architecture is basically Tonga. 15% faster than the Titan X is basically 70%+ faster than the R9 290X. You are setting this card up for failure imo.
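
A quick sanity check of that 70%+ figure, using the SweClockers index quoted earlier in the thread (R9 290X = 100, Titan X = 148):

[CODE]
# If Fiji lands 15% above the Titan X, where does that put it vs. the R9 290X?
# Index values from the SweClockers chart quoted earlier; the 15% uplift is
# the claim under discussion, not a measurement.
r9_290x = 100
titan_x = 148

fiji_claim = titan_x * 1.15
print(f"Claimed Fiji index: {fiji_claim:.0f}")           # ~170
print(f"vs R9 290X: +{fiji_claim / r9_290x - 1:.0%}")    # ~+70%
[/CODE]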

If this card beats the R9 290X by 65% at 4K on average, at the R9 290X's power usage, I'll be floored. 45% faster at $550 is already epic enough, considering AMD isn't using a new architecture like NV is with Maxwell. Also, if it retains the double-precision performance, even 1/4 or 1/5 of SP, that would be ridiculous: it would essentially mean the extra power usage on top of the Titan X alone would be justifiable by the compute performance for their FirePro series. If AMD managed a gaming card faster than the Titan X with full DP compute, in a die smaller than the Titan X's 601mm², that would be the most mind-blowing comeback in AMD's history.

I personally think that to get higher perf/watt, AMD needs to drop DP to something like 1/32 and leave that functionality for FirePro.
 
Feb 19, 2009
10,457
10
76
Let's not get carried away here. There is no way HBM1 benefits that much. They are still on the 28nm node and the architecture is basically Tonga. 15% faster than the Titan X is basically 70%+ faster than the R9 290X. You are setting this up for failure imo. If this card beats the R9 290X by 65% at 4K on average, at the R9 290X's power usage, I'll be floored. 45% faster at $550 is already epic enough. :p

I'm actually expecting it to match the R9 295X2 and potentially beat it at 4K. :)

It's not that mind blowing.

Hawaii vs GK110. Match and eventually win (or too close to call; whatever, minus GW titles it's a clear win). ~25% smaller die, more DP throughput. Same memory tech. How is that even possible? It shouldn't be.

Fiji vs GM200. Similar die size. GloFo 28nm, which is ~30% better than TSMC's for leakage/power. Vastly superior memory tech. You can put two and two together.
 

StereoPixel

Member
Oct 6, 2013
107
0
71
The conference call mentioned 20nm and 28nm; 20nm didn't turn out as good as advertised, so they stuck with 28nm and moved to GF. There's no mention of other processes until 14nm from GF/Samsung.

During the question and answer session I asked Dr. Caulfield about GlobalFoundries SOI plans. He replied that they are developing a 22nm process in Malta for manufacturing in Dresden. The goal is 14nm FinFET performance at 28nm costs.
https://www.semiwiki.com/forum/content/4630-asmc-2015-globalfoundries-22nm-soi-plans-more.html?
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
Handicapping AMD's Fiji GPU with David Kanter

^ This interview summarizes the entire thread in just 13 minutes.
- Why water cooling on a small PCB?
- Why HBM?
- Why 28nm and not 20nm?
- Why is Fiji an evolution of Tonga?
Actually you can overcome the short PCB, but you would need to take the Z axis into consideration for that. Just as backside cooling helps a ton in keeping temps in check on FX chips, AIBs should start thinking of ways to cool their non-reference designs from both sides of the PCB: overcome the inherently aesthetic nature of the backplate and rework it into something actually useful for cooling, a la Arctic Cooling and their newer 3-fan cooler designs.

The entire ATX expansion-slot specification should also be revised; if not, we will see a continuation of the trend of widening PCBs to increase cooling capacity (just look at how the Strix and MSI Gaming coolers stick out past the end of the output shield).

A design with the VRM and VRAM cooled from the backside and the GPU cooled from the front should help keep temps in check on short PCBs, along with better-designed cooler shrouds for higher pressure from the fans, and some tinkering with pull fans on short PCBs for better cooling in SFF cases.
If AMD provides us with options (4GB and 8GB, a water-cooled mini-PCB card and a larger air-cooled card), gamers will get to choose what they want.

Kanter brought up an interesting point: if AMD/NV want to make smaller GPUs in the future, there isn't viable space for an air-cooled solution to cool a 250W TDP flagship.

Look at Nvidia's Pascal presentation of what they intend to build with HBM2: the PCB is tiny compared to the Titan X/980 or 290X.
[Image: NVIDIA GTC 2014, Jen-Hsun Huang presenting the Pascal board]


Even if they add another inch for the power connectors and PCIe, there is no room to drop a massive open-air or blower heatsink with a fan on such a short PCB.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
Let's not get carried away here. There is no way HBM1 benefits that much. They are still on the 28nm node and the architecture is basically Tonga. 15% faster than the Titan X is basically 70%+ faster than the R9 290X. You are setting this card up for failure imo.

Just because you say there are no architectural improvements does not mean that's the truth. Frankly, the obscene amount of bandwidth, combined with the improvements to memory bandwidth efficiency from color compression, means the R9 390X would be a hugely imbalanced chip if there were no improvements to core shader performance (perf/sp). Think about it: even assuming the R9 390X has only 512 GB/s of bandwidth, that combined with Tonga's color compression, which brings a ~40% improvement in memory bandwidth efficiency, yields a >50% increase in bandwidth per SP or compute unit.

http://www.anandtech.com/show/8460/amd-radeon-r9-285-review/3

512/320 x 1.4 = 2.24 times the effective bandwidth of the R9 290X. So the R9 390X has more than twice the effective bandwidth of the R9 290X for just 1.45x the shaders (4096/2816 = 1.45). The effective bandwidth increase per SP is therefore 2.24/1.45 = 1.54x; that's ~54% more bandwidth per SP and per compute unit. What the heck is AMD going to do with such an increase in bandwidth when we can see that Hawaii is not bandwidth-bottlenecked? :rolleyes:
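
The same arithmetic as a runnable sketch; the 40% compression gain is from the AnandTech R9 285 review linked above, while the 512 GB/s and 4096-SP figures are the rumored Fiji specs, not confirmed:

[CODE]
# Effective bandwidth per SP: rumored Fiji (HBM + color compression) vs. Hawaii.
fiji_bw, hawaii_bw = 512, 320        # GB/s (the Fiji figure is rumored)
compression_gain = 1.4               # Tonga delta color compression, ~40%
fiji_sps, hawaii_sps = 4096, 2816    # shader counts (Fiji count is rumored)

effective_bw_ratio = (fiji_bw / hawaii_bw) * compression_gain   # 2.24x
sp_ratio = fiji_sps / hawaii_sps                                # ~1.45x
print(f"Effective bandwidth: {effective_bw_ratio:.2f}x")
print(f"Bandwidth per SP:    {effective_bw_ratio / sp_ratio:.2f}x")  # ~1.54x
[/CODE]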

So you see, it's not HBM1 alone which brings the performance improvement; it's also actual microarchitectural improvements. If Nvidia can improve perf/cc by 35%, do you think it's not possible for AMD to improve perf/sp by 15-20%? Do you think the improvements to tessellation, ROPs and memory bandwidth efficiency (color compression) in Tonga were made without AMD having a plan to scale shader performance and efficiency?

By the way, SweClockers mentioned that there are further microarchitectural improvements; they talked of tiled GCN. So why are you so confident that there are no architectural improvements to increase perf/sp, perf/sq mm and perf/watt?

https://translate.google.com/transl...ande-chip-monolitico-anche-per-amd&edit-text=

"The third new feature is the micro-architecture. With Fiji design GCN should fully embrace the "tiled architecture" going to review the organization for ALU thread within Compute Units in order to improve workload management."

Let me give you a history of ATI/AMD GPU architectures. These architectures have long lives and are extremely versatile and extensible. The fundamental R600/Xenos GPU architecture found in the Xbox 360 served ATI/AMD from late 2005 to late 2011; minor tweaks were made along the way, but the underlying architecture was phenomenally scalable.

http://en.wikipedia.org/wiki/Xenos_(graphics_chip)
"The Xenos is a custom graphics processing unit (GPU) designed by ATI (now taken over by AMD), used in the Xbox 360 video game console developed and produced for Microsoft. Developed under the codename "C1",[1] it is in many ways related to the R520 architecture and therefore very similar to an ATI Radeon X1800 series of PC graphics cards as far as features and performance are concerned. However, the Xenos introduced new design ideas that were later adopted in the TeraScale microarchitecture, such as the unified shader architecture. The package contains two separate dies, the GPU and an eDRAM, featuring a total of 337 million transistors."

http://en.wikipedia.org/wiki/TeraScale_(microarchitecture)

"TeraScale is the codename for a family of graphics processing unit microarchitectures developed by ATI Technologies/AMD and their second microarchitecture implementing the unified shader model following Xenos. TeraScale replaced the old fixed-pipeline microarchitectures and competed directly with Nvidia's first unified shader microarchitecture named Tesla.

TeraScale was used in HD 2000 manufactured in 80 nm and 65 nm, HD 3000 manufactured in 65 nm and 55 nm, HD 4000 manufactured in 55 nm and 40 nm, HD 5000 and HD 6000 manufactured in 40 nm. TeraScale was also used in the AMD Accelerated Processing Units code-named "Brazos", "Llano", "Trinity" and "Richland". TeraScale is even found in some of the succeeding graphics cards brands."

That Xenos architecture, which launched with the Xbox 360 in late 2005, had a life of 6 years in GPUs (8 years including APUs like Brazos, Llano, Trinity and Richland) and was replaced by GCN in late 2011. GCN is expected to have an even longer life than R600.

I remember Raja Koduri saying at the R9 290X launch that GCN is the world's most scalable GPU architecture, and I believe he was not exaggerating. GCN was designed to have a very long life, and we now know from AMD's FAD 2015 presentations that GCN will be around for 2015, 2016 and 2017. The 2016 GCN products might launch in Q3 2016, and they will have a life of at least 24 months.

Frankly, I don't like it when somebody talks about Maxwell as the best thing since sliced bread. Maxwell is impressive in every aspect: perf, perf/watt and perf/sq mm. But to say AMD cannot design a GPU which beats the Titan X comprehensively is just argumentative. At least wait for the products to prove what they are capable of before passing such high-handed, dismissive comments.

If this card beats the R9 290X by 65% at 4K on average, at the R9 290X's power usage, I'll be floored. 45% faster at $550 is already epic enough, considering AMD isn't using a new architecture like NV is with Maxwell. Also, if it retains the double-precision performance, even 1/4 or 1/5 of SP, that would be ridiculous: it would essentially mean the extra power usage on top of the Titan X alone would be justifiable by the compute performance for their FirePro series. If AMD managed a gaming card faster than the Titan X with full DP compute, in a die smaller than the Titan X's 601mm², that would be the most mind-blowing comeback in AMD's history.

I personally think that to get higher perf/watt, AMD needs to drop DP to something like 1/32 and leave that functionality for FirePro.

This is exactly what I expect is going to happen: a 550 sq mm flagship GPU with 8 GB of HBM for Radeon, using 4-Hi HBM and a dual-link interposer, 55-65% faster than the R9 290X, with a 1/8 FP64 rate for Radeon (just like the R9 290X) and a 1/2 FP64 rate for FirePro (like the FirePro W9100). The FirePro versions will launch in Q4 2016; those cards are waiting for the availability of 8-Hi HBM stacks, which will double the effective memory capacity, so a 16 GB HBM FirePro is definitely on the cards. Apple could be a launch customer, with the Mac Pro getting a Haswell-E and a next-gen FirePro with 16 GB of HBM. :cool:
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
I'm actually expecting it to match the R9 295X2 and potentially beat it at 4K. :)

It's not that mind blowing.

Hawaii vs GK110. Match and eventually win (or too close to call; whatever, minus GW titles it's a clear win). ~25% smaller die, more DP throughput. Same memory tech. How is that even possible? It shouldn't be.

Fiji vs GM200. Similar die size. GloFo 28nm, which is ~30% better than TSMC's for leakage/power. Vastly superior memory tech. You can put two and two together.

Actually, I expect it the other way around: the R9 390X will be faster at 1440p and slower at 4K. We have seen R9 295X2 scaling improve at higher resolutions, as it is more GPU-limited there.

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_Titan_X/29.html

The R9 295X2 is 20% faster at 4K but only 6% faster at 1440p vs. the Titan X. Add 10-15% to the Titan X and you get something faster at 1440p and slower at 4K vs. the R9 295X2, but with all the benefits of a single GPU. Frankly, I can see why AMD wanted to clear channel inventory: who would buy an R9 295X2 at USD 650 when you can get a single GPU at USD 700-800 with roughly 55-60% of the R9 295X2's power usage that is only slightly slower at 4K and faster at 1440p? Obviously, when overclocked, the R9 390X will beat the R9 295X2 at 4K too. Add to that a PCB which is half the size and comes with a full-cover water block covering both the GPU with its HBM (under an integrated heat spreader) and the VRMs.
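
A small sketch of that crossover, using the TechPowerUp relative numbers just cited (Titan X = 100); the 10-15% uplift for the R9 390X is my assumption, not a measurement:

[CODE]
# Does a Titan X +10-15% land above the R9 295X2 at 1440p but below it at 4K?
titan_x = 100
r9_295x2 = {"1440p": 106, "4K": 120}   # +6% / +20% vs. Titan X (TechPowerUp)

for uplift in (1.10, 1.15):            # assumed R9 390X uplift over Titan X
    fiji = titan_x * uplift
    for res, score in r9_295x2.items():
        verdict = "faster" if fiji > score else "slower"
        print(f"Fiji at +{uplift - 1:.0%} is {verdict} than the 295X2 at {res}")
[/CODE]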

I see this behaviour across the stack: R9 3xx perf/watt will be way up. Oh, and my other predictions: the R9 380 and R9 390 series sport HBM. The R9 380 will match the GTX 980 or be slightly (5%) faster at around 170-180W. The R9 380X will be 15-20% faster than the GTX 980 at a 200-210W TDP. The R9 390 will have a 250W TDP and perf on par with the Titan X, and the R9 390X a 270W TDP and perf roughly 10% faster than the Titan X. :cool:
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Oh, and my other predictions: the R9 380 will match the GTX 980 at around 170W. The R9 380X will be 15-20% faster than the GTX 980 at a 200W TDP. The R9 390 will have a 250W TDP and perf on par with the Titan X, and the R9 390X a 270W TDP and perf roughly 10% faster than the Titan X. :cool:


Based on...?
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
Based on...?

My expectation is that AMD has made further architectural improvements over Tonga to improve perf/sp, perf/watt and perf/sq mm (not counting HBM-related power/area efficiency gains), which, combined with HBM, will give it impressive perf and perf/watt. Frankly, I don't think AMD is stupid enough not to address perf/sp when the amount of bandwidth is growing disproportionately. With ~1.5x more bandwidth per shader, AMD definitely knew what they had to do to avoid an unbalanced chip: improve perf/sp and efficiency through architectural improvements to the core shaders/SPs and compute units.

I also want to mention that the ChipHell leaks in December were hinting at something similar.

http://www.overclock.net/t/1530716/...-fiji-380x-and-bermuda-390x-benchmarks-leaked
 

coercitiv

Diamond Member
Jan 24, 2014
7,502
17,935
136
Kanter brought up an interesting point: if AMD/NV want to make smaller GPUs in the future, there isn't viable space for an air-cooled solution to cool a 250W TDP flagship.
Out of the entire video, that is the only thing that came across as plain wrong: whether cooling with air or water, the area of contact between the hot surface(s) and the cooling apparatus is the same.

The only restriction when cooling with air arises from the heat-sink size requirement, and the only reason one would want that heat sink to be as small as the new, smaller PCB would be to use it in small cases. Otherwise, one can just as easily make the heat sink bigger relative to the PCB.
 