Fury X voltage adjustment now available

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
At what resolution are we talking about here?

1080p? I would think at 1440p and above the gap should be close to double?

http://www.techpowerup.com/reviews/ASUS/R9_Fury_Strix/31.html

1440p
R9 280X (1 GHz) - 60
Fury X (1.05 GHz) - 107

If you look at clock-for-clock performance, the gap is even smaller. This is the problem with AMD: they have made very few architectural improvements to GCN, especially given that 28nm will be a five-year-old node by late 2016, when the first next-gen FinFET flagship GPUs launch. That's why they are in such a pathetic situation in terms of market share. Once Maxwell launched, AMD went from 35% to 24% in a single quarter, and they have not been able to win back that lost share due to an uncompetitive and unattractive product stack. Nvidia, meanwhile, is executing flawlessly, with their AIB partners unleashing the full power of the Maxwell architecture. Custom GTX 980 Ti cards are 15% faster than the reference 980 Ti and gain another 10-15% when overclocked to the max, and that's with only the modest +87 mV voltage increase available. I shudder to think what a water-cooled, fully voltage-unlocked full GM200 could do. The GPU market now definitely resembles the CPU market: a dominant Nvidia with a near-monopoly tightening its stranglehold, and AMD fading away. :thumbsdown:
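To put numbers on the clock-for-clock point, here's the back-of-the-envelope arithmetic (a rough sketch of my own in Python, just normalizing the TechPowerUp scores above by core clock):

# Normalize the 1440p scores above by core clock to compare per-clock throughput.
r9_280x_score, r9_280x_mhz = 60, 1000
fury_x_score, fury_x_mhz = 107, 1050

raw_gap = fury_x_score / r9_280x_score
per_clock_gap = (fury_x_score / fury_x_mhz) / (r9_280x_score / r9_280x_mhz)

print(f"raw gap:         {raw_gap:.2f}x")        # ~1.78x
print(f"clock for clock: {per_clock_gap:.2f}x")  # ~1.70x

So the raw ~1.78x gap shrinks to ~1.70x once you strip out the Fury X's 5% clock advantage.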
 
Mar 10, 2006
11,715
2,012
126
http://www.techpowerup.com/reviews/ASUS/R9_Fury_Strix/31.html

1440p
R9 280X (1 GHz) - 60
Fury X (1.05 GHz) - 107

If you look at clock-for-clock performance, the gap is even smaller. This is the problem with AMD: they have made very few architectural improvements to GCN, especially given that 28nm will be a five-year-old node by late 2016, when the first next-gen FinFET flagship GPUs launch. That's why they are in such a pathetic situation in terms of market share. Once Maxwell launched, AMD went from 35% to 24% in a single quarter, and they have not been able to win back that lost share due to an uncompetitive and unattractive product stack. Nvidia, meanwhile, is executing flawlessly, with their AIB partners unleashing the full power of the Maxwell architecture. Custom GTX 980 Ti cards are 15% faster than the reference 980 Ti and gain another 10-15% when overclocked to the max, and that's with only the modest +87 mV voltage increase available. I shudder to think what a water-cooled, fully voltage-unlocked full GM200 could do. The GPU market now definitely resembles the CPU market: a dominant Nvidia with a near-monopoly tightening its stranglehold, and AMD fading away. :thumbsdown:

NVIDIA spends more in R&D annually than AMD does now and it focuses on a much narrower set of products, so it's really no surprise that NVIDIA is on the ball here while AMD is fumbling.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
NVIDIA spends more in R&D annually than AMD does now and it focuses on a much narrower set of products, so it's really no surprise that NVIDIA is on the ball here while AMD is fumbling.

This was not the case back in early 2012. But the wretched Bulldozer, which launched in late 2011, decimated AMD's dwindling server market share and dragged their APUs down with it. This has been a case of a failed CPU architecture destroying a company that was already struggling against a monopoly competitor executing flawlessly. From Nehalem to Haswell, Intel just pummelled AMD, and today AMD is insignificant in servers, at 1.5% market share. AMD's CPU failures have now destroyed their GPU division too. The continued market share losses in CPUs and GPUs, falling revenues, and mounting losses led to deep cuts in R&D, and today we see an AMD without enough cash to invest in R&D and compete against Intel and Nvidia.
 

DownTheSky

Senior member
Apr 7, 2013
800
167
116
Wow. Kinda like a homemade Fury Nano. Makes me wonder what the clocks of the dual-GPU card will be. That card will sit at the top of most benchmarks for a while.

If they made a 250-300 W dual-chip card, it would sell like hotcakes.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
http://www.techpowerup.com/reviews/ASUS/R9_Fury_Strix/31.html

1440p
R9 280X (1 GHz) - 60
Fury X (1.05 GHz) - 107

If you look at clock-for-clock performance, the gap is even smaller. This is the problem with AMD: they have made very few architectural improvements to GCN, especially given that 28nm will be a five-year-old node by late 2016, when the first next-gen FinFET flagship GPUs launch. That's why they are in such a pathetic situation in terms of market share. Once Maxwell launched, AMD went from 35% to 24% in a single quarter, and they have not been able to win back that lost share due to an uncompetitive and unattractive product stack. Nvidia, meanwhile, is executing flawlessly, with their AIB partners unleashing the full power of the Maxwell architecture. Custom GTX 980 Ti cards are 15% faster than the reference 980 Ti and gain another 10-15% when overclocked to the max, and that's with only the modest +87 mV voltage increase available. I shudder to think what a water-cooled, fully voltage-unlocked full GM200 could do. The GPU market now definitely resembles the CPU market: a dominant Nvidia with a near-monopoly tightening its stranglehold, and AMD fading away. :thumbsdown:

GCN has been getting minor updates for a while now. GCN 1.2 added better FP16 support, which is said to lower power consumption without sacrificing IQ; mobile GPUs have used FP16 for a while. This is one of the many lower-level changes AMD has made to GCN. If you haven't noticed, GCN has two benefits: it's dual-purpose (graphics and compute), and it's a stable target. Devs have to target Fermi, Kepler, and Maxwell on NV's side, and Gen 7.5 and Gen X on Intel's, while they just need to target GCN for AMD. It also helps that they have intimate knowledge of GCN's dos and don'ts from console dev.

tl;dr: if AMD were to make major changes to GCN, they would probably be in no better place than they are now.
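The FP16 power argument is easy to illustrate (a trivial NumPy sketch of my own; nothing GCN-specific, and the buffer size is a made-up example):

import numpy as np

# A hypothetical 1080p RGBA buffer: same element count, half the bytes at FP16.
elements = 1920 * 1080 * 4
fp32_buf = np.zeros(elements, dtype=np.float32)
fp16_buf = np.zeros(elements, dtype=np.float16)
print(f"{fp32_buf.nbytes / 2**20:.1f} MiB (FP32) vs {fp16_buf.nbytes / 2**20:.1f} MiB (FP16)")
# ~31.6 MiB vs ~15.8 MiB: half the memory traffic, which is where the power saving comes from.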
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
GCN has been getting minor updates for a while now. GCN 1.2 added better FP16 support, which is said to lower power consumption without sacrificing IQ; mobile GPUs have used FP16 for a while. This is one of the many lower-level changes AMD has made to GCN. If you haven't noticed, GCN has two benefits: it's dual-purpose (graphics and compute), and it's a stable target. Devs have to target Fermi, Kepler, and Maxwell on NV's side, and Gen 7.5 and Gen X on Intel's, while they just need to target GCN for AMD. It also helps that they have intimate knowledge of GCN's dos and don'ts from console dev.

tl;dr: if AMD were to make major changes to GCN, they would probably be in no better place than they are now.

That part is very true; I said it in another thread, and I agree with Raghu here. Also see the 4870 thread, which is sort of echoing this. I know people want to focus on "Kepler is not optimized," but AMD literally dropped its old uarch when it moved on to GCN. There were no VLIW4/VLIW5 optimizations, and AMD's own techs destroyed performance on them.

Riding the success of their console deals, AMD is basically tied to GCN for the duration of this console generation. And consoles aren't known to push new technologies. If it's taking AMD 10% more transistors to still lose to Nvidia on the PC side, the future looks bleak. And the same goes if switching to a new uarch in the next year or so essentially decimates their previous users (since we like to extrapolate from one occurrence around here).

AMD is basically stuck. Like they were with that failed CPU uarch.
 

Makaveli

Diamond Member
Feb 8, 2002
4,990
1,579
136
AMD is basically stuck. Like they were with that failed CPU uarch.

I don't know about this. GCN has been able to compete in most of its versions; not always in the lead position, but I would say in the same ballpark.

I haven't been able to say that about AMD's CPUs since Socket 939 :p
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
That part is very true; I said it in another thread, and I agree with Raghu here. Also see the 4870 thread, which is sort of echoing this. I know people want to focus on "Kepler is not optimized," but AMD literally dropped its old uarch when it moved on to GCN. There were no VLIW4/VLIW5 optimizations, and AMD's own techs destroyed performance on them.

Riding the success of their console deals, AMD is basically tied to GCN for the duration of this console generation. And consoles aren't known to push new technologies. If it's taking AMD 10% more transistors to still lose to Nvidia on the PC side, the future looks bleak. And the same goes if switching to a new uarch in the next year or so essentially decimates their previous users (since we like to extrapolate from one occurrence around here).

AMD is basically stuck. Like they were with that failed CPU uarch.


All fair points, but I disagree about the consoles-and-new-tech bit. It is the consoles that push the boundaries, not PC gaming. It seems counterintuitive, but the low-level APIs nurture this kind of innovation.
Also, your comment about AMD's last-gen uarch is inaccurate simply because AMD used TeraScale for many generations before moving to GCN.

Another note: GCN isn't only a gaming architecture. Compute is a strong suit for GCN, so claiming that AMD needs 10% more transistors to compete is very skewed. In a historical context AMD has mostly used smaller dice than NV; Hawaii and now Fiji are the ones to change this, while Nvidia has reduced the compute performance of their cards.

Also, I know you like to be hyperbolic, but "failed" is a bit strong, don't you think?
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
All fair points, but I disagree about the consoles-and-new-tech bit. It is the consoles that push the boundaries, not PC gaming. It seems counterintuitive, but the low-level APIs nurture this kind of innovation.

Feel free to cite any innovations that were started on consoles. Even bump mapping, which was a prominent feature of Halo, was part of the original Mac version.

Also, your comment about AMD's last-gen uarch is inaccurate simply because AMD used TeraScale for many generations before moving to GCN.

That is what I mean. TeraScale was their uarch from the HD 2000 series to the HD 6000 series, spanning almost six years, and it was dropped completely when they moved on to GCN. Here we are with GCN about to hit its fourth year. Do you think GCN will last the remainder of the current console life span - possibly another four years?

Another note: GCN isn't only a gaming architecture. Compute is a strong suit for GCN, so claiming that AMD needs 10% more transistors to compete is very skewed. In a historical context AMD has mostly used smaller dice than NV; Hawaii and now Fiji are the ones to change this.

Look at Fiji's size and transistor count, and then compare it to GM200. Yet it only wins in specific scenarios, and those victories are slim. Of course CFX runs away with the show, but that isn't the everyday user (actually, I wonder how Fury Nano CFX will be? Anyway).


Either way, AMD's slope is even steeper than before, and if you haven't been following their quarterlies, things don't seem bright. Will they do another round of layoffs? Probably. Will that help their R&D? Absolutely not.
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Feel free to cite any innovations that were started on consoles. Even bump mapping, which was a prominent feature of Halo, was part of the original Mac version.



That is what I mean. TeraScale was their uarch from the HD 2000 series to the HD 6000 series, spanning almost six years, and it was dropped completely when they moved on to GCN. Here we are with GCN about to hit its fourth year. Do you think GCN will last the remainder of the current console life span - possibly another four years?



Look at Fiji's size and transistor count, and then compare it to GM200. Yet it only wins in specific scenarios, and those victories are slim. Of course CFX runs away with the show, but that isn't the everyday user (actually, I wonder how Fury Nano CFX will be? Anyway).


Either way, AMD's slope is even steeper than before, and if you haven't been following their quarterlies, things don't seem bright. Will they do another round of layoffs? Probably. Will that help their R&D? Absolutely not.


Sorry, I don't see what bringing up R&D or quarterlies has to do with an architectural discussion.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Sorry, I don't see what bringing up R&D or quarterlies has to do with an architectural discussion.

Someone has to feed the engineers. Either way, at this point it's off topic, so I'm dropping it :) Feel free to focus on my other points.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Lol Railven, I've been talking about Fury Nano CF for a while now. I'm curious as well.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Lol Railven, I've been talking about Fury Nano CF for a while now. I'm curious as well.

If the price is right, with the current CFX scaling it might be a killer combination.

Of course, mGPU shot my dog, so I'm very sour on it (both companies). I'd still read an in-depth article on it though :D
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
That is what I mean. TeraScale was their uarch from the HD 2000 series to the HD 6000 series, spanning almost six years, and it was dropped completely when they moved on to GCN. Here we are with GCN about to hit its fourth year. Do you think GCN will last the remainder of the current console life span - possibly another four years?
Yes. They just need to optimize and tweak GCN to what the developers need. I don't believe they have to redesign it.

Look at Fiji's size and transistor count, and then compare it to GM200. Yet it only wins in specific scenarios, and those victories are slim. Of course CFX runs away with the show, but that isn't the everyday user (actually, I wonder how Fury Nano CFX will be? Anyway).
Not so sure Fury is for everyday users, much less CFX. Yes, Fury doesn't outperform GM200 in gaming, but that could be attributed to the massive gaming optimizations Nvidia did for that uarch.

But we are going way off-topic.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
If the price is right, with the current CFX scaling it might be a killer combination.

Of course, mGPU shot my dog, so I'm very sour on it (both companies). I'd still read an in-depth article on it though :D

How can the Nano possibly be priced "right" and not kill vanilla Fury and Fury X sales, given AMD's performance estimates?
I think the Fury Nano will carry a price premium because of its binning and niche status: either the same price as the vanilla Fury or perhaps even $50 more.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
Another note: GCN isn't only a gaming architecture. Compute is a strong suit for GCN, so claiming that AMD needs 10% more transistors to compete is very skewed. In a historical context AMD has mostly used smaller dice than NV; Hawaii and now Fiji are the ones to change this, while Nvidia has reduced the compute performance of their cards.

Fiji reduced/dropped compute performance as well. It's very much a competitor to GM200 across the board in performance (nearly splitting victories vs. GM200 here: http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/24), and it loses most of the time in all gaming situations.

And again, as railven said, it needs 10% more transistors and way more bandwidth / more expensive memory to do so.
 
Feb 19, 2009
10,457
10
76
All it needs is a bunch of favorable games in the benchmark suites, and the result is that it beats GM200.

This half of the year belongs to NV, with titles like Dying Light, Project CARS, Witcher 3, Batman: AK, etc. Do they have any more GW titles due soon? Cos all the next big AAA titles are AMD GE. That alone may swing the results back in favor of the Fury X.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
If the price is right, with the current CFX scaling it might be a killer combination.

Of course, mGPU shot my dog, so I'm very sour on it (both companies). I'd still read an in-depth article on it though :D

Ya, but you play a wide range of games. I play games where it works and microstutter is fixed (and I doubt I'll notice anyway; I'm closer to console gaming at this point, but pixels and downsampling matter on 80-inch displays).

I'd read an article, although at this point I'm 97% sure I'm getting a puppy and ignoring this GPU gen. The 290X held up very well as a card. Everything after it has been lackluster so far. The next round of chips, though, I think will be different. HBM from both sides will be interesting.
 
Feb 19, 2009
10,457
10
76
The 290X held up very well as a card. Everything after it has been lackluster so far. The next round of chips, though, I think will be different. HBM from both sides will be interesting.

Yup, that's why when I looked to upgrade my 7950 rig, only one choice was obvious: an R9 290X for ~half the price of a 980 (not joking, a brand-new ASUS custom R9 290X is ~$430 vs $800 for 980s! Gigabyte R9 290Xs are ~$400), cheaper than a 970. But they're going out of stock in most shops here, so people have to fork out more $ for a 390/970 etc.

The Fury X is $999 here, with most 980 Ti models at $1,099. The performance leap from a $430 cool-and-quiet R9 290X is in no way worth it.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
All it needs is a bunch of favorable games in the benchmark suites, and the result is that it beats GM200.

This half of the year belongs to NV, with titles like Dying Light, Project CARS, Witcher 3, Batman: AK, etc. Do they have any more GW titles due soon? Cos all the next big AAA titles are AMD GE. That alone may swing the results back in favor of the Fury X.

It could, but I doubt it. Nearly all reviews show the GTX 980 Ti (cut-down GM200) being faster and more efficient than the Fury X at all resolutions, regardless of the testing suite. And most reviews that use GameWorks titles disable the Nvidia-specific features. Even then, some GameWorks titles run better on AMD hardware (Far Cry 4, Batman: AO), while some GE titles run better on Nvidia hardware (BF4, DA:I). It's a wash.

Should Nvidia decide to rebadge Maxwell, if Pascal isn't coming until the end of 2016, then we'd get a full-fat GM200 with higher clocks than the Titan X for a cheaper price, extending Nvidia's performance lead. But the Fury X never beats GM200 when it comes to outright performance and overclocking. GM200 is the performance king on 28nm, while GM204 is the perf/W king.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Still, the only real problem I see with AMD's architecture is power consumption. When the 970 came out and I was anxious to get it, it seemed to beat the 290X; I thought it was a no-brainer. Now it's competing with the 390 and loses most of the time. My 290X is faster than my 970 in my testing, with no worries about VRAM issues or missing asynchronous shader support.

There's also the fact that GCN 1.1 was/is smaller, faster, and more feature-packed than Kepler. AMD's market share will vary with the release periods, and since they had nothing major last year while Nvidia had a major PR storm, they went down for those quarters.

The real problem is not their GPUs, it's their other markets. That's what has lost them the most revenue, I think. Luckily for them, they have a strong architecture in GCN, but they have to become competitive on CPUs again to rebuild the company. As far as GPUs go: solid, except for power consumption below the highest end.

The Nano will be interesting. An underclocked Nano OC'd back up (maybe under water) could give people a better impression. AMD might need to be more careful about saying certain things. Saying "overclocker's dream" gives people different ideas. What kind of OC should be expected? Typical of GCN? More? Since it's GCN, not massively changed, AND on the same process, I'd just guess a typical GCN OC. 1300 MHz would be my max expectation and might be around what people hoped for.
 
Feb 19, 2009
10,457
10
76
Saying "overclocker's dream" gives people different ideas. What kind of OC should be expected? Typical of GCN? More? Since it's GCN, not massively changed, AND on the same process, I'd just guess a typical GCN OC. 1300 MHz would be my max expectation and might be around what people hoped for.

GCN peaks out at 1.25 GHz; it's very difficult/rare to get them to 1.3 GHz. HWBOT agrees: the average is ~1160 MHz.
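For reference, here's what those clocks work out to as headroom over the Fury X's 1050 MHz stock clock quoted earlier (a quick sketch in Python; the comparison points are just the numbers from this thread):

# OC headroom relative to the Fury X's 1050 MHz stock clock.
stock_mhz = 1050
for target_mhz in (1160, 1250, 1300):  # HWBOT average, GCN ceiling, hoped-for OC
    print(f"{target_mhz} MHz = +{(target_mhz / stock_mhz - 1) * 100:.1f}% over stock")
# 1160 MHz = +10.5%, 1250 MHz = +19.0%, 1300 MHz = +23.8%

So the HWBOT average is roughly +10% over stock, well short of the ~+24% a 1300 MHz "overclocker's dream" would imply.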
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Still, the only real problem I see with AMD's architecture is power consumption. When the 970 came out and I was anxious to get it, it seemed to beat the 290X; I thought it was a no-brainer. Now it's competing with the 390 and loses most of the time. My 290X is faster than my 970 in my testing, with no worries about VRAM issues or missing asynchronous shader support.

There's also the fact that GCN 1.1 was/is smaller, faster, and more feature-packed than Kepler. AMD's market share will vary with the release periods, and since they had nothing major last year while Nvidia had a major PR storm, they went down for those quarters.

The real problem is not their GPUs, it's their other markets. That's what has lost them the most revenue, I think. Luckily for them, they have a strong architecture in GCN, but they have to become competitive on CPUs again to rebuild the company. As far as GPUs go: solid, except for power consumption below the highest end.

The Nano will be interesting. An underclocked Nano OC'd back up (maybe under water) could give people a better impression. AMD might need to be more careful about saying certain things. Saying "overclocker's dream" gives people different ideas. What kind of OC should be expected? Typical of GCN? More? Since it's GCN, not massively changed, AND on the same process, I'd just guess a typical GCN OC. 1300 MHz would be my max expectation and might be around what people hoped for.

The 390 would be in a much better spot if AMD had enabled 4K VSR, updated it to HDMI 2.0, and skipped that unnecessary extra VRAM (shave another $30-50 off the price of the 390 by dropping the extra 4 GB and you have a card that decimates the NV lineup).

Just getting rid of the extra VRAM and dropping the price a little would make the card amazing. HDMI 2.0 support + 4K VSR as well? The 390 would be AMAZING. AMD went with pushing an extra 4 GB into the card over other improvements it actually could have used, which is disappointing.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
Still, the only real problem I see with AMD's architecture is power consumption. When the 970 came out and I was anxious to get it, it seemed to beat the 290X; I thought it was a no-brainer. Now it's competing with the 390 and loses most of the time. My 290X is faster than my 970 in my testing, with no worries about VRAM issues or missing asynchronous shader support.

From an end-user point of view, you are exactly right. If the user doesn't care about power consumption and has the necessary power supply ready to go, then the GTX 970 loses some of its appeal. The 970's main competitor is probably the 390 at this point; the 390 may be ever so slightly faster, but the cheapest 970 is $30 less expensive than the cheapest 390, and the 970 also comes with a AAA game right now (according to newegg.com). So from a value perspective, the GTX 970 is still looking fine. Even the GTX 980 is starting to look decent as a value proposition, with the ASUS Poseidon at $460 after MIR + free game vs. the cheapest R9 390X at $430.
 