AMD Fury X Postmortem: What Went Wrong?


Z15CAM

Platinum Member
Nov 20, 2010
I believe the AMD R9 Fury X is a step forward, but not enough to defeat my CF XSPC Razor 290Xs (roughly the equivalent of an R9 295X2) or a single NVIDIA 980 Ti at a given resolution, and then you have to consider the display.

Aside from the power consumption, I will wait for AMD's second-gen Fiji Fury X, by which time I will need a new processor and motherboard.

In the meantime I will stick with a cheap $300 Korean QNIX 2510 EVO II 27" DVI-D 1440p IPS display running between 60 and 120 Hz, and wait for 4K to develop with CF water-cooled R9 290Xs.
 

looncraz

Senior member
Sep 12, 2011
Here we go again with the history rewrites: the 290X wasn't faster than the currently available 780s when it came out, PERIOD. In fact, I clearly remember being involved, along with BallaTheFeared, in a bit of a bench-off against members who had 290Xs, in which most of the titles selected were said to be AMD-favored titles... it didn't go well for the 290s.

http://www.anandtech.com/bench/product/1059?vs=1036

I've used and tested both at length; even the stock 290X is quite a bit faster than the stock 780 (~15-25%) when it isn't throttling (which it shouldn't do much of in Uber mode). Even the regular R9 290 beat the 780 in many games.

It traded blows with the Titan:

http://www.anandtech.com/bench/product/1059?vs=1060

You could absolutely overclock the 780 by a good 20~25%, but when you also overclocked the 290X (by its ~15% max) the two were effectively tied, with the 780 winning at 1080p by a small margin and the 290X slowly inching ahead at higher resolutions.

--

Okay, so I made a nice spreadsheet with all of the results. The 290X Uber was 17.47% faster on average, stock vs. stock.

With both overclocked by the same percentage, that spread wouldn't change, of course. But the R9 290X can pretty much always hit a 15% overclock on the GPU and 15~20% on the RAM, which results in about a 15% total increase in performance.

The 780 usually does around 20~25% on the GPU and ~15~18% on the RAM. This usually adds about 25% more performance.

This leaves the 780 OC 8% slower, overall, than the 290X OC.

Incidentally, that is about the same as the total performance difference between the R9 290 and the R9 290X, and the 290 can clock a bit better (percentage-wise) since it starts at slightly lower clocks. So the R9 290 is about an even match for the 780, even when both are overclocked.

http://www.anandtech.com/bench/product/1068?vs=1036
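For anyone who wants to check my arithmetic, here's the same calculation as a quick script; the inputs are just the round numbers from above, not new measurements:

```python
# Rough check of the stock and overclocked spreads quoted above (poster's numbers, not new data).
stock_290x_lead = 0.1747   # 290X Uber ~17.47% faster than the 780, stock vs. stock
oc_gain_290x = 0.15        # ~15% total gain from overclocking the 290X (core + RAM)
oc_gain_780 = 0.25         # ~25% total gain from overclocking the 780 (core + RAM)

perf_780_stock = 1.0
perf_290x_stock = perf_780_stock * (1 + stock_290x_lead)

perf_780_oc = perf_780_stock * (1 + oc_gain_780)      # 1.25
perf_290x_oc = perf_290x_stock * (1 + oc_gain_290x)   # ~1.35

deficit = 1 - perf_780_oc / perf_290x_oc
print(f"780 OC is about {deficit:.1%} slower than 290X OC")  # ~7.5%, i.e. roughly the 8% figure
```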
 

RaulF

Senior member
Jan 18, 2008
Here's something they did wrong.

Paper launch! What did they send out, 1,000 cards?
 

Z15CAM

Platinum Member
Nov 20, 2010
I believe AMD has hit the nail on the head with the Fury X, before APUs take over from discrete PC video cards. HBM, no matter the die size, combined with H2O cooling, has entered the discrete GPU scene. HBM with the proper drivers just may save our desktops and be a gateway into true 4K gaming.

I'm just about ready for full 4K with CF XSPC Razor R9 290Xs, but I choose to keep my $300 Korean QX2510 1440p IPS display running between 96 and 120 Hz until a true 4K solution evolves.

Believe me, I'm totally impressed with that 27" 1440p QNIX IPS display driven by CF water-cooled 290Xs.
 

EightySix Four

Diamond Member
Jul 17, 2004
I don't feel like digging through all the reviews; is there a way to do a DX12 draw-call comparison yet? Could the DX11 drivers just be topped out, with AMD's engineering team waiting for DX12 to bail them out?
 

looncraz

Senior member
Sep 12, 2011
It's nothing to do with HBM; that excuse needs to go away by now. However, it is GCN 1.2, and the 285 (aka 380) didn't get much love on that front, so how much love there is to give remains to be seen. But they had their time to work on it; the 285 wasn't released yesterday.

It could very easily be HBM. Its access granularity is eight times higher, meaning alignment and access patterns have different requirements and performance characteristics.

If a game is only showing 3,800 MB used out of the 4,096 MB of RAM, it means a great deal of the memory is likely misaligned, which can affect performance and capacity drastically (and somewhat variably). That is most likely what we are seeing in GTA V on the Fury X: memory misalignment resulting in multiple reads where one would otherwise do.

Even if it had 8 GB it would behave the same way. Drivers can (and undoubtedly will, eventually) fix this. Sadly, in the worst case, it may require a game patch. I've noticed that Mantle doesn't seem to work with the Fury X, which could be a related issue as well: the closer your software gets to the hardware, the more hardware changes affect you.
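To illustrate the alignment point with a toy model (the granularity figure below is purely hypothetical, not Fiji's actual memory parameter): when the minimum access size is large, a small read that straddles an access boundary costs two accesses instead of one, wasting bandwidth.

```python
def accesses_touched(offset_bytes: int, read_bytes: int, granularity: int) -> int:
    """How many minimum-granularity memory accesses a read starting at offset_bytes touches."""
    first = offset_bytes // granularity
    last = (offset_bytes + read_bytes - 1) // granularity
    return last - first + 1

GRANULARITY = 256  # hypothetical wide access size in bytes, for illustration only

# A 64-byte read aligned to the access boundary touches a single access...
print(accesses_touched(0, 64, GRANULARITY))                   # 1
# ...but the same read straddling a boundary touches two, doubling the traffic.
print(accesses_touched(GRANULARITY - 32, 64, GRANULARITY))    # 2
```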
 

looncraz

Senior member
Sep 12, 2011
I do [like it] for what it is; it's absolutely gorgeous. But I have no idea which cards are considered the first batch of stock and which are supposedly fixed, and that leaves me annoyed.
My current stance is that I'll return it now, wait a couple of weeks for prices to settle and the rest of the lineup to launch, and then reevaluate. I'll likely try water again, and so far, between a €700 980 Hybrid, a €700 Fury X and an €820 980 Ti Hybrid, there's little to debate about value, IMHO.

You should consider running the card solid for a few days. My pump (a regular water-cooling pump, though) made an annoying high-pitched sound for the first few days. I kept it running because of how much of a PITA it would be to swap back to my old pump, and the noise just went away. Now it's dead silent (and, yes, it's still running :biggrin:).
 

looncraz

Senior member
Sep 12, 2011
Not worth $100 more? Absolute performance for the Fury X is worse and overclocking is way worse, especially at 1440p and 1080p. Aftermarket 980 Tis don't perform as well in the temp and noise department, but that is literally splitting molecules at that point.

High-end customers want the best performance without throwing all logic out the door (ahem, Titan). The reason people are saying the Fury X needs to be $550 is because an OC'd Fury X vs. an OC'd Titan X gets absolutely destroyed at 1440p. We're talking 20-25%. And the perf/$ ratio of an OC'd $670 aftermarket 980 Ti vs. an OC'd $550 Fury X is nearly the same. At release, the 780 Ti was able to price itself $150 more than the R9 290X for only 15% more performance when both were OC'd. The price delta between a $550 Fury X and a $650-670 aftermarket 980 Ti would be about the same, but the performance difference will be larger.

The standard 980 Ti is definitely not worth $100 more than the Fury X, unless your absolute only concern for a video card is an unnoticeable amount of extra performance for that $100, while also running hotter and louder.

We don't know the overclocking potential of the Fury X yet, either. Voltage control isn't available yet, and the 980 Ti automatically adjusts its voltage while overclocking, so don't go spouting off nonsense about what you can do at stock voltage with NVIDIA. There is very little headroom at stock voltage; in fact, that's one of the power-saving features of their cards. Modern Intel CPUs do something similar, though not as aggressively.
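For what it's worth, taking the numbers quoted above at face value (a hypothetical $550 Fury X versus a $670 aftermarket 980 Ti that ends up ~22% faster when both are overclocked), the perf/$ really does come out nearly identical:

```python
# Perf-per-dollar sanity check using the round numbers quoted above (not measurements).
fury_price, fury_perf = 550, 1.00   # normalize the OC'd Fury X to 1.0
ti_price, ti_perf = 670, 1.22       # assumed ~20-25% faster OC'd aftermarket 980 Ti

print(f"Fury X: {fury_perf / fury_price * 1000:.2f} perf units per $1000")  # ~1.82
print(f"980 Ti: {ti_perf / ti_price * 1000:.2f} perf units per $1000")      # ~1.82
```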
 

YBS1

Golden Member
May 14, 2000
http://www.anandtech.com/bench/product/1059?vs=1036

I've used and tested both at length; even the stock 290X is quite a bit faster than the stock 780 (~15-25%) when it isn't throttling (which it shouldn't do much of in Uber mode). Even the regular R9 290 beat the 780 in many games.

It traded blows with the Titan:

http://www.anandtech.com/bench/product/1059?vs=1060

You could absolutely overclock the 780 by a good 20~25%, but when you also overclocked the 290X (by its ~15% max) the two were effectively tied, with the 780 winning at 1080p by a small margin and the 290X slowly inching ahead at higher resolutions.

--

Okay, so I made a nice spreadsheet with all of the results. The 290X Uber was 17.47% faster on average, stock vs. stock.

With both overclocked by the same percentage, that spread wouldn't change, of course. But the R9 290X can pretty much always hit a 15% overclock on the GPU and 15~20% on the RAM, which results in about a 15% total increase in performance.

The 780 usually does around 20~25% on the GPU and ~15~18% on the RAM. This usually adds about 25% more performance.

This leaves the 780 OC 8% slower, overall, than the 290X OC.

Incidentally, that is about the same as the total performance difference between the R9 290 and the R9 290X, and the 290 can clock a bit better (percentage-wise) since it starts at slightly lower clocks. So the R9 290 is about an even match for the 780, even when both are overclocked.

http://www.anandtech.com/bench/product/1068?vs=1036

It seems you only read what you wanted to read; I said currently available 780s at the time of the 290-series launch. As you stated in your post, the 290X was pretty much neck and neck with the original Titan. The problem with that is that some of the custom 780s (the Classified, for one) were already outpacing the Titan (some models substantially when overclocked). Now, is this a completely fair comparison? No... That is what happens when you get beaten to market by several months, though: vanilla versus custom. It's happening again right now.
 

Z15CAM

Platinum Member
Nov 20, 2010
Kinda makes me wonder when I hear members debate air-cooled vs. water-cooled GPUs. Once you go water-cooled you will never go back. It's a very similar argument to why users of overclockable 1440p IPS displays will never go back to a 1080p TN display.
 

boozzer

Golden Member
Jan 12, 2012
No, not really. A 750Li certainly is more "luxury" than a 135i, but at the end of the day they're both more luxurious than the red line.

Folks need to get some perspective here. At the end of the day, the Titan remains the "best of the best" regardless of how slim that margin may be. Complaints that focus on a limited few being willing to pay for that smack of jealousy rather than frugality.
you need to understand what I posted :biggrin:
 

MagickMan

Diamond Member
Aug 11, 2008
As someone else mentioned earlier, I don't believe anything "went wrong"; it's just more of a DX12 GPU that's held back by immature drivers. Time and development will catch up to it eventually. You could say it was released before its time.
 

at80eighty

Senior member
Jun 28, 2004
As someone else mentioned earlier, I don't believe anything "went wrong"; it's just more of a DX12 GPU that's held back by immature drivers. Time and development will catch up to it eventually. You could say it was released before its time.

After thinking it over more, I'm arriving at the same conclusion: drivers, and a card not built for older APIs. Their marketing is slipping again, IMO, as they should be talking to sites and helping potential users understand what value proposition they are going for. I do understand their budget constraints in that regard, but if the cards turn out to shine in the future, they could at least control the narrative in advance.
 

RampantAndroid

Diamond Member
Jun 27, 2004
After thinking it over more, I'm arriving at the same conclusion: drivers, and a card not built for older APIs. Their marketing is slipping again, IMO, as they should be talking to sites and helping potential users understand what value proposition they are going for. I do understand their budget constraints in that regard, but if the cards turn out to shine in the future, they could at least control the narrative in advance.

I don't get this. DX12 changes things, but this GPU finds its foundation in an older DX11 GPU, right? I might believe some driver issues are at play, but I doubt that DX12 (when compared to a 980 Ti in DX12, so apples to apples) will close the gap all by itself.
 

Gikaseixas

Platinum Member
Jul 1, 2004
As someone else mentioned earlier, I don't believe anything "went wrong"; it's just more of a DX12 GPU that's held back by immature drivers. Time and development will catch up to it eventually. You could say it was released before its time.

Only time will tell on this one; it could go just like that, or not.
AMD seems to be waiting impatiently for Windows 10, as it will incorporate a new API.
 

ShintaiDK

Lifer
Apr 22, 2012
So DX12 will be the "saviour" of Fury?

Even if, just for fun, we imagine this to somehow be true, it's not really working out for the Fury, is it?

Stop bargaining for excuses. The result is what the result is.

And in late 2016/early 2017 everyone will be busy with their new 14/16nm flagships with something like 16GB VRAM and the Fury will be long forgotten.
 

vissarix

Senior member
Jun 12, 2015
So DX12 will be the "saviour" of Fury?

Even if, just for fun, we imagine this to somehow be true, it's not really working out for the Fury, is it?

Stop bargaining for excuses. The result is what the result is.

And in late 2016/early 2017 everyone will be busy with their new 14/16nm flagships with something like 16GB VRAM and the Fury will be long forgotten.

Don't ruin their dreams, man, that's all they've got ;)
 

svenge

Senior member
Jan 21, 2006
As someone else mentioned earlier, I don't believe anything "went wrong"; it's just more of a DX12 GPU that's held back by immature drivers. Time and development will catch up to it eventually. You could say it was released before its time.

Isn't that basically the same excuse that AMD and their fanboys trotted out to explain Bulldozer's massive failure? How exactly did that turn out again?
 

MagickMan

Diamond Member
Aug 11, 2008
I don't get this. DX12 changes things, but this GPU finds its foundation in an older DX11 GPU, right? I might believe some driver issues are at play, but I doubt that DX12 (when compared to a 980 Ti in DX12, so apples to apples) will close the gap all by itself.

I didn't say "all by itself"; it's only part of the puzzle.
 

MagickMan

Diamond Member
Aug 11, 2008
Isn't that basically the same excuse that AMD and their fanboys trotted out to explain Bulldozer's massive failure? How exactly did that turn out again?

No idea; I'm not an AMD fanboy. Brand loyalty in GPUs, and pretty much anything else, is stupid. It simply appears the card was made to work best with more direct APIs; it's an absolute beast with Mantle (confirmed) and (presumably) DX12. So we'll see what happens as things mature and shake out. That's likely why they haven't tried to flood the market with them yet, though I imagine that will change once Windows 10 officially launches.
 

Enigmoid

Platinum Member
Sep 27, 2012
So DX12 will be the "saviour" of Fury?

Even if, just for fun, we imagine this to somehow be true, it's not really working out for the Fury, is it?

Stop bargaining for excuses. The result is what the result is.

And in late 2016/early 2017 everyone will be busy with their new 14/16nm flagships with something like 16GB VRAM and the Fury will be long forgotten.

Kind of agree here. I don't expect any large number of DX12 games for at least two years, probably three.

By that time, unless devs have smartened up on their ports, the Fury X simply will not have enough VRAM for 4K gaming. The massive shader array will help a lot, but I expect a number of games to have problems at 4K.

If 14nm GPUs get a boatload of VRAM, devs will simply bloat VRAM requirements even more, and GPUs with 4 GB will be left in the dust, similar to how GPUs with 2 GB are today.
 
Feb 19, 2009
The console generation determines cross-platform game development; everything revolves around that unless the studio is very PC-centric.

Notice how VRAM requirements suddenly spiked once games designed for the PS4/Xbone from the ground up came out?

So even if you have 16 GB VRAM GPUs in the next two years, it would be useless for gaming in most titles. We can't even tell the difference between the 980 Ti's 6 GB and the Titan X's 12 GB at 4K. It's not going to suddenly spike when devs are limited by console hardware for the next 4-5 years of its cycle.
 

ShintaiDK

Lifer
Apr 22, 2012
The console generation determines cross-platform game development; everything revolves around that unless the studio is very PC-centric.

Notice how VRAM requirements suddenly spiked once games designed for the PS4/Xbone from the ground up came out?

So even if you have 16 GB VRAM GPUs in the next two years, it would be useless for gaming in most titles. We can't even tell the difference between the 980 Ti's 6 GB and the Titan X's 12 GB at 4K. It's not going to suddenly spike when devs are limited by console hardware for the next 4-5 years of its cycle.

If that were true, 256-512 MB graphics cards should have been all we needed before the PS4/Xbox One were released.
 

Headfoot

Diamond Member
Feb 28, 2008
Console VRAM probably sets the point of diminishing returns. Sure, some PS360 games used extra RAM, but many games didn't really end up looking much better for it (e.g. Titanfall's "Ultra" textures, which hogged VRAM but were visually identical). Plus there's the allowance for the fact that console RAM is partially compressed at rest (at least on PS360) and PC RAM isn't.

So when the X1/PS4 came out, games that don't run on PS360 got a higher point of diminishing returns for memory use.

IMO the rate at which memory needs increase will slow, and the fastest portion of that change has already occurred.
 