I can't wait for Fury (air-cooled) to come out ASAP


Borealis7

Platinum Member
Oct 19, 2006
2,901
205
106
Air-cooled Fiji (Fury non-X)? I think the R9 Nano is going to be a lot more interesting to bench, especially with a Kill A Watt :D
 
Feb 19, 2009
10,457
10
76
@Head1985
If they were to beef up everything, they would run out of die space or have to go bigger than manufacturing allows, driving up TDP. The design is a trade-off; it's got a lot of potential with the eight ACEs that isn't utilized much in DX11 due to the non-asynchronous pipeline.

It scales well at 4K and should be great for DX12. At lower DX11 resolutions, getting efficiency/uptime on the 4096 shaders will be hard with the same front-end designed for Hawaii; you can see it in the poor improvement at 1080p vs. the 290/390X.

AMD's GCN uarch can be considered "forward/future-proof" due to its design for asynchronous workloads, which sadly don't happen in DX11, so they made Mantle, and eventually it lives on in Vulkan/DX12. So we're seeing a uarch that is literally ahead of its time, designed for a more advanced API. Good decision or bad? I don't know; it's a tough call, because they will struggle more (less efficient) at DX11 versus NV, and that's where the majority of gaming still is, likely for the next year or two before Win10 becomes "popular". Going by history and current events, you could say it was a bad call. They should have designed hardware more efficient for DX11, then released a new uarch designed for Mantle/Vulkan/DX12 once those APIs were mainstream. But obviously it's a risk trade-off with its own pros and cons, and AMD's financials wouldn't allow constantly maintaining multiple uarch designs.

There is a parallel to draw here with their CPU FX series, which went with more cores before the software was able to utilize them all. But luckily for them, DX12 will come a lot sooner than asking devs to make their Windows software truly multi-threaded.

Just to add: if you listen to the devs featured at AMD's E3 show, they showcased using the asynchronous compute engines to add effects and physics that are, in their words, basically "free" in terms of performance. It's a big deal because that's what GCN was designed for, but it hasn't been utilized.
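The "free" effects claim can be sketched with a toy scheduling model (purely illustrative — not actual GPU or D3D12 code, and all names are made up): graphics work rarely saturates every shader slot each cycle, so independent compute submitted on a second queue can soak up the idle slots without adding cycles.

```python
# Toy model of asynchronous compute: a GPU with `slots` shader slots per cycle.
# Graphics doesn't always fill every slot; independent compute work submitted
# on a second queue can run in the leftover slots essentially "for free".

def run(graphics_load, compute_work, slots=8, async_compute=True):
    """graphics_load: slots used by graphics on each cycle.
    compute_work: total slot-cycles of independent compute to finish.
    Returns cycles until both queues are drained."""
    cycles = 0
    remaining = compute_work
    for used in graphics_load:
        idle = slots - used
        if async_compute and remaining > 0:
            # fill the idle slots with queued compute work
            remaining -= min(idle, remaining)
        cycles += 1
    # without async compute (or with leftovers), compute runs serially after
    while remaining > 0:
        remaining -= slots
        cycles += 1
    return cycles

graphics = [6, 3, 8, 2, 5]  # graphics rarely saturates all 8 slots
print(run(graphics, compute_work=16, async_compute=False))  # 7 cycles, serial
print(run(graphics, compute_work=16, async_compute=True))   # 5 cycles, overlapped
```

In this toy example the overlapped run finishes in the same 5 cycles the graphics alone would take — the compute came along "for free" — which is the effect the devs were describing.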
 

Head1985

Golden Member
Jul 8, 2014
1,867
699
136
@Head1985
If they were to beef up everything, they would run out of die space or have to go bigger than manufacturing allows, driving up TDP. The design is a trade-off; it's got a lot of potential with the eight ACEs that isn't utilized much in DX11 due to the non-asynchronous pipeline.

It scales well at 4K and should be great for DX12. At lower DX11 resolutions, getting efficiency/uptime on the 4096 shaders will be hard with the same front-end designed for Hawaii; you can see it in the poor improvement at 1080p vs. the 290/390X.

AMD's GCN uarch can be considered "forward/future-proof" due to its design for asynchronous workloads, which sadly don't happen in DX11, so they made Mantle, and eventually it lives on in Vulkan/DX12. So we're seeing a uarch that is literally ahead of its time, designed for a more advanced API. Good decision or bad? I don't know, because they will struggle more (less efficient) at DX11 versus NV, and that's where the majority of gaming still is, likely for the next year or two before Win10 becomes "popular".

I think
3840 SPs
224 TMUs
96 ROPs
with 6 pipelines, which means +50% geometry/tessellation/polygon performance vs. the current Fiji, would be better and would not be bigger.
BTW, Fiji doesn't scale well vs. the 390X even at 4K. I didn't see +45% average performance; it's more like 30% at 4K and 20% at 1080p and 2K.
 
Feb 19, 2009
10,457
10
76
Let's not pretend to know as much as the dedicated engineers at AMD. It was obviously their decision to go with a trade-off design; maybe it was quicker and cheaper to implement, given that they kept Tonga's front-end unchanged.

Who knows, maybe this entire series is just a stop-gap that was never intended to exist at all, given the rush to HBM1, but they got caught with their pants down by Maxwell 2 on 28nm (rather than just another Kepler refresh to last until 16nm).
 

JimmiG

Platinum Member
Feb 24, 2005
2,024
112
106
Because it will (hopefully) trigger a price war and push down GTX 980 prices.
Just two-ish weeks left now until release.
For those of us living in the EU, GPU prices here are bonkers. The 980 Ti costs the equivalent of $900.

The biggest reason is the strong dollar relative to the euro and related currencies. There might be a small price drop, but I wouldn't expect anything huge, because that would eat into their profit margins. More likely, the gap between US and EU pricing will just widen. Cards like the GTX 970 and 980 have actually gotten *more* expensive here since they first launched.

What we really need isn't a new card; we just need certain countries in the EU to stop acting like third-world banana republics and get their act together.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Let's not pretend to know as much as the dedicated engineers at AMD. It was obviously their decision to go with a trade-off design; maybe it was quicker and cheaper to implement, given that they kept Tonga's front-end unchanged.

Who knows, maybe this entire series is just a stop-gap that was never intended to exist at all, given the rush to HBM1, but they got caught with their pants down by Maxwell 2 on 28nm (rather than just another Kepler refresh to last until 16nm).

Let's not pretend to know more than the dedicated engineers at Nvidia about the GTX 970 design, then.
 

.vodka

Golden Member
Dec 5, 2014
1,203
1,538
136
@Head1985
If they were to beef up everything, they would run out of die space or have to go bigger than manufacturing allows, driving up TDP. The design is a trade-off; it's got a lot of potential with the eight ACEs that isn't utilized much in DX11 due to the non-asynchronous pipeline.

It scales well at 4K and should be great for DX12. At lower DX11 resolutions, getting efficiency/uptime on the 4096 shaders will be hard with the same front-end designed for Hawaii; you can see it in the poor improvement at 1080p vs. the 290/390X.

AMD's GCN uarch can be considered "forward/future-proof" due to its design for asynchronous workloads, which sadly don't happen in DX11, so they made Mantle, and eventually it lives on in Vulkan/DX12. So we're seeing a uarch that is literally ahead of its time, designed for a more advanced API. Good decision or bad? I don't know; it's a tough call, because they will struggle more (less efficient) at DX11 versus NV, and that's where the majority of gaming still is, likely for the next year or two before Win10 becomes "popular". Going by history and current events, you could say it was a bad call. They should have designed hardware more efficient for DX11, then released a new uarch designed for Mantle/Vulkan/DX12 once those APIs were mainstream. But obviously it's a risk trade-off with its own pros and cons, and AMD's financials wouldn't allow constantly maintaining multiple uarch designs.

There is a parallel to draw here with their CPU FX series, which went with more cores before the software was able to utilize them all. But luckily for them, DX12 will come a lot sooner than asking devs to make their Windows software truly multi-threaded.

Just to add: if you listen to the devs featured at AMD's E3 show, they showcased using the asynchronous compute engines to add effects and physics that are, in their words, basically "free" in terms of performance. It's a big deal because that's what GCN was designed for, but it hasn't been utilized.

Couldn't have said it better. This is why I think even the 7970's two-ACE design is up for some renewed ass-kicking under DX12, even after almost four years, at least thanks to the lower-overhead benefit of the API, although it doesn't support FL12.0. Hawaii is getting most of the improvement with its 8 ACEs, and Fiji will be able to get that huge shader array going much better than it does now under DX11.

The problem is, as you've said, we're still in the DX11 era, and this means trouble for AMD. On top of that, DX12 games are going to get heavier on the GPU thanks to these new possibilities, but still, there should be an improvement over DX11 for AMD's hardware.

Time will tell.
 

Rakehellion

Lifer
Jan 15, 2013
12,181
35
91
Because it will (hopefully) trigger a price war and push down GTX 980 prices.
Just two-ish weeks left now until release.
For those of us living in the EU, GPU prices here are bonkers. The 980 Ti costs the equivalent of $900.

I sold my two 290s in Crossfire - which worked great, but having a single GPU is a blessing too, since I no longer have a private sauna in my living room - and I am now trigger-happy on the buy button. I've bought and cancelled the order twice now.

Using my trusty old 560 Ti in the meantime is as fun as it sounds. July 14th can't come around fast enough.

Fury? We're recycling names again. Shame.

82O5jQX.jpg
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Couldn't have said it better. This is why I think even the 7970's two-ACE design is up for some renewed ass-kicking under DX12, even after almost four years, at least thanks to the lower-overhead benefit of the API, although it doesn't support FL12.0. Hawaii is getting most of the improvement with its 8 ACEs, and Fiji will be able to get that huge shader array going much better than it does now under DX11.

The problem is, as you've said, we're still in the DX11 era, and this means trouble for AMD. On top of that, DX12 games are going to get heavier on the GPU thanks to these new possibilities, but still, there should be an improvement over DX11 for AMD's hardware.

Time will tell.

The people who keep banging the drum of "AMD has more driver CPU overhead" should be the first ones to acknowledge that DX12 would therefore bring a greater FPS uplift on AMD cards than on Nvidia's on that point alone. But of course you don't see this in reality.

In any case, it'll be a while before devs can release DX12 games, much less get a lot of good experience with it.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
The people who keep banging the drum of "AMD has more driver CPU overhead" should be the first ones to acknowledge that DX12 would therefore bring a greater FPS uplift on AMD cards than on Nvidia's on that point alone. But of course you don't see this in reality.

In any case, it'll be a while before devs can release DX12 games, much less get a lot of good experience with it.

If the CPU is fast enough, as it tends to be in reviews, then the "uplift" is 0.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Fury? We're recycling names again. Shame.

82O5jQX.jpg

Your post is about a month late.

We already had a huge thread on the naming, in which most of us agreed that going back to an older name was a good thing. Just like going back to Pro/X/XT is a good thing.