[Rumor (Various)] AMD R7/9 3xx / Fiji / Fury


looncraz

Senior member
Sep 12, 2011
722
1,651
136
In all the excitement we seem to have forgotten that the Fury X has only 4GB, and nVidia was crucified endlessly about the 3.5GB on the 970.

The 3.5GB issue existed because 1) nVidia claimed it had 4GB at a set performance level, but it really only had 3.5GB at that level and 2) because nVidia drivers (and games) treated that 0.5GB slow segment equally with the 3.5GB 'normal' memory segment.

A full-speed 4GB pool will not have that problem, since we have LOTS of experience prioritizing memory on a GPU and shuttling infrequently used data to the GPU from system RAM, over transports slower than what's available now, just in time to avoid serious performance degradation in the common case.

If the 970 were suddenly just treated as if it only had 3.5GB, its memory-related stuttering would vanish since the problem occurred when often-used memory was allocated to the slower segment and accesses resulted in pipeline stalls.
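To illustrate the difference, here's a minimal sketch (Python, purely hypothetical - not NVIDIA's or AMD's actual driver logic) of treating a slow segment as a spill area instead of as equal-priority memory: the hottest resources stay in the fast pool, so stalls on the slow pool stay rare.

Code:
# Hypothetical illustration, not real driver code: place the most frequently
# accessed resources in the fast segment and spill only cold data to the
# slow one, so accesses to the slow segment stay rare.
FAST_SEGMENT_MB = 3584  # 3.5 GB full-speed pool
SLOW_SEGMENT_MB = 512   # 0.5 GB slow pool

def place_allocations(allocations):
    """allocations: list of (name, size_mb, accesses_per_frame) tuples."""
    fast, slow, fast_used = [], [], 0
    for name, size_mb, accesses in sorted(allocations, key=lambda a: -a[2]):
        if fast_used + size_mb <= FAST_SEGMENT_MB:
            fast.append(name)
            fast_used += size_mb
        else:
            slow.append(name)  # cold data only; pipeline stalls become rare
    return fast, slow

print(place_allocations([("framebuffer", 64, 1000),
                         ("hot_textures", 3000, 500),
                         ("distant_lod", 700, 2)]))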
 

jackstar7

Lifer
Jun 26, 2009
11,679
1,944
126
I'm gonna say it even before the reviews are out, I think some of you guys should just not buy a Fury X. I don't think you'd like it at all.
 

sze5003

Lifer
Aug 18, 2012
14,304
675
126
I dunno, I think I would be happy with the card. But then again, I don't play a ton of PC games, though I do play triple-A titles and have gotten used to just cranking everything.
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
I'm gonna say it even before the reviews are out, I think some of you guys should just not buy a Fury X. I don't think you'd like it at all.

Funny, there is a thread about purchasing a Fury next week and most of the AMD fanclub said they are buying one and there hasn't even been a review yet. I guess somehow they just know they are gonna like it.
:rolleyes:

Threadcrapping is not allowed here
Markfw900
 
Last edited by a moderator:

sze5003

Lifer
Aug 18, 2012
14,304
675
126
Funny, there is a thread about purchasing a Fury and most of the AMD fanclub said they are buying one and there hasn't even been a review yet.
:rolleyes:
A few like myself are still waiting for reviews. I'd get one over the 980 Ti because of the water cooling, and because the speculation so far is that it will perform better since the price is the same.
 

jackstar7

Lifer
Jun 26, 2009
11,679
1,944
126
I'm just responding to the speculative negativity. Seems like a lot of debbie-downers for a product with such limited info available.

But I have no problem admitting I'm in the speculative positivity camp. I want one, but I'd like to see confirmation on GCN version (and associated features) and a couple benchmarks to confirm it'll be the jump I'd like to see coming from my 7990. I like the AIO and believe it will work well inside my Fortress.
 

BryanC

Junior Member
Jan 7, 2008
19
0
66
Um, yeah, about that...HBM is a big reason for the huge efficiency gains.


This is not accurate. HBM is much more efficient than GDDR5, but the memory power budget is much smaller than the shader array's. It's possible that the AIO cooler improves overall perf/Watt more than HBM does - the cooler temps reduce leakage across the whole chip, while HBM saves at most 20-30 W.
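A rough back-of-envelope along those lines (every wattage here is an assumption for illustration, not an AMD figure):

Code:
# Back-of-envelope only; every number here is an assumption for illustration.
board_power_w    = 275   # rough 290X-class total board power
gddr5_and_bus_w  = 45    # assumed GDDR5 chips + wide memory bus/PHY
hbm_and_phy_w    = 20    # assumed HBM stacks + short interposer links
saved_w = gddr5_and_bus_w - hbm_and_phy_w
print(f"Memory-side saving ~{saved_w} W, about "
      f"{saved_w / board_power_w:.0%} of total board power")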
 

maddie

Diamond Member
Jul 18, 2010
5,151
5,537
136
This is not accurate. HBM is much more efficient than GDDR5, but the memory power budget is much smaller than the shader array's. It's possible that the AIO cooler improves overall perf/Watt more than HBM does - the cooler temps reduce leakage across the whole chip, while HBM saves at most 20-30 W.

Correct, HBM is only a small part of the power savings.

AMD in the Carrizo details had two slides corresponding to the GPU portion. I assume they will be using these techniques in Fiji.

Voltage-adaptive operation, which seems to save a minimum of about 5% power, rising to 10% as clocks increase from 1.1 to 1.7 GHz. This actually tells us that as we overclock Fiji, the processor gets more efficient with this technique relative to a normal, non-voltage-adaptive GPU die.
[Slide: Voltage Adaptive Operation]


'AMD revealed Voltage Adaptive Operation back with Kaveri, and it makes a reappearance in Carrizo with its next iteration. The principle here is that with a high noise line, the excess voltage will cause power to rise. If the system reduces the frequency of the CPU during high noise/voltage segments - as power is proportional to voltage squared - power consumption will be reduced and then frequency can be restored when noise returns to normal.'


We can see that Carrizo has 1/8 the shaders of Fiji, and this corresponds to about 35 W on the curve. We will have to extend the curves, but they never seem to cross, implying a power saving at all frequencies.
[Slide: Low Power Graphics]


'The high density, power optimized design also plays a role in the GPU segment of Carrizo, offering lower leakage at high voltages as well as allowing a full 8 GCN core design at 20W. This is an improvement from Kaveri, which due to power consumption only allowed a 6 GCN design at the same power without compromising performance.'
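Since dynamic power goes roughly as frequency times voltage squared, even a small amount of recovered voltage margin is worth a disproportionate saving. A quick illustration (the voltages are made up; only the V-squared relationship comes from the slide text quoted above):

Code:
# Illustrative only: dynamic power ~ f * V^2, so shaving excess voltage
# margin saves power quadratically.
def dynamic_power(freq_ghz, volts, k=1.0):
    return k * freq_ghz * volts ** 2

with_margin = dynamic_power(1.0, 1.20)  # worst-case guardband voltage (assumed)
adaptive    = dynamic_power(1.0, 1.14)  # ~5% less voltage via adaptive operation
print(f"~{1 - adaptive / with_margin:.0%} dynamic power saved")  # roughly 10%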
 

chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
Funny, there is a thread about purchasing a Fury next week and most of the AMD fanclub said they are buying one and there hasn't even been a review yet. I guess somehow they just know they are gonna like it.
:rolleyes:

Funny, there are 9000 posts about the VRAM "limit" of Fury and most of the **** fanclub said 4GB is not enough and there hasn't even been a review yet. I guess somehow they just know the future.

I'm gonna say it even before the reviews are out, I think some of you guys should just not buy a Fury X. I don't think you'd like it at all.

Agreed... :sneaky: Don't buy this card. Leave stock at retailers... thanks!
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
I'm gonna say it even before the reviews are out, I think some of you guys should just not buy a Fury X. I don't think you'd like it at all.

I agree, I think there are some people who wouldn't like it even if it were demonstrably superior in every way to the competition.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Correct, HBM is only a small part of the power savings.

AMD in the Carrizo details had two slides corresponding to the GPU portion. I assume they will be using these techniques in Fiji.

Voltage-adaptive operation, which seems to save a minimum of about 5% power, rising to 10% as clocks increase from 1.1 to 1.7 GHz. This actually tells us that as we overclock Fiji, the processor gets more efficient with this technique relative to a normal, non-voltage-adaptive GPU die.
[Slide: Voltage Adaptive Operation]


'AMD revealed Voltage Adaptive Operation back with Kaveri, and it makes a reappearance in Carrizo with its next iteration. The principle here is that with a high noise line, the excess voltage will cause power to rise. If the system reduces the frequency of the CPU during high noise/voltage segments - as power is proportional to voltage squared - power consumption will be reduced and then frequency can be restored when noise returns to normal.'


We can see that Carrizo has 1/8 the shaders of Fiji, and this corresponds to about 35 W on the curve. We will have to extend the curves, but they never seem to cross, implying a power saving at all frequencies.
[Slide: Low Power Graphics]


'The high density, power optimized design also plays a role in the GPU segment of Carrizo, offering lower leakage at high voltages as well as allowing a full 8 GCN core design at 20W. This is an improvement from Kaveri, which due to power consumption only allowed a 6 GCN design at the same power without compromising performance.'

There were nearly full-page posts claiming HBM would give Fiji huge power savings. Some people were suggesting up to 50 watts (possibly more).
Now you are gonna post this?

HBM was expected to have a good impact on power consumption because of the bus, not so much the memory modules. Those posts still exist on this forum. The huge buses AMD was using took energy. HBM saves power with its radical new design.

I have no idea why you are posting Carrizo stuff now.
Not only that, you are downplaying HBM and ignoring the removal of the large and power-hungry bus.

Even if you don't agree with the 50-watt estimates, if the older bus fit into a 25-watt envelope, snipping that off is still a pretty significant power saving.

Carrizo is one thing, Fiji is another. We know for sure that the bus has an impact on power consumption, and AMD had some rather large buses.
I just don't see why you would spend so much time disregarding HBM and throwing up charts from an APU. I have only seen you claim that Carrizo's power-saving features are in Fiji. So it seems like really wild speculation while totally dismissing one of the most obvious places where power consumption would have been cut.
 

maddie

Diamond Member
Jul 18, 2010
5,151
5,537
136
There were nearly full-page posts claiming HBM would give Fiji huge power savings. Some people were suggesting up to 50 watts (possibly more).
Now you are gonna post this?

HBM was expected to have a good impact on power consumption because of the bus, not so much the memory modules. Those posts still exist on this forum. The huge buses AMD was using took energy. HBM saves power with its radical new design.

I have no idea why you are posting Carrizo stuff now.
Not only that, you are downplaying HBM and ignoring the removal of the large and power-hungry bus.

Even if you don't agree with the 50-watt estimates, if the older bus fit into a 25-watt envelope, snipping that off is still a pretty significant power saving.

Carrizo is one thing, Fiji is another. We know for sure that the bus has an impact on power consumption, and AMD had some rather large buses.
I just don't see why you would spend so much time disregarding HBM and throwing up charts from an APU. I have only seen you claim that Carrizo's power-saving features are in Fiji. So it seems like really wild speculation while totally dismissing one of the most obvious places where power consumption would have been cut.

I said that HBM alone cannot give you 50% power savings vs 290X, which works out to be around 125W. AFAIK no one ever said that.

These are some other avenues for saving power that AMD have implemented in the GPU portion of their latest APUs, so it's not unreasonable to expect them to be used in Fiji.

Remember this when the full reviews are released.
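The arithmetic behind the 125 W figure above, for reference (the 250 W baseline is an assumption, not a measured number):

Code:
# Rough numbers only; the 250 W baseline is an assumption.
r9_290x_power_w  = 250
fifty_pct_cut_w  = 0.5 * r9_290x_power_w   # 125 W if HBM alone did it all
hbm_saving_max_w = 30                      # upper end of HBM-vs-GDDR5 estimates
print(fifty_pct_cut_w, "W needed vs ~", hbm_saving_max_w, "W from HBM alone")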
 

at80eighty

Senior member
Jun 28, 2004
458
5
81
And reviewers of other 4GB cards, don't forget. Unless HBM has some special trick that makes it act like it has 6GB, I think the techy-nerds are right.

as mentioned in another thread - there are ways to optimise VRAM consumption.
it is up to devs and not an issue with the hardware. load what's needed, not everything.
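a rough sketch of the idea, with made-up thresholds - stream only the mip levels the current view actually needs rather than keeping whole textures resident:

Code:
# hypothetical illustration of "load what's needed": pick a mip level by
# distance instead of keeping every full-resolution texture resident.
def required_mip_res(distance_m, base_res=4096):
    if distance_m < 10:  return base_res        # full 4K mip up close
    if distance_m < 50:  return base_res // 2   # 2K mip
    if distance_m < 200: return base_res // 4   # 1K mip
    return base_res // 8                        # 512 for far scenery

def vram_mb(res, bytes_per_texel=4):
    return res * res * bytes_per_texel / (1024 * 1024)

# a distant object needs ~1/16 the VRAM of its full-resolution texture
print(vram_mb(4096), "MB vs", vram_mb(required_mip_res(100)), "MB")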
 

at80eighty

Senior member
Jun 28, 2004
458
5
81
Unless of course you want to count ALL GPUs. Then AMD and nVidia are both tied with ~16% market share each, with Intel having the rest.

i agree, this is the right way to look at it. Intel is eating the bottom end gladly, and catching up. meanwhile the nerdherd keeps slapfighting as if only 2 camps exist.

not even sure what the strategy is to counter intel. not too familiar with APUs yet. should probably read up more.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I said that HBM alone cannot give you 50% power savings vs 290X, which works out to be around 125W. AFAIK no one ever said that.

These are some other avenues for saving power that AMD have implemented in their latest GPU portion of APUs, so its not unreasonable to expect them used in Fiji.

Remember this when the full reviews are released.

No, I'll link what you said...it was certainly not "HBM alone cannot give you 50% power savings" though.

Correct, HBM is only a small part of the power savings.

Dude, it's even on the same page. How did "HBM is only a small part of the power savings" somehow morph into "I said that HBM alone cannot give you 50% power savings"? Let me guess...you're running Donald Trump's campaign, right?

i agree, this is the right way to look at it. Intel is eating the bottom end gladly, and catching up. meanwhile the nerdherd keeps slapfighting as if only 2 camps exist.

not even sure what the strategy is to counter intel. not too familiar with APUs yet. should probably read up more.

Is Mercedes correct to cede the <$3000 market in China to Geely, or should they start making luxury cars with rubber bands, glue, and duct tape? Intel tried and failed (miserably) to enter the high-end GPU fray after they couldn't work out the merger/purchase with NV...even with 2/3 of the market, their profits on it are likely tiny because most of their "market share" is just crap that they integrated onto their CPU die.
 
Last edited:

at80eighty

Senior member
Jun 28, 2004
458
5
81
if only it were such an analogous comparison.

more and more people are dropping the need for discrete cards. Jon Peddie Research shows a 13% drop in Q1 from the last quarter, and Q2 is historically even tougher. and the YoY trend has been this way for a while. this is at the expense of other market players.

I myself, a strong believer in discrete, am looking forward to a Skylake build for an HTPC to go with my gaming rig, since now it should handle HEVC just fine @ 4K.

they are a player in graphics, unless you mean AAA gaming alone.
 

maddie

Diamond Member
Jul 18, 2010
5,151
5,537
136
No, I'll link what you said...it was certainly not "HBM alone cannot give you 50% power savings" though.



Dude, it's even on the same page. How did "HBM is only a small part of the power savings" somehow morph into "I said that HBM alone cannot give you 50% power savings"? Let me guess...you're running Donald Trump's campaign, right?

You're right. By 'small part' I meant less than half of the power saved. Bad phrasing on my part.

HBM is important, but not the main power-saving feature in Fiji. HBM alone cannot account for a 50% increase in performance per Watt. AMD had to have implemented other important techniques. I trust the reviews will explore in depth what they're doing.
 

Prefix-NA

Junior Member
May 17, 2015
8
0
36
The 3.5GB issue existed because 1) nVidia claimed it had 4GB at a set performance level, but it really only had 3.5GB at that level and 2) because nVidia drivers (and games) treated that 0.5GB slow segment equally with the 3.5GB 'normal' memory segment.

A full-speed 4GB pool will not have that problem, since we have LOTS of experience prioritizing memory on a GPU and shuttling infrequently used data to the GPU from system RAM, over transports slower than what's available now, just in time to avoid serious performance degradation in the common case.

If the 970 were suddenly just treated as if it only had 3.5GB, its memory-related stuttering would vanish since the problem occurred when often-used memory was allocated to the slower segment and accesses resulted in pipeline stalls.


3.5GB at LOWER speeds:

3.5GB at ~196GB/s
0.5GB at ~28GB/s

That is not 224GB/s; you don't get to claim 4GB at 224GB/s.

CoD: Advanced Warfare had to gimp all GPUs as a workaround by limiting GPU VRAM use to 75% (you can fix it in the config) because they didn't know why the 970 was messed up when it used full VRAM (it has options to use full VRAM to increase framerates and store more files).
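Roughly where those numbers come from (assuming the usual 256-bit / 7 Gbps figures; the key point is that the two partitions can't be read at the same time, so the peaks don't add):

Code:
# Assumed figures for illustration: 256-bit bus at 7 Gbps effective.
full_bus_gbs = 256 / 8 * 7            # 224 GB/s on the spec sheet
fast_segment = full_bus_gbs * 7 / 8   # ~196 GB/s for the 3.5 GB partition
slow_segment = full_bus_gbs * 1 / 8   # ~28 GB/s for the 0.5 GB partition
# The partitions share the crossbar and can't be read simultaneously,
# so peak bandwidth is whichever segment you're in, not the sum.
print(full_bus_gbs, fast_segment, slow_segment)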