do I really have to read through your posts again just to prove a point? one you will promptly ignore? I kinda don't wanna torture myself.

Example of a post made by me that suggests that?
haha, if only I cared enough

Just another poster who can't back statements up with facts...
In all the excitement we seem to have forgotten that the Fury X has only 4 GB, and nVidia was crucified endlessly about the 3.5 GB on the 970.
I'm gonna say it even before the reviews are out, I think some of you guys should just not buy a Fury X. I don't think you'd like it at all.
A few like myself are still waiting for reviews. I'd get one over the 980 Ti because of the water cooling and the speculation so far that it would perform better, since the price is the same.

Funny, there is a thread about purchasing a Fury and most of the AMD fanclub said they are buying one and there hasn't even been a review yet.
Um, yeah, about that...

HBM is a big reason for the huge efficiency gains.
This is not accurate. HBM is much more efficient than GDDR5, but the memory power budget is much smaller than the shader array's. It's possible that the AIO cooler improves overall perf/watt more than HBM does: the cooler temps reduce leakage over the whole chip, while HBM saves at most 20-30 W.
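To put that 20-30 W in context, here's a quick back-of-envelope sketch in Python. All the wattage figures are rough illustrative estimates (typical high-end board power, a plausible GDDR5 memory-subsystem budget), not measured AMD numbers:

```python
# Rough, hypothetical power-budget sketch for a high-end GPU board.
# Every figure below is an illustrative estimate, not a measured value.

BOARD_POWER_W = 275          # assumed total board power for a big GPU
GDDR5_SUBSYSTEM_W = 45       # assumed: GDDR5 chips + wide bus/PHY
HBM_SUBSYSTEM_W = 18         # assumed: HBM stacks + short interposer links

hbm_savings = GDDR5_SUBSYSTEM_W - HBM_SUBSYSTEM_W
share_of_board = hbm_savings / BOARD_POWER_W

print(f"HBM saves roughly {hbm_savings} W")
print(f"That is about {share_of_board:.0%} of board power")
```

Even with generous assumptions, the memory subsystem is around a tenth of board power, which is why the savings can't come from HBM alone.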
Funny, there is a thread about purchasing a Fury next week and most of the AMD fanclub said they are buying one and there hasn't even been a review yet. I guess somehow they just know they are gonna like it.
I'm gonna say it even before the reviews are out, I think some of you guys should just not buy a Fury X. I don't think you'd like it at all.
Funny, there are 9000 posts about the VRAM "limit" of Fury and most of the **** fanclub said 4GB is not enough and there hasn't even been a review yet. I guess somehow they just know the future.
and most of the **** fanclub said 4GB is not enough
Correct, HBM is only a small part of the power savings.
In the Carrizo details, AMD had two slides covering the GPU portion. I assume they will be using these techniques in Fiji.
Voltage adaptive operation seems to save a minimum of about 5% power, rising to 10% as clocks increase from 1.1 to 1.7 GHz. This actually tells us that as we overclock Fiji, the processor gets more efficient with this technique relative to a normal, non-voltage-adaptive GPU die.
'AMD revealed Voltage Adaptive Operation back with Kaveri, and it makes a reappearance in Carrizo with its next iteration. The principle here is that with a high noise line, the excess voltage will cause power to rise. If the system reduces the frequency of the CPU during high noise/voltage segments - as power is proportional to voltage squared - power consumption will be reduced and then frequency can be restored when noise returns to normal.'
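Since dynamic power scales roughly with voltage squared (times frequency), trimming the noise guard-band pays off more than it first looks. Here's a toy Python model of the idea; the voltage, guard-band, and noise-fraction numbers are made-up illustrative values, not AMD's:

```python
# Toy model of voltage-adaptive operation: instead of carrying a fixed
# voltage guard-band for worst-case supply noise, briefly reduce frequency
# during noise events and run at a lower nominal voltage the rest of the time.
# Dynamic power ~ C * V^2 * f (the capacitance C cancels out in the ratio).

def dynamic_power(v, f):
    return v * v * f  # relative units; constant factor omitted

f_nominal = 1.0        # relative clock
v_needed = 1.10        # assumed voltage actually required at f_nominal
guard_band = 0.05      # assumed extra margin a non-adaptive design carries

# Non-adaptive design: always pays for the guard-band.
p_fixed = dynamic_power(v_needed + guard_band, f_nominal)

# Adaptive design: runs at v_needed, clock-stretches 2% of the time
# (during supply-noise events) to 90% of the nominal clock.
noise_fraction = 0.02
p_adaptive = (1 - noise_fraction) * dynamic_power(v_needed, f_nominal) \
             + noise_fraction * dynamic_power(v_needed, 0.9 * f_nominal)

saving = 1 - p_adaptive / p_fixed
print(f"Estimated dynamic-power saving: {saving:.1%}")
```

With these made-up numbers the model lands in the high single digits, which is at least consistent with the 5-10% range on the slide.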
We can see that Carrizo has 1/8 the shaders of Fiji, and this corresponds to about 35 W on the curve. We will have to extend the curves, but they never seem to cross, implying a power saving at all frequencies.
'The high density, power optimized design also plays a role in the GPU segment of Carrizo, offering lower leakage at high voltages as well as allowing a full 8 GCN core design at 20W. This is an improvement from Kaveri, which due to power consumption only allowed a 6 GCN design at the same power without compromising performance.'
There were nearly full-page posts claiming HBM would give Fiji huge power savings. Some people were suggesting up to 50 watts (possibly more).
Now you are gonna post this?
HBM was expected to have a good impact on power consumption because of the bus, not so much the memory modules. Those posts still exist on this forum. The huge buses AMD was using took energy; HBM saves power with its radical new design.
I have no idea why you are posting Carrizo stuff now.
Not only that, you are downplaying HBM and ignoring the removal of the large, power-hungry bus.
Even if you don't agree with the 50-watt estimates: if the older bus fit into a 25-watt envelope, snipping that off is still a pretty significant power saving.
Carrizo is one thing, Fiji is another. We know for sure that the bus has an impact on power consumption, and AMD had some rather large buses.
I just don't see why you would spend so much time disregarding HBM and throwing up charts from an APU. I have only seen you claim that Carrizo's power-saving features are in Fiji, so it seems like really wild speculation while totally dismissing one of the most obvious places where power consumption would have been cut.
And reviewers of other 4 GB cards, don't forget. Unless HBM has some special trick that makes it act like it has 6 GB, I think the techy nerds are right.
Unless of course you want to count ALL GPUs. Then AMD and nVidia are both tied with ~16% market share each, with Intel having the rest.
I said that HBM alone cannot give you 50% power savings vs the 290X, which works out to around 125 W. AFAIK no one ever said that.
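The arithmetic behind that, sketched in Python. The 250 W figure for the 290X is the commonly cited ballpark for its board power, and the 30 W is the upper end of the HBM savings discussed above:

```python
# Why HBM alone cannot explain a 50% power reduction vs. the 290X.

R9_290X_POWER_W = 250              # commonly cited 290X board power (ballpark)
claimed_saving = 0.50 * R9_290X_POWER_W

# Upper end of realistic HBM-vs-GDDR5 savings (modules + bus), per the
# 20-30 W estimate earlier in the thread.
HBM_SAVING_MAX_W = 30

print(f"A 50% saving would be {claimed_saving:.0f} W")
print(f"HBM plausibly accounts for at most {HBM_SAVING_MAX_W} W "
      f"({HBM_SAVING_MAX_W / claimed_saving:.0%} of it)")
```

So even the most generous HBM estimate covers only about a quarter of a hypothetical 125 W cut; the rest has to come from the core, the cooler, and design-level tricks.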
These are some other avenues for saving power that AMD has implemented in the GPU portion of its latest APUs, so it's not unreasonable to expect them to be used in Fiji.
Remember this when the full reviews are released.
Correct, HBM is only a small part of the power savings.
I agree, this is the right way to look at it. Intel is gladly eating the bottom end, and catching up. Meanwhile the nerd herd keeps slap-fighting as if only two camps exist.
Not even sure what the strategy is to counter Intel. Not too familiar with APUs yet; I should probably read up more.
No, I'll link what you said...it was certainly not "HBM alone cannot give you 50% power savings" though.
Dude, it's even on the same page. How did "HBM is only a small part of the power savings" somehow morph into "I said that HBM alone cannot give you 50% power savings"? Let me guess...you're running Donald Trump's campaign, right?
The 3.5GB issue existed because 1) nVidia claimed it had 4GB at a set performance level, but it really only had 3.5GB at that level and 2) because nVidia drivers (and games) treated that 0.5GB slow segment equally with the 3.5GB 'normal' memory segment.
A full-speed 4GB card will not have that problem, since we have lots of experience prioritizing memory on a GPU and shuttling infrequently used data from system RAM to the GPU, over transports slower than what's available now, just in time to avoid serious performance degradation in the common case.
If the 970 were simply treated as if it only had 3.5GB, its memory-related stuttering would vanish, since the problem occurred when often-used memory was allocated to the slower segment and accesses resulted in pipeline stalls.
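A minimal sketch of that idea; this is a hypothetical toy allocator to illustrate segment-aware placement, not NVIDIA's actual driver logic. Hot buffers claim the fast segment first, so only cold data spills to the slow one:

```python
# Toy segmented-VRAM allocator illustrating the 970 situation:
# a 3.5 GB fast segment plus a 0.5 GB slow segment. Stalls happen when
# hot (frequently accessed) buffers land in the slow segment; placing
# hot buffers first in the fast segment avoids that.

FAST_CAPACITY = 3584  # MiB (3.5 GB fast segment)
SLOW_CAPACITY = 512   # MiB (0.5 GB slow segment)

def place_buffers(buffers):
    """buffers: list of (name, size_mib, is_hot).
    Hot buffers are placed first, so they claim the fast segment;
    cold data spills to the slow segment, then to system RAM."""
    fast_used = slow_used = 0
    placement = {}
    for name, size, _ in sorted(buffers, key=lambda b: not b[2]):
        if fast_used + size <= FAST_CAPACITY:
            fast_used += size
            placement[name] = "fast"
        elif slow_used + size <= SLOW_CAPACITY:
            slow_used += size
            placement[name] = "slow"
        else:
            placement[name] = "system RAM"  # spill over PCIe
    return placement

buffers = [
    ("framebuffer", 1024, True),
    ("hot_textures", 2048, True),
    ("shadow_maps", 512, True),
    ("cold_textures", 384, False),
]
result = place_buffers(buffers)
print(result)
```

With hot-first placement everything frequently accessed sits in the fast segment and only the cold textures land in the slow one; the original 970 problem was precisely that the driver didn't make this distinction.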