[BitsAndChips] 390X ready for launch - AMD ironing out drivers - Computex launch


goa604

Junior Member
Apr 7, 2015
24
0
0
Like I'm gonna bother replying to all of that. No thanks. Didn't even read it all.

Why did you even bother with registering at this forum?

Warning issued for thread crapping.
-- stahlhart
 
Last edited by a moderator:

SolMiester

Diamond Member
Dec 19, 2004
5,330
17
76
Does it matter? The vast majority of cards won't have reference coolers anyway.

Yes, as RS was using those measurements to extract an imaginary performance clock over and above Titan, via a thermal envelope.
And I wasn't talking about dual-chip cards with CLC.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
If I drop my GDDR5 speeds from 1750MHz to 800MHz, my power usage falls 30W on a single 7970 card, which has a 384-bit bus.

For what it's worth, some rough back-of-the-envelope calculations seem to indicate that this number actually matches up with AMD's claim of 85W at 8Gbps@512bit.

The following assumes that voltage is unchanged when going from 1750 to 800 MHz, and that power scales linearly with frequency and the number of I/O pins.

We have a 54% drop in frequency corresponding to a 30W drop in power, which would put the original power usage at 30W/0.54 = 55W.

Scaling this up to 8Gbps (2000MHz) would get us to 55W*8/7 = 63W.

Scaling from 384-bit I/O to 512-bit gets us 63W*512/384 = 84W, or almost exactly the same as AMD's claim of 85W.

Scaling this back down to the 5Gbps@512bit of the 290X, we get 52W, or about 20-25W more than the 1Gbps@4096bit HBM solution.
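
For anyone who wants to plug in their own numbers, here is a minimal Python sketch of the same back-of-the-envelope estimate. It only encodes the assumptions stated above (voltage held constant, power scaling linearly with frequency and pin count, and the 30W delta from the 7970 downclock observation); none of these figures are measurements of mine.

Code:
# Back-of-the-envelope GDDR5 power scaling under the assumptions above:
# voltage constant, power linear in clock frequency and in I/O pin count.
f_high, f_low = 1750.0, 800.0       # MHz, from the 7970 memory downclock test
delta_power = 30.0                  # W saved when dropping 1750 -> 800 MHz

freq_drop = (f_high - f_low) / f_high        # ~0.54
p_7gbps_384bit = delta_power / freq_drop     # ~55 W at 7 Gbps on a 384-bit bus

p_8gbps_384bit = p_7gbps_384bit * 2000 / 1750   # ~63 W at 8 Gbps
p_8gbps_512bit = p_8gbps_384bit * 512 / 384     # ~84 W, close to AMD's 85 W claim
p_5gbps_512bit = p_8gbps_512bit * 5 / 8         # ~52-53 W, roughly a 290X-style setup

print(p_7gbps_384bit, p_8gbps_512bit, p_5gbps_512bit)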
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
For what it's worth, some rough back-of-the-envelope calculations seem to indicate that this number actually matches up with AMD's claim of 85W at 8Gbps@512bit.

The following assumes that voltage is unchanged when going from 1750 to 800 MHz, and that power scales linearly with frequency and the number of I/O pins.

We have a 54% drop in frequency corresponding to a 30W drop in power, which would put the original power usage at 30W/0.54 = 55W.

Scaling this up to 8Gbps (2000MHz) would get us to 55W*8/7 = 63W.

Scaling from 384-bit I/O to 512-bit gets us 63W*512/384 = 84W, or almost exactly the same as AMD's claim of 85W.

Scaling this back down to the 5Gbps@512bit of the 290X, we get 52W, or about 20-25W more than the 1Gbps@4096bit HBM solution.

Keep in mind:

1) Scaling is not linear.
2) The 30W figure is assumed to be correct (some of that 30W is simply because the GPU is being fed less data and so core output drops).

As far as the power usage of the actual memory modules goes:

http://www.tomshardware.com/reviews/intel-core-i7-5960x-haswell-e-cpu,3918-13.html

DDR4 uses about 1.5W for a 4GB DIMM. Even if GDDR5 uses 10x the power, it's still well below 20W for 4GB of VRAM.

However, the literature points to GDDR5 using less power than DDR4.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Like I'm gonna bother replying to all of that. No thanks. Didn't even read it all.

Here is a summary:

1. You think you are smarter than 1000+ AMD engineers who were so stupid that they spent millions of dollars and wasted 1.5 years of their lives adding HBM1 memory just because the marketing team thought it would be a brilliant idea. Obviously NV's engineers are 10X smarter since they wouldn't waste 1.5 years of their lives and millions of dollars on a pure marketing exercise. :rolleyes:

2. You compare NV's choice of GDDR5 and ignore everything about how GCN works, or that AMD needs HBM for APU designs too. In fact, you ignore the most important comparison --> AMD moving from its own 512-bit controller to HBM -- and instead try to downplay HBM1's advantages by pointing to NV's method of getting sufficient memory bandwidth out of a 384-bit controller on Maxwell. It never occurs to you that AMD can't just go and pick up NV's L2 cache efficiencies and NV's 384-bit controller and shove them into its GCN chip. Therefore, all of your comments miss the most important element -> how would HBM1 benefit AMD specifically if they dropped their older 512-bit controller, which likely would never have been able to work efficiently with 7GHz GDDR5 to begin with? You just assume that pairing 7-8GHz GDDR5 with AMD's 512-bit bus is a done deal. It never occurred to you that it took NV 2 years and a full new architecture to solve the GDDR5 clock-speed issues going from Fermi to Kepler. It's possible that instead of spending millions of dollars to re-design a 512-bit controller that could even run 7-8GHz, AMD decided this was a total waste of money since GDDR5 is a dead end for their flagship cards, so they might as well invest in the newest tech!

3. You ignore that even at 448GB/sec of memory bandwidth, GDDR5 would already need to be clocked at 7GHz, which isn't far off from AMD's 85W figure at 8GHz. That means by all accounts HBM1 would be miles more efficient when looking at the full package: memory controller + memory power usage combined. By focusing primarily on the power usage of the GDDR5 chips alone, you keep ignoring this critical point. And that slide was discussing the 4GB option, not 8GB, which makes GDDR5 over 512-bit an even worse decision as you start increasing the VRAM amount. As you try to downplay the power consumption differences, you ignore that HBM likely becomes even more efficient the larger the VRAM pool becomes. It's no wonder that, to get a faster-clocked GM200, NV will drop Titan X's 12GB of GDDR5 to 6GB. It only makes sense, because the extra GDDR5 wastes power.

4. You ignore that the less complex PCB and the simpler memory controller on the GPU are added benefits of going with HBM1. If AMD's 390X manages to tie or beat Titan X with a die much smaller than 600mm2, what will your response be -- that AMD would have been better off waiting for HBM2 and using GDDR5? ^_^

5. You ignore that HBM1 and HBM2 are very similar, which means choosing to go with HBM1 earlier or HBM2 later is a matter of when to invest cash flows as part of your business strategy, not of whether one approach is better than the other. You can't comprehend that both approaches are correct depending on the GPU architecture, but since you constantly want to compare NV to AMD, you don't understand how the designs behind GCN and Maxwell are different -- hence you cannot understand why AMD's engineers decided to use HBM earlier than NV. If they did, it was because it made sense for the 390X's GCN and it didn't make sense for Maxwell.

Yes, as RS was using those measurements to extract an imaginary performance clock over and above Titan, via a thermal envelope.
And I wasn't talking about dual-chip cards with CLC.

My point about reference designs had nothing to do with the performance of the 390X over the Titan X. All things being equal, I prefer after-market or warrantied AIO CLCs and the after-market VRM/MOSFET components of the Asus Strix, MSI Lightning/Gaming, etc. I even said in my post that for that reason alone GM200 6GB (as well as the 390X) has the potential to trounce the Titan X as the better product. Blowers have already been shown to be the end of the line for maximum performance above 250W: they fall apart above that power level because they cannot manage good temperatures and quiet noise levels simultaneously. Good thing NV also realizes this and will give us after-market GM200 6GB products!
 
Last edited:
Feb 19, 2009
10,457
10
76
Yes, as RS was using those measurements to extract an imaginary performance clock over and above Titan, via a thermal envelope.
And I wasn't talking about dual-chip cards with CLC.

I think in that particular case it's because the Titan X is limited to the reference blower design, so if NV ever releases & allows custom AIB variants of GM200, with better components & cooling, in theory it has higher headroom. We could end up seeing a 980 Ti that comes out of the box faster than the Titan X, with a boost to 1.4GHz for example.

Also, while the reference design may not matter for most GPUs due to the overwhelming superiority of custom designs, it matters for reviews because, as we've seen, years after many good custom 7970 and R9 290/X cards came out, lots of review sites still compare against the AMD reference design, which for the R9 290/X series throttles.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Keep in mind.

1) Scaling is not linear.

Actually, dynamic power should scale roughly linearly with frequency (P = C*V^2*f), but of course this doesn't include leakage, which I have no idea about for a GDDR5 module.
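
To make that concrete, here is a tiny Python sketch of the standard dynamic power relation; the capacitance value is a made-up placeholder, since only the ratio between the two operating points matters here.

Code:
# Dynamic switching power: P = C * V^2 * f (leakage not modelled).
C = 1.0                    # placeholder lumped capacitance, arbitrary units
V = 1.5                    # GDDR5 I/O voltage, held constant in both cases

p_1750 = C * V**2 * 1750   # relative power at 1750 MHz
p_800  = C * V**2 * 800    # relative power at 800 MHz

# With V fixed, power falls in direct proportion to frequency:
print(p_800 / p_1750)      # 800/1750 ~= 0.46, i.e. roughly the 54% drop used above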

As far as the power usage of the actual memory modules goes.

http://www.tomshardware.com/reviews/intel-core-i7-5960x-haswell-e-cpu,3918-13.html

DDR4 uses about 1.5W for a 4 GB DIMM. Even if GDDR5 uses 10x the power its still well below 20W for 4 GB VRAM.

However, literature points to GDDR5 using less power than DDR4.

This is not quite true: the literature does not point to GDDR5 using less power than DDR4; it points to GDDR5 using less power per Gbps per pin than DDR4. As such, the exact memory setup is quite important, as it can differ substantially between DDR4 and GDDR5.

For instance, comparing the single DDR4 module in your link to a 290X: the 290X has more pins (512 versus 64) and runs at a slightly higher frequency (1250 MHz versus 1066 MHz). And of course, last but not least, GDDR5 runs at 1.5V versus 1.2V for DDR4:

(512/64) * (1250/1067) * (1.5/1.2)^2 * 1.5W = 22W

GDDR5 uses about 10-15% less power per Gbps per pin than DDR4 (based on the Hynix slide), so 22W*0.87 = 19W.

Also, I'm talking about the whole memory solution here, including memory controllers, not just the memory modules alone. The memory controllers appear to account for about 60% of the power usage (based on AMD's slide), so 19W/(1-0.6) = 48W. Roughly the same as before.
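
Put as a small Python sketch (same caveats as above: linear scaling per pin and per MHz, the V^2 term from the dynamic power relation, and the ~13% Hynix efficiency figure and ~60% controller share are read off the slides rather than measured):

Code:
# DDR4 DIMM -> 290X-style GDDR5 estimate, scaling per pin, per MHz,
# and with the square of the I/O voltage (dynamic power ~ C*V^2*f).
ddr4_dimm_power = 1.5              # W for a 4GB DDR4 DIMM (Tom's Hardware link)
pin_ratio  = 512 / 64              # 290X bus width vs a single DIMM
freq_ratio = 1250 / 1067           # GDDR5 vs DDR4 base clock, MHz
volt_ratio = (1.5 / 1.2) ** 2      # 1.5V GDDR5 vs 1.2V DDR4, squared

gddr5_chips = ddr4_dimm_power * pin_ratio * freq_ratio * volt_ratio   # ~22 W
gddr5_chips *= 0.87                # Hynix slide: ~13% less power per Gbps/pin -> ~19 W

controller_share = 0.6             # AMD slide: controllers ~60% of total memory power
whole_solution = gddr5_chips / (1 - controller_share)                 # ~48 W
print(round(gddr5_chips), round(whole_solution))                      # 19 48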
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Just look at the study and stop this pointless arguing and these out-of-your-rear power consumption calculations.

here is my post from way back when hbm was in the news:
There is something about stacked on-die memory power savings, but it's too late for me to find any useful numbers showing GDDR5 power consumption:
http://www.cse.psu.edu/~juz138/files/islped209-zhao.pdf

We propose an energy-efficient reconfigurable in-package graphics memory design that integrates wide-interface graphics DRAMs with GPU on a silicon interposer. We reduce the memory power consumption by scaling down the supply voltage and frequency while maintaining the same or higher peak bandwidth. Furthermore, we design a reconfigurable memory interface and propose two reconfiguration mechanisms to optimize system energy efficiency and throughput. The proposed memory architecture can reduce memory power consumption up to 54%, without reconfiguration. The reconfigurable interface can improve system energy efficiency by 23% and throughput by 30% under a power budget of 240W.

Actually, I found this. Not sure how accurate:
We computed the maximum power consumption of GPU processors and memory controllers by subtracting the DRAM power from the reported maximum power consumption of Quadro FX5800 [15], resulting in 124W. The power of 4GB DRAM is calculated as 60W, based on Hynix's GDDR5 memory [8].


The DRAMs on the HD6990 eat 21.8% of the 375 Watts the card consumes = 81 Watts for the memory chips alone!
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Just look at the study and stop this pointless arguing and these out-of-your-rear power consumption calculations.

here is my post from way back when hbm was in the news:

I'm sorry, but that seems to be a fairly useless study to me. First of all, they use TDP instead of actual measured power usage; secondly, they seem to believe that the FX 5800 uses GDDR5, when in reality it uses GDDR3. Finally, they appear to say that a 512bit@1250MHz setup (as found on the 290X) uses a completely unrealistic 150W for the DRAM alone (i.e. no memory controllers included), and that's with only 2GB, not the 4GB that the 290X actually has (figure 3C).
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
International, you know :) World != America.
(America having its own holiday for this seems to be historical rather than the Cold War thing I half supposed it to be.)
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Labour Day is in September.
As difficult as it may be for you North Americans to understand,
Canada/USA is not the entire world.

Moderator action taken for trolling
-Moderator Subyman
 
Last edited by a moderator:

cbrunny

Diamond Member
Oct 12, 2007
6,791
406
126
I do not live in America. I am Canadian. I am only as American as Ted Cruz.

But on topic, I'm impatient and want hard details. lol. I guess I'll just have to wait like everyone else.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I am looking forward to R9 390X WCE because the benefits are truly huge.

A hybrid-cooled 980 @ 1393MHz Boost operates 25°C cooler than a stock 980 with the reference Titan blower!
[Image: GPU-Cooling.jpg]

[Image: Key-Feature_7.jpg]


Also, should the fan/pump fail, it'll be easier to replace it with an off-the-shelf 120mm AIO. Right now, replacing the heatsink on a reference blower card (or even an after-market one) essentially means buying one of the expensive $70-90 Accelero Xtreme coolers.

The EVGA GTX980 Hybrid costs $100 more than the standard blower 980, which means the higher factory pre-overclock + 120mm AIO CLC + warranty amounts to about a $100 premium, which is likely what we can expect the R9 390X WCE to carry over the standard 390X.

If the $549 standard R9 390 = a 1.4GHz 980, and the R9 390X is 15% faster than a 1.4GHz GTX980 for $650, that would be a nice improvement from where we are currently sitting.
 
Last edited:

tential

Diamond Member
May 13, 2008
7,348
642
121
I am looking forward to R9 390X WCE because the benefits are truly huge.

A hybrid-cooled 980 @ 1393MHz Boost operates 25°C cooler than a stock 980 with the reference Titan blower!
[Image: GPU-Cooling.jpg]

[Image: Key-Feature_7.jpg]

Also, should the fan/pump fail, it'll be easier to replace it with an off-the-shelf 120mm AIO. Right now, replacing the heatsink on a reference blower card (or even an after-market one) essentially means buying one of the expensive $70-90 Accelero Xtreme coolers.

The EVGA GTX980 Hybrid costs $100 more than the standard blower 980, which means the higher factory pre-overclock + 120mm AIO CLC + warranty amounts to about a $100 premium, which is likely what we can expect the R9 390X WCE to carry over the standard 390X.

If the $549 standard R9 390 = a 1.4GHz 980, and the R9 390X is 15% faster than a 1.4GHz GTX980 for $650, that would be a nice improvement from where we are currently sitting.

What's your next card, RS? I know it's impossible to say without reviews, but what's your preliminary guess as to your next card / when you'll be purchasing?
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
EVGA's design is decent, but it's wasted on the 980 - which runs cool, quiet, and with plenty of room for overclocking on the high-quality stock blower. It would make a lot more sense on the Titan X.

I think the point of it is to show you its effectiveness. Chances are when they were doing the R&D on this cooler, Titan X didn't exist.
 

boozzer

Golden Member
Jan 12, 2012
1,549
18
81
If the WCE can cut the temps down by around 50%, I am sure I would be getting one even if there is a premium of $50.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
What's your next card, RS? I know it's impossible to say without reviews, but what's your preliminary guess as to your next card / when you'll be purchasing?

I am planning to start my Master's between August 2015 and Sept 2016, and depending on the program it would last 15-24 months. The earliest I'd be able to graduate is December 2016 and the latest Sept 2018. I am following this gen simply because I am interested in tech, but there is a strong chance all these GPUs are irrelevant for me unless I buy them on some sale 2-3 years from now. It's possible I will coast all the way to 7nm CPUs and the mid-Pascal generation before I get a new CPU/GPU! :D

I just hope by early 2017 we will start to see 4K at more reasonable prices and better FreeSync/GSync options. It's scary to think how expensive a new build will be for me as I'll need a new SSD/PCIe SSD, new GPU(s), new CPU platform + DDR4, new monitor. Who knows, maybe I'll meet my future wife and become a casual PS5/XB2 console gamer. :p

What about you? Are you going to play around with Maxwell/R9 300 series or are you waiting for 14nm and HBM2?
 
Last edited: