[BitsAndChips]390X ready for launch - AMD ironing out drivers - Computex launch


dangerman1337

Senior member
Sep 16, 2010
396
45
91
I don't think we'll see any next gen cards faster than an R9 390X/Titan X until September 2016. That's a lot of waiting to be on an IGP. It's not a bad deal to get an R9 290/970 as hold-over as those cards are unlikely to lose a lot of value over the next 6-12 months.
That's what I'm thinking: either a 290X (nice deal for those of us in the UK :D : http://www.scan.co.uk/products/4gb-...hz-gddr5-gpu-1000mhz-2816-streams-dp-dvi-hdmi) or a 970, maybe with a 750 Ti as a PhysX card for The Witcher 3 and other games (curious, since I have a Z68 board, whether a 970 would be bandwidth limited by a 750 Ti used for PhysX?), to hold me over until late 2016/early 2017 for a 4K/VR Skylake-E build.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
At least. In that AMD slide, dated Dec 2013, when the dual-link interposer and 8GB option weren't available, they are quoting 50W.

l29o6zV.png


If they go 8GB of GDDR5 vs. 8GB of HBM1, this gap will grow even more. Also, since the memory controller would likely be smaller/less complex, instead of making, say, a 550mm2 GDDR5 card, they can make a 530mm2 HBM1 card. Alternatively, they can spend that 20mm2 on a greater number of shaders, texture units, etc. and release a 550mm2 HBM1 chip that's BOTH more efficient and packs more processing power, since the excess die space that would normally have gone to the 512-bit memory controller is put towards functional GPU units instead. I can't even imagine how beastly a 14nm HBM2 550mm2 AMD chip could end up next generation if only AMD ditched all DP functionality and made a pure gaming monster chip.
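Here's that die-area trade-off as a rough back-of-envelope Python sketch (the numbers are just the illustrative ones from the quote, not actual AMD specs):

# Back-of-envelope sketch of the die-area trade-off described above.
# All numbers are illustrative figures from the quoted post, not AMD specs.
gddr5_die_mm2 = 550                # hypothetical GDDR5 design
phy_savings_mm2 = 20               # assumed area saved by swapping the 512-bit GDDR5 PHY for HBM
hbm_same_features_mm2 = gddr5_die_mm2 - phy_savings_mm2   # 530 mm^2, same functional units

# Option B: keep the die at 550 mm^2 and spend the reclaimed area on shaders/TMUs.
extra_gpu_logic_mm2 = phy_savings_mm2
print(f"Smaller-die option: {hbm_same_features_mm2} mm^2")
print(f"Same-die option: {gddr5_die_mm2} mm^2 with {extra_gpu_logic_mm2} mm^2 of extra GPU logic")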



I don't think we'll see any next gen cards faster than an R9 390X/Titan X until September 2016. That's a lot of waiting to be on an IGP. It's not a bad deal to get an R9 290/970 as hold-over as those cards are unlikely to lose a lot of value over the next 6-12 months.

I would say a 290x since he already had a 970. I just don't see that value being topped, even after the 300 series launches. There are some really great deals.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
At least. In that AMD slide, dated Dec 2013, when the dual-link interposer and 8GB option weren't available, they are quoting 50W.

l29o6zV.png

Yes, but to put things in perspective, AMD doesn't have an 8 Gbps 512-bit GDDR5 controller. I don't think their controller can even hit 8 Gbps. While useful, the controller in the figure is pushed far beyond its efficient zone, and since AMD has nothing like that on the market, the power savings compared to the 512-bit Hawaii controller will be significantly less than the 50W shown in the chart.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
That's what I'm thinking: either a 290X (nice deal for those of us in the UK :D : http://www.scan.co.uk/products/4gb-...hz-gddr5-gpu-1000mhz-2816-streams-dp-dvi-hdmi) or a 970, maybe with a 750 Ti as a PhysX card for The Witcher 3 and other games (curious, since I have a Z68 board, whether a 970 would be bandwidth limited by a 750 Ti used for PhysX?), to hold me over until late 2016/early 2017 for a 4K/VR Skylake-E build.

Wow, that is a smoking deal for you guys in the UK. Maybe you can wait 19 more days to see if prices on R9 290X/970 cards drop even more. Since TW3 already comes with GTX970, that sweetens the deal more. Without seeing the performance in TW3, you might as well wait until the game launches before you make your purchase if you waited this long already.

I don't think you'll be limited that much by the bandwidth of Z68 but I think it's wasteful to have a 970 + GTX750Ti for PhysX. At that point might as well get a 980. With the 980 you'll get 15%+ more performance in almost all games, not just those with PhysX. For that reason I do not see how GTX970+GTX750Ti (PhysX) is a good combo.

Yes, but to put things in perspective, AMD doesn't have an 8 Gbps 512-bit GDDR5 controller. I don't think their controller can even hit 8 Gbps. While useful, the controller in the figure is pushed far beyond its efficient zone, and since AMD has nothing like that on the market, the power savings compared to the 512-bit Hawaii controller will be significantly less than the 50W shown in the chart.

That's a good point. Either way, it will be very hard to measure the efficiency of HBM1 vs. 290X's 512-bit GDDR5 since the GPUs will be completely different in terms of specs and die size to start with. If AMD uses a different version of GCN and a more mature 28nm node, that will also make it impossible to isolate the benefit of HBM1 vs. GDDR5 without AMD telling us.

Let's take a more practical approach.

72533.png

72536.png

72539.png


If R9 390X uses 50W more power than Titan X (i.e., 300W), when this is added to total system power usage it's not that big of a deal, since we are already talking about a ~400W rig. If R9 390X is as fast but for less $, runs cooler and quieter, those factors will easily offset the extra 50-60W of power usage imo.
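To put that 50W in context, a quick back-of-envelope sketch using the rough numbers above (illustrative only, not measurements):

# Rough perspective on an extra 50W at the wall (illustrative numbers, not measurements).
titan_x_system_w = 400     # approximate whole-system gaming load from the charts
r9_390x_delta_w = 50       # hypothetical extra GPU power draw
increase = r9_390x_delta_w / titan_x_system_w
print(f"~{titan_x_system_w}W -> ~{titan_x_system_w + r9_390x_delta_w}W, about a {increase:.0%} bump")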

I think OCing performance will be a big piece of the buying decision for enthusiasts though. The Titan X overclocks very well and has excellent scaling.
http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/17

If R9 390X beats a stock Titan X by 5% but only has 5% overclocking headroom, then all NV needs to do is launch a $699 GM200 6GB clocked at 1.2Ghz, and it's game over.

Hot Chips 2015, Sunday-Tuesday, August 23-25, 2015

Nice find! That still doesn't tell us if the card launches before August 23-25 or after though.
 

SolMiester

Diamond Member
Dec 19, 2004
5,330
17
76
Has AMD ever brought out a reference card that runs cooler and quieter than NV's since Fermi?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Has AMD ever brought out a reference card that runs cooler and quieter than NV's since Fermi?

The rumours have the R9 390X WCE edition coming with an AIO CLC as standard. Depending on the price, this might be a great option for enthusiasts, because the Titan X cooler @ 1.4Ghz isn't exactly quiet, while EVGA charges $100 for a separate AIO, effectively making an AIO CLC Titan X an $1100 USD card (for me in Canada, nearly a $1700 AIO CLC card D:).

As far as your question, many of us don't buy reference cards because they can't compete against MSI Gaming/Lightning, Asus Strix/Matrix, Sapphire Tri-X/Vapor-X, Gigabyte Windforce 3x series. I am sure you've seen this before but the Gigabyte Windforce 600W cooler crushed the Titan Black's reference blower.

"In the automatic regulation mode, when the fans accelerated steadily from a silent 1000 RPM to a comfortable 2040 RPM, the peak GPU temperature was 78°C. It is about 20°C better than with the reference cooler and much quieter, too!" ~ Source

Since the Titan X ships with a reference blower right now, that's a big negative in my books and this factor alone would make GM200 6GB and R9 390X better assuming their performance is close.
 

iiiankiii

Senior member
Apr 4, 2008
759
47
91
No doubt about it. An AIO CLC or 3rd-party cooler is needed to keep a system quiet on anything above 250W TDP. The noise level of the blowers is ridiculous. The Titan X is a beastly card held back by the stock blower (yes, the stock cooler looks awesome and is pretty efficient for a blower), but let's face it: it sucks for overclocking.

Most Titan X cards can hit 1500MHz with a better cooler without much effort. The stock blower can hit those numbers too, at the cost of insanely loud noise; we're talking near 80-100% fan speed. With a modded BIOS, the Titan X can probably do 1600MHz! That's nuts! At those clock speeds, it should match/beat the 295X2 in most games!

Bottom line, I'm glad the 390X is going with an AIO CLC. It makes reaching its max potential that much easier, with the added bonus of being much quieter and not throttling.
 

dangerman1337

Senior member
Sep 16, 2010
396
45
91
Wow, that is a smoking deal for you guys in the UK. Maybe you can wait 19 more days to see if prices on R9 290X/970 cards drop even more. Since TW3 already comes with GTX970, that sweetens the deal more. Without seeing the performance in TW3, you might as well wait until the game launches before you make your purchase if you waited this long already.

I don't think you'll be limited that much by the bandwidth of Z68 but I think it's wasteful to have a 970 + GTX750Ti for PhysX. At that point might as well get a 980. With the 980 you'll get 15%+ more performance in almost all games, not just those with PhysX. For that reason I do not see how GTX970+GTX750Ti (PhysX) is a good combo.
Well, I was thinking of a 980, but looking on the same site (I get free delivery) they are £437+, so a good 970 plus a cheap 750 Ti would be cheaper by at least £50.

Nvidia got incredibly greedy with the 980. The 970's questionable VRAM performance in games that fill VRAM regardless, like Total War: Attila (which I play), makes me queasy anyway, but the 980 is just milking it at Nvidia's price point. If there was a 980 for 399 I'd agree, but £450+ for a good one? Screw that.

I may as well wait until early July for a 980 price drop or the AMD 300 series. Most of the bugs in The Witcher 3 will be polished out by then anyway.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Yes, but to put things in perspective, AMD doesn't have an 8 Gbps 512-bit GDDR5 controller. I don't think their controller can even hit 8 Gbps. While useful, the controller in the figure is pushed far beyond its efficient zone, and since AMD has nothing like that on the market, the power savings compared to the 512-bit Hawaii controller will be significantly less than the 50W shown in the chart.

The 290X memory controller runs at 5 Gbps. That's roughly 60% of the 8 Gbps AMD shows in the slide.

Secondly, their suggestion that GDDR5 draws 35W while HBM draws 15W is also wrong. DDR3 @ 128GB/s draws 6.4W, so at 320GB/s (290X) we are maybe looking at 20W tops. GDDR5 draws less than this too, because it runs at a lower voltage than DDR3.

5UjpXSf.jpg



In total, I don't think you'll see a huge power reduction with HBM. Maybe 15-30W tops, I think.
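For what it's worth, the linear scaling being applied here looks like this, written out as a naive sketch (and this extrapolation is disputed further down the thread):

# Naive linear extrapolation of the DDR3 figure quoted above (disputed later in the thread).
ddr3_power_w = 6.4       # DDR3 @ 128 GB/s, from the quoted figure
ddr3_bw_gbs = 128
r9_290x_bw_gbs = 320
estimate_w = ddr3_power_w * (r9_290x_bw_gbs / ddr3_bw_gbs)
print(f"Linear estimate for 320 GB/s: {estimate_w:.0f} W")   # ~16 W, hence the "20 W tops" claim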
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
The 290X memory controller runs at 5 Gbps. That's roughly 60% of the 8 Gbps AMD shows in the slide.

Secondly, their suggestion that GDDR5 draws 35W while HBM draws 15W is also wrong. DDR3 @ 128GB/s draws 6.4W, so at 320GB/s (290X) we are maybe looking at 20W tops. GDDR5 draws less than this too, because it runs at a lower voltage than DDR3.

5UjpXSf.jpg



In total, I don't think you'll see a huge power reduction with HBM. Maybe 15-30W tops, I think.

You do know that voltage has zero impact on power consumption right? Yes lower voltage can limit power consumption as it requires more current to reach a given wattage, so the wiring/traces can then become a limiting factor. But you can just as easily draw 1000W at 12V as you can at 500V.

I cannot find any comparisons between DDR3 and GDDR5. But I did find references saying GDDR5 can draw less than GDDR3 in some cases.
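Spelling out the identity being leaned on here, P = V x I for a fixed-power load (purely illustrative, nothing GPU-specific):

# P = V * I: the same 1000 W can be delivered at very different voltages (illustrative only).
power_w = 1000
for volts in (12, 500):
    amps = power_w / volts
    print(f"{volts} V x {amps:.2f} A = {volts * amps:.0f} W")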
 

thilanliyan

Lifer
Jun 21, 2005
12,064
2,277
126
You do know that voltage has zero impact on power consumption right? Yes lower voltage can limit power consumption as it requires more current to reach a given wattage, so the wiring/traces can then become a limiting factor. But you can just as easily draw 1000W at 12V as you can at 500V.

I cannot find any comparisons between DDR3 and GDDR5. But I did find references saying GDDR5 can draw less than GDDR3 in some cases.

Yeah, I didn't think GDDR5 was really any more power efficient than GDDR3. Actually, I thought it used more power, but had more bandwidth in return.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
You do know that voltage has zero impact on power consumption right? Yes lower voltage can limit power consumption as it requires more current to reach a given wattage, so the wiring/traces can then become a limiting factor. But you can just as easily draw 1000W at 12V as you can at 500V.

I cannot find any comparisons between DDR3 and GDDR5. But I did find references saying GDDR5 can draw less than GDDR3 in some cases.

lol.


lkNBRos.jpg
 

Hitman928

Diamond Member
Apr 15, 2012
6,720
12,420
136
You do know that voltage has zero impact on power consumption right? Yes lower voltage can limit power consumption as it requires more current to reach a given wattage, so the wiring/traces can then become a limiting factor. But you can just as easily draw 1000W at 12V as you can at 500V.

I cannot find any comparisons between DDR3 and GDDR5. But I did find references saying GDDR5 can draw less than GDDR3 in some cases.

Just as a clarification so people don't get confused: voltage does affect power consumption. I know what you're saying, but the way you said it isn't true/complete. You're assuming constant power, meaning that a voltage decrease causes an equal current increase, but this isn't true in all cases, especially not in digital circuits.
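A minimal sketch of why the voltage term matters in digital circuits, using the standard CMOS dynamic-power approximation P ~ alpha*C*V^2*f (the values below are illustrative only, nothing specific to GDDR5):

# CMOS dynamic power: P_dyn ~ alpha * C * V^2 * f.
# Lowering voltage at the same frequency reduces power; current does not rise to compensate
# the way it would for a fixed-power load. Values below are illustrative only.
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    return alpha * c_farads * v_volts ** 2 * f_hz

p_150 = dynamic_power(0.1, 1e-9, 1.50, 1.25e9)
p_135 = dynamic_power(0.1, 1e-9, 1.35, 1.25e9)
print(f"Dropping 1.5 V -> 1.35 V saves ~{(1 - p_135 / p_150):.0%} dynamic power at the same clock")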
 

96Firebird

Diamond Member
Nov 8, 2010
5,743
340
126
The interface retains the single-ended structure of the previous generation, but uses a new clocking technique and new low power modes to consume an average of 2.5 watts at 5 Gbits/second running at 1.5 volts. Macri estimates the new interface reduces power by about 30 percent compared to today's mainstream GDDR3.

Source
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Just as a clarification so people don't get confused: voltage does affect power consumption. I know what you're saying, but the way you said it isn't true/complete. You're assuming constant power, meaning that a voltage decrease causes an equal current increase, but this isn't true in all cases, especially not in digital circuits.

Yeah I could have worded it a bit better looking back.

But my point was, just because voltage is lower, does not mean power consumption is lower, as it is only half of the equation. Current has to be taken into account.
 

Kippa

Senior member
Dec 12, 2011
392
1
81
I saw a rumour link that says the 390X is 4GB. What is the general feeling: is the 390X going to be 4GB or 8GB of VRAM? I'm not going for a 4GB card for numerous reasons. I really do hope it is going to be an 8GB beast.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
The 290X memory controller runs at 5 Gbps. That's roughly 60% of the 8 Gbps AMD shows in the slide.

Secondly, their suggestion that GDDR5 draws 35W while HBM draws 15W is also wrong. DDR3 @ 128GB/s draws 6.4W, so at 320GB/s (290X) we are maybe looking at 20W tops. GDDR5 draws less than this too, because it runs at a lower voltage than DDR3.

5UjpXSf.jpg



In total, I don't think you'll see a huge power reduction with HBM. Maybe 15-30W tops, I think.

That's not how it works at all. You can't just take 6.4W power usage of DDR3 @ 128GB/sec and multiply it until you get 320GB/sec and just arrive at ~20W of power.

If I drop my GDDR5 speeds from 1750mhz to 800mhz, my power usage falls 30W on a single 7970 card which has a 384-bit bus.

You can just ask any R9 290X owner to max overclock their memory and then measure the increase in power usage over a stock 5500mhz memory. There is no way that 8GB of GDDR5 on a 512-bit memory controller running at 8Gbps would only use 20-30W of power. :hmm:
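A very rough sanity check on those numbers, assuming memory-interface power scales about linearly with memory clock at fixed voltage (it usually scales worse, since higher clocks tend to need more voltage too):

# Rough extrapolation from the 7970 anecdote above (384-bit bus, 1750 MHz -> 800 MHz saved ~30 W).
# Assumes interface power scales ~linearly with clock at fixed voltage, which understates reality.
delta_w_7970 = 30
per_mhz_w = delta_w_7970 / (1750 - 800)          # ~0.03 W per MHz of memory clock on a 384-bit bus
bus_scale = 512 / 384                            # Hawaii-style 512-bit bus
extra_w = per_mhz_w * (2000 - 1250) * bus_scale  # going from 5 Gbps to 8 Gbps (1250 -> 2000 MHz)
print(f"~{extra_w:.0f} W extra just for the clock bump, before the 5 Gbps baseline is even counted")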

I saw a rumour link that says the 390X is 4GB. What is the general feeling: is the 390X going to be 4GB or 8GB of VRAM? I'm not going for a 4GB card for numerous reasons. I really do hope it is going to be an 8GB beast.

Up to 8GB of HBM1 means there will likely be both versions to cater to different market needs. For example, a 1080P-1440P user might decide to save $ and get the 4GB version instead, while a multi-monitor and 4K user will go for the 8GB version.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
That's not how it works at all. You can't just take 6.4W power usage of DDR3 @ 128GB/sec and multiply it until you get 320GB/sec and just arrive at ~20W of power.

If I drop my GDDR5 speeds from 1750mhz to 800mhz, my power usage falls 30W on a single 7970 card which has a 384-bit bus.

You can just ask any R9 290X owner to max overclock their memory and then measure the increase in power usage over a stock 5500mhz memory. There is no way that 8GB of GDDR5 on a 512-bit memory controller running at 8Gbps would only use 20-30W of power. :hmm:

You don't seem to read any of the posts. Try again.
AMD's slide says the GDDR5 uses 35W, and 85W including the controller.

I say the 290X GDDR5 is closer to 20W and the total power consumption with the controller is much lower than 85W, because:
A) The power consumption of GDDR5 is lower than the DDR3 figure calculated in your quote.
B) The 290X runs at 5 Gbps. That's 1250MHz instead of 2000MHz, which is a very significant difference in terms of power.
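(For reference, the Gbps-to-clock conversion used in point B, assuming the usual GDDR5 relationship of four bits per pin per command-clock cycle:)

# GDDR5 per-pin data rate to command clock: 4 bits per pin per CK cycle (assumed here).
def gddr5_command_clock_mhz(gbps_per_pin):
    return gbps_per_pin * 1000 / 4

for rate in (5, 8):
    print(f"{rate} Gbps/pin -> {gddr5_command_clock_mhz(rate):.0f} MHz")
# 5 Gbps -> 1250 MHz (290X), 8 Gbps -> 2000 MHz (the AMD slide)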

AMD's slide exaggerates the power consumption compared to what it really is, to market HBM better and make their GPUs look better.
Nvidia does fine with Titan X/980 Ti GDDR5 at 336GB/s, and I don't think that's costing them much power envelope compared to HBM, which I guess was one of the reasons they waited until 2016 for HBM.
Don't get me wrong, HBM is great for bandwidth, but for power consumption it's sort of like DDR4 over DDR3: the saving is there, but it won't make a significant change.

AMD expects us to believe that their upcoming GPUs need 570GB/s, which is why they clocked the GDDR5 at 2000MHz in the example. That's almost double the bandwidth of the 290X, while the shader count has only increased by 45%.
Another thing is that the HBM presented in the slide only has 200GB/s, which means power consumption will be higher than shown once there's more bandwidth and more stacks.

It's a technical presentation mixed with marketing.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You don't seem to read any of the posts. Try again.
AMD's slide says the GDDR5 uses 35W, and 85W including the controller.

I did read your posts. I am telling you that if you take GDDR5 and start clocking it high, power usage skyrockets over 384-bit and 512-bit buses. Measuring GDDR5's power usage without considering the memory controller is pointless since these 2 components work together. The whole point of the slide comparing efficiency with the memory controller included is exactly that AMD's 384-bit and 512-bit controllers are both larger in die size and more power hungry than the HBM1 implementation. Therefore, in AMD's case, their slide is not marketing fluff as you seem to imply.

Also, you make it sound like HBM2 is some completely different tech from HBM1. AMD decided to spend the R&D and implement it earlier, which means for HBM2 they'll have way less work to do than NV. This is just a different approach to adopting new tech: AMD decided to spend the $ now and spend less later, and NV did the opposite. What this means is NV will have more risk associated with a new node + new architecture + HBM2 for Pascal, while AMD will only need to deal with a new node + new architecture, since HBM2 and HBM1 are going to be nearly the same thing, just faster, and whatever AMD learned with HBM1 will transfer directly to HBM2, making the task way easier for them next gen. Again, since AMD also designs APUs, maybe they didn't want to wait for HBM2 and needed to start the work earlier than NV because a lot more products in AMD's lineup would benefit from HBM memory than in NV's portfolio.

I say the 290X GDDR5 is closer to 20W and the total power consumption with the controller is much lower than 85W, because:

No one is disputing that 512-bit GDDR5 @ 5Ghz uses less power than the same controller at 8Ghz. Your assessment that R9 290X's GDDR5 memory only uses 20W of power is meaningless because, without the 512-bit memory controller whose power you leave out, that GDDR5 cannot operate at all. Therefore, the only comparison that matters for engineers here is bandwidth/watt, which is a function of both the power usage of the memory controller and the memory type used. Trying to isolate the power usage of GDDR5 from the memory controller is basically irrelevant since, in the context of AMD's R9 390X design, their choice was either to keep their power-hungry 512-bit memory controller and clock GDDR5 to 7-8Ghz, or go HBM. Not sure how this isn't clear to you. Whatever power usage NV's memory controllers have over 256-bit or 384-bit buses is completely irrelevant to AMD's R9 390X design.

Nvidia does fine with Titan X/980 Ti GDDR5 at 336GB/s, and I don't think that's costing them much power envelope compared to HBM, which I guess was one of the reasons they waited until 2016 for HBM.

This isn't about NV's architecture but AMD's. NV doesn't sell APUs; AMD does. AMD needs to adopt HBM for other applications, not just GPUs. For that reason, it's a lot more complex than NV waiting for HBM2 vs. AMD moving to HBM1 earlier. Also, your claim that NV skipped HBM because it hardly improves power usage isn't necessarily true. It could be that NV didn't need to invest in HBM1 because Maxwell's perf/watt was already good enough. With AMD, it's totally different: their architecture isn't as efficient in perf/watt, so they chose HBM1 as a way to improve perf/watt since they can't spend 3-4 years redesigning GCN to be 2X more power efficient. Did you ever think of that?


Don't get me wrong, HBM is great for bandwidth, but for power consumption it's sort of like DDR4 over DDR3: the saving is there, but it won't make a significant change. AMD's slide exaggerates the power consumption compared to what it really is, to market HBM better and make their GPUs look better. AMD expects us to believe that their upcoming GPUs need 570GB/s, which is why they clocked the GDDR5 at 2000MHz in the example.

This entire point you just made contradicts your entire viewpoint. If HBM didn't result in massive improvements in bandwidth/watt and didn't reduce the complexity of the videocard/ASIC, why in the world would AMD even use HBM? You think their engineers just decided on some random Friday morning that they will invest into HBM1 with SK Hynix for 1.5 years and waste tens of millions of dollars because HBM1 marketing sounds cooler than GDDR5? You cannot be serious!

Also, your calculations are way off.

512-bit @ 7Ghz = 448GB/sec, and AMD's slide at 8Ghz already shows the 512-bit controller + GDDR5 at those speeds using 50W more power than a 512GB/sec HBM1 setup. However, you missed what happens if we go from 4GB of GDDR5 over a 512-bit bus to 8GB: that 50W grows even more. So in fact, even if AMD didn't need 8Ghz modules on a 512-bit bus and used 7Ghz modules to give R9 390X just 448GB/sec of bandwidth, the use of 8GB of GDDR5 vs. 8GB of HBM1 would have meant roughly 50W of extra power anyway.
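(The bus-bandwidth arithmetic being used here, for anyone following along: peak bandwidth is bus width in bytes times per-pin data rate.)

# Peak bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps.
def bandwidth_gb_s(bus_width_bits, gbps_per_pin):
    return bus_width_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(512, 7))        # 448 GB/s  (512-bit GDDR5 @ 7 Gbps)
print(bandwidth_gb_s(512, 8))        # 512 GB/s  (512-bit GDDR5 @ 8 Gbps, the AMD slide)
print(bandwidth_gb_s(4 * 1024, 1))   # 512 GB/s  (4 HBM1 stacks, 1024-bit each @ 1 Gbps)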

Even NV gains some power headroom by moving GM200 down to 6GB, which means there is obviously a penalty associated with hanging more GDDR5 off a 384-bit or 512-bit controller. You haven't considered this in your discussion, since we are also talking about a 390X 8GB part, not just a 4GB part. So your point is moot.

Another thing is that the HBM presented in the slide only has 200GB/s, which means power consumption will be higher than shown once there's more bandwidth and more stacks.

The slide shows 512-bit @ 8Ghz vs. 4x1024-bit HBM @ 1Ghz, i.e. 512GB/sec in both scenarios, not 200GB/sec. That means at 512GB/sec, an R9 390X 4GB with a conventional 512-bit GDDR5 controller would have used 50W more power than a 4GB HBM1 version of the same videocard. How are you not understanding that slide? It couldn't be any clearer. Double the memory and the power usage gap gets even bigger.

Also, it's amazing that you think AMD is using HBM1 as mostly a marketing move just because NV got away with not using HBM, ignoring that NV's and AMD's architectures are different, and ignoring the possibility that AMD's R9 390X might be as fast as or faster than the Titan X at a much smaller die size.

Your analysis ignores that there are way too many factors involved in GPU design here with the move to HBM - reduced PCB complexity, reduced memory controller complexity --> reduced GPU die size --> experience gained for APUs by adopting HBM1 earlier, etc.

Just because NV decided to wait for HBM2 doesn't mean HBM1 is mostly a marketing exercise, with small power consumption reduction and little other benefits. I mean it's remarkable you would think you are smarter than 1000s of engineers who get paid 6 figures at AMD and know GCN architecture better than everyone on this forum combined.

This continuous theme of downplaying any newest technology / advantage that AMD embraces has been around for a long time on these forums. If you are going to provide counter-arguments why it's not that great, at least have a stronger argument.
 