[BitsAndChips] 390X ready for launch - AMD ironing out drivers - Computex launch


Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Late June could mean a reveal at Computex in early June and availability in late June.

Let's see if the rumor about a 980 Ti release on May 16-26th is true. If it is, Nvidia gets a month alone.

Could be that the 390X is announced in late June too, though.
 
Last edited:

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
For example, I fully expect the summer 2015 MacBook Pro laptops to be refreshed with Maxwell, not the R9 300 series. It's hard to say what's going to happen with the Mac Pro because Maxwell lacks double precision performance, but the 12GB of VRAM would be a huge selling point for Apple to use Quadro GM200 cards in the Mac Pro. Reading some Apple forums, the R9 M290X/295X got so much negative feedback that I think the next iMac refresh will also get Maxwell.

The Retina iMac has complaints about both GPUs, but they're different complaints. The R9 M290X is just a rebranded Pitcairn, and the biggest issue seems to be that it doesn't have enough power to handle a 5K resolution even for light duty. The R9 M295X (Tonga) is strong enough, but has issues with overheating.

I think that Tonga was supposed to be AMD's first 20nm chip, but Global Foundries was unable to execute in time, and it had to be hastily ported to 28nm to make the Retina iMac's release date. That would explain the heat issues - it was supposed to be ~20% more efficient than it really was. If true, this can't have made Apple very happy.

Regarding the Mac Pro, there are several different issues. First of all, AMD was willing to cut Apple a much better deal than Nvidia probably would. Updating the Mac Pro to D700 GPUs costs an extra $1,000 at retail ($500 each). Nvidia sells the Quadro M6000 for $5,000 - more than an entire fully-loaded Mac Pro. Apple isn't a bargain vendor, but even they will balk at an upgrade that doubles or triples the cost of the system. As for the RAM, AMD already offers the FirePro W9100 with 16GB. Nvidia's drivers still lag somewhat in OpenCL, which is what Apple is pushing, so this may have been another factor even if you discount Double Precision performance.

In general, AMD is more flexible than Nvidia on both design and pricing, which is why Apple has chosen them for several purposes. (Their perf/watt was too poor for them to get a design win on portable systems like the MacBook Pro.) For example, the Retina iMac needed a custom TCON. Maybe Nvidia wasn't willing to do that, or wanted to charge a prohibitive fee. If Apple wants a specific SKU, AMD will cooperate, and at a reasonable price; Nvidia might not.
 
Feb 19, 2009
10,457
10
76
Don't you have dual 290/290Xs though? Sounds like you might need dual 390s at least to feel the upgrade as worthwhile.

Wow, you are crazy optimistic, huh :) R9 380X 15-20% faster than a GTX 980, priced at $400-450? I was thinking R9 380X = 980 for $399 would already be amazing given the milking NV has been doing for so long. Even having 980-level performance at $399 would be a breath of fresh air, but 15-20% faster than a 980? Sounds too good to be true!

It depends on the power consumption; if it's not very high, I might grab two for 4K. My electricity rates are around 3-5X more expensive than in most US states, and my summers are brutal, so the extra heat = sweaty temps. At some point, it's just not worth it.

Otherwise I'll downscale to one OC'd 390X for 4K and tone down settings to get it playable. Basically waiting to see the 4K FreeSync options; I definitely need a 30-60Hz FreeSync range at that resolution.

Looks like we got a date: http://hardforum.com/showpost.php?p=1041573857&postcount=2496

Late June, after The Witcher 3 and Batman have been released.

*sigh* I'll be away on holiday, and I certainly don't feel like waiting until early July for The Witcher 3 to surprise me.

AMD GPUs are almost guaranteed to run like crap on this upcoming Batman. In fact, I've boycotted that series ever since the AA fiasco, so it doesn't bother me how it turns out. GTA V & The Witcher 3 are the big-time games this year. GTA V already runs fine on AMD, thanks to Rockstar not going GameWorks-exclusive.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
The Retina iMac has complaints about both GPUs, but they're different complaints. The R9 M290X is just a rebranded Pitcairn, and the biggest issue seems to be that it doesn't have enough power to handle a 5K resolution even for light duty. The R9 M295X (Tonga) is strong enough, but has issues with overheating.

I think that Tonga was supposed to be AMD's first 20nm chip, but Global Foundries was unable to execute in time, and it had to be hastily ported to 28nm to make the Retina iMac's release date. That would explain the heat issues - it was supposed to be ~20% more efficient than it really was. If true, this can't have made Apple very happy.

Regarding the Mac Pro, there are several different issues. First of all, AMD was willing to cut Apple a much better deal than Nvidia probably would. Updating the Mac Pro to D700 GPUs costs an extra $1,000 at retail ($500 each). Nvidia sells the Quadro M6000 for $5,000 - more than an entire fully-loaded Mac Pro. Apple isn't a bargain vendor, but even they will balk at an upgrade that doubles or triples the cost of the system. As for the RAM, AMD already offers the FirePro W9100 with 16GB. Nvidia's drivers still lag somewhat in OpenCL, which is what Apple is pushing, so this may have been another factor even if you discount Double Precision performance.

In general, AMD is more flexible than Nvidia on both design and pricing, which is why Apple has chosen them for several purposes. (Their perf/watt was too poor for them to get a design win on portable systems like the MacBook Pro.) For example, the Retina iMac needed a custom TCON. Maybe Nvidia wasn't willing to do that, or wanted to charge a prohibitive fee. If Apple wants a specific SKU, AMD will cooperate, and at a reasonable price; Nvidia might not.
The M295X/Tonga is trash against mobile Maxwell. The 75W GTX 970M is almost 30% faster than the 125W M295X.

With the countless complaints about temps reaching 95-100C on the iMac, AMD had better have reduced the TDP by a great amount, or I think Apple will have no issues reverting back to Nvidia again.
Reputation is important for Apple, and I don't think the price difference between the M295X and the 970M is much to complain about. The MXM cards are actually priced the same. But I don't know what Apple paid for the M295X.
 
Last edited:

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
It depends on the power consumption; if it's not very high, I might grab two for 4K. My electricity rates are around 3-5X more expensive than in most US states, and my summers are brutal, so the extra heat = sweaty temps. At some point, it's just not worth it.

Otherwise I'll downscale to one OC'd 390X for 4K and tone down settings to get it playable. Basically waiting to see the 4K FreeSync options; I definitely need a 30-60Hz FreeSync range at that resolution.

AMD GPUs are almost guaranteed to run like crap on this upcoming Batman. In fact, I've boycotted that series ever since the AA fiasco, so it doesn't bother me how it turns out. GTA V & The Witcher 3 are the big-time games this year. GTA V already runs fine on AMD, thanks to Rockstar not going GameWorks-exclusive.

The R9 390X with overclocking should max out any demanding game at 4K without MSAA. In a few very demanding games like GTA V or The Witcher 3 you will have to tone down a couple of settings. It's going to be one hell of a GPU. :thumbsup:
 

Head1985

Golden Member
Jul 8, 2014
1,867
699
136
My crystal ball says:
The 390X won't max out games in 4K.

Raghu78, you are really super optimistic. A 380X with HBM, 20% faster than a GTX 980, for $400?
I don't see that happening.
With the budget AMD has, it's more realistic that the 380X is a rebranded Hawaii.
 
Last edited:

destrekor

Lifer
Nov 18, 2005
28,799
359
126
My crystal ball says:
The 390X won't max out games in 4K.

Raghu78, you are really super optimistic. A 380X with HBM, 20% faster than a GTX 980, for $400?
I don't see that happening.
With the budget AMD has, it's more realistic that the 380X is a rebranded Hawaii.

It's not at all impossible to think they could go with a Hawaii revision: neither an outright rebadge nor a completely new architectural design.

Think about it: revise Hawaii, and add HBM. Do you not think it could then beat a 980, when in many ways the 290X isn't THAT far away? Hawaii's bandwidth win over the 980 already places it neck and neck with it at higher resolutions in numerous titles. I'm not saying the 980 isn't better than the 290X; more often than not, it is.

But revisiting Hawaii, perhaps even on GloFo's 28nm SHP process, could enable AMD to extract more out of the base Hawaii XT design. Maybe they don't add HBM due to limited availability, focusing that on the 390 series, but they could even take Hawaii and, on new silicon, include the GCN 1.2 improvements. That would put a 380X quite a ways ahead of the 290X, even without new memory.
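For reference, the bandwidth claim is easy to check from the commonly cited retail specs (512-bit @ 5 Gbps GDDR5 on the 290X vs 256-bit @ 7 Gbps on the 980). A quick back-of-envelope sketch:

Code:
# Peak theoretical bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

r9_290x = bandwidth_gb_s(512, 5.0)  # 512-bit GDDR5 @ 5.0 Gbps -> 320 GB/s
gtx_980 = bandwidth_gb_s(256, 7.0)  # 256-bit GDDR5 @ 7.0 Gbps -> 224 GB/s

print(f"R9 290X: {r9_290x:.0f} GB/s, GTX 980: {gtx_980:.0f} GB/s, "
      f"ratio {r9_290x / gtx_980:.2f}x")

That works out to 320 GB/s vs 224 GB/s, about a 1.43x advantage for Hawaii before HBM even enters the picture.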
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
It's not at all impossible to think they could go with a Hawaii revision: neither an outright rebadge nor a completely new architectural design.

Think about it: revise Hawaii, and add HBM. Do you not think it could then beat a 980, when in many ways the 290X isn't THAT far away? Hawaii's bandwidth win over the 980 already places it neck and neck with it at higher resolutions in numerous titles. I'm not saying the 980 isn't better than the 290X; more often than not, it is.

But revisiting Hawaii, perhaps even on GloFo's 28nm SHP process, could enable AMD to extract more out of the base Hawaii XT design. Maybe they don't add HBM due to limited availability, focusing that on the 390 series, but they could even take Hawaii and, on new silicon, include the GCN 1.2 improvements. That would put a 380X quite a ways ahead of the 290X, even without new memory.

It would be something if the 380/380X were stock 8GB GDDR5 cards. With performance improvements and that bonus, they'd be a serious consideration. If it matches or beats the 980 with 8GB of VRAM at ~$400, a lot of people would be ecstatic. They can keep HBM for the 390s in that case.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Fudzilla is at it again.

1. Fiji XT will be significantly smaller than the R9 290X due to HBM taking less space.
Either this is a lie or the R9 390 doesn't feature HBM, since the XFX R9 390 is just as big as the R9 290X.

2. AMD will NOT announce the new card(s) at Computex. It will happen on a different date in June.
E3 is June 16-18th, so it could happen there. Or it could happen in late June like Kyle Bennett said earlier. So it looks like we either get to buy the cards in late June, or we get them in July/August.
 
Last edited:

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Fudzilla is at it again.

1. Fiji XT will be significantly smaller than the R9 390X due to HBM taking less space.
Either this is a lie or the R9 390 doesn't feature HBM, since the XFX R9 390 is just as big as the R9 290X.

2. AMD will NOT announce the new card(s) at Computex. It will happen on a different date in June.
E3 is June 16-18th, so it could happen there. Or it could happen in late June like Kyle Bennett said earlier. So it looks like we either get to buy the cards in late June, or we get them in July/August.

Kyle also said that the 400 series will be the real new architecture from AMD.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Kyle also said that the 400 series will be the real new architecture from AMD.
That's interesting.
It almost sounds like they got the power reduction from a new process instead of a new architecture. Even for the 390X.

I've seen a 300W TDP for the 390X thrown around a lot on my usual Chinese sites, which fits the power connectors of the XFX R9 390: 6+8 pin (= max 300W). So I guess the 390X gets that power budget as well, like the 290X.
That's 2816 shaders vs 4096 shaders...

The efficiency must come from somewhere. If it's not a new architecture, it must be the process.
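For what it's worth, the "6+8 pin = max 300W" figure is just the PCIe rated limits added up (75W from the slot, 75W per 6-pin, 150W per 8-pin). A minimal sketch; note these are spec ceilings, and real boards can and do draw past them:

Code:
# PCIe rated power limits: 75W from the slot, 75W per 6-pin, 150W per 8-pin.
PCIE_SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def rated_ceiling_w(six_pins: int, eight_pins: int) -> int:
    return PCIE_SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(rated_ceiling_w(1, 1))  # 6+8 pin, as rumored for the XFX R9 390 -> 300
print(rated_ceiling_w(0, 2))  # 8+8 pin, for comparison -> 375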
 

Head1985

Golden Member
Jul 8, 2014
1,867
699
136
That's interesting.
It almost sounds like they got the power reduction from a new process instead of a new architecture. Even for the 390X.

I've seen a 300W TDP for the 390X thrown around a lot on my usual Chinese sites, which fits the power connectors of the XFX R9 390: 6+8 pin (= max 300W). So I guess the 390X gets that power budget as well, like the 290X.
That's 2816 shaders vs 4096 shaders...

The efficiency must come from somewhere. If it's not a new architecture, it must be the process.
HBM
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Late June/August is pretty late. I'm almost at that "throw my hands up and just wait for next year" phase. Currently using the iGPU since I got rid of my 970 (wasn't OK with being one of Nvidia's fools).

Maybe it's better to just wait till Black Friday or see what deals come around during the holidays.
 

msi2

Junior Member
Oct 23, 2012
22
0
66
Fudzilla is at it again.

1. Fiji XT will be significantly smaller than the R9 390X due to HBM taking less space.
Either this is a lie or the R9 390 doesn't feature HBM, since the XFX R9 390 is just as big as the R9 290X.

2. AMD will NOT announce the new card(s) at Computex. It will happen on a different date in June.
E3 is June 16-18th, so it could happen there. Or it could happen in late June like Kyle Bennett said earlier. So it looks like we either get to buy the cards in late June, or we get them in July/August.

Or maybe he meant: "1. Fiji XT will be significantly smaller than the R9 290X due to HBM taking less space."
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
LOL, you get like a 15W reduction or something with HBM over GDDR5. Hardly anything that would let them ditch power pins :p
The GPU core, that's the important thingymabob.
Late June/August is pretty late. I'm almost at that "throw my hands up and just wait for next year" phase. Currently using the iGPU since I got rid of my 970 (wasn't OK with being one of Nvidia's fools).

Maybe it's better to just wait till Black Friday or see what deals come around during the holidays.
Yeah, it sucks. If the 980 Ti launches around May 20th as promised with a good price, AMD is missing a very important launch window :/

Or maybe he meant: "1. Fiji XT will be significantly smaller than the R9 290X due to HBM taking less space."
That's what I meant. You know what I mean. But thanks for the notice; changed it :p
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
If anyone's got the time, try to calculate the power gain of HBM over GDDR5 on a 4GB HBM 390X vs a 4GB GDDR5 290X.
I'm too lazy to do it :p

[AMD Radeon R9 390X specifications chart]
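A rough stab at that calculation, using the GB/s-per-watt figures widely attributed to AMD's HBM material (~10.66 GB/s/W for GDDR5, ~35 GB/s/W for HBM1). These are marketing numbers for the memory interface only, so treat the output as a ballpark:

Code:
GDDR5_GBS_PER_W = 10.66   # assumed, from AMD's oft-quoted HBM slides
HBM1_GBS_PER_W = 35.0     # assumed; quoted as "35+" in the same material

gddr5_w = 320 / GDDR5_GBS_PER_W   # R9 290X: 512-bit @ 5 Gbps = 320 GB/s
hbm1_w = 512 / HBM1_GBS_PER_W     # rumored 390X: 4 HBM1 stacks = 512 GB/s

print(f"GDDR5 @ 320 GB/s: ~{gddr5_w:.0f} W")   # ~30 W
print(f"HBM1  @ 512 GB/s: ~{hbm1_w:.0f} W")    # ~15 W
print(f"Saved: ~{gddr5_w - hbm1_w:.0f} W, with 1.6x the bandwidth")

On those assumptions the interface-side saving is only ~15W, but it comes with 60% more bandwidth.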
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Kyle also said that the 400 series will be the real new architecture from AMD.

That's interesting.
It almost sounds like they got the power reduction from a new process instead of a new architecture. Even for the 390X.

I've seen a 300W TDP for the 390X thrown around a lot on my usual Chinese sites, which fits the power connectors of the XFX R9 390: 6+8 pin (= max 300W). So I guess the 390X gets that power budget as well, like the 290X.
That's 2816 shaders vs 4096 shaders...

The efficiency must come from somewhere. If it's not a new architecture, it must be the process.

Efficiency can be improved with a revision to the architecture, which technically is not a new architecture. Thus, GCN 1.3 could easily be considered "not a new architecture" yet still bring efficiency improvements.

They did that with Tonga - I wouldn't call that a new architecture, yet there are efficiency improvements between Tonga and previous revisions.

Reduced transistor leakage from a better process (28nm SHP) is also a way to get a drop in power usage. I think the new chips will be a combination of the revisions Tonga brought and better performance out of a new process. Combined, that may offer yet more room to improve without going over the silicon budget or requiring a drastic change to the architecture, meaning additional slight revisions to improve performance.

With all the revisions, GCN is still GCN. The next node shrink will be high time to finally move on from GCN to an entirely new architecture.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
That's interesting.
It almost sounds like they got the power reduction from a new process instead of a new architecture. Even for the 390X.

I've seen a 300W TDP for the 390X thrown around a lot on my usual Chinese sites, which fits the power connectors of the XFX R9 390: 6+8 pin (= max 300W). So I guess the 390X gets that power budget as well, like the 290X.
That's 2816 shaders vs 4096 shaders...

The efficiency must come from somewhere. If it's not a new architecture, it must be the process.

What staggers me is that people don't remember the first charts that were on ChipHell.

The first benchmarks showed a 20% increase in performance over the GTX 980 while using 200W of power.

The second leak showed a bit higher power consumption relative to the performance offered: 210-215W, same 20-25% over the GTX 980.

Which means it was not the full Fiji, but a preproduction mule of a cut-down GPU with HBM.

Here are the results: [ChipHell benchmark screenshots]

P.S. The result is for a GPU with around 3500 GCN cores. It must be a cut-down Fiji/Bermuda GPU.
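As a sanity check, here is what those leaked numbers would imply for perf/W against the GTX 980's official 165W TDP, taking the midpoints of the quoted ranges:

Code:
GTX980_PERF, GTX980_TDP_W = 1.0, 165.0   # 980 normalized to 1.0 performance

leaks = {
    "first leak  (+20.0% @ 200.0W)": (1.200, 200.0),
    "second leak (+22.5% @ 212.5W)": (1.225, 212.5),  # midpoints of ranges
}

base = GTX980_PERF / GTX980_TDP_W
for name, (perf, watts) in leaks.items():
    print(f"{name}: {(perf / watts) / base:.2f}x GTX 980 perf/W")

Both land at roughly 0.95-0.99x the 980's perf/W, i.e. approximately Maxwell-level efficiency, if the leaks are to be believed.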
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
LOL, you get like a 15W reduction or something with HBM over GDDR5. Hardly anything that would let them ditch power pins :p
The GPU core, that's the important thingymabob.

HBM provides the same bandwidth as GDDR5 at 1/3rd the power. BTW, that power consumption is the memory controller + memory chips put together. See slide 45 of the AMD presentation:

http://www.microarch.org/micro46/files/keynote1.pdf

As for how much power the memory subsystem of a GPU takes (again, memory controller + memory chips), it's anywhere between 35-50%. Here is a study which mentions that the HD 6990 used 37% of total board power for the MC + GDDR5 chips, whereas the Quadro FX 5800 - which is basically a GTX 280 with a 512-bit GDDR3 memory controller - used 60% of total board power for the MC + GDDR3 chips:

https://users.soe.ucsc.edu/~jzhao/files/islped209-zhao.pdf

The power savings for a GPU like the R9 390X from using HBM instead of a 512-bit GDDR5 memory controller would be 50-60W, which is no small number. :whistle:
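A quick sketch of that estimate under stated assumptions: 290W total board power for a 290X-class card (assumed), HBM doing the same job at ~1/3 the power, and the memory-subsystem fraction swept around the HD 6990's 37% figure:

Code:
BOARD_POWER_W = 290.0        # assumed R9 290X-class total board power
HBM_RELATIVE_POWER = 1 / 3   # "same bandwidth at 1/3rd the power"

for mem_fraction in (0.25, 0.30, 0.37):
    saving = BOARD_POWER_W * mem_fraction * (1 - HBM_RELATIVE_POWER)
    print(f"memory = {mem_fraction:.0%} of board power -> save ~{saving:.0f} W")

That prints savings of roughly 48W, 58W, and 72W, so the 50-60W claim corresponds to the memory subsystem eating around 25-30% of board power.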
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
The M295X/Tonga is trash against mobile Maxwell. The 75W GTX 970M is almost 30% faster than the 125W M295X.

With the countless complaints about temps reaching 95-100C on the iMac, AMD had better have reduced the TDP by a great amount, or I think Apple will have no issues reverting back to Nvidia again.
Reputation is important for Apple, and I don't think the price difference between the M295X and the 970M is much to complain about. The MXM cards are actually priced the same. But I don't know what Apple paid for the M295X.

It's because the AMD chips have better OpenCL performance for a given price, and Apple has been pushing OpenCL massively for things like Adobe CS, which a lot of iMac users have been using. The Mac version of CS moved over to OpenCL first; it's an Apple-backed standard which they invested a lot into. Almost all of the iMac owners I know use them for productivity purposes at home (coding or photography) or in research labs, since they take up less space than a desktop yet have decent-quality screens. I don't know any who really game on them, TBH.


Also, regarding the temperatures, IIRC it all started with someone (maybe on this forum) referencing a thread on a Mac-oriented forum where someone did have an issue with overheating.

But if people read that thread, they would have seen people returning the iMacs and getting replacements which were fine, though they do run hot. The same even applies to the Intel CPUs in them - people have seen those hit 100C+ in mere CPU stress tests.

It appears to have been partly a potential QC problem - and it's not the first time QC issues have led to overheating Mac computers - but also because Apple tends to value thinness as much as they can. It's no different for mates of mine who have rMBPs: if they try gaming on them, they run hot, and those have Nvidia GPUs in them, not AMD ones.

I can still remember the massive overheating with some of the early Intel MacBooks, when the OEM plonked too much thermal compound on the CPU; it was fixed when people re-applied a thinner amount of paste. The G5 iMacs we used to have ran very hot. The first Intel Core-based ones didn't, and they used ATI/AMD graphics chips; Apple has been toing and froing between Nvidia and ATI/AMD for a while now. The first Intel Mac Pros mostly had NV cards.

Plus, I think you need to be careful talking about problems - Apple has had issues with both Nvidia and AMD chips in the past too (the bumps issue being a notable example), so it's easy to cherry-pick either way. But that is the price you pay for maximising thinness or quietness, which means there is little to no leeway in the cooling system.

Even nearly 35 years ago, Steve Jobs demanded the Apple III have no cooling fans to make it quieter, and that led to it having loads of issues. Even the iPad 3 ran hot - my mate jokingly referred to his as his personal hand warmer. It's been an Apple thing for decades.

Edit to post


If you look here:

http://en.wikipedia.org/wiki/Mac_Pro

With every other release of the first-generation Mac Pro, they switched primarily from AMD to Nvidia and back again.

First generation Intel iMac:

http://en.wikipedia.org/wiki/IMac_(Intel-based)#Polycarbonate_iMac

ATI and Nvidia cards

Second generation iMac:

http://en.wikipedia.org/wiki/IMac_(Intel-based)#Aluminum_iMac

Two out of three revisions used AMD/ATI cards exclusively.

Unibody iMac first generation:

http://en.wikipedia.org/wiki/IMac_(Intel-based)#Unibody_iMac

AMD/ATI cards only.

Second generation:

http://en.wikipedia.org/wiki/IMac_(Intel-based)#iMac_with_Retina_5K_display

Mostly Nvidia cards

Retina display version is using only AMD cards.

Interestingly, it appears Apple does not show much preference for any particular vendor.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
The power savings for a GPU like R9 390X by using HBM instead of a 512 bit GDDR5 memory controller would be 50 - 60w which is no small number. :whiste:

At least. In that AMD slide, dating from Dec 2013, when the dual-link interposer and 8GB option weren't available, they are quoting 50W.

If they go 8GB of GDDR5 vs. 8GB of HBM1, this will grow even more. Also, since the memory controller would likely be smaller/less complex, instead of making, say, a 550mm2 GDDR5 card, they can make a 530mm2 HBM1 card. As a result, they can add 20mm2 of shaders, textures, etc. to release a 550mm2 HBM1 chip that's BOTH more efficient and packs more processing power, since you've just used the excess die space that normally would have been allocated to the 512-bit memory controller for functional GPU units. I can't even imagine how beastly a 14nm HBM2 550mm2 AMD chip could end up next generation if only AMD ditched all DP functionality and made a pure gaming monster chip.
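To put a rough number on that die-budget argument: assuming ~5.5mm2 per GCN compute unit at 28nm (a common estimate, not an official figure) and taking the 20mm2 PHY saving above at face value:

Code:
FREED_AREA_MM2 = 20.0    # assumed: 512-bit GDDR5 PHY area minus HBM PHY area
CU_AREA_MM2 = 5.5        # assumed: one GCN CU (64 shaders) on 28nm
SHADERS_PER_CU = 64

extra_cus = int(FREED_AREA_MM2 // CU_AREA_MM2)
print(f"~{extra_cus} extra CUs = ~{extra_cus * SHADERS_PER_CU} extra shaders "
      f"in the same die area")

That's on the order of 3 extra CUs (~192 shaders) for free, on top of the power saving.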

Late June/August is pretty late. I'm almost at that "throw my hands up and just wait for next year" phase. Currently using the iGPU since I got rid of my 970 (wasn't OK with being one of Nvidia's fools).

Maybe it's better to just wait till Black Friday or see what deals come around during the holidays.

I don't think we'll see any next-gen cards faster than an R9 390X/Titan X until September 2016. That's a lot of waiting to be on an IGP. It's not a bad deal to get an R9 290/970 as a hold-over, as those cards are unlikely to lose much value over the next 6-12 months.