[BitsAndChips]390X ready for launch - AMD ironing out drivers - Computex launch

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
1. Stock v stock, the 7970 lost. The 7970 GHz Edition won, but at a great cost in power efficiency, and the reference card that's used as a benchmark by most review sites? It's horrid: 1.25 vcore default, incredibly noisy, very power hungry. It did not matter that custom models had much lower vcore and were cool, quiet and less power hungry. The damage was done, and it was done by AMD. At the launch of the GHz Edition and the 7950 Boost, I said it was a mistake and that going with 1.25 vcore plus a crap reference blower on the cards sent to reviewers would cost them dearly.

Yeah, it seemed like the 7970 GHz Edition was a marketing mess. Not only did it confuse consumers (in the secondary market there is really no price difference between the two), but it was a middle finger to overclocked non-reference 7970s. I wonder if things would have turned out better for it/AMD had it been called the 7975 instead.

Some of the things AMD does don't make sense to me at all though. Like, what is the point of the R9 285? Less power and RAM than the R9 280X it replaced, at the same price point! Now that the 280Xs are cleared out, it seems hard to hate on that GTX 960 with its middling power and 2GB of RAM when AMD's newest product is almost exactly the same. If AMD had a sub-$200 3GB-VRAM GTX 960 killer right now, they would be doing much better than trying to convince potential GTX 960 customers (who are at the top of their mid-range GPU budget) to pay even MORE for a 290 that maybe their PSU can't power. Basically a normal 7970 rebranded would be better at the $200-ish price point than the 285, but since AMD won't give us that, the used mining market is the only place to get value on a sub-$200 budget. Hell, that 7970 should maybe be put in the GPU hall of fame next to the 8800GTX given how well it has aged over its life compared to the 680 and even AMD's own 285.

This should be AMD's golden age. Many of the top PC games are console ports, and the consoles are their tech. DirectX is moving to benefit them, OpenGL is moving to benefit them, and Nvidia is distracted by its mobile failures. They have been putting out the best GPU value in the industry for years, and yet they are still way behind. This feels like the fashion industry and not the meritocracy that the tech industry normally is.
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
This should be AMD's golden age. Many of the top PC games are console ports, and the consoles are their tech. DirectX is moving to benefit them, OpenGL is moving to benefit them, and Nvidia is distracted by its mobile failures. They have been putting out the best GPU value in the industry for years, and yet they are still way behind. This feels like the fashion industry and not the meritocracy that the tech industry normally is.

It's almost unfathomable. But you have to leave the knowledgeable realms; we do not represent a significant portion of the userbase. Go to places like NeoGAF. Read their GPU discussions and PC recommendations. It's frightening how some of the masses think.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Some of the things AMD does don't make sense to me at all though. Like, what is the point of the R9 285? Less power and RAM than the R9 280X it replaced, at the same price point!

As a consumer product, the R9 285 is a fail. I agree with you that AMD should have spent more time working on the R9 285 so that it completely replaces, and is superior to, the R9 280/280X. I wouldn't buy a 285 or a 960 over a 280X or 290 today. I wouldn't even pick the 285 over the 280. I think AMD made a big mistake not refreshing the R9 290/290X series as well. Replacing them with an R9 295 and 295XT or something with 5-10% faster clocks and after-market coolers à la Sapphire Tri-X would have done wonders to fix some of their horrible image. Sure, they would have still used a lot of power, but at least both would be nipping at the heels of the 980 for hundreds of dollars less.

As far as the 285 goes, it has major architectural improvements that very rarely show up, since the card seems to be heavily bottlenecked on the shader and texture side. However, when they do show up, the 285 blows the HD 7950 away by miles, even beating the HD 7970 GHz easily.

[Chart: BioShock benchmark at 2560x1600 (bioshock_2560_1600.gif)]


If AMD used the R9 285 as a test-bed for the architectural improvements of GCN, we should see ALL of these GCN 1.2 changes show up in the non-rebadged R9 300 SKUs. I can't see how AMD will throw away the 70% increase in pixel fill-rate throughput, the 2x geometry performance increase that the R9 285 achieves over Tahiti, and the 40% colour fill-rate/memory bandwidth efficiencies. That means, fundamentally, GCN 1.2/1.3 in the R9 390 series is already miles ahead of the R9 290X. What AMD needs, though, is to push the shader and texture side, or, as with the R9 285, those advantages just won't show up because the card is limited elsewhere. That's actually one of the major reasons I forgot to mention for why a 2x2048 SP card makes little sense - all those efficiencies would be wasted since 2048 SPs and 128 TMUs are too much of a bottleneck.

I think a lot of people don't see what AMD's engineers accomplished with the R9 285's architecture underneath.

With only a 918MHz clock, 32 ROPs and 176GB/sec of memory bandwidth, the card has higher pixel fill-rate performance than the 947MHz, 64-ROP, 320GB/sec R9 290. Conversely, one can say that the 64 ROPs in the 290 series are horribly inefficient if we compare that card to the R9 280X. So imagine 64 or 96 ROPs in an R9 390 series card with 40-70% higher utilization. :biggrin:

[Chart: pixel fill-rate comparison (67234.png)]


But as I said, if the R9 285 is shader and texture limited, well, that insane pixel fill-rate doesn't matter at 1440P and 4K.
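To make the fill-rate argument concrete, here's a rough back-of-the-envelope sketch (Python) using the clock and ROP figures above; theoretical peak pixel fill-rate is just ROPs x clock, and the "measured" side would have to come from an actual fill-rate benchmark, so treat this as an illustration rather than real data:

Code:
# Theoretical peak pixel fill-rate = ROPs x core clock (GPixels/s).
# Clock and ROP counts are the ones quoted above; no measured data here.
cards = {
    "R9 285": {"rops": 32, "clock_ghz": 0.918},
    "R9 290": {"rops": 64, "clock_ghz": 0.947},
}

for name, c in cards.items():
    peak = c["rops"] * c["clock_ghz"]  # GPixels/s
    print(f"{name}: theoretical peak = {peak:.1f} GPixels/s")

# R9 285 ~29.4 GPixels/s vs. R9 290 ~60.6 GPixels/s on paper. If a fill-rate
# test shows the 285 matching or beating the 290, its ROP/bandwidth
# utilization (measured / theoretical) must be roughly twice as high, which
# is the efficiency point being made above.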
 
Last edited:

MeldarthX

Golden Member
May 8, 2010
1,026
0
76
As a consumer product, the R9 285 is a fail. I agree with you that AMD should have spent more time working on the R9 285 so that it completely replaces, and is superior to, the R9 280/280X. I wouldn't buy a 285 or a 960 over a 280X or 290 today. I wouldn't even pick the 285 over the 280. I think AMD made a big mistake not refreshing the R9 290/290X series as well. Replacing them with an R9 295 and 295XT or something with 5-10% faster clocks and after-market coolers à la Sapphire Tri-X would have done wonders to fix some of their horrible image. Sure, they would have still used a lot of power, but at least both would be nipping at the heels of the 980 for hundreds of dollars less.

As far as the 285 goes, it has major architectural improvements that very rarely show up, since the card seems to be heavily bottlenecked on the shader and texture side. However, when they do show up, the 285 blows the HD 7950 away by miles, even beating the HD 7970 GHz easily.

[Chart: BioShock benchmark at 2560x1600 (bioshock_2560_1600.gif)]


If AMD used the R9 285 as a test-bed for the architectural improvements of GCN, we should see ALL of these GCN 1.2 changes show up in the non-rebadged R9 300 SKUs. I can't see how AMD will throw away the 70% increase in pixel fill-rate throughput, the 2x geometry performance increase that the R9 285 achieves over Tahiti, and the 40% colour fill-rate/memory bandwidth efficiencies. That means, fundamentally, GCN 1.2/1.3 in the R9 390 series is already miles ahead of the R9 290X. What AMD needs, though, is to push the shader and texture side, or, as with the R9 285, those advantages just won't show up because the card is limited elsewhere. That's actually one of the major reasons I forgot to mention for why a 2x2048 SP card makes little sense - all those efficiencies would be wasted since 2048 SPs and 128 TMUs are too much of a bottleneck.

I think a lot of people don't see what AMD's engineers accomplished with the R9 285's architecture underneath.

With only a 918MHz clock, 32 ROPs and 176GB/sec of memory bandwidth, the card has higher pixel fill-rate performance than the 947MHz, 64-ROP, 320GB/sec R9 290. Conversely, one can say that the 64 ROPs in the 290 series are horribly inefficient if we compare that card to the R9 280X. So imagine 64 or 96 ROPs in an R9 390 series card with 40-70% higher utilization. :biggrin:

[Chart: pixel fill-rate comparison (67234.png)]


But as I said, if the R9 285 is shader and texture limited, well, that insane pixel fill-rate doesn't matter at 1440P and 4K.


You're wrong about the 285 being a failed product; that card was the remains of chips that didn't cut it for Apple... the 285 was born out of 285X chips that couldn't be used in the new iMacs - which were very much a win for AMD.

The technology that went into the 285 is awesome... we never did see what it really could do because it was never a full chip... the lessons learned there will be going into the new 300 series for sure.

To those that don't truly understand just how forward-thinking the GCN arch is: we're only just beginning to see what it can do.
 
Feb 19, 2009
10,457
10
76
You're wrong about the 285 being a failed product; that card was the remains of chips that didn't cut it for Apple... the 285 was born out of 285X chips that couldn't be used in the new iMacs - which were very much a win for AMD.

The technology that went into the 285 is awesome... we never did see what it really could do because it was never a full chip... the lessons learned there will be going into the new 300 series for sure.

To those that don't truly understand just how forward-thinking the GCN arch is: we're only just beginning to see what it can do.

Doesn't change the fact that for consumers, the 285 is a failed product. It's priced poorly for its performance and 2GB of VRAM, and it lacks the power efficiency to justify a niche the way a 960 can for some low-power builds.
 

B-Riz

Golden Member
Feb 15, 2011
1,595
765
136
Doesn't change the fact that for consumers, the 285 is a failed product. It's priced poorly for its performance and 2GB of VRAM, and it lacks the power efficiency to justify a niche the way a 960 can for some low-power builds.

I would argue that what makes the 285 irrelevant to *us* is the used market for the 3GB 7950 / 7970 / 280 / 280X, and the fact that all of the above-mentioned cards only got better with drivers.

Omega 14.12 was a great release for us AMD users. (Besides recurring gremlins for some people; all PCs are different :()
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136
Doesn't change the fact that for consumers, the 285 is a failed product. It's priced poorly for its performance and 2GB of VRAM, and it lacks the power efficiency to justify a niche the way a 960 can for some low-power builds.

I would have bought a 285. I wanted to buy a 285. AMD could have added me to the list of people who purchased their neutered Tonga core... If only they would sell me one with 4GB RAM.

Performance & efficiency was fine for me, and I'm OK with paying a bit more for a newer/"longer lasting" arch. I use Linux, & Tonga is the first GPU to use AMD's new driver system. Plus I had dreams of the HSA powa of a new GPU with a Carrizo system.

Now who knows? I never bought a 32nm FX either; but I would have at one stage.
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
I just really hope those latest rumors are just rumors... if the 390X really comes with only 4GB of HBM RAM tops, without an 8Gig version there is no way it can beat the Titan X... some games simply lack perfect optimization or already get too close to 4GB, especially in 4K.

So if there's no 8Gig version...they might as well not release at all...because we certainly do not need that card for 1080P gaming ._.".

I can only hope.
 

njdevilsfan87

Platinum Member
Apr 19, 2007
2,342
265
126
Yeah 4GB is not enough for 4K. Initially people were using yesteryear's games that were designed with 1080p or less in mind to bench VRAM usage at 4K. Now that games are being designed with higher resolutions in mind (and thus higher textures) along with larger pools of memory available on the consoles, the requirements are blowing up really fast.

I'm not concerned because I'm not moving to 4K for a while. The 4GB 390X would still make for a very good high refresh rate 1440P card.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I just really hope those latest rumors are just rumors... if the 390X really comes with only 4GB of HBM RAM tops...

I am not sure what rumour you heard that from. Almost all recent rumours state up to 8GB via dual-link interposer.

[Image: AMD Radeon R9 390X specifications slide (2000x3000px-LL-f5350840_AMD-Radeon-R9-390X-Specifications-900x508.jpeg)]


I think that's one of the main reasons for the delay too. It's possible AMD is having driver issues with this dual-link interposer interface, on top of HBM yields.
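For a sense of why a dual-link interposer matters for capacity, here's a quick sketch of the arithmetic, assuming first-generation HBM stacks of 1GB each on a 1024-bit bus at roughly 1Gbps per pin - figures from the public HBM1 spec, not anything confirmed for the 390X (and whether bandwidth also scales with the extra stacks is an open question):

Code:
# Assumed HBM1 per-stack figures (public spec values, not confirmed 390X specs).
STACK_CAPACITY_GB = 1        # 1GB per first-gen HBM stack
STACK_BUS_WIDTH_BITS = 1024  # bus width per stack
EFFECTIVE_RATE_GBPS = 1      # ~1 Gbps per pin

def hbm_config(stacks):
    """Return (capacity GB, aggregate bandwidth GB/s) for a given stack count."""
    capacity = stacks * STACK_CAPACITY_GB
    bandwidth = stacks * STACK_BUS_WIDTH_BITS * EFFECTIVE_RATE_GBPS / 8
    return capacity, bandwidth

for stacks in (4, 8):  # single interposer vs. the rumoured dual-link layout
    cap, bw = hbm_config(stacks)
    print(f"{stacks} stacks: {cap} GB, up to {bw:.0f} GB/s")

# 4 stacks -> 4 GB / 512 GB/s; 8 stacks -> 8 GB / up to 1024 GB/s (if the
# extra stacks also add bandwidth rather than just capacity).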

Yeah 4GB is not enough for 4K. Initially people were using yesteryear's games that were designed with 1080p or less in mind to bench VRAM usage at 4K. Now that games are being designed with higher resolutions in mind (and thus higher textures) along with larger pools of memory available on the consoles, the requirements are blowing up really fast.

I'm not concerned because I'm not moving to 4K for a while. The 4GB 390X would still make for a very good high refresh rate 1440P card.

You are right. Since GameGPU's testing revealed that a nearly 1.4Ghz Titan X isn't even fast enough for 4K at 60 fps without turning down settings, the 4GB vs. 6-8GB debate is more suitable for those going CF 390/390X and GM200 SLI at 4K. In all honesty, this generation sounds like a pure stop-gap. With new architecture from NV, HBM2 and a lower node, it's hard to imagine anyone seriously thinking about "future-proofing" with either GM200 or R9 390.
 
Last edited:

thesmokingman

Platinum Member
May 6, 2010
2,302
231
106
I am not sure what rumour you heard that from. Almost all recent rumours state up to 8GB via dual-link interposer.

[Image: AMD Radeon R9 390X specifications slide (2000x3000px-LL-f5350840_AMD-Radeon-R9-390X-Specifications-900x508.jpeg)]

I think that's one of the main reasons for the delay too. It's possible AMD is having driver issues with this dual-link interposer interface, on top of HBM yields.


It's probably the obvious FUD from the recent TweakTown report, if you could call it that. They even reported that it would be a simultaneous release with a dual-GPU card. lol, when has that ever happened, and on a new platform?
 

Ranulf

Platinum Member
Jul 18, 2001
2,888
2,556
136
I would have bought a 285. I wanted to buy a 285. AMD could have added me to the list of people who purchased their neutered Tonga core... If only they would sell me one with 4GB RAM.

Performance & efficiency was fine for me, and I'm OK with paying a bit more for a newer/"longer lasting" arch. I use Linux, & Tonga is the first GPU to use AMD's new driver system. Plus I had dreams of the HSA powa of a new GPU with a Carrizo system.

Now who knows? I never bought a 32nm FX either; but I would have at one stage.

Ditto. I still would have waited the month or so until the 970/980 was released to compare, but it would have been very tempting with 4GB of RAM and a price between $250-300. It was just too hard to ignore the 280/280X given the market prices, even before the 970/980 hit the market.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
I am not sure what rumour you heard that from. Almost all recent rumours state up to 8GB via dual-link interposer.

[Image: AMD Radeon R9 390X specifications slide (2000x3000px-LL-f5350840_AMD-Radeon-R9-390X-Specifications-900x508.jpeg)]


I think that's one of the main reasons for the delay too. It's possible AMD is having driver issues with this dual-link interposer interface, on top of HBM yields.
Notice it says "Up to 8GB", just like the 290X. It'll likely be a 4GB card with an 8GB option.

I saw another link which listed a 390X and a 395X2, with 4GB and 8GB respectively. I'm not sure this was the link I saw, but it also shows this: http://videocardz.com/54858/amd-rad...tion-395x2-bermuda-390x-fiji-and-380x-grenada
Here is another link: http://www.overclock3d.net/articles/gpu_displays/amd_rx_300_series_specs_revealed/1
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Notice it says "Up to 8GB", just like the 290X. It'll likely be a 4GB card with an 8GB option.

I saw another link which listed a 390X and a 395X2, with 4GB and 8GB respectively. I'm not sure this was the link I saw, but it also shows this: http://videocardz.com/54858/amd-rad...tion-395x2-bermuda-390x-fiji-and-380x-grenada
Here is another link: http://www.overclock3d.net/articles/gpu_displays/amd_rx_300_series_specs_revealed/1

Ya, I get that and it's understandable. For example, I have a 1080P monitor. If I had to choose between a $1000 R9 390 4GB CF setup and an $800 R9 390X 8GB, I would pick the former. Even a single $500 R9 390 4GB will be great for 1080P. Alternatively, if 8GB costs $100 extra per card, that's $200 extra for something I'd never use. I am just using this as a hypothetical example as I am not in the market for a new card at the moment. The point is, if they offer both 4GB and 8GB options, it will be good. However, if they limit their offerings to 99% 4GB cards and only a few examples such as an 8GB R9 390X WCE priced at $799, then it would be a big miss for those guys gaming at 4K.

Also, during the HD 7970 GHz series, we had an idea that the 28nm node would likely be with us until 2016 and that this node would drag out. With this generation, it screams stop-gap, which means way less reason to "VRAM future-proof" imo. I don't see either the R9 390X or the 980 Ti having any chance of beating a GTX 980 Pascal successor come Q4 2016.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
How come 4GB is fine for high end cards people are buying now but in 1-2 months suddenly a card with 4GB is no good?
 

Shehriazad

Senior member
Nov 3, 2014
555
2
46
How come 4GB is fine for high end cards people are buying now but in 1-2 months suddenly a card with 4GB is no good?

It's "suddenly" not enough anymore because stable 60 fps at max settings in 4K on a SINGLE GPU is finally within our grasp.

AAA Games are also slowly being rolled out with higher-than-HD textures in mind.

That's why :p
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
How come 4GB is fine for high end cards people are buying now but in 1-2 months suddenly a card with 4GB is no good?

The more chip power, the more ability to turn up settings that eat RAM. It's the same reason why lower-end cards can get by with less.
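To put rough numbers on that (a sketch only; render-target counts and texture budgets vary wildly per engine, so the buffer count below is an illustrative guess, not a measurement):

Code:
# Approximate render-target memory: width x height x bytes per pixel x number
# of full-screen buffers (G-buffer targets, HDR buffer, post-processing, etc.).
def rendertarget_mb(width, height, bytes_per_pixel=4, buffers=6):
    return width * height * bytes_per_pixel * buffers / (1024 ** 2)

resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
for name, (w, h) in resolutions.items():
    print(f"{name}: ~{rendertarget_mb(w, h):.0f} MB of render targets")

# Render targets alone scale ~4x from 1080p to 4K, and the higher-resolution
# texture packs that faster cards invite grow on top of that - which is why a
# 4GB card that is comfortable at 1080p can run out of room at 4K.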
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
How come 4GB is fine for high end cards people are buying now but in 1-2 months suddenly a card with 4GB is no good?

That is pretty funny. Various enthusiast forums such as ours are filled with GTX 970 3.5GB / 980 4GB SLI users. 980 SLI actually delivers faster FPS than a Titan X OC, but since Sept 2014 millions of gamers have had no issues buying 970 SLI and 980 SLI setups for 1080P-1440P. I don't see how a hypothetical card 20-30% faster than a 980 needs 8GB of VRAM.

Optimally, high-end cards would have 6GB now.

I think it depends on your resolution, whether you are going CF/SLI, and how long you intend to keep the cards for.

What if we have this situation:

R9 390 4GB $499 = 86% of Titan X's performance
R9 390X 4GB $649 = 97% of Titan X's performance
R9 390X WCE 8GB $799 = 105% of Titan X's performance
GTX980 Ti 6GB $799 = 110% of Titan X's performance
Titan X 12GB $999

Now imagine a gamer who only has a 1080P, 1200P or even a 1440P screen. Does it really make sense for this person to go from a $499 card all the way to a $649-800 card? It's not as clear-cut as you are making it, imo. With the HD 7950/7970/7970 GHz vs. 670/680, the situation was easy because the AMD cards didn't carry any price premium for the extra VRAM. But if a gamer is faced with a decision where 6-8GB actually costs more while a 4GB card hits the sweet spot for his/her gaming resolution, then spending extra for 6-8GB is a waste. Instead, that money can be put towards Pascal, etc.
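Running those hypothetical tiers through a quick price-per-performance check makes the same point (all of these numbers are the made-up figures from the list above, normalized to Titan X = 100):

Code:
# Hypothetical SKUs from the list above: price in USD, performance vs. Titan X = 100.
skus = {
    "R9 390 4GB":      (499, 86),
    "R9 390X 4GB":     (649, 97),
    "R9 390X WCE 8GB": (799, 105),
    "GTX 980 Ti 6GB":  (799, 110),
    "Titan X 12GB":    (999, 100),
}

for name, (price, perf) in skus.items():
    print(f"{name}: ${price / perf:.2f} per performance point")

# The $499 card works out cheapest per unit of performance, which is the
# argument against forcing every buyer up to an 8GB flagship they may never
# saturate at 1080P-1440P.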

I agree with you that if one is going strictly for 4K or is aiming for dual-flagships at $1400 ($700 a pop), at that point I would strongly consider 6-8GB cards, but I still think there is a huge market for $500-550 4GB card that's faster than the 980.

It's "suddenly" not enough anymore because stable 60 fps at max settings in 4K on a SINGLE GPU is finally within our grasp.

I think you were being sarcastic? :biggrin: A Titan X @ 1.4GHz can't hit 60 fps at 4K in a lot of games. That means the chance of a stock R9 390X achieving that is zero. We are still 1-2 generations away from a single high-end card getting 60 fps at 4K with the highest settings in games. Even users running dual Titan Xs at 4K often hit 30-40 fps in spots.

Also, I don't know why this sub-forum keeps ignoring the R9 390. The HD 5850, 6950, 7950 and R9 290 were way more popular than the HD 5870, 6970, 7970 and 290X. If the R9 390 4GB is 20% faster than a 980 at $499, that's going to sell like hot cakes, so why saddle that card with a $100+ more expensive 8GB VRAM version that will be wasted in 99% of gaming situations?
 
Last edited by a moderator:

Mopetar

Diamond Member
Jan 31, 2011
8,510
7,766
136
How come 4GB is fine for high end cards people are buying now but in 1-2 months suddenly a card with 4GB is no good?

Because we all need a way to justify dropping $$$ on a new graphics card.

If 4GB was still okay, why would I need to upgrade? On the other hand if 4GB is just for peasants and filthy casuals playing Peggle, obviously I need to invest in a nice new GPU.
 
Feb 19, 2009
10,457
10
76
Initially people were using yesteryear's games that were designed with 1080p or less in mind to bench VRAM usage at 4K. Now that games are being designed with higher resolutions in mind (and thus higher textures) along with larger pools of memory available on the consoles, the requirements are blowing up really fast.

I'm not concerned because I'm not moving to 4K for a while. The 4GB 390X would still make for a very good high refresh rate 1440P card.

This is the exact reason. It usually takes some time for a new console generation to impact game development, which takes a few years for AAA titles. The last batch of games was still developed for PS3/Xbox. Now that's mostly over and the target is purely PS4/Xbone, developers can include higher-quality textures and design bigger levels/instances or detailed open worlds thanks to the larger VRAM capacity.

I suspect 4GB of VRAM won't even be enough for 1440p in another year, and it will be what the mainstream 1080p crowd needs to enjoy games on max textures.

The 390X definitely needs an 8GB variant on release.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
If the 390X does indeed only ship with a 4GB variant, I see this as a sign of desperation from AMD. 8GB of HBM, I'm going to assume, is not at all cheap to produce, and given their position in the market compared to Nvidia, AMD can't charge the same premium and still sell their products as readily as Nvidia can, hence the compromise of only a 4GB card.

AMD knows that at the time of the card's release, 4GB will be enough for the resolutions most gamers are concerned with, and come review time the benchmarks will show the card as more than competitive at a substantially lower cost, which will undoubtedly get people to buy it.

The long-term question is going to be how well it stacks up to, say, a 6GB 980 Ti in a year's time.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Has there ever been any confirmation of whether Fiji is going to have high double-precision compute performance? Or will it be focused only on gaming loads and single-precision compute, as Tonga and all the Maxwell chips are?

The reason I ask is that it wouldn't make much sense to focus on DP performance if the card is limited to 4GB by the restrictions of HBM. Most serious professional users who depend on DP are going to want a lot more. AMD's current HPC flagship, the FirePro W9100 (Hawaii-based), has a whopping 16GB of RAM on board. The original Titan's 6GB would seem to be the bare minimum for these kinds of scientific and mathematical tasks.
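For a sense of what "high DP performance" actually means in numbers, here's a rough sketch using approximate public shader counts, clocks and FP64 ratios; these figures are my own assumptions for illustration and say nothing about what Fiji will do:

Code:
# Peak throughput = shaders x 2 ops/clock (FMA) x clock (GHz), scaled by the FP64 rate.
def tflops(shaders, clock_ghz, fp64_ratio=1.0):
    return shaders * 2 * clock_ghz * fp64_ratio / 1000

w9100_sp = tflops(2816, 0.93)           # Hawaii-based FirePro W9100, FP32
w9100_dp = tflops(2816, 0.93, 1 / 2)    # FirePro Hawaii runs FP64 at 1/2 rate
tonga_dp = tflops(2048, 0.918, 1 / 16)  # Tonga (R9 285) is 1/16-rate FP64

print(f"W9100: {w9100_sp:.1f} TFLOPS FP32, {w9100_dp:.1f} TFLOPS FP64")
print(f"R9 285: {tonga_dp:.2f} TFLOPS FP64")
# Roughly 2.6 TFLOPS FP64 on the 1/2-rate part vs. ~0.24 TFLOPS on the 1/16-rate
# one: the ratio, not the raw shader count, decides whether a chip is a serious DP card.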

At least some regression in RAM capacity is almost certainly going to happen on a FirePro Fiji. No one has ever suggested that 16GB of HBM is even technically possible at this time. The only way around this - and I want to emphasize that this is pure speculation on my part - would be if AMD had a memory subsystem that could treat HBM as, essentially, the next level of cache, and then access GDDR5 on top of that if the HBM runs out. Imagine a graphics card with 4GB of ultra-fast HBM and 16GB of GDDR5 backing it up. Does AMD have the technical ability to do this? I looked for some information on the Xbox One's ESRAM cache, and it's not clear exactly how much of it is automated like a normal cache and how much requires developer involvement. Apparently developers need to flush it once in a while, though that could be due to its very small size, and the "necessity" could be a matter of performance rather than "it'll break if you don't do this". We know that Intel has this kind of technology, with Iris Pro's EDRAM that basically acts as an L4 cache.
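Purely to illustrate the kind of tiered setup being speculated about here (a toy model with made-up resource names, not how any AMD driver actually works), HBM acting as an LRU-managed cache in front of a larger GDDR5 pool could look something like this:

Code:
from collections import OrderedDict

# Toy model: resources live in a large GDDR5 pool; the most recently used ones
# are kept in a small, fast HBM "cache". Sizes in GB, eviction is plain LRU.
HBM_GB = 4

class TieredVram:
    def __init__(self):
        self.hbm = OrderedDict()  # resource name -> size in GB, in LRU order
        self.hbm_used = 0.0

    def touch(self, resource, size_gb):
        """Access a resource; promote it into HBM, evicting LRU entries as needed."""
        if resource in self.hbm:
            self.hbm.move_to_end(resource)
            return "HBM hit"
        while self.hbm_used + size_gb > HBM_GB and self.hbm:
            _, evicted_size = self.hbm.popitem(last=False)  # evict least recently used
            self.hbm_used -= evicted_size
        self.hbm[resource] = size_gb
        self.hbm_used += size_gb
        return "fetched from GDDR5"

vram = TieredVram()
for res, size in [("terrain", 1.5), ("characters", 1.0), ("city", 2.0), ("terrain", 1.5)]:
    print(res, "->", vram.touch(res, size))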