[WCCF] AMD Radeon R9 390X Pictured

Status
Not open for further replies.

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Claimed leaked benchmarks. Looks like the same old situation as the 290/290X launch, where it was competing with the Titan and the Ti, though this looks better. Still more interested in the 390. It's highly unlikely I can justify a 390X purchase at the price it will launch at.

http://www.overclock3d.net/articles/gpu_displays/amd_r9_390x_vs_gtx_980_ti_performance_leak/1

[Attachments: four leaked benchmark charts]
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Those benchmarks are from last year, from ChipHell.

And they were pretty much spot on with the Titan X...

Also, the first benchmarks showed another AMD card that was 20% faster than the GTX 980 while using only 8% more power than that card.
 

.vodka

Golden Member
Dec 5, 2014
1,203
1,538
136
If Fiji can OC like Hawaii (~1150MHz) or better (it should, given the stock water cooler)... the X has some headroom left, but not much (at least compared to the stock blower, at the cost of noise). But at the same clocks it looks like Fiji is faster.

Looks good.
 

dacostafilipe

Senior member
Oct 10, 2013
810
315
136
Is it technologically possible for HBM1 to have more than 4GB?

Per stack, HBM1 is limited to 8Gb (= 1GB) with 4-Hi modules (four 2Gb dies).

Just like you use multiple GDDR5 modules per card, you will also be able to use multiple HBM stacks.

How many stacks can be used together with the base die (the logic/controller layer under each HBM stack) provided by Hynix is still unknown.
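As a back-of-the-envelope sketch (assuming the published HBM1 figures of 1GB capacity and 128GB/sec of bandwidth per 4-Hi stack):

```python
# Back-of-the-envelope HBM1 totals by stack count.
# Assumed published HBM1 figures: 1 GB capacity per 4-Hi stack and
# 128 GB/s per stack (1024-bit interface at 1 Gbps per pin).
GB_PER_STACK = 1
GBPS_PER_STACK = 1024 * 1 // 8  # 1024 pins * 1 Gbps / 8 bits = 128 GB/s

for stacks in (1, 2, 4, 8):
    print(f"{stacks} stack(s): {stacks * GB_PER_STACK} GB VRAM, "
          f"{stacks * GBPS_PER_STACK} GB/s")
```

With the rumoured four stacks, that works out to 4GB and 512GB/sec, which is where the 4GB limit being discussed here comes from.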
 

Black Octagon

Golden Member
Dec 10, 2012
1,410
2
81
Those benchmarks are from last year, from ChipHell.

And they were pretty much spot on with the Titan X...

Also, the first benchmarks showed another AMD card that was 20% faster than the GTX 980 while using only 8% more power than that card.


Are they from last year? The Chiphell page is dated March 2015: http://www.chiphell.com/thread-1253102-1-1.html

The main thing I keep coming back to in those charts is that, if true, they mean Fiji XT is basically twice as fast as the 280X (read: a slightly tweaked 7970GHz). Not sure I'm that impressed by that, given that the 7970 is more than 3 years old.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Are they from last year? The Chiphell page is dated March 2015: http://www.chiphell.com/thread-1253102-1-1.html

The main thing I keep coming back to in those charts is that, if true, they mean Fiji XT is basically twice as fast as the 280X (read: a slightly tweaked 7970GHz). Not sure I'm that impressed by that, given that the 7970 is more than 3 years old.

Those benchmarks were posted a long time ago; you must have missed them ;) It now takes about 3 years to double the performance of a flagship GPU (580 -> 780 Ti), which means the doubling over the 7970GHz is in line with that. Neither NV nor AMD has been able to double performance in 18-24 months as was the case in the past. Computerbase's charts show that since September 2009, GPU performance has increased at a rate of roughly 33-35% per annum. That means we should have a card about 2.35X faster than the HD7970GHz from both NV and AMD by now. Both the Titan X and R9 390X are likely to fall short of that mark, but 2-2.15X faster is probably a go for the 390X.
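As a sanity check on the compounding math (assuming the ~33% per-annum figure and roughly three years between the 7970GHz and mid-2015):

```python
# Compound a ~33%/year GPU performance growth rate over ~3 years
# (both figures are the assumptions stated in the paragraph above).
annual_gain = 1.33
years = 3

speedup = annual_gain ** years
print(f"Expected flagship speedup after {years} years: {speedup:.2f}x")
```

That lands at ~2.35x, which is where the 2.35X figure above comes from.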

Using a GPU-limited resolution, the Titan X is 2X faster than a 280X but costs $1K. If AMD can deliver 2X the performance at a more reasonable $599-649 price, it's a HUGE win for the GPU industry in terms of moving the price/performance curve.

[Chart: relative GPU performance at 3840x2160]


Beating the 290X by 36% alone would move the 390X to within 5% of the Titan X's performance. A card like that priced at $649 would already make the Titan X "obsolete." If AMD can beat the Titan X and undercut it, that would be bananas! Imagine GTX 970 SLI performance at $649 in a single card! For that to happen, the R9 390X would need to be 50% faster than the 290X. :)
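The relative-performance arithmetic can be sketched as follows (the 1.43x Titan X figure is my assumption, read off a 4K chart like the one above; the +36% and +50% cases are the hypotheticals in this post):

```python
# Hypothetical 390X performance relative to a Titan X, with the
# R9 290X normalized to 1.00. The 1.43x Titan X figure is an
# assumption taken from a typical 4K relative-performance chart.
titan_x = 1.43

case_36 = 1.36 / titan_x  # 390X beats 290X by 36%
case_50 = 1.50 / titan_x  # 390X beats 290X by 50%

print(f"+36% over 290X: {case_36:.1%} of Titan X")
print(f"+50% over 290X: {case_50:.1%} of Titan X")
```

The +36% case lands within ~5% of the Titan X; the +50% case edges past it, which is the "beat and undercut" scenario.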
 
Last edited:

Black Octagon

Golden Member
Dec 10, 2012
1,410
2
81
Those benchmarks are ancient and were posted a long time ago. It now takes about 3 years to double the performance of a flagship GPU (580 -> 780 Ti), which means the doubling over the 7970GHz is in line with that. Neither NV nor AMD has been able to double performance in 18-24 months as was the case in the past. Computerbase's charts show that since September 2009, GPU performance has increased at a rate of roughly 33-35% per annum. That means we should have a card about 2.35X faster than the HD7970GHz/680 from both NV and AMD by now. Both the Titan X and R9 390X are likely to fall short of that mark, but 2-2.15X faster is probably a go for the 390X.


I agree, but it means that:
1) the numbers in these 'old' charts may be feasible, and
2) the state of gaming GPUs saddens me
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I agree, but it means that:
1) the numbers in these 'old' charts may be feasible, and
2) the state of gaming GPUs saddens me

1) Yes.
2) The state of AAA PC games being gimped by console ports saddens me more. Things like this are why I think it's better to buy $400-500 cards and upgrade more often than to buy $700+ ones. Software is basically 1.5-2 years behind, maybe more. Titan X SLI wasn't even any faster than 980 SLI in GTA V per HardOCP, due to CPU bottlenecks.

BTW, nice overclock on your 7970 card!
 

alcoholbob

Diamond Member
May 24, 2005
6,390
470
126
Those benchmarks are from last year, from ChipHell.

And were pretty much spot on with Titan X...

Also first benchmarks shown a performance of another AMD card that was 20% faster than GTX980 and was using only 8% more power than that card.

Clearly made-up numbers, though. For one, they overstated the Titan X Firestrike Extreme numbers. A Titan X usually gets around 7600-7700; you'd have to overclock to near 1400MHz to get a score of around 8400.

Power consumption is also overstated: on the stock profile it only uses about 230W, not over 250W.
 
Last edited by a moderator:
Feb 19, 2009
10,457
10
76
Clearly made-up numbers, though. For one, they overstated the Titan X Firestrike Extreme numbers. A Titan X usually gets around 7600-7700; you'd have to overclock to near 1400MHz to get a score of around 8400.

Power consumption is also overstated: on the stock profile it only uses about 230W, not over 250W.

It's relative to the 780 Ti, as that's its real power usage profile.

Power usage of a 780 Ti, 30-35% faster than a 980. Bang on.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Clearly made-up numbers, though. For one, they overstated the Titan X Firestrike Extreme numbers. A Titan X usually gets around 7600-7700; you'd have to overclock to near 1400MHz to get a score of around 8400.

Power consumption is also overstated: on the stock profile it only uses about 230W, not over 250W.

The Titan X here is OC'ed, both in the gaming performance and in the power consumption tests.

And it's OC'ed over 18% above stock clocks. So where were they mistaken?

But let's not get dragged away from the topic. It's not about the clocks of the Titan X and Chiphell; it's about the R9 390X.
 

gamervivek

Senior member
Jan 17, 2011
490
53
91
Why would they OC the NVIDIA cards and not AMD's? It's probably the max clock speed attained by NVIDIA's card in boost mode; otherwise the Titan X would be a fair bit faster.
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
As the date nears for the "official" launch of the R9 390/390X, I'm really interested in the absolute jump from my R9 290s (below) to R9 390s.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Power consumption is also overstated: on the stock profile it only uses about 230W, not over 250W.

The Titan X uses 243W of power at load. Their estimate is fairly close to that.

1189MHz could be their overclocked base clock, which results in a boost > 1.4GHz. That would explain their much higher scores, which correspond to a 1.4GHz-boosted Titan X, and power usage > 250W. The boost clock NV advertises is basically a conservative estimate: sure, the Titan X states 1076MHz, but in reality it's more like 1180-1215MHz in games. 1076MHz is the lowest it ever goes.


=======

Computerbase has done a great analysis of what happens to the R9 290X's performance when its GDDR5 is overclocked 26% to 403GB/sec. Performance barely improves by 3.7%, which means Hawaii is not memory-bandwidth starved.
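A quick way to read those numbers is as a bandwidth-scaling efficiency (both percentages taken from the Computerbase result described above):

```python
# Scaling efficiency implied by the quoted Computerbase test:
# +26% memory bandwidth on the R9 290X -> only +3.7% performance.
bw_gain = 0.26
perf_gain = 0.037

efficiency = perf_gain / bw_gain
print(f"Performance gained per unit of extra bandwidth: {efficiency:.0%}")
```

~14% efficiency is close to zero scaling, i.e. Hawaii is compute-limited rather than bandwidth-limited.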

AMD would never pair HBM1, with its 512GB/sec-640GB/sec spec, with a GPU that's only 10-15% faster than a 290X, especially if HBM1 limits VRAM to just 4GB; and they would never price such a 'slow' card at $600-700. If HBM1 were ONLY used to lower power usage and performance went up just 10-15%, the 390X would barely match a 980, which would accomplish little for AMD. The only logical conclusion as to why AMD would be the early adopter of risky HBM1 tech (AMD adopted it knowing there would be massive delays from the R9 290X's launch date) is that it both lowers power usage and provides the necessary bandwidth to feed a MUCH more beastly GPU.

This can only mean one thing: AMD's 390X is going to destroy the 290X.
 
Last edited:

LTC8K6

Lifer
Mar 10, 2004
28,520
1,576
126
The Titan X uses 243W of power at load. Their estimate is fairly close to that.

1189MHz could be their overclocked base clock, which results in a boost > 1.4GHz. That would explain their much higher scores, which correspond to a 1.4GHz-boosted Titan X, and power usage > 250W. The boost clock NV advertises is basically a conservative estimate: sure, the Titan X states 1076MHz, but in reality it's more like 1180-1215MHz in games. 1076MHz is the lowest it ever goes.


=======

Computerbase has done a great analysis of what happens to the R9 290X's performance when its GDDR5 is overclocked 26% to 403GB/sec. Performance barely improves by 3.7%, which means Hawaii is not memory-bandwidth starved.

AMD would never pair HBM1, with its 512GB/sec-640GB/sec spec, with a GPU that's only 10-15% faster than a 290X, especially if HBM1 limits VRAM to just 4GB; and they would never price such a 'slow' card at $600-700. If HBM1 were ONLY used to lower power usage and performance went up just 10-15%, the 390X would barely match a 980, which would accomplish little for AMD. The only logical conclusion as to why AMD would be the early adopter of risky HBM1 tech (AMD adopted it knowing there would be massive delays from the R9 290X's launch date) is that it both lowers power usage and provides the necessary bandwidth to feed a MUCH more beastly GPU.

This can only mean one thing: AMD's 390X is going to destroy the 290X.

Why wait to release this supposed beast?

The longer the wait, the less beastly it is relative to whatever NV brings out.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,576
126
The 290X was released October 2013.

I hope a flagship card released in mid-2015 trashes it in every way.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Computerbase has done a great analysis of what happens to the R9 290X's performance when its GDDR5 is overclocked 26% to 403GB/sec. Performance barely improves by 3.7%, which means Hawaii is not memory-bandwidth starved.

AMD would never pair HBM1, with its 512GB/sec-640GB/sec spec, with a GPU that's only 10-15% faster than a 290X, especially if HBM1 limits VRAM to just 4GB; and they would never price such a 'slow' card at $600-700. If HBM1 were ONLY used to lower power usage and performance went up just 10-15%, the 390X would barely match a 980, which would accomplish little for AMD. The only logical conclusion as to why AMD would be the early adopter of risky HBM1 tech (AMD adopted it knowing there would be massive delays from the R9 290X's launch date) is that it both lowers power usage and provides the necessary bandwidth to feed a MUCH more beastly GPU.

This can only mean one thing: AMD's 390X is going to destroy the 290X.

If the 290X has that much excess bandwidth, then with Tonga's compression improvements AMD should have had no trouble delivering a similar performance improvement (compared to the 390X with HBM) on a tweaked Hawaii memory bus. IMO it was more for power reasons.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
AMD would never pair HBM1, with its 512GB/sec-640GB/sec spec, with a GPU that's only 10-15% faster than a 290X, especially if HBM1 limits VRAM to just 4GB; and they would never price such a 'slow' card at $600-700. If HBM1 were ONLY used to lower power usage and performance went up just 10-15%, the 390X would barely match a 980, which would accomplish little for AMD. The only logical conclusion as to why AMD would be the early adopter of risky HBM1 tech (AMD adopted it knowing there would be massive delays from the R9 290X's launch date) is that it both lowers power usage and provides the necessary bandwidth to feed a MUCH more beastly GPU.

This can only mean one thing: AMD's 390X is going to destroy the 290X.

They might have done it for the power savings alone. Why not? The gains there are substantial. I just hope it isn't similar to the R600, with its state-of-the-art 512-bit memory bus and GDDR4 memory while the core itself was a letdown (only 30% faster).

Thinking about it... I don't think bandwidth was EVER the primary bottleneck.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Why wait to release this supposed beast?

The longer the wait, the less beastly it is relative to whatever NV brings out.

Why wait to release the GTX 470/480 if the HD 5850/5870 beat you to launch? Why wait to release the GTX 670/680 if the HD 7950/7970 beat you to launch? How do you expect people to answer that question?

There are a lot of logical reasons. I'll start with the most important one:

1) Designs on paper do not always accord with what's possible in reality.

"Speaking to journalists - and captured on video by Golem.de - Huang explained how the company discovered that designs on paper don't always accord with what's possible in reality. The issue arose in the way that the Fermi architecture is broken down into multiple Streaming Multiprocessor (SM) clusters that are linked to each other via a series of interconnects.

Huang explained that these interconnects are like the fibres in a piece of fabric - densely packed and tightly layered. On paper, this would allow incredibly fast communication between each of the processing cores and any other part of the chip. However, things didn't quite go to plan.

When the first samples were received back from TSMC, all of the SMs seemed to be working normally - but none of them were able to communicate with each other. Apparently the interconnect was so dense that signals were interfering with each other, completely breaking any connections. This led to something akin to a traffic jam where no information was able to pass across the chip."

Source

>>> It's possible AMD needed more engineering time to hit their required perf/watt and performance targets and ran into engineering issues along the way -- even NV ran into fabric issues with Fermi, which were only discovered once the chip was manufactured. That mistake cost NV a 6-month launch delay.

2) They needed to clear R9 200 inventory.
3) Their 1st or 2nd re-spin was too leaky and didn't hit the targeted GPU clocks.
4) They couldn't get good enough yields, in case the chip is 500mm2+.
5) They ran into issues with HBM1. It probably isn't easy to design for HBM1, as it's a far more complex switch than moving from DDR3 to GDDR3 or from GDDR3 to GDDR5.
6) It could be that AMD miscalculated: they thought 20nm was a go, started designing the GPU on that process, later found out it wasn't going to work, and had to re-engineer everything for 28nm. NV might have been smarter and skipped 20nm from the get-go. Lisa Su did say AMD tried 20nm designs but scrapped them.

Also, did people forget what AMD did with the HD 4850/4870, 5850/5870, and R9 290/290X? Those three launches literally changed the entire high-end GPU landscape, and how many leaks were there before launch? Just because there are zero leaks from AMD doesn't mean the cards are a failure; they could just as easily be amazing.

Contrary to some hearsay on these forums that the R9 390X must beat GM200 in ALL metrics, all I know is that if AMD brings out a card 90% as fast as the Titan X for $549, it'll sell out. Until GM200 6GB launches, NV has only the 980, which is just 6-8% faster than a 290X at 4K, and a $1K Titan X, leaving an enormous gap in its line-up where the R9 390 and 390X can drop in and disrupt the entire high-end market.

They might have done it for the power savings alone. Why not?

Because, as I already said, a card just 10-15% faster than the R9 290X, released 1.5 years after the 290X's launch, cannot be priced at $599+ or it'll automatically fail. That's not even better than the GTX 980. That's why these rumours don't add up. If there is HBM1 and the card is priced at $600+, it's significantly faster than a 290X, which currently sells for just $280-300.

1) HBM1 reduces the complexity of the PCB, which means there is already more than one benefit.

2) Per AMD's own slides, memory power usage drops from about 85W to about 30W, a 55W reduction. The R9 290X uses 270W+. You can't make a card 40-50% faster than an R9 290X on the same 28nm node from just a 55W reduction from a memory-type switch; there have to be other major changes. You are not making the right connection here. If AMD reduces power usage by 55-70W with HBM1 over GDDR5, they are likely going to spend that on shaders, TMUs, ROPs, or GPU clocks. They wouldn't release a 190W flagship card. Therefore, a 290X replacement at 250-275W is not going to be just 10-15% faster than an R9 290X.

[AMD slide: GDDR5 vs. HBM memory power comparison]
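To illustrate what that saving could buy, here is a rough power-budget sketch (the ~250W card budget and the 85W/30W memory figures are assumptions taken from the slide and post above):

```python
# Rough power-budget reallocation from switching GDDR5 -> HBM.
# Assumed figures: ~250 W total card budget, ~85 W for a GDDR5
# memory subsystem vs. ~30 W for HBM (per AMD's slide).
card_budget = 250
gddr5_power, hbm_power = 85, 30

core_gddr5 = card_budget - gddr5_power  # power left for the GPU core
core_hbm = card_budget - hbm_power

extra = core_hbm - core_gddr5
print(f"Extra core budget: {extra} W ({core_hbm / core_gddr5 - 1:.0%} more)")
```

A ~33% larger core power budget at the same total board power is consistent with feeding a substantially bigger GPU rather than just cutting TDP.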


It's also not out of the question that HBM1 reduces the size of the memory controller on the die, which means more transistors can be spent on shaders, TMUs, and ROPs.

I am betting the R9 390X is at least 30% faster than the R9 290X at 4K. All it takes is a 3584-shader chip at 1.05GHz clocks to hit that.
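The 3584-shader claim checks out with simple throughput scaling (assuming performance tracks shader count times clock, with the 290X's 2816 shaders at ~1.0GHz as the baseline):

```python
# Naive throughput scaling: shaders * clock, with the 290X as baseline.
# Assumption: performance scales linearly with shader count and clock.
r290x = 2816 * 1.00   # R9 290X: 2816 shaders at ~1.0 GHz
r390x = 3584 * 1.05   # rumoured Fiji config from the post above

uplift = r390x / r290x - 1
print(f"Theoretical uplift over 290X: {uplift:.0%}")
```

That is ~34% on paper, comfortably above the 30% being bet on; real-world scaling would be somewhat lower.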

HD2900XT/3870 -> 4870
HD5870/6970 -> 7970/7970Ghz
HD7970/7970Ghz -> R9 290X

The performance increase is always > 30% for AMD.
I think this forum, and basically most of the Internet, has such a short memory that people think AMD is done and dusted just because AMD has been quiet. AMD has never been behind NV by more than 20% since the HD 4870 series, and AMD has never released a true next-gen flagship that wasn't at least 30% faster than the last one.

I think some people in this thread will literally be stunned if the R9 390X beats the Titan X by even 1%, because of how conservative some of you guys are.
 
Last edited: