[Sweclockers] Radeon 380X coming late spring, almost 50% improvement over 290X

Page 11 - AnandTech community

tential

Diamond Member
May 13, 2008
7,348
642
121
I'm pretty sure the 390x is a single core video card. After some quick googling, I found that the 390x is projected to be 65% faster than the 290x, which would make it roughly 10% faster than the 380x.

Kind of makes sense, as AMD usually doesn't have very large deltas between their halo card and their main high-end performance part.

So 380x, 390 and 390x are all water cooled?

Also, that'd put 3 cards within pretty close performance of each other?
Generally there is a jump from *80x to *90 right?
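Quick sanity check on those rumored percentages (both are unconfirmed thread figures; relative speedups compose by division, not subtraction):

```python
# Rumored uplifts over the 290X (thread figures, not confirmed).
r380x = 1.50  # 380X = 290X + 50%
r390x = 1.65  # 390X = 290X + 65%

# Relative speedups compose by division, not subtraction:
delta = r390x / r380x - 1
print(f"390X over 380X: {delta:.1%}")  # 10.0%
```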
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
So 380x, 390 and 390x are all water cooled?

Also, that'd put 3 cards within pretty close performance of each other?
Generally there is a jump from *80x to *90 right?

I haven't heard anything about a 390 yet, so one may not even be planned. The only reports have been about the 380x and the 390x.
 
Feb 19, 2009
10,457
10
76
Well there's a few different leaks around.

One is that the R380X is a Tonga successor, a smaller mid-range die: ~200W and 980 + ~10% performance.

The other is that the R390X is a massive die: ~300W and 980 + ~50%.

Then there are these more recent leaks, which say there's no R390X.

R380X is the new top single-GPU, ~300W and 980 + 30%, or ~R290X + 50%.

Take it with a grain of salt until you see a die shot. That's when you know for real that AIBs have samples in their hands, and they are, 99% of the time, the source of leaks.

Edit: I would lean towards 50% above the R290X; it's just too unbelievable for it to be +80% over the R290X given it's on 28nm.
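As a cross-check, the two leaked ratios (both rumors) only fit together if the 980 sits around 15% above the 290X:

```python
# Cross-checking the two leaked ratios (both rumored).
vs_980 = 1.30   # "980 + 30%"
vs_290x = 1.50  # "~R290X + 50%"

# Both hold only if the 980 sits about 15% above the 290X:
print(f"implied 980 vs 290X: {vs_290x / vs_980:.2f}x")  # 1.15x
```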
 
Last edited:

Elixer

Lifer
May 7, 2002
10,371
762
126
Looking very strong, which is good because NVidia will have to lower its prices.

So how much faster is the 390x going to be compared to the 380x?

Why do you think that will happen ?
What if AMD makes this card $900 and nvidia stays at current prices ?

AMD really needs to bring in $$$, and this would be one way of doing it.
Sorta makes sense, then the current price of the 290/x will fill in the price gaps.
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Why do you think that will happen ?
What if AMD makes this card $900 and nvidia stays at current prices ?

AMD really needs to bring in $$$, and this would be one way of doing it.
Sorta makes sense, then the current price of the 290/x will fill in the price gaps.

Well, with the new CEO we can't be sure what marketing direction they will take.
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Why do you think that will happen ?
What if AMD makes this card $900 and nvidia stays at current prices ?

AMD really needs to bring in $$$, and this would be one way of doing it.
Sorta makes sense, then the current price of the 290/x will fill in the price gaps.

That won't happen. A company just can't go from a budget brand to a luxurious one even if the performance is there, like the backlash with the 7970 release and its price tag. It's quite worrisome, since even if AMD is able to churn out great products that outdo its competition, it may not be able to rake in ALL the potential returns due to its budget image, and every cent matters since we are talking millions in volume.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
My bad. That number was lower than ones I previously saw.

And that EVGA SSC seems to be a very hot example for a 970.

[attached charts: 67934.png, 67935.png]


(No, I am not comparing to the 290X in these images, I realize those are reference design and not the better coolers from AIBs)

Let's be fair here: regardless of what AIBs can make happen, AMD made the card to run at 90+. Can these cards HANDLE that? Sure. Is it good? Eh, not really. If it's fully within the designed spec, it should be fine, but it's a tremendous waste of power, which means an awful lot of extra heat (leakage rises with temperature, so cards kept cool enough should draw less power). So the AIBs came to the rescue, and that's good, but AMD is relying on this instead of creating a card that doesn't need extravagant cooling solutions.

You guys take one thing I say and jump on it like rabid wolves, yet ignore the greater points I stress.

It's like you guys can't comprehend that, OMG, I have actually agreed with most of what you have said. I guess I should make it more clear, in that I am playing a bit of devil's advocate and also stressing market consideration, alongside gamer preference and simply an appeal for technological progress.

I assume you know that cards throttle running Furmark? There is zero cred given to using Furmark to show power consumption or temps.
 

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
Chiphell (extremely mixed record so take this one with a truckload of salt) has published what it purports to be new benchmarks.

A summary:

It shows a 64% increase over the 290X in 3DMark.
On a "performance index" of various (and unknown) games, the delta over the 290X shrinks to 51%.

Interestingly, on the TDP issue, it states that it is not 300 W but instead the same TDP as the 290X. However, the TDP of the 290X and its actual usage are typically different (real-world power draw is often lower for AMD cards: around 210-220 W against the 250 W TDP).
That's good news, if it is true.

The temperature is 73 Celsius, which is thanks to a hybrid water and air cooler.

The thread-starter also claims that there will be air coolers, but it is unknown if this will be provided by AMD or if it is up to AIBs to implement this (I'm guessing the latter).

Commentary:

All in all, rumors are rumors, but if this is true, then we're seeing performance basically in line with previous leaks but improvements on the TDP front. If we get the same kind of wattage and cooling as on a custom-cooled 290X but with 50% more powah, that's a deal that would be competitive as it stands.
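Quick check on those figures in Python (all of them rumors from Chiphell, so treat accordingly):

```python
# Quick check on the rumored Chiphell figures (all rumors, treat accordingly).
tdp_290x = 250.0        # W, official 290X TDP
real_draw_290x = 215.0  # W, midpoint of the 210-220 W range cited above
gain_index = 1.51       # rumored gain over the 290X on the game index

# Real-world draw sits well under the official TDP:
print(f"290X real draw vs TDP: {real_draw_290x / tdp_290x:.0%}")  # 86%

# Same TDP with +51% performance implies +51% perf/W at the board level:
print(f"implied perf/W vs 290X: {gain_index:.2f}x")  # 1.51x
```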

Of course, this card will likely face off against GM200, so the question remains whether it is enough.
If Nvidia manages a 40% gain over the 980 with the "1080" (or whatever it will be called), then they will inch out this card, and likely with a lower wattage to boot.

Also, it's interesting that the OP doesn't talk about process. I want to see 20 nm, but if it is 28 nm, then AMD has blown a chance at a competitive advantage. They'll have to get to 14 nm early next year to have any lead at all over Nvidia.
 

Elixer

Lifer
May 7, 2002
10,371
762
126
That won't happen. A company just can't go from a budget brand to a luxurious one even if the performance is there, like the backlash with the 7970 release and its price tag. It's quite worrisome, since even if AMD is able to churn out great products that outdo its competition, it may not be able to rake in ALL the potential returns due to its budget image, and every cent matters since we are talking millions in volume.

In this case, since it supposedly comes with a custom water loop, we are talking a much higher price than cards with air cooling.

So, out of the gate, they are catering to users who don't really care about the cost; they just want the fastest card they can get. If they are willing to pay $1000 for nvidia's top of the line, they would be just as willing to pay the same price for something that is (much?) faster than anything else out when this is released.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
I mean, let's be real: is AMD seriously going to release a flagship single-GPU card that has a reference/mandatory hybrid/AIO cooling solution? That should not be happening, plain and simple.
I mean, let's be real: is Nvidia seriously going to release their flagship FX5800 Ultra single-GPU card that has a reference/mandatory heatpipe cooling solution? That should not be happening, plain and simple.

Times and technology change.

Every reviewer of the AMD 295X2 was shocked at how quiet the card was when compared not only to AMD cards, but also Nvidia cards. If AMD can put a near silent AIO cooler on their next gen flagship cards and keep the price reasonable, why is that bad?
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,069
3,886
136
In this case, since it supposedly comes with a custom water loop, we are talking a much higher price than cards with air cooling.

no we aren't, i typed a massive post about it but then chrome crashed..... so all i would say is use your brain and think about it: look at the price of AIO water vs air on the cpu side, then think about what actually needs to be cooled on the 380x vs other GPUs etc.........
 

Gloomy

Golden Member
Oct 12, 2010
1,469
21
81
This is on the low end of what I expected. I am kinda disappointed.

If this is it, then it's probably not an HBM design.
 

DiogoDX

Senior member
Oct 11, 2012
757
336
136
This is on the low end of what I expected. I am kinda disappointed.

If this is it, then it's probably not an HBM design.
Why not? For me it's in line with 45% more stream processors and a Tonga-like front end.

More bandwidth will only help if the card is bandwidth starved. Maybe in a compute bandwidth-bound application the gains will be larger.
 

Gloomy

Golden Member
Oct 12, 2010
1,469
21
81
Why not? For me it's in line with 45% more stream processors and a Tonga-like front end.

More bandwidth will only help if the card is bandwidth starved. Maybe in a compute bandwidth-bound application the gains will be larger.

Don't get me wrong, it's still a monumental performance and efficiency increase. Just nothing that couldn't be done without HBM (I feel).

GM200 should be pretty close in performance to this, I think.

Also, that power consumption is probably in Furmark. I'm guessing in games it's going to adhere to the same 250w limit the 290X has.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Don't get me wrong, it's still a monumental performance and efficiency increase. Just nothing that couldn't be done without HBM (I feel).

GM200 should be pretty close in performance to this, I think.

Also, that power consumption is probably in Furmark. I'm guessing in games it's going to adhere to the same 250w limit the 290X has.

Going with HBM over GDDR5 gives you 512 GB/s of bandwidth versus 320 GB/s on the 290X, a 60% increase.

Continuing with GDDR5, you would need a 512-bit bus width, which, along with the rumored 4096 shaders, would likely lead to an enormous die (possibly too large to make on 28nm). Additionally, the memory would need to run slightly faster than the current 5GHz to provide a total increase in bandwidth of at least 45%, and significantly higher to reach 60%.

In other words, the increase from HBM (gen 1) is fairly close to the rumored shader/performance increase. HBM gen 2 can potentially quadruple the bandwidth of gen 1, and as such would obviously have been overkill, but I don't think gen 2 is ready yet anyway.
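Putting that bandwidth math in a quick snippet (the HBM gen-1 figure is still rumor; the 290X numbers are its published specs):

```python
# Bandwidth math from the post (the HBM gen-1 figure is still rumor).
hbm1 = 512.0        # GB/s, rumored 4-stack first-gen HBM
gddr5_290x = 320.0  # GB/s, 290X: 512-bit bus @ 5 Gbps effective

print(f"HBM1 over 290X: {hbm1 / gddr5_290x - 1:.0%}")  # 60%

# To hit 512 GB/s on a 512-bit GDDR5 bus, the per-pin rate would need to be:
bus_bits = 512
print(f"required GDDR5 rate: {hbm1 * 8 / bus_bits:.0f} Gbps effective")  # 8 Gbps
```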
 
Feb 19, 2009
10,457
10
76
Don't get me wrong, it's still a monumental performance and efficiency increase. Just nothing that couldn't be done without HBM (I feel).

GM200 should be pretty close in performance to this, I think.

Also, that power consumption is probably in Furmark. I'm guessing in games it's going to adhere to the same 250w limit the 290X has.

Chiphell and other Chinese review sites tend to use Furmark (which is invalid for some cards due to major throttling) for power loading, but the results for the other cards line up, i.e. the 780 Ti using 250W is above what it normally uses in games, ~225W, which the R290 is close to.

It's a tremendous achievement to squeeze 50% performance from essentially the same power use. It's almost as good as the leap from Kepler to Maxwell.
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,069
3,886
136
you're ignoring the massive latency decrease of HBM over DDR3/GDDR5. that alone should give a noticeable boost, especially to worst-case code. Stalled ALUs are wasted performance. Getting them fed again sooner is higher utilization per ALU, thus higher performance per ALU.
 
Feb 19, 2009
10,457
10
76
you're ignoring the massive latency decrease of HBM over DDR3/GDDR5. that alone should give a noticeable boost, especially to worst-case code. Stalled ALUs are wasted performance. Getting them fed again sooner is higher utilization per ALU, thus higher performance per ALU.

I think HBM is the bulk of where their efficiency gains come from, due to reduced power use in the overall memory subsystem (which, from previous slides, uses up to a third of the overall GPU TDP!). In theory the lower latency will ensure the ALUs have higher uptime.
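To put rough numbers on that (the one-third figure is the slide-quoted upper bound; the 50% saving from HBM is purely an assumed value for illustration):

```python
# Rough illustration: the one-third figure is the slide-quoted upper bound;
# the 50% saving from HBM is purely an assumed number for illustration.
board_power = 250.0   # W, 290X-class TDP
mem_fraction = 1 / 3  # memory subsystem share of board power (upper bound)
hbm_saving = 0.5      # assumed fraction of memory power HBM saves

mem_power = board_power * mem_fraction
print(f"memory subsystem: ~{mem_power:.0f} W")                   # ~83 W
print(f"potential HBM saving: ~{mem_power * hbm_saving:.0f} W")  # ~42 W
```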
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
Don't get me wrong, it's still a monumental performance and efficiency increase. Just nothing that couldn't be done without HBM (I feel).

GM200 should be pretty close in performance to this, I think.

Also, that power consumption is probably in Furmark. I'm guessing in games it's going to adhere to the same 250w limit the 290X has.

I hope it's accurate, because going on GM204 performance nvidia won't be able to do much better with GM200, and that will force them to compete against one another.
 

Gloomy

Golden Member
Oct 12, 2010
1,469
21
81
What is the current prediction on GM200 performance?

I'm guessing it's something like ~30% over the 980? It's got +50% of everything but probably won't clock as high, yeah?
 
Feb 19, 2009
10,457
10
76
GM200?
It should be +50% at least, because it can clock as high as it needs to; there's plenty of headroom for TDP. If it's just +30% over a 980, it's still only around the 225W mark. They can definitely crank it up and go for 250W.
 

Gloomy

Golden Member
Oct 12, 2010
1,469
21
81
GM200?
It should be +50% at least, because it can clock as high as it needs to; there's plenty of headroom for TDP. If it's just +30% over a 980, it's still only around the 225W mark. They can definitely crank it up and go for 250W.

50% more chip at the same clocks as 980 would be north of 250w man.

...right?
 
Feb 19, 2009
10,457
10
76
50% more chip at the same clocks as 980 would be north of 250w man.

...right?

Look at the GK104 -> GK110 scaling; power goes up proportionally to the performance.

Assuming identical perf/W (we have no evidence to assume otherwise), the 980 is 180W + 50% = ~270W. But the thing about the 980 is that it may not be 100% efficient, in that its bandwidth or ROP design doesn't fully saturate its shaders. So we could in theory extract slightly better perf/W from the larger bus and front end of GM200, so ~250W for 50% more performance is reasonable.

Heck, if you believe the 980 is a 170W part, then 170 x 1.5 = 255W.
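That scaling in snippet form (perf/W parity is the stated assumption here, not a measured fact; both 980 board-power figures are the commonly cited ones):

```python
# Scaling the 980's board power by +50%, assuming perf/W holds (an assumption).
for base_watts in (180, 170):  # two commonly cited figures for the 980
    print(f"980 @ {base_watts}W -> GM200 ~{base_watts * 1.5:.0f}W")
# 180W -> 270W, 170W -> 255W
```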

I would prefer NV & AMD to go for the fastest single GPU they can and use extra power to do it. That way, I can get 2 of those GPUs for enough grunt to handle high res gaming comfortably.