[Benchlife] R9 480 (Polaris 10 >100w), R9

Feb 19, 2009
10,457
10
76
If Polaris is very good on price, power use, and performance, do you see many top-end buyers waiting for Vega?

Many. Those who own a 390/X, 980, 980 Ti, Fury, etc. have no reason to upgrade to Polaris unless they really love that perf/W.

If Polaris can match Fury X, that already is a great result for such a small chip with GDDR5. Don't expect too much, but there's always a small chance of being surprised.
 

Mopetar

Diamond Member
Jan 31, 2011
7,902
6,161
136
Bulldozer was unbelievably bad. They would literally have been better off shrinking what they already had and sending Bulldozer off to the keychain supplier.

While I don't think Zen is going to match or beat Intel's latest and greatest, I'm hoping that it can at least match Haswell-E. Offer that at the right pricing and they'll sell some CPUs.

Bulldozer was the culmination of many bad ideas all at the same time. It was AMD making the same mistakes as Intel did with NetBurst and chasing clockspeeds with a long pipeline while stripping out hardware to pack more "cores" into their chips.

Whether it was merely a case of design hubris, something to do with GF, or something else entirely, I don't know, but I can't imagine that Zen will be anywhere near as bad.

I doubt that they'll touch Intel's performance crown either, but just being remotely competitive is a win as far as I'm concerned. I think that with a competent CPU, AMD's APUs could be a worthwhile purchase, especially when they incorporate the newer GPU tech.

AMD has been somewhat forthcoming with Polaris, so it will be interesting to see how they handle Zen information in the coming months. If Polaris lives up to expectations and AMD handles Zen the same way leading up to launch, I'll expect similarly good things; if they keep a tighter lip, it's probably a lot less promising.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
If Polaris is very good on price, power use, and performance, do you see many top-end buyers waiting for Vega?

Anybody with a 980 Ti/Fury X is probably waiting for that generational 2x perf we will get from GP100/Vega 10.
 

beginner99

Diamond Member
Jun 2, 2009
5,211
1,581
136
Bulldozer was the culmination of many bad ideas all at the same time. It was AMD making the same mistakes as Intel did with NetBurst and chasing clockspeeds with a long pipeline while stripping out hardware to pack more "cores" into their chips.

The more-cores idea was not such a bad one in theory. Server loads are often integer-heavy, and optimizing for that makes sense if you want to sell into servers. However, the long pipeline and high clocks were a completely dumb idea, since performance/watt is king in servers, and maybe GF's process did not help either. There were also serious issues with cache latency: the L3 cache was pretty much useless while taking up a huge amount of die space.
And then software corporations changing licensing to "per-core" fucked them over even more and made the whole idea behind BD's design useless.

On-topic:

I still think Polaris is overhyped. It might match or beat Fury X in certain min-FPS scenarios thanks to new features like primitive discard in hardware, but in others it will be more at Hawaii's level of performance at lower power use. Now consider that you've been able to buy a used 290(X) for $250 or less for over a year now; it will be hard for Polaris to match that in performance/$. AMD will price it around 390 levels ($300-$350) and focus the discussion on performance/watt and entry-level VR. Why should they sell it cheaper? It will be their best chip by many metrics. If it actually performs better than Fury X, I expect a $400+ price tag.

Lisa Su repeated it over and over: no more budget brand. And we saw that with the price increase of the 390 vs. the 290. The 390 offered no real-life benefit over aftermarket 290s.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I believe that AMD just expected software to evolve to use more cores, and it didn't. With the consoles and Mantle they molded the ecosystem to their hardware. I really believe that MSFT would have simply added features to DX11 if Mantle had never come along. They knew that AMD would give it to Khronos, there would be a superior open-source API to take on DX11, and they couldn't let that happen.

There is no doubt, though, that Bulldozer was a failed and badly flawed design. I don't even think APIs and software designed to take advantage of more cores would have helped it. It certainly didn't in the server sector. Software remaining almost purely serial in design just magnified the flaws.
 

Adored

Senior member
Mar 24, 2016
256
1
16
Yeah Bulldozer is just bad in every way, no amount of software is saving that. The performance can be there but the power draw is a joke. It's probably the worst CPU arch ever. AMD basically can't lose with Zen simply due to the massively superior node. Even Jaguar on 14nm would beat Bulldozer.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Yeah Bulldozer is just bad in every way, no amount of software is saving that. The performance can be there but the power draw is a joke. It's probably the worst CPU arch ever. AMD basically can't lose with Zen simply due to the massively superior node. Even Jaguar on 14nm would beat Bulldozer.

A 14nm LPP Excavator would make lots of people change their perspective of the Bulldozer CMT design.
 

Glo.

Diamond Member
Apr 25, 2015
5,726
4,604
136
Anybody with a 980 Ti/Fury X is probably waiting for that generational 2x perf we will get from GP100/Vega 10.

If we factor in that compute power will determine gaming performance in future DX12 games, there will not be 2x more power in the same thermal envelope.

6.7 TFLOPS, 250 W Titan X: 26.8 GFLOPS/W.
10.6 TFLOPS, 300 W GP100: 35.3 GFLOPS/W.

~32% better efficiency.

Already in DX12 we see that the compute power of the GPU determines its performance; that's why the R9 290X is only slightly slower than the GTX 980 Ti: 5.6 TFLOPS vs. 6.1 TFLOPS.
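A minimal sketch of the perf/W arithmetic above, for anyone who wants to check it. The TFLOPS and TDP inputs are the spec/rumor figures from the post, not measurements, and the helper function is purely illustrative:

```python
# Perf/W arithmetic from the post above; the TFLOPS and TDP figures
# are spec/rumor numbers, not measurements.

def gflops_per_watt(tflops: float, watts: float) -> float:
    """Theoretical GFLOPS per watt from a TFLOPS rating and board power."""
    return tflops * 1000.0 / watts

titan_x = gflops_per_watt(6.7, 250)   # ~26.8 GFLOPS/W
gp100 = gflops_per_watt(10.6, 300)    # ~35.3 GFLOPS/W

print(f"Titan X: {titan_x:.1f} GFLOPS/W")
print(f"GP100:   {gp100:.1f} GFLOPS/W")
print(f"GP100 advantage: {gp100 / titan_x - 1:.0%}")  # prints ~32%
```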
 

kraatus77

Senior member
Aug 26, 2015
266
59
101
If we factor in that compute power will determine gaming performance in future DX12 games, there will not be 2x more power in the same thermal envelope.

6.7 TFLOPS, 250 W Titan X: 26.8 GFLOPS/W.
10.6 TFLOPS, 300 W GP100: 35.3 GFLOPS/W.

~32% better efficiency.

Already in DX12 we see that the compute power of the GPU determines its performance; that's why the R9 290X is only slightly slower than the GTX 980 Ti: 5.6 TFLOPS vs. 6.1 TFLOPS.

Titan X @ 1480 MHz: 9.0 TFLOPS (let's say it's using 275 W at this frequency), so that's ~33 GFLOPS/W.

That makes GP100 only ~7% more efficient.

And GP100 has a 300 W TDP even with that efficient HBM2, which the Titan X didn't have.

So there's not much of a perf/W improvement as far as theoretical TFLOPS go, but real-world performance improvements will be much better.
 

Glo.

Diamond Member
Apr 25, 2015
5,726
4,604
136
40% higher boost clock for 10% more power consumed, on the 28 nm node?

Very unrealistic estimate.
 

kraatus77

Senior member
Aug 26, 2015
266
59
101
40% higher boost clock for 10% more power consumed, on the 28 nm node?

Very unrealistic estimate.

The Titan X at default doesn't consume 250 W, more like 225-235 W. It also doesn't run at its stated boost clock but actually at around 1200 MHz, so that means a 23% clock speed bump.

Just for info, every single Kepler/Maxwell card runs at higher than its stated boost clock, because there are two types of boost clocks:
1. the one stated by Nvidia/OEMs;
2. an extra boost added above that, depending on power/temp/utilization.

Anyway, let's say it will use 300 W. That's still only a ~17% perf/W boost, and that's with HBM2.
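A quick check of kraatus77's overclocked-Titan-X scenario, in the same sketch style as above. Only the core count is a published spec; the clock and wattage inputs are the poster's assumptions:

```python
# Theoretical FP32 throughput for an overclocked GTX Titan X (Maxwell),
# using the poster's assumed clock and board-power figures.

CUDA_CORES = 3072  # GTX Titan X (Maxwell) published core count

def peak_tflops(cores: int, clock_mhz: float) -> float:
    """Theoretical FP32 TFLOPS: cores * 2 FLOPs/cycle (FMA) * clock."""
    return cores * 2 * clock_mhz * 1e6 / 1e12

oc = peak_tflops(CUDA_CORES, 1480)  # ~9.1 TFLOPS
for watts in (275, 300):            # the two power assumptions used above
    eff = oc * 1000 / watts
    print(f"{watts} W -> {eff:.1f} GFLOPS/W, GP100 lead: {35.3 / eff - 1:.0%}")
# Prints roughly 7% at 275 W and 16-17% at 300 W, matching the posts above.
```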
 

el etro

Golden Member
Jul 21, 2013
1,581
14
81
TFLOPS is a theoretical number; it doesn't give an exact idea of a card's performance, but it is meaningful for compute workloads.
 

dzoni2k2

Member
Sep 30, 2009
153
198
116
Don't forget GP100 is a DP monster compared to the Titan X. Cut the DP units out and you have a much more efficient and faster GPU. Pascal-based GeForce will probably spank Tesla in single precision and consume less power.
 

Abwx

Lifer
Apr 2, 2011
11,030
3,665
136
The Titan X at default doesn't consume 250 W, more like 225-235 W. It also doesn't run at its stated boost clock but actually at around 1200 MHz, so that means a 23% clock speed bump.

A speed increase of 1.23x results in 1.23 × 1.23 ≈ 1.51x more power, and likely more, since this is at the extremity of the frequency/voltage curve. Besides, in applications like pro computing one won't play with voltage (read: stability) margins to squeeze out a few percent of efficiency.
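For reference, the scaling Abwx is leaning on here is the standard CMOS dynamic-power relation (a simplified model that ignores static leakage):

```latex
% Dynamic power of CMOS logic: activity factor, capacitance, voltage, clock.
%   P_dyn = alpha * C * V^2 * f
% At fixed voltage, power scales only linearly with f; since higher clocks
% need higher voltage, real scaling is superlinear. The f^2 rule of thumb
% used above (1.23^2 ~ 1.51x) sits between the fixed-voltage linear case
% and the cubic case where V rises in proportion to f (P ~ f^3).
P_{\mathrm{dyn}} = \alpha\, C\, V^{2} f
```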
 

Mopetar

Diamond Member
Jan 31, 2011
7,902
6,161
136
A 14nm LPP Excavator would make lots of people change their perspective of the Bulldozer CMT design.

Not really. It's still a bad design that may have sounded good on paper, but didn't work out in reality. Sure, being able to move to a better node would give better performance, but they could just as well make an old K10 chip using the same 14nm LPP process and have something even better.

Just like with Intel's NetBurst, sometimes it's the architecture itself. I wouldn't be terribly surprised to find out that AMD did the same thing with Zen as Intel did with the Core microarchitecture by looking back to an older design that was more effective instead of trying to fix something that turned out to be fundamentally broken.
 

beginner99

Diamond Member
Jun 2, 2009
5,211
1,581
136
Not really. It's still a bad design that may have sounded good on paper, but didn't work out in reality. Sure, being able to move to a better node would give better performance, but they could just as well make an old K10 chip using the same 14nm LPP process and have something even better.

Just like with Intel's NetBurst, sometimes it's the architecture itself. I wouldn't be terribly surprised to find out that AMD did the same thing with Zen as Intel did with the Core microarchitecture by looking back to an older design that was more effective instead of trying to fix something that turned out to be fundamentally broken.

Exactly. 28 nm APUs are slower than 45 nm Lynnfields, which are almost 7 years old, in both single- and multi-threaded work. Yes, they got much better and more efficient than Bulldozer, but they still lag Intel by a huge margin. But this is OT, so I'll stop here.
 

Mopetar

Diamond Member
Jan 31, 2011
7,902
6,161
136
Exactly. 28 nm APUs are slower than 45 nm Lynnfields, which are almost 7 years old, in both single- and multi-threaded work. Yes, they got much better and more efficient than Bulldozer, but they still lag Intel by a huge margin. But this is OT, so I'll stop here.

It's not even just Intel, though. AMD's Bulldozer (and successor) desktop chips and APUs also lagged behind their own K10 chips. Here's a comparison between a Phenom X4 (K10) and one of their Piledriver CPUs, which was fabbed on a 32 nm process, whereas the older K10 was on a 45 nm node.

Don't forget GP100 is a DP monster compared to the Titan X. Cut the DP units out and you have a much more efficient and faster GPU. Pascal-based GeForce will probably spank Tesla in single precision and consume less power.

Do we know anything about GP100 at this point? We have information on P100, but that's not worth much considering it had all the ROPs stripped out which means it's not meant for gaming at all.

Given that they're using some of their wafers on P100 and we only have credible information about what's most likely GP104 (~320 mm² die) that's coming out soon, I don't know when NVidia will start making GP100; if they made something as large as P100, the price would have to be insane to justify the low yields, or we'd be dealing with a massively cut-down chip at this stage.

I don't think we see GP100 until the end of 2017 at the soonest. It's more likely we get a stopgap GP102 that's similar in size to Vega 10 (~450 mm²), if anything at all. I can't imagine NVidia handing the performance crown to AMD without a fight, but there's no way they have enough capacity to ramp up GP100 while doing P100 and getting GP104 launched, never mind any mobile products they'll want to get out. Something's got to give here.
 

airfathaaaaa

Senior member
Feb 12, 2016
692
12
81
AMD reportedly hosted an event designed to showcase its upcoming Polaris GPUs and the Radeon Pro Duo to journalists behind closed doors in Taiwan recently, ahead of an expected official unveiling in May. The big noise coming out of the event is that the switch to the 14nm FinFET fabrication process means the Polaris 10 GPU performs extremely close to the GeForce GTX 980 Ti, but for a drastically cheaper price point.

http://www.game-debate.com/news/?ne...ly Offers Near 980 Ti Performance For 300 USD
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
AMD reportedly hosted an event designed to showcase its upcoming Polaris GPUs and the Radeon Pro Duo to journalists behind closed doors in Taiwan recently, ahead of an expected official unveiling in May. The big noise coming out of the event is that the switch to the 14nm FinFET fabrication process means the Polaris 10 GPU performs extremely close to the GeForce GTX 980 Ti, but for a drastically cheaper price point.

http://www.game-debate.com/news/?ne...ly Offers Near 980 Ti Performance For 300 USD

I've been calling exactly this for weeks. The writing is on the wall, and it's very obvious to anyone who pays attention: 980 Ti-class performance at $299.

Nvidia will give you 25% higher performance for a 100% higher price in DX11 with the 1080 Ti. AMD will be the fastest card you can get under a single 6-pin.
 

Saylick

Diamond Member
Sep 10, 2012
3,207
6,540
136
AMD reportedly hosted an event designed to showcase its upcoming Polaris GPUs and the Radeon Pro Duo to journalists behind closed doors in Taiwan recently, ahead of an expected official unveiling in May. The big noise coming out of the event is that the switch to the 14nm FinFET fabrication process means the Polaris 10 GPU performs extremely close to the GeForce GTX 980 Ti, but for a drastically cheaper price point.

http://www.game-debate.com/news/?ne...ly Offers Near 980 Ti Performance For 300 USD

That's the same info WCCF "reported" a day ago, mentioning the "175W TDP" and performance around "4000 points in Firestrike Ultra", "but it's actually a bit less than that."

Going to take this "news" with a huge grain of salt until more concrete info comes out.

We also want to share some information we learned from our sources about Polaris 11. AMD recently hosted an event in Taiwan to showcase their Polaris GPUs (Polaris 10 and Polaris 11) along with the Radeon Pro Duo card to journalists. We shared slides of the Radeon Pro Duo from that event yesterday. People were able to get some info out of AMD, and it seems like Polaris 10 can be an extremely competitive product.

[Image: AMD Polaris 10 / Polaris 11 GPUs]

The AMD Polaris 10 GPU has a maximum TDP of 175 W, but cards will actually consume much less than that. The GPU was initially built to support HBM memory, but AMD chose to go the GDDR5/X route since it offers better value currently. We will get to see HBM on AMD GPUs when Vega launches, but until then only the Fury series will have HBM support. Polaris 10 is said to have 3DMark Firestrike Ultra performance around 4000 points, which is about what a Radeon R9 Fury X and GeForce GTX 980 Ti score. By 4000 points, we don't mean exactly 4000; it's actually a bit less than that.
 

S.H.O.D.A.N.

Senior member
Mar 22, 2014
205
0
41
Yeah, salt required, but if Polaris 10 is within 10% of a 980ti, I'm buying one. Actually, I'd buy two and ditch my TX.
 

Mopetar

Diamond Member
Jan 31, 2011
7,902
6,161
136
Yeah, salt required, but if Polaris 10 is within 10% of a 980ti, I'm buying one. Actually, I'd buy two and ditch my TX.

Yeah, it's possible, but only in DX12 titles that already favor AMD. Otherwise it's a pretty big stretch to think a ~230 mm² GPU is going to compete head-to-head with a 601 mm² GPU, especially when you consider that GCN still carries a lot of compute hardware, whereas NVidia basically stripped out anything that wasn't useful for gaming with Maxwell.

If the general trend of game development favoring AMD because of console ports continues, it probably gives Polaris 10 a better future outlook, but I'm skeptical that it matches Fury X in most cases let alone a 980 Ti.
 

jpiniero

Lifer
Oct 1, 2010
14,655
5,280
136
Yeah, salt required, but if Polaris 10 is within 10% of a 980ti, I'm buying one. Actually, I'd buy two and ditch my TX.

Feels like that would be tough without GDDR5X if it's only 192-bit. Selling cut P10 models as the 470 and 480 (and releasing immediately), then selling the full P10 with GDDR5X as the 490 when that becomes more available, does kind of make sense.