
AMD's next GPU uarch is called "Polaris"

Did any of you actually read the Anandtech article???

They specifically called out that it was capped at 60 fps. AMD also recently released a driver with frame-rate capping capabilities (Frame Rate Target Control) and boasted about how it increased perf/watt. They picked a 950, not a 960, so that more of the chip would have to be spun up to higher frequencies to hit 60 (wider with lower clock = more power efficient, narrower with higher clock = less power efficient). The new chip can likely do more than 60 fps in SWBF, while the 950 is probably nearing its cap, and this was intentionally chosen. Do the math.
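To put rough numbers on the wider-and-slower point: dynamic power scales roughly as C·V²·f, so a wider chip at lower voltage and clock can match a narrower chip's throughput at a fraction of the power. A toy sketch, with made-up illustrative figures (nothing below is an AMD or Nvidia spec):

```python
# Illustrative only: relative dynamic power P ~ C * V^2 * f.
def dynamic_power(width_rel, volts, freq_ghz):
    """Relative dynamic power: switched capacitance (~width) * V^2 * f."""
    return width_rel * volts**2 * freq_ghz

# Narrow chip pushed hard: 1.0x width at 1.15 V, 1.4 GHz.
narrow = dynamic_power(1.0, 1.15, 1.4)
# Wide chip relaxed: 1.5x width at 0.85 V, clocked down so throughput
# (width * frequency) matches the narrow chip exactly.
wide = dynamic_power(1.5, 0.85, 1.4 / 1.5)

print(f"narrow: {narrow:.2f}, wide: {wide:.2f}, ratio: {wide / narrow:.2f}")
# -> the wide chip lands around 0.55x the narrow chip's dynamic power
#    at identical throughput, which is the whole perf/watt game.
```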

Those numbers they showed are certainly true, but they've also picked precisely the best light to show them in. As anyone should expect from a company marketing its product...

I find it amazing how blindly people, time after time, believe in PR and completely throw away any sense of normal source criticism. And we all know the result, because time after time it never lives up to the PR.
 
It goes against your theories, guys. If the chip is capped at 60 FPS and draws 30-40 W in that scenario, it can be a 75 W GPU with no need for power from the PSU beyond the PCIe slot.

If that is the case, how much more performance can be extracted from it? It is a small chip with GDDR5, remember.
 
I dunno; to play devil's advocate, some people were hyping up just using negative PowerTune numbers to get lower power consumption at a marginal performance decrease.

What if they chose a 60 FPS cap specifically because uncapped it would get 66 FPS but power consumption would jump up 30-40%? Isn't that what they did with the Nano?

Either way, I'm eager to see what else they've got. Get that bad Fiji taste out of our mouths.
 
[Image: AMD-Polaris-Architecture-8.jpg]


This image has been going around. Do you guys remember the name of the technique Nvidia uses to save power that would otherwise be wasted processing frames that aren't needed? I assume that when a game is VSYNC-capped at 60 fps, it is saving power versus running at 150 fps or whatever max the hardware is capable of hitting. Are there any benchmarks that compare 60 fps capped power consumption vs. uncapped on a 950? If we had that info, we would be able to better assess just how good Polaris is.
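For what it's worth, the mechanism behind any frame cap is simple: the game sleeps away whatever is left of each ~16.7 ms frame interval instead of rendering extra frames, so the GPU sits partly idle at low clocks. A minimal sketch (render_frame is a hypothetical stand-in for the game's actual work):

```python
import time

TARGET_FRAME_TIME = 1.0 / 60  # 60 fps cap -> ~16.7 ms per frame

def run_capped(render_frame, frames=600):
    """Render a fixed number of frames, sleeping off the slack each frame."""
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        slack = TARGET_FRAME_TIME - (time.perf_counter() - start)
        if slack > 0:
            # The GPU and CPU idle here instead of drawing frames nobody
            # sees; this idle time is where the power saving comes from.
            time.sleep(slack)
```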

Look at this image; both are capped at 60 FPS.

http://www.overclock.net/t/1586552/...t-polaris-gpu-architecture/0_40#post_24760011

850 MHz at 0.8375 V.

Now if we look at the slide:

AMD internal lab testing as of Dec 2, 2015, with an Intel Core i7-4790K, 4x4GB DDR4-2600 memory, Windows 10 64-bit. Board manufacturers may vary configurations, yielding different results. Star Wars Battlefront was tested on the X-Wing Training map using FRAPS. The Polaris card on the Medium preset @1080p scored 60 fps and consumed 86 W with driver 16.10 beta. The GTX 950 on the Medium preset @1080p scored 60 fps and consumed 140 W with driver 359.06.

Totally different.
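Running the slide's own numbers: both cards hold 60 fps, so at iso-performance the ratio of the two wattages is the efficiency gain. How big it looks depends heavily on whether those watts are GPU-only or total system; the 50 W rest-of-system figure below is my assumption, not anything from the slide:

```python
polaris_sys, gtx950_sys = 86, 140   # watts, straight from the slide footnote

# Taken at face value as total system power:
print(gtx950_sys / polaris_sys)     # ~1.63x lower system power at 60 fps

# If ~50 W of that is CPU/board/RAM on both rigs (assumption!), the
# GPU-only gap widens considerably:
rest_of_system = 50
print((gtx950_sys - rest_of_system) / (polaris_sys - rest_of_system))
# -> 90 / 36 = 2.5x, which is exactly the kind of arithmetic that can
#    sit behind a "2.5x perf/watt" marketing claim.
```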
 
What the hell are you talking about? If everyone was saying "WOW" like you mentioned, the market share would be in favor of AMD. But that is not the case.

And also, would you prefer AMD to use a Gamewreck (Gimpwork, I mean GameWorks) title instead? And don't forget that the Gamewreck developer program is the technology that is actually killing the PC gaming industry and bringing pain to all gamers. I think I understand why AMD uses an AMD game title. Duh 😵
Or just keep it simple. It's a marketing slide... so they should market themselves in the best light.

Wow... That's not evil, it's just business. Do we see Nvidia use AMD games in their marketing slides?
Lol, the arguments brought up on here sometimes are unreal.
 
Or just keep it simple. It's a marketing slide... so they should market themselves in the best light.

Wow... That's not evil, it's just business. Do we see Nvidia use AMD games in their marketing slides?
Lol, the arguments brought up on here sometimes are unreal.

Exactly, from your point of view, it is even more ridiculous. :thumbsup:
 
What if they chose a 60 FPS cap specifically because uncapped it would get 66 FPS but power consumption would jump up 30-40%? Isn't that what they did with the Nano?

Nano is magical. It's not just a TDP-capped card. It includes premium materials and hand-picked chips.
 
How much does a 750 Ti consume? 75-ish watts, right?

I wonder how much performance you can get from Polaris in that kind of power envelope. It should pass the 960, but I wonder about the 280X.
 
https://forum.beyond3d.com/threads/...ors-and-discussion.56719/page-22#post-1889628

They were using VSYNC with the Medium preset at 1080p, so the framerate was locked at 60 FPS (if I remember correctly, 16nm FinFET alone provides 2x perf/watt gains over 28nm at iso-performance). The GTX 950 should be capable of much more; even the 50 W 750 Ti can handle Battlefront at those settings with a locked 60, especially in an empty X-Wing Training map.

http://media.gamersnexus.net/images...h/battlefront/battlefront-gpu-1080-medium.png
 
How much does a 750 Ti consume? 75-ish watts, right?

I wonder how much performance you can get from Polaris in that kind of power envelope. It should pass the 960, but I wonder about the 280X.

More like 65 W.

A rough estimate would be ~half the power for the same performance.

However, more power = more performance without increasing the die, so we have to see how the SKUs turn out.
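Back-of-envelope under that ~half-power assumption, using approximate 28nm board powers for scale (the 2x perf/watt gain is assumed, not measured):

```python
# Assumption: a FinFET part delivers the performance of a 28nm part
# drawing twice the power.
perf_per_watt_gain = 2.0
envelope = 65                                          # watts, ~750 Ti class
equivalent_28nm_power = envelope * perf_per_watt_gain  # 130 W

# Rough 28nm board powers, for scale only.
tdp_28nm = {"750 Ti": 65, "GTX 960": 120, "R9 280X": 250}
for card, watts in tdp_28nm.items():
    verdict = "within reach" if watts <= equivalent_28nm_power else "likely out of reach"
    print(f"{card}: {verdict}")
# -> a 65 W Polaris part lands around GTX 960 territory; the 280X, being a
#    much hungrier chip, is a stretch on perf/watt scaling alone.
```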
 
So, showing off working silicon for a product launching in 6 months is impressive?

Nvidia never showed Maxwell prior to launch, and yet Maxwell was impressive at launch.
Nvidia has been talking about Pascal for months, and they haven't even bothered showing a wooden card à la Fermi.

Of course, I think what AMD is doing, showing working silicon, is more impressive.
 
A bit OT, but I find it interesting that all official Polaris slides are branded RTG (Radeon Technologies Group) without any mention of AMD. First time that's happened. A sign that RTG will become an independent company (aka ATI)?
 
A bit OT, but I find it interesting that all slides are branded RTG (Radeon Technologies Group) without any mention of AMD. First time that's happened. A sign that RTG will become an independent company (aka ATI)?

Interesting observation. The ATI part does look like it's up for sale after the management change as well, and this only reinforces that.
 
A bit OT, but I find it interesting that all slides are branded RTG (Radeon Technologies Group) without any mention of AMD. First time that's happened. A sign that RTG will become an independent company (aka ATI)?

RTG doesn't roll off the tongue like ATI did. But I'm down! The faster they shake off the stigmatized AMD acronym, the better.

EDIT:

Interesting observation. The ATI part does look like it's up for sale after the management change as well, and this only reinforces that.

Hmmmm, this is where a boy can dream that Intel snags it. Come on, Intel. Those are premiums I'd gladly pay 😀
 
So it looks like AMD is going the Nvidia route by launching bottom-up with Polaris GPUs. It makes the most sense, since FinFET yields are still a challenge at the foundries, especially as die size goes up. The largest FinFET chip we have seen to date is Apple's A9X at 147 sq mm, fabbed on TSMC 16FF+. Since it's now confirmed that AMD's smallest GPU is a 14LPP part fabbed at GF, it's very likely that AMD uses TSMC for big-die flagship GPUs, as TSMC has better yields compared to Samsung/GF.

I can see AMD's flagship GPU coming in at 300 sq mm, using HBM2, and being manufactured on TSMC 16FF+. I predict a late Q3 launch. I am sure AMD will use multiple SKUs by disabling CUs to improve yields. Raja also mentioned only 2 new FinFET GPUs this year, so AMD has to make up the entire GPU stack using two chips. I speculate that the big-die FinFET GPU will have at least 3 SKUs (and maybe even 4) depending on yields.
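The yield argument is easy to illustrate with the classic Poisson die-yield model, Y = exp(-D*A): yield falls off exponentially with die area at a fixed defect density. The defect density below is a made-up figure for a young FinFET process, not anything published by the foundries:

```python
import math

DEFECTS_PER_CM2 = 0.25  # assumption for an immature FinFET process

def poisson_yield(area_mm2, d=DEFECTS_PER_CM2):
    """Fraction of defect-free dice under the Poisson yield model."""
    return math.exp(-d * area_mm2 / 100.0)  # /100 converts mm^2 to cm^2

for area in (100, 147, 300):  # small Polaris guess, A9X size, flagship guess
    print(f"{area} sq mm: ~{poisson_yield(area):.0%} good dice")
# -> 100 sq mm: ~78%, 147 sq mm: ~69%, 300 sq mm: ~47%; hence launching
#    the small die first while the process matures.
```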
 
So it looks like AMD is going the Nvidia route by launching bottom-up with Polaris GPUs. It makes the most sense, since FinFET yields are still a challenge at the foundries, especially as die size goes up. The largest FinFET chip we have seen to date is Apple's A9X at 147 sq mm, fabbed on TSMC 16FF+. Since it's now confirmed that AMD's smallest GPU is a 14LPP part fabbed at GF, it's very likely that AMD uses TSMC for big-die flagship GPUs, as TSMC has better yields compared to Samsung/GF.

I can see AMD's flagship GPU coming in at 300 sq mm, using HBM2, and being manufactured on TSMC 16FF+. I predict a late Q3 launch. I am sure AMD will use multiple SKUs by disabling CUs to improve yields. Raja also mentioned only 2 new FinFET GPUs this year, so AMD has to make up the entire GPU stack using two chips. I speculate that the big-die FinFET GPU will have at least 3 SKUs (and maybe even 4) depending on yields.

I just figured they'd rehash the whole prior lineup. Not knowing the actual costs, my guess is that besides Fiji it would be cheap enough to manufacture some more Grenadas and Tongas to satisfy the <$250 price points, if, say, baby Polaris can handle the $250-350 spot, with some Fiji derivatives in the $350-550 spot and Polaris Grande coming in on top sometime later (as you said, milking the game plan NV set in motion to record-breaking results).
 
So it looks like AMD is going the Nvidia route by launching bottom-up with Polaris GPUs. It makes the most sense, since FinFET yields are still a challenge at the foundries, especially as die size goes up. The largest FinFET chip we have seen to date is Apple's A9X at 147 sq mm, fabbed on TSMC 16FF+. Since it's now confirmed that AMD's smallest GPU is a 14LPP part fabbed at GF, it's very likely that AMD uses TSMC for big-die flagship GPUs, as TSMC has better yields compared to Samsung/GF.

I can see AMD's flagship GPU coming in at 300 sq mm, using HBM2, and being manufactured on TSMC 16FF+. I predict a late Q3 launch. I am sure AMD will use multiple SKUs by disabling CUs to improve yields. Raja also mentioned only 2 new FinFET GPUs this year, so AMD has to make up the entire GPU stack using two chips. I speculate that the big-die FinFET GPU will have at least 3 SKUs (and maybe even 4) depending on yields.

This demo chip is a pretty small die, but it makes a lot of sense in a mobile product. It could make for great gaming laptops, even outside of the desktop market. It remains to be seen if the final die will come in at the same size, but it would be great to get a refresh in that power class, as the 4-year-old GCN 1.0 Cape Verde and Oland GPUs are definitely long in the tooth. AMD's low end competes very well with Nvidia's if power is unconstrained, but they really don't have anything useful for someone with a PSU-limited prebuilt to compete with the 750 Ti.

~100 mm^2 to 300 mm^2 is a pretty huge gap, though. It will be interesting to see how long it takes in 2017 to fill that spot in the lineup without relying on cut-down dies.
 
I find it amazing how blindly people, time after time, believe in PR and completely throw away any sense of normal source criticism. And we all know the result, because time after time it never lives up to the PR.

I find it amazing that you find it amazing, considering you do exactly this on a daily(hourly) basis.
 
"Wow"?
Maybe i miss something: AMD is comparing a 16nm FinFet chip to a 12months old chip with nearly 2x times the transistors in a AMD biased game. They archived a 2,5x improvement in efficiency.

nVidia released the GTX980 11 months after the 290X, which has less transistors and archived a 1.9x to 2.0x improvement...
Would have nVidia demonstraded the GTX980 in march 2014 nobody would believed it.

Yet when AMD needs a much longer timeframe, a new process node and nearly twice the transistors, everybody is screaming "Wow".

I guess the hype train is back and now bigger and better than ever. 😀

It will depend on the details of the arch. If they have a GPU that maintains the compute and hardware scheduling of the older GCN, then it's wow. If they've been downsizing their gaming GPU like Nvidia did to achieve their "efficiency", then it's not too impressive.

I think those power figures are total system (obviously). 86 W playing Battlefront at Medium, 60 fps, 1080p is pretty decent.

Seems perfect for laptops.
 