[Rumor] R9 300 series will be manufactured in 20nm!


Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
No basis? This is not really correct. There have been several articles posted indicating that 20nm GPUs are not going to be released, and that the process is only suitable for small, low-power chips.
Fudzilla: 20nm node broken for GPUs
Fudzilla: 20nm GPUs not happening
ExtremeTech: AMD, Nvidia both skipping 20nm GPUs [...]
Digital Trends: AMD may skip 20nm production, head straight to 16nm FinFET

On the other hand, there was a public statement by Lisa Su indicating that AMD would be making some kind of 20nm products at some point in 2015. But this could be anything - die-shrunk Cat cores, for example.

I did a search for 20nm APU rumors, and all of them seem to come from 2014. In contrast, the "20nm not happening" reports are mostly from 2015. If a console APU can be done on 20nm, then a discrete GPU probably can, too. But the newer reports indicate that AMD experimented with 20nm and wasn't happy with the process. So the older rumors may have correctly indicated that they tried, and the new rumors may also be correct in indicating that they couldn't make it work. I just hope they didn't throw too much time and effort at a failed die shrink at the expense of architectural improvements that may be the only way forward if they're stuck on 28nm.

One thing is for sure: letting the GPU product line stagnate for 18 more months is not an option. AMD will be a joke by then, a punchline, if they come to market with nothing and try to ride the Pitcairn horse into late 2016.

All of these are from Fudzilla except one. The last one says they "may" skip 20nm.

Are we certain that GloFo really abandoned 20nm?
No, we are not. Nor is it really confirmed that TSMC's 20nm is a low-power-only process; TSMC says its 20nm process is suited for both LP and HP. The reason we haven't seen a 20nm GPU from Nvidia yet could be cost, not process limitations.
If AMD wants to venture down this road to gain on Maxwell, which has a technological advantage in terms of new architecture, it could certainly be much cheaper than spending millions and millions of dollars engineering a new architecture for 28nm, since we are stuck on 28nm until 16nm becomes available in 2016.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
I've registered just to post this:

http://www.chiphell.com/thread-1196441-1-1.html

They were completely spot on with the performance of the Fiji GPU and the Titan X GPU, and were also talking about GlobalFoundries' 20nm process for next-gen AMD GPUs.

Looks like they were really well informed.
That's an excellent find, man. Seems like they know a lot indeed.
That post was made all the way back in 2014 and was more or less spot on about these leaks/releases, which weren't out until March 2015:
1. R9 390X being 65% above R9 290X. AMD's own slides confirm this
2. GTX Titan X being +35% above GTX 980

And just like my source (which I'm unable to prove to anyone other than giving his word), it says Fiji and the R9 300 series are 20nm.



More mention of 20nm:
Global Foundries has made great progress, claims Devinder Kumar from AMD at an investor meeting. AMD sold off Global Foundries in 2008 to raise cash in difficult times. Kumar stated that they will still fab 'some' products at 20nm and from there onwards will move to FinFET chips. GlobalFoundries is already working together with Samsung on 14nm; Samsung will launch SoCs based on that fab node next year.
http://www.guru3d.com/news-story/amd-will-manufacture-gpus-at-global-foundries.html

Translation: We will do 20nm for a brief period until 16nm is ready for manufacturing?
Seems like AMD may have some tricks up their sleeves to regain the lead Nvidia has with Maxwell.

Based on market rumours, only select graphics processors and, perhaps, console chips, will be made using 20nm process technology.
http://www.kitguru.net/components/a...ocess-tech-only-certain-products-will-use-it/

20nm capacity the reason behind 300 delay?
AMD wanted to drop from 28nm to 20nm for its new GPUs but ran into the same capacity issue. This has impacted the delivery of AMD's 20nm R9 300 series graphics cards. They were supposed to show between February and March of this year but now they are at least two months behind and it's not AMD's fault. (Spot on! - Cloudfire)

And last November, at its financial analyst meeting, Senior Vice President and Chief Technology Officer Mark Papermaster said there would be 20nm and 28nm products in 2015 but no 14nm or 16nm products until 2016.
http://www.itworld.com/article/2865341/amd-nvidia-reportedly-get-tripped-up-on-process-shrinks.html


Expect the 20nm to align with AMD's next-gen graphics product and the 28nm for its APU products (We know Carrizo is 28nm - Cloudfire), although there will likely be APUs and GPUs featuring a mixture of process sizes depending on the product segmentation.
http://hexus.net/tech/news/graphics/78237-amds-2015-strategy-includes-20-28nm-products/


Advanced Micro Devices said last week that it would ship its first products made using 20nm process technology at TSMC in 2015. The company specifically decided not to adopt the 20nm fabrication process among the first in order to ensure high yields and thus low production costs. The chip designer intends to use the 20nm fabrication process to make different kinds of products.

“20nm is an important node for us. We will be shipping products in 20nm next year and as we move forward […],” said Lisa Su, senior vice president and chief operating officer of AMD. “If you look at our business, it is quite a bit more balanced between the semi-custom, embedded, […] professional graphics […] as well as the more traditional sort of client and graphics pieces of our business. [20nm] technology plays in all of those businesses.”
This is the first time AMD has confirmed that it will use TSMC's 20nm fabrication process to make graphics processing units, semi-custom Fusion system-on-chips, solutions for embedded applications, as well as the code-named SkyBridge accelerated processing units (which will feature either ARM Cortex-A57 or AMD Puma+ cores).
http://www.xbitlabs.com/news/graphi...ows_to_Introduce_20nm_Products_Next_Year.html
 
Last edited:

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
More mention of 20nm:
http://www.guru3d.com/news-story/amd-will-manufacture-gpus-at-global-foundries.html

Translation: We will do 20nm for a brief period until 16nm is ready for manufacturing?
Seems like AMD may have some tricks up their sleeves to regain the lead Nvidia has with Maxwell.

http://www.kitguru.net/components/a...ocess-tech-only-certain-products-will-use-it/

20nm capacity the reason behind 300 delay?
http://www.itworld.com/article/2865341/amd-nvidia-reportedly-get-tripped-up-on-process-shrinks.html

http://hexus.net/tech/news/graphics/78237-amds-2015-strategy-includes-20-28nm-products/

Interesting. I wonder if AMD's contracts with Microsoft and/or Sony specified 20nm die-shrinks, and AMD now has to go through with it despite the difficulties. If that's true, then it would obviously make sense for AMD to spread that cost over its discrete GPU products as well, especially since these probably have higher profit margins. Porting high-power CPUs won't be worth it because AMD's current high-power CPUs are junk.

This Forbes article from November 2014 indicates that Global Foundries will be developing a 20nm process, but "it won’t be a high-volume node" and will be more expensive because it requires two passes to lay on circuits. The only way that AMD would even consider 20nm is if it was with Global Foundries; they're not going to go to TSMC if they can possibly help it. (Does the GloFo-AMD WSA put any obligations on the foundry side? For example, do they have to put out a 20nm node with certain characteristics at a certain time? From what I've read, it seems to be a huge burden on AMD, and I'm not sure what if anything they get in return.)
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Efficiency is the new focus.

An R9 290 with faster-than-Tonga performance for $240 isn't a bargain compared to a 960, as far as the market is concerned. That's how much they value efficiency.

Until AMD can match NV on efficiency, they will be forced to sell faster parts for less.

um... no? I highly doubt the average consumer is looking at power consumption on these cards over performance. The problem with AMD vs Nvidia is that Nvidia is the Apple of the PC gaming world. The major factors for the consumer are cost and performance.

For PC manufacturers it might be efficiency... if they want to cut down on PSU cost... or have some unique PC form factor that requires less heat. I don't see an issue with the power requirements, since you can get away with 400-600 watts in that segment. Connecting a PCIe cable is not a big deal. Nvidia probably has the hearts of these companies, though. If you can run a 290X with 600 watts and aren't using a crappy power supply in your system, it's not an issue. It may be an issue if they have to use reference versions.
 
Last edited:

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Interesting. I wonder if AMD's contracts with Microsoft and/or Sony specified 20nm die-shrinks, and AMD now has to go through with it despite the difficulties. If that's true, then it would obviously make sense for AMD to spread that cost over its discrete GPU products as well, especially since these probably have higher profit margins. Porting high-power CPUs won't be worth it because AMD's current high-power CPUs are junk.

This Forbes article from November 2014 indicates that Global Foundries will be developing a 20nm process, but "it won’t be a high-volume node" and will be more expensive because it requires two passes to lay on circuits. The only way that AMD would even consider 20nm is if it was with Global Foundries; they're not going to go to TSMC if they can possibly help it. (Does the GloFo-AMD WSA put any obligations on the foundry side? For example, do they have to put out a 20nm node with certain characteristics at a certain time? From what I've read, it seems to be a huge burden on AMD, and I'm not sure what if anything they get in return.)
I'm not sure which foundry AMD will manufacture the 20nm products at. We have reports saying both TSMC and GloFo. My source didn't specify which, but Glom's Chiphell link, which has been accurate this far, says GloFo. And the console APU rumors say GloFo. Plus Lisa Su said they will order wafers worth $1 billion from GloFo in 2015, so I assume most will come from GloFo, yes.

I don't think GloFo's 20nm not being a high-volume process will hurt AMD anyway. Apple is TSMC-only and producing its chips on 20nm. Qualcomm has moved on to 14nm from Samsung/GloFo and has abandoned TSMC. Altera uses TSMC but has also moved to Samsung for 14nm designs.
So I think AMD would get the production they need for the 300 cards there without much competition. Pricing will probably not be the cheapest, I assume, but I assume many will buy AMD cards if they are on 20nm.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Anyone remember the R9 370 card specs that were leaked months ago by Videocardz, with power draw from 110W to 130W?

We later found out through a driver listing that it was an R9 270X rebrand.
Except the 270X is at 180W.

Guess that's another sign of 20nm... ;)
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
17,197
7,570
136
um... no? I highly doubt the average consumer is looking at power consumption on these cards over performance. The problem with AMD vs Nvidia is that Nvidia is the Apple of the PC gaming world. The major factors for the consumer are cost and performance.

They have to, considering that it's extremely rare now for an OEM to ship a desktop with a PSU big enough to handle a power-hungry GPU. And it's only going to get rarer. You might be able to get away with a 970 in some OEM machines, but that would be pushing it. Any higher? Forget it.

Anyone remember the R9 370 card specs that were leaked months ago by Videocardz, with power draw from 110W to 130W?

We later found out through a driver listing that it was an R9 270X rebrand.
Except the 270X is at 180W.

Guess that's another sign of 20nm... ;)

'Course they could have dropped the clocks.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
'Course they could have dropped the clocks.

R9 270
1280 @ 925MHz - 150W

R9 270X
1280 @ 1050MHz - 180W

It makes extremely little sense to make the R9 370 rebrand slower than the R9 270.
Even then it's a 30-40W reduction. Most likely it's at least as highly clocked as the 270X, or at least very close, imo.

TSMC claims a 25% power reduction with 20nm compared to 28nm. 180W × 0.75 is 135W; 150W × 0.75 is 112.5W. That falls exactly in line with the leaked specs for the R9 370 card.
Ergo, the R9 370 lineup would be differently clocked versions of the 270 and 270X, but on 20nm.
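For anyone who wants to sanity-check that arithmetic, here's a minimal sketch (assuming TSMC's ~25% node-level marketing claim as the scaling factor and the TDP figures quoted above as inputs; none of this is measured data for an actual R9 370):

```python
# Back-of-the-envelope check of the 28nm -> 20nm power arithmetic above.
# The 25% saving is TSMC's node-level marketing claim, not a measured
# figure for any specific AMD card.

NODE_POWER_SCALING = 0.75  # assumed ~25% less power at similar performance

cards_28nm_tdp_w = {
    "R9 270":  150,  # TDP rating quoted above, in watts
    "R9 270X": 180,
}

for name, tdp in cards_28nm_tdp_w.items():
    print(f"{name}: ~{tdp * NODE_POWER_SCALING:.1f}W if shrunk to 20nm")

# Prints ~112.5W and ~135.0W, roughly in line with the leaked
# 110-130W range for the R9 370.
```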

Boy this is exciting :)
 
Last edited:
Feb 19, 2009
10,457
10
76
Thats an excellent find man. Seems like they know a lot indeed.
That post was posted all the way back in 2014 and was more or less spot on about these leaks/releases which wasnt out until March 2015:
1. R9 390X being 65% above R9 290X. AMD`s own slides confirm this
2. GTX Titan X being +35% above GTX 980

And just like my source (which I`m unable to prove to anyone other than give his word) say Fiji and R9 300 is 20nm.

You cannot be so selective in your belief in rumors, because you have been constantly shifting the goalposts with all these 390X rumors, anywhere from matching Titan X to only 20% faster than a 980...

As I've said to you in those posts, the only leak that has been spot-on accurate thus far is from the poster on CH, who had Titan X performance and power use down to a T, many months before its release.

From the chatter from AIBs, I get the hint that they have had ES samples of the 390X to play with for a long time now, but volume is non-existent and will require months to build up. Why is volume so poor? Two possibilities:

1. HBM yields are terrible.
2. GPU yields are terrible.

#1 is likely since it's a new tech. #2 is not likely if it's on 28nm, which is already so mature. #2 is therefore likely only if AMD tried to pull a trick by going 20nm or rushing early 14nm (they had test/risk production in 2H 2014 according to a few articles).

The worst-case scenario is both 1 & 2. o_O
 

jpiniero

Lifer
Oct 1, 2010
17,197
7,570
136
R9 270
1280 @ 925MHz - 150W

R9 270X
1280 @ 1050MHz - 180W

It makes extremely little sense to make the R9 370 rebrand slower than the R9 270.
Even then it's a 30-40W reduction. Most likely it's at least as highly clocked as the 270X, or at least very close, imo.

They might be feeling the pressure to reduce the power consumption.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
You cannot be so selective in your belief in rumors, because you have been constantly shifting the goalposts with all these 390X rumors, anywhere from matching Titan X to only 20% faster than a 980...

As I've said to you in those posts, the only leak that has been spot-on accurate thus far is from the poster on CH, who had Titan X performance and power use down to a T, many months before its release.

From the chatter from AIBs, I get the hint that they have had ES samples of the 390X to play with for a long time now, but volume is non-existent and will require months to build up. Why is volume so poor? Two possibilities:

1. HBM yields are terrible.
2. GPU yields are terrible.

#1 is likely since it's a new tech. #2 is not likely if it's on 28nm, which is already so mature. #2 is therefore likely only if AMD tried to pull a trick by going 20nm or rushing early 14nm (they had test/risk production in 2H 2014 according to a few articles).

The worst-case scenario is both 1 & 2. o_O

Rant rant rant. Are you always so angry? I've never said the 390X will match Titan. Neither did the recent article from TweakTown. But who cares.
The Chiphell link Glom posted has been spot on.

Look above in one of my quotes: AMD is said to have fallen at least two months behind the February/March release window for the 300 series because of 20nm capacity issues. We are now coming up on two months behind, and AMD is apparently ready to show the cards in June.
Spot on!

They might be feeling the pressure to reduce the power consumption.
Rightfully so. The reviews don't put them in a good light against Maxwell. I said from the beginning that the 370/370X/380/380X being rebrands, put up against the power-efficient Maxwell cores, would be one big disaster for them. But if they are 20nm, this changes things greatly. Maybe that's why they changed the codenames for the cards too?

It's a very good way to get some positive spin in reviews and still be able to sell current cards without spending millions on a new architecture on 28nm. :)
 
Last edited:
Feb 19, 2009
10,457
10
76
Rant rant rant. Are you always so angry? I've never said the 390X will match Titan. Neither did the recent article from TweakTown. But who cares.
The Chiphell link Glom posted has been spot on.

Look above in one of my quotes: AMD is said to have fallen at least two months behind the February/March release window because of 20nm capacity issues. We are now coming up on two months behind, and AMD is apparently ready to show the cards in June.
Spot on!

The last time you claimed the 390X was going to be only 20% faster than the 980, I posted to let you know how spot on that CH leak was for Titan X, which also includes 390/X results. You dissed it, and now you want to go with it?

Currently, anything is possible because AMD has been very tight with leaks. The only sure thing we know is that it's HBM and it's very late.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
The last time you claimed the 390X was going to be only 20% faster than the 980, I posted to let you know how spot on that CH leak was for Titan X, which also includes 390/X results. You dissed it, and now you want to go with it?

Currently, anything is possible because AMD has been very tight with leaks. The only sure thing we know is that it's HBM and it's very late.

Go look at the graphs in his link. Titan X is faster, and it's about 20% faster than the 980 at 1440p. I never believed the power draw in those graphs earlier, since it was supposed to be 28nm, but since we may be looking at 20nm now, a power draw even lower than that of the cut-down GM200 makes sense. Even dual Fiji does. :)

Exciting times ahead
 
Feb 19, 2009
10,457
10
76
Go look at the graphs in his link. Titan X is faster, and it's about 20% faster than the 980 at 1440p. I never believed the power draw in those graphs earlier, since it was supposed to be 28nm, but since we may be looking at 20nm now, a power draw even lower than that of the cut-down GM200 makes sense. Even dual Fiji does. :)

Exciting times ahead

It depends on whether you think the part labeled Fiji XT is the second-tier die or not. If you think Bermuda XT is a dual-Fiji, then you need to put your logic into action: compare the R9 290X vs. the R9 295X2, then look at the performance gap between that dual-GPU card and a dual-Fiji, which would be a LOT faster than the R9 295X2.

It looks like the Fiji XT in that chart is the R9 380X and Bermuda XT is the R9 390X, 65% faster than the R9 290X. Also from CH, there's a leaker with benches of the 390 (non-X) which show it's a little faster than Titan X.

Note that the leak is very old and based on ES samples; I doubt AMD had drivers ready for the new design at the time. I put more trust in journalists & game devs who have first-hand experience with the 390X; they say it's a lot faster than the leaks have it.

Edit: IF it is 20nm, with HBM, improved GCN and with drivers ready, I think it's not unreasonable to expect ~80% faster than the R9 290X. On 28nm, ~50% is about the limit within reason, unless AMD pulls a miracle.
 
Last edited:

jpiniero

Lifer
Oct 1, 2010
17,197
7,570
136
Rightfully so. The reviews don't put them in a good light against Maxwell. I said from the beginning that the 370/370X/380/380X being rebrands, put up against the power-efficient Maxwell cores, would be one big disaster for them. But if they are 20nm, this changes things greatly. Maybe that's why they changed the codenames for the cards too?

It's possible but I don't think they would spend the money to shrink anything that's not Tonga.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Anyone remember the R9 370 card specs that were leaked months ago by Videocardz, with power draw from 110W to 130W?

We later found out through a driver listing that it was an R9 270X rebrand.
Except the 270X is at 180W.

Guess that's another sign of 20nm... ;)

If you are going to cite power consumption data to back up theories of 20nm, then at least the facts have to be accurate. The 180W power usage for the R9 270X that you cited is so off the mark, it's not even funny. The R9 270X uses about 110W of power on average and peaks at 122W.
https://www.techpowerup.com/reviews/AMD/R9_270X/24.html

[Image: TechPowerUp peak power consumption chart]


That means if that chip was shrunk from 28nm to 20nm and got another 20-30% reduction in power, it would be an 85-90W card.

R9 270
1280 @ 925MHz - 150W

R9 270X
1280 @ 1050MHz - 180W

All wrong. I am guessing you are sourcing TDP ratings? TDP does not equal power usage. No matter how many times that gets repeated, people still use the term incorrectly.

Considering HD7950 (R9 280) uses < 150W of power, how in the world would an R9 270/270X use 150-180W? Impossible.

[Image: TechPowerUp peak power consumption chart]


R9 270X uses < 125W of power and R9 270 uses slightly less.

An entire Core i7-3770K system with an R9 270 uses 195W of power, while with an R9 270X (7870) it uses about 213W. Your entire analysis falls apart since all of your cited power usage data is wrong.

[Image: TechReport system power consumption under load chart]

http://techreport.com/review/25642/amd-radeon-r9-270-graphics-card-reviewed/8
 
Last edited:
Feb 19, 2009
10,457
10
76
AMD's rated TDP is usually well above their actual power usage in games. I think they rate for the worst-case scenario, like a full power virus or compute/bitmining.

Note how much lower the 7950 and 7970 were (not the GHz edition; that one stupidly set a 1.25V vcore by default, well above the ~1.1V of most custom cards).
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
It's possible but I don't think they would spend the money to shrink anything that's not Tonga.

If they do go with 20nm, they certainly won't simply do a blind die-shrink of an old architecture like Pitcairn that lacks modern features. They will want to take this opportunity to ensure all the new chips have at least GCN 1.2 across the board, and hopefully add new features like the HEVC decoder that will be featured in Carrizo.

I think there is room for at least one die-shrunk chip below Tonga. If AMD did a 20nm chip with 1024 SPs and a 192-bit bus (down from Pitcairn's 256-bit because of the GCN 1.2+ delta color compression), there's a good chance this could fit in a 75W TDP and therefore compete with GTX 750 Ti in the 'no external power connector' market niche. The reference HD 7850, with 1024 SPs, only consumed 101W under FurMark, so there's a good chance that shrinking to 20nm and cutting down the bus width can get it under 75W.
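A rough sketch of that estimate (the 25% node scaling and the watts saved by the narrower bus are illustrative assumptions, not figures from any source; only the 101W FurMark number comes from the review data mentioned above):

```python
# Back-of-the-envelope estimate for the hypothetical 20nm, 1024-SP part.
# The scaling factors below are assumptions for illustration only.

HD7850_FURMARK_W = 101   # reference HD 7850 maximum draw under FurMark
NODE_SCALING = 0.75      # assumed ~25% power saving going 28nm -> 20nm
BUS_TRIM_W = 8           # assumed saving from a 256-bit -> 192-bit memory bus

estimate_w = HD7850_FURMARK_W * NODE_SCALING - BUS_TRIM_W
print(f"Estimated worst-case draw: ~{estimate_w:.0f}W")
# ~68W, inside the 75W budget of a card with no external power connector.
```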
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
If you are going to cite power consumption data to back up theories of 20nm, then at least the facts have to be accurate. The 180W power usage for the R9 270X that you cited is so off the mark, it's not even funny. The R9 270X uses about 110W of power on average and peaks at 122W.

You're looking at the wrong value in those charts. The correct chart to determine TDP is "Maximum" (which uses FurMark to force the card to its limits). That comes in at 172W, just a bit below the rated TDP, as expected.

Now, the 7850 (130W TDP) and 7950 (200W TDP) do indeed come in noticeably below their TDP (actual consumption maxes at 101W and 179W, respectively, and you can do even better if you undervolt). But that's not true of the R9-series rebrands. I think the higher memory clocks are the culprit: most of them have similar core clocks to the 7000-series equivalents, but the RAM speed is increased from 1250 MHz to 1400 MHz.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
AMD's rated TDP is usually well above their actual power usage in games. I think they rate for the worst-case scenario, like a full power virus or compute/bitmining.

It's not even about AMD's TDP. You can have 5 GPUs all rated at 250W TDP and all use a different amount of power. That's because TDP does not stand for power usage but most PC gamers assume TDP = power usage.

GTX480 had 250W TDP, real world power usage = 272W :)

GTX580 had 244W TDP, real world power usage = 229W

GTX Titan had 250W TDP, real world power usage = 238W

GTX780Ti had 250W TDP, real world power usage = 269W :)

HD7970 925mhz had 250W TDP, real world power usage = 189W :D

HD7970Ghz had 250W TDP, real world power usage = 238W (*BTW, TPU made a mistake in one review where they used FurMark power usage of HD7970Ghz of 273W and carried this mistake for 1.5 years without correcting it once.* :sneaky:)

You can have a card with a 250W TDP use < 250W, 250W, or > 250W of power. TDP is a useless metric for gamers. It's rarely accurate, but it keeps being used to mean "something". I mean, really, after the faux TDP ratings of the 970 and 980, people still rely on TDP for GPU power usage? D:

The GTX 970's TDP of 145W was the biggest GPU marketing joke ever as far as "power usage" goes. That card uses almost 200W of power in games, but the clueless average Joe gamer sees a 250W TDP R9 290 vs. a 145W TDP GTX 970. :hmm:

Maybe AMD should just give the R9 390X series a TDP rating of 200W for the heck of it. It's not as if TDP ratings today have any real meaning with regard to actual power usage.

Rant rant rant. Are you always so angry? I've never said the 390X will match Titan. Neither did the recent article from

That's because you throw out 10 opposing rumours/theories every other month. Eventually one of them becomes true and you say "I called it!"

For months you kept talking about the R9 300 series being rebrands, and now they are all made on 20nm? The Snapdragon 810 is made on a 20nm node and it ended up a thermal-throttling, under-performing POS that is hardly better than the 28nm 805. Are we supposed to believe AMD is getting some magical 20nm node that no one else in the world could use? Considering Qualcomm couldn't even get the tiny 810 right on the 20nm node, what are the chances AMD can get 300-500mm² 20nm GPUs with HBM1?
 
Last edited:

nsavop

Member
Aug 14, 2011
91
0
66

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You're looking at the wrong value in those charts. The correct chart to determine TDP is "Maximum" (which uses FurMark to force the card to its limits). That comes in at 172W, just a bit below the rated TDP, as expected.

No. FurMark is a power virus. Power viruses have nothing to do with the real-world power usage of a real-world program on a graphics card. That's why almost all websites abandoned FurMark as a measure of a card's power usage. We had this discussion on this forum years ago, and most people agreed that FurMark is a worthless program for measuring power usage (much in the same way Intel's IBT or LinX is for CPUs). You know how a power virus actually works? In the real world you cannot have any application that utilizes L2 cache, shaders, textures, ROPs and memory bandwidth to 100% simultaneously. That's what a power virus does. I am using the correct measure - a real-world rating for maximum power usage based on demanding video games. FurMark is not a real-world test of power usage. Not mining, not distributed computing, not any game in the world loads each sub-component of a graphics card like a power virus does, which is why using a power virus such as FurMark as an indication of the ASIC's maximum real-world power usage is meaningless.

Also, it's just a coincidence in your example that FurMark's power usage comes in around the TDP. One of the major reasons NV and AMD introduced driver controls to throttle GPU clocks in FurMark is that FurMark induces an unrealistic workload on the GPU which FAR exceeds the TDP ratings of flagship cards.

We know for a 100% fact that there is no real-world application people use which will induce 326W of power usage on a 580, 361W on a GTX 480, and 270W on a 925MHz HD 7970.
http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_680/images/power_maximum.gif

FurMark => worthless synthetic power virus. AMD and NV both agree:

"Nov 9, 2010 - AMD and Nvidia have long despised FurMark for its ability to inflate consumption figures, and now it looks like they're finally going to get to see it phased out."
http://www.tomshardware.com/reviews/geforce-gtx-580-gf110-geforce-gtx-480,2781-15.html

Artificially inflating a GPU's power load via a power virus has nothing to do with its real world power usage. I don't know how many times this needs to be repeated.
 
Last edited:

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
The Snapdragon 810 is made on a 20nm node and it ended up a thermal-throttling, under-performing POS that is hardly better than the 28nm 805. Are we supposed to believe AMD is getting some magical 20nm node that no one else in the world could use? Considering Qualcomm couldn't even get the tiny 810 right on the 20nm node, what are the chances AMD can get 300-500mm² 20nm GPUs with HBM1?

Well, the Snapdragon was done on TSMC 20nm. It's possible that GloFo 20nm is designed more with GPUs and APUs in mind (the way that GloFo 28nm SHP was), and would therefore be better suited for graphics cards. We don't know for sure. What we do know is that Lisa Su said last year that there would be 20nm products (exactly what products was unspecified) sometime in 2015. 20nm console APU die-shrinks were rumored, but not officially confirmed. We also know that there were some later rumors that neither AMD nor Nvidia would be making 20nm GPUs. We don't know which, if any, of these rumors are accurate. At this time, we're all basically just engaging in semi-informed speculation.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
No. FurMark is a power virus. Power viruses have nothing to do with the real-world power usage of a real-world program on a graphics card.

That's not what TDP is about. TDP is about the maximum power the card can pull under any circumstances. By all means it should be included alongside real-world loads (as TechPowerUp does), but it's an important figure because it indicates how strong your PSU needs to be, and tells AIBs how good the GPU cooler needs to be. You don't want to build a system that can be crashed and melted by software.

FurMark is not a real-world test of power usage. Not mining, not distributed computing, not any game in the world loads each sub-component of a graphics card like a power virus does, which is why using a power virus such as FurMark as an indication of the ASIC's maximum real-world power usage is meaningless.

I can run it, therefore it is "real" power usage. It's really physically drawing that power (in fact, I was just running some tests on my 7870 using FurMark and a Kill-A-Watt tester).

Also, it's just a coincidence in your example that FurMark's power usage comes in around the TDP. One of the major reasons NV and AMD introduced driver controls to throttle GPU clocks in FurMark is that FurMark induces an unrealistic workload on the GPU which FAR exceeds the TDP ratings of flagship cards.

We know for a 100% fact that there is no real-world application people use which will induce 326W of power usage on a 580, 361W on a GTX 480, and 270W on a 925MHz HD 7970.
http://tpucdn.com/reviews/NVIDIA/GeForce_GTX_680/images/power_maximum.gif

FurMark => worthless synthetic power virus. AMD and NV both agree:

"Nov 9, 2010 - AMD and Nvidia have long despised FurMark for its ability to inflate consumption figures, and now it looks like they're finally going to get to see it phased out."
http://www.tomshardware.com/reviews/geforce-gtx-580-gf110-geforce-gtx-480,2781-15.html

Artificially inflating a GPU's power load via a power virus has nothing to do with its real world power usage. I don't know how many times this needs to be repeated.

I don't care what AMD and Nvidia think. They were upset that they got caught lying about TDP on older generations of video cards. The GTX 480 used 360W under FurMark, thus the card's real, physical TDP was 360W, whether Nvidia admitted that fact or not.

The eventual solution used by both vendors was to add hardware power monitoring so the card can't go above its TDP, which is why FurMark is a good test of what a card's real TDP actually is. (In some cases, it's lower than the official figure, as we saw with the 7850 and 7950.)