[Rumor] R9 300 series will be manufactured in 20nm!


Hitman928

Diamond Member
Apr 15, 2012
6,754
12,500
136
TDP is not typical load, dude, lol. Power measurements for AMD cards have shown that, and tons of tests of Nvidia cards have shown it too. You didn't seem to read my post at all where I proved it. Try reading it again.
TDP is the worst a card will come up against under realistic scenarios, not including Furmark, which is as far from reality as you can get.

OEMs don't control TDP. The chip does. They design cooling and power delivery based on it. They can't take a 200W GPU and put a 150W limit on it. Well, they can, but say goodbye to any potential customers once they dump the vBIOS and read a power limit of 150W. If AMD markets the card as 200W, you don't put a 150W limit on it.

PCIe can go over 75W, sure. The GTX 750 Ti miners are fresh in memory. But you are nitpicking details. Most AIBs run by the specifications and add power connectors based on the above.

I read what you wrote, the problem is that you don't know what you're talking about, you've now changed your argument, and you proved absolutely nothing.

Just as one example:
Tom's Hardware tests power usage on cards by actually measuring the power being delivered to the card, isolated from the rest of the system. They measured the reference GTX 980, which has a 165 W TDP, while gaming:
Power Consumption while Gaming
GeForce GTX 980 Reference = 185.70 W
That's a 12.5% increase over TDP while gaming.
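
As a quick sanity check on that figure, here is a minimal sketch in Python, using only the two numbers quoted above:

# Sanity check on the "12.5% over TDP" figure, using the two numbers
# quoted above: 165 W rated TDP, 185.70 W measured while gaming.
tdp_w = 165.0
measured_w = 185.70
excess_pct = (measured_w - tdp_w) / tdp_w * 100
print(f"Measured draw exceeds the rated TDP by {excess_pct:.1f}%")  # ~12.5%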

Here's Wikipedia, just replace CPU with GPU
The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by the CPU that the cooling system in a computer is required to dissipate in typical operation. Rather than specifying CPU's real power dissipation, TDP serves as the nominal value for designing CPU cooling systems.[1]
The TDP is typically not the largest amount of heat the CPU could ever generate (peak power), such as by running a power virus, but rather the maximum amount of heat that it would generate when running "real applications." This ensures the computer will be able to handle essentially all applications without exceeding its thermal envelope, or requiring a cooling system for the maximum theoretical power (which would cost more but in favor of extra headroom for processing power).[2]
Now the biggest question is, which applications are used to characterize TDP? You test 5 games and each gives a different power usage; which do you use? For the sake of argument, say you use the game with the highest usage; what happens if in 2 months another game comes out that pushes the card even further? Further complicating the matter is the issue of boost bins and base clocks. All of this leads to the very simple conclusion that TDP is a nominal value under a certain load and frequency, not max power.

Why do you think throttling on CPUs and GPUs can become an issue if it were as easy as TDP = max power? Once again, you have proved nothing except that you don't understand what TDP is, and then you went around being condescending when others tried to inform you. During one of my internships, one of my responsibilities was characterizing power usage for digital ICs and creating a power spec for them, including TDP, so I know what I'm talking about. You're just arguing, already convinced you know how it works when you don't.
 
Last edited:

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
I read what you wrote, the problem is that you don't know what you're talking about, you've now changed your argument, and you proved absolutely nothing. ...


To be fair, I can claim to know what I'm talking about too. Do you have any published work or any proof that you know what you are talking about, besides what could be taken from common sense?
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
I read what you wrote, the problem is that you don't know what you're talking about, you've now changed your argument, and you proved absolutely nothing. ...

Neverending stoooory, tralala tralala...Always something on forums. :rolleyes:

GTX 670 TDP 170W
Peak: 152W

R9 270X TDP 180W
Peak: 122W

7970 GHz TDP 300W
Peak: 273W

etc etc etc

Tom's Hardware isn't the only one that can measure with an oscilloscope. There are tons of examples out there where cards go below TDP during gaming.

Do you even read the quotes you are posting?
The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by the GPU that the cooling system in a computer is required to dissipate in typical operation
i.e. the worst-case scenario under an operation that is not Furmark (which is unrealistic).
I gave you examples above where power draw is less than TDP under gaming.

Seriously, give it a rest. The card doesn't hit the TDP ceiling in all tasks. It's the worst case, and what AIBs design power delivery and cooling around. Nvidia throttles the cards if it senses that you are running benchmarks like Furmark, to protect them from going over spec and potentially frying the chip/card.

Now, seriously, stop replying. I'm not gonna waste any more time on discussions about TDP. Talk R9 300 and 20nm instead. I'm out.
 
Last edited:

Hitman928

Diamond Member
Apr 15, 2012
6,754
12,500
136
To be fair, I can claim to know what I'm talking about too. Do you have any published work or any proof that you know what you are talking about, besides what could be taken from common sense?

Nothing I worked on in my internships can be shared publicly; it was all internal design and verification work that I don't even have access to anymore, even if I wanted to piss off my former employer and share it, lol. Typically interns don't do journal publications ;)

Beyond that, after my undergrad work, I transitioned to RFIC / Microwave design research in grad school (currently working on a Ph.D. in this area), so I'm not trying to speak with a veteran digital designer's level of expertise or anything, but I do have some experience here from my undergrad work (study and internships). Furthermore, like I said, for one internship (at TI) I specifically had responsibility in this area, so I understand how it works rather than just having read about it on the internet. If you don't believe me, that's fine, I really don't care, but if you don't believe what I've posted, then show me where it's wrong (not necessarily directed at you, monstercameron).
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Maxwell is more efficient for anything except Double Precision computing. Other GPGPU tasks work just fine, usually more efficiently than with GCN and Kepler. It's true that Nvidia cards have often had lower benchmark scores in OpenCL applications than corresponding AMD cards, but that's really a driver issue (and one that Nvidia is in no hurry to fix, because they want to push proprietary CUDA). It has nothing to do with the underlying architecture.

GCN isn't as far behind as some people seem to think (the gap is exacerbated by AMD's insistence on overclocking and overvolting its chips) but it is behind Maxwell in efficiency.

So now DP doesn't count? It wasn't that long ago that it made a gaming card worth 2x as much as another card that happened to outperform it in every other metric, even though almost nobody who bought the card was ever going to take advantage of its DP capability.

How long is nVidia going to be given a bye on OpenCL compute tasks? You ever think that GCN performs better because it's superior? Interesting that nVidia is supposed to be so superior with drivers, unless it's OpenCL and that's only because they don't try.

So let me understand this. Anything that nVidia is faster in matters, but if they are worse it either doesn't matter (DP) or it's because they don't care (OpenCL). That way they can perpetuate the idea that Maxwell is so superior and GCN is older tech that's not competitive.

AMD has actually updated their uarch more times since GCN first came out than nVidia has. It's not old or behind in anything like some people perpetuate. Hawaii kills GM204 in most compute tasks and Fiji will likely do the same to GM200. We'll have to wait to find out on that one to be certain, but there's nothing pointing to any other outcome at this point.
 

Hitman928

Diamond Member
Apr 15, 2012
6,754
12,500
136
Neverending stoooory, tralala tralala...Always something on forums.

GTX 670 TDP 170W
Peak: 144W

R9 270X TDP 180W
Peak: 122W

7970 GHz TDP 300W
Peak: 273W

etc etc etc

Tom's Hardware isn't the only one that can measure with an oscilloscope. There are tons of examples out there where cards go below TDP during gaming.

Do you even read the quotes you are posting?
i.e. the worst-case scenario under an operation that is not Furmark (which is unrealistic).
I gave you examples above where power draw is less than TDP under gaming.

Seriously, give it a rest. The card doesn't hit the TDP ceiling in all tasks. It's the worst case, and what AIBs design power delivery around.

Now, seriously, stop replying. I'm not gonna waste any more time on discussions about TDP. Talk R9 300 and 20nm instead.

I really think you've lost yourself in your own arguments here; I don't know what to say anymore because you're not even following a logical line. Of course measured power can be below TDP, that's part of the original argument that you quoted and were criticizing. I always read thoroughly what I quote; you clearly didn't read what I said afterwards, otherwise you'd realize what you're saying now is useless. I'll drop it as well because it doesn't matter, even for the rumored cards, until we see something substantiated anyway. Good day.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
One of the things that makes me smile is how some users slag off other users when they make a comment about a rumour, and how quickly some users' "memory" seems to fade when that rumour turns out to be true.

As for the 20nm rumour, there might be a chance that it is true. How likely? Who knows. We'll probably find out very soon anyway. Personally I am agnostic on the issue; I don't believe or disbelieve the rumour, I just keep an open mind.

I'm keeping a keen eye on the benchmarks as the new graphics cards are released, as I will probably want to invest in a 4K monitor and want some kind of adaptive sync, whether it be G-Sync or FreeSync, for both monitor and graphics card. I'm not jumping yet until I get the lay of the land.

I've seen people argue against a rumor with another rumor and want to disregard the other person's position because it's a rumor. :D

I'm out of this discussion for a while. It's taking up way too much time.
Let's wait and see if my source was right on this one :)


I think these quotes are worth reposting. It seems that Nvidia spent money on 28nm and a new architecture because they didn't want to wait, while AMD seems to have waited it out for available capacity, which is what pushed the releases back from Feb/March to May/June.

You're right, we'll have to wait. I'm leaning toward the 300 series not being 20nm. AMD has been working on other ways to improve efficiency.

To be fair, I can claim to know what I'm talking about too. Do you have any published work or any proof that you know what you are talking about, besides what could be taken from common sense?

I'm not sure what your point is. Are you claiming what he is posting is inaccurate? TDP doesn't require a degree to understand. Cloudfire777 is simply wrong. It has nothing to do with maximum power draw, medium power draw, average power draw, etc. The number of power connectors doesn't always apply either: the 295X2 has 2x 8-pin (150 W + 150 W + 75 W PCIe slot = 375 W) but has a TDP of 500 W.
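
To make that point concrete, here is a minimal sketch that sums the nominal PCIe power budget for a given connector layout (75 W from the slot, 75 W per 6-pin, 150 W per 8-pin, per the PCIe spec) and compares it to a rated TDP; the 295X2 figures are the ones quoted above.

# Nominal PCIe power budget vs. rated TDP (spec values: 75 W slot,
# 75 W per 6-pin, 150 W per 8-pin).
CONNECTOR_W = {"slot": 75, "6-pin": 75, "8-pin": 150}

def nominal_budget(connectors):
    """Sum the spec-rated power of the slot plus the listed aux connectors."""
    return CONNECTOR_W["slot"] + sum(CONNECTOR_W[c] for c in connectors)

# R9 295X2: two 8-pin connectors, 500 W rated TDP (figures from the post above).
budget = nominal_budget(["8-pin", "8-pin"])
print(f"Nominal connector budget: {budget} W vs. rated TDP: 500 W")  # 375 W vs. 500 W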
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
I've seen people argue against a rumor with another rumor and want to disregard the other person's position because it's a rumor. :D

I'm not sure what your point is. Are you claiming what he is posting is inaccurate? TDP doesn't require a degree to understand. Cloudfire777 is simply wrong. It has nothing to do with maximum power draw, medium power draw, average power draw, etc. The number of power connectors doesn't always apply either: the 295X2 has 2x 8-pin (150 W + 150 W + 75 W PCIe slot = 375 W) but has a TDP of 500 W.


Nope, just being wary of people who make claims. He might be right for all I know.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
So now DP doesn't count? It wasn't that long ago that it made a gaming card worth 2x as much as another card that happened to outperform it in every other metric, even though almost nobody who bought the card was ever going to take advantage of its DP capability.

DP counts - if you need it. >95% of users don't. This is why it would have been silly to buy the Titan Z for gaming instead of the R9 295 X2 (and why Nvidia didn't even bother to send Titan Z to mainstream reviewers).

The original Titan was bought by some gamers who thought - incorrectly, as it turned out - that GK110 performance would be limited to that high price point. Once the GTX 780 came out, buying a Titan for gaming was pretty dumb.

For the tiny percentage of users who really need DP computing, it's still a tossup between Kepler (Titan/Titan Z/Tesla K80) and GCN (FirePro W8100/W9100). Either would be a defensible choice. For the majority of gamers, Maxwell is going to do better.

How long is nVidia going to be given a bye on OpenCL compute tasks? You ever think that GCN performs better because it's superior? Interesting that nVidia is supposed to be so superior with drivers, unless it's OpenCL and that's only because they don't try.

They shouldn't be given a pass. I'd like to see each review include at least one or two OpenCL benchmarks, to try to shame Nvidia into doing better.

OpenCL performs much the same tasks as CUDA, just with a different API. If CUDA performance is excellent but OpenCL on the same card is subpar, then common sense says it's probably drivers and not hardware deficiencies causing this.

It's worth pointing out that Apple went with GCN for the cylindrical Mac Pro at least in part because of its excellent OpenCL support.

AMD has actually updated their uarch more times since GCN first came out than nVidia has. It's not old or behind in anything like some people perpetuate. Hawaii kills GM204 in most compute tasks and Fiji will likely do the same to GM200. We'll have to wait to find out on that one to be certain, but there's nothing pointing to any other outcome at this point.

Yes, FirePro W9100 seems like a reasonably competitive professional card. It's $2000 cheaper than the Quadro M6000, has about the same TDP, and does better in Double Precision and OpenCL. (M6000 has an advantage in Single Precision tasks, plus it can use CUDA, and for some people this will be important - but it also costs 66% more.) How this is actually affecting real-world sales is something I don't know. Are companies still buying Quadros even when FirePro might offer more perf/dollar?

Regarding how competitive GCN is with Maxwell overall, we can only judge on the basis of released products. Tonga (especially the R9 285, which is about the only version reviewed in depth by anyone) is really a poor testbed for GCN 1.2. I suspect a respin, even on 28nm (GloFo), could do much, much better in perf/watt with some tweaking.

That said, I think that in non-DP tasks, Maxwell is currently ahead of GCN. But you're right that it's not nearly as far ahead as some people think. I noted before that I got my old 7870's power usage down to only 125W with some minor voltage and clock tweaks. AMD could do much better with dedicated binning and a manufacturing process specifically designed with GPUs in mind; whether that is GloFo 28nm SHP or 20nm, we will see.
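
To illustrate why small voltage and clock tweaks move power so much, here is a minimal sketch of the usual first-order dynamic-power relation (P roughly proportional to f x V^2); the baseline and tweaked numbers are made-up illustrative values, not measurements of the 7870.

# First-order dynamic power scaling: P_dyn is roughly proportional to f * V^2.
# The inputs below are illustrative assumptions, not measured 7870 values.
def scaled_power(p_base_w, f_base_mhz, v_base, f_new_mhz, v_new):
    """Estimate new dynamic power from a baseline using P ~ f * V^2."""
    return p_base_w * (f_new_mhz / f_base_mhz) * (v_new / v_base) ** 2

# Assume a 150 W baseline at 1000 MHz / 1.20 V, undervolted to 1.10 V
# and clocked down slightly to 950 MHz.
print(f"{scaled_power(150, 1000, 1.20, 950, 1.10):.0f} W")  # ~120 W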
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
In the Retina 5K iMac there is a full (2048 GCN core) chip that has a 125W TDP. And it's 28nm.

Also, the Mac Pro has a Tahiti chip with 2048 GCN cores and a wider memory bus, and it also has a 129W TDP.

It's not a problem. It's only a matter of voltage and clocks.

Not only that, but the FirePro W7100 is Tonga (1792-shader version) and despite having the same core clock and four times the RAM, it has a rated TDP of 150W. Now, I haven't tested this card (and apparently neither has anyone else) so I can't know whether it is throttling the clock more aggressively or whether it's actually better binned. But I think it's quite likely to be the latter. Maybe R9 285 performs so poorly because it's the bottom-of-the-barrel trash silicon for Tonga.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Yes, FirePro W9100 seems like a reasonably competitive professional card. It's $2000 cheaper than the Quadro M6000, has about the same TDP, and does better in Double Precision and OpenCL. (M6000 has an advantage in Single Precision tasks, plus it can use CUDA, and for some people this will be important - but it also costs 66% more.) How this is actually affecting real-world sales is something I don't know. Are companies still buying Quadros even when FirePro might offer more perf/dollar?

This compares GM200 vs. Hawaii (W9100). The M6000 will have to compete with Fiji. I also wonder if the card will maintain its max boost clocks while doing compute. The 7 TFLOPS SP figure assumes 1.12GHz.
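
For reference, the ~7 TFLOPS figure follows from the standard single-precision formula (2 FMA ops per shader per clock); the 3072-shader count below is the published full-GM200/M6000 configuration, taken here as an assumption.

# Peak SP throughput = 2 ops per shader per clock (an FMA counts as two).
# Assumes the published full-GM200 / Quadro M6000 shader count of 3072.
shaders = 3072
boost_clock_ghz = 1.12
sp_tflops = 2 * shaders * boost_clock_ghz / 1000
print(f"{sp_tflops:.2f} TFLOPS SP")  # ~6.88, i.e. the quoted ~7 TFLOPS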

Besides, all I'm saying is that overall GCN is not inferior to Maxwell. You don't seem to really disagree with that on any particular metric.

As far as sales go, that's nothing I was attempting to address. I'm sure more Quadro cards are sold. It's likely to stay that way until AMD can figure out how to pry the Quadros out of the workstations at Autodesk, etc., and get them to actually design their software with AMD's cards included in the workflow, instead of just AMD optimizing drivers as best they can.
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Well, something is coming in 20nm from AMD. I found many mentions of 20nm from engineers working for AMD on LinkedIn:

Successfully completed 6 tapeouts in 28/20nm technologies.

GPU ATE test program development based on Advan T2000 and Verigy 93000 ATE platform. 28nm/20nm/14nm process improvement.

Main Responsibilities:

Manage a team of highly skilled engineers in Physical Implementation of APUs, Discrete GPU chips. Tapeouts in 90nm, 65nm, 40nm, 32nm, 28nm, 20nm, 16nm

Malta (Dec. 2013~)
Development and Evaluation of 20nm, 14nm BEOL process, DFM rules, and PEX parameters

Test chip yield engineer for 32nm, 28nm and 20nm processes:
• Analysis and debug of design and process yield signals has led to faster bring-up of GPUs and accelerated product ramp of APUs.

•Custom analog layout for AMD Fusion APU, GPU
•Standard cell libraries development from 40nm to 20nm

20nm CMOS High-performance Standard Cell Development
--Transistor-level schematic and layout design on 20nm CMOS process.

Also, GlobalFoundries does have a 20nm process. There are many mentions of that as well.
 
Last edited:
Feb 19, 2009
10,457
10
76
Ofc something is coming on 20nm, their CEO has already confirmed that for investors.

I can be 99.99% confident that something is also coming on 14nm FF... that's about as useful as your constant & shifting rumor posts. Really, keeping it to one thread would be nice!!
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Well, their next APU, Carrizo, is on 28nm. Since they mention both APUs and GPUs alongside 20nm, I'd say the odds of the discrete GPUs being 20nm are getting bigger
:)

I especially wonder what 20nm product they have already had a tapeout on ;)
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Ofc something is coming on 20nm, their CEO has already confirmed that for investors.

Let's take a step back before all the HBM rumours even started. What was the most telling slide? This one:

[Slide: synapse-design-500mm-ynsu8.jpg]


Synapse Design is a company responsible for AMD’s chip floorplan designs. Back then, Synapse announced new GPU tapeouts, including two 28HPM silicon designs.

Remember how people started to talk about the R9 390X being a 550mm2 chip? That rumour probably originated from two ideas:

1) 28nm is the only viable path for this generation
2) Because of #1, the only way for AMD to increase performance > 30% is to grow the die size.

Once that Synapse slide leaked, the idea of a 550mm2 R9 390X built on 28nm became a lot more reasonable. If the R9 390X is built on 20nm, then for what purpose was Synapse designing a >500mm2 28nm chip? It wasn't for consoles or HDTVs. :)

The current rumours of a 300W TDP, a water-cooled edition, and performance about 50-55% faster than the R9 290X make a lot more sense for a 500-550mm2 28nm HBM1 design.

The rumours wouldn't align with a 500mm2 20nm chip, because such a chip would crush the R9 290X by >75% at 300W. Alternatively, if they did use 20nm, why would they need 300W to be "only" 55% faster? There is a fallacy in pairing the expected 300W+ TDP, water-cooled edition with the idea of a 20nm chip.
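
To spell out the arithmetic behind that reasoning, here is a minimal sketch under stated assumptions: an assumed ~290 W typical board power for the R9 290X baseline, and two illustrative perf/W multipliers (a modest 28nm refresh vs. an optimistic 20nm shrink). None of these inputs are confirmed; they only show why a 300 W card that is "only" ~55% faster reads like a big 28nm die.

# Expected uplift = (perf/W gain) * (new power / old power) - 1.
# All inputs are assumptions for illustration, not confirmed specs.
def expected_uplift_pct(perf_per_watt_gain, new_power_w, old_power_w):
    return (perf_per_watt_gain * new_power_w / old_power_w - 1) * 100

OLD_POWER_W = 290   # assumed R9 290X typical board power
NEW_POWER_W = 300   # the rumoured figure for the new flagship

for label, gain in [("28nm refresh, ~1.5x perf/W", 1.5),
                    ("20nm shrink,  ~1.8x perf/W", 1.8)]:
    print(f"{label}: ~{expected_uplift_pct(gain, NEW_POWER_W, OLD_POWER_W):.0f}% faster")
# -> roughly +55% for the 28nm case, roughly +86% for the 20nm case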

Finally, for AMD to move to 20nm for the entire R9 300 series would require an immense amount of financial and engineering resources, since it essentially means a chip redesign in every single market segment. If the 20nm node were mature, a company with a lot of financial resources might even consider that. However, since the 20nm node is completely untested for GPUs, even a company with a lot of financial resources would be unlikely to take on such a risky move. AMD? It has neither the financial resources nor the human capital to even try to pull something like this off, imo.
 
Feb 19, 2009
10,457
10
76
@RS
Fully agreed.

The Synapse Design slide is from a public release; it's not a rumor or leak. It's 100% reliable.

The tapeouts occurred a long time ago, but it looks like HBM yields are the cause of the delay.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Besides, I am sure many engineers worked on 20nm silicon. This was the original plan. You can safely bet thousands of individuals worked endless hours on design, test silicon, etc. You can bet your rear that they even had 20nm test silicon and actual chips. There is every reason to believe that 20nm was the plan going forward.

Chips are years in development; they don't happen overnight. AMD having to abandon 20nm for GPUs is a huge blow, because most of the time there is no plan B. So there would have been a massive effort right up to the very end. With all the work done, no one would want to waste it. Having to drop 20nm would be the very last thing they would want to do. So, obviously, there were people working on 20nm designs.

I think Nvidia had a terrible time with Fermi; the node shrink went very badly for them. They specifically talked about putting a team together to make sure something like that never happened again, a team specifically tasked with future nodes. This might have helped them get an earlier start on pushing Maxwell back to 28nm.
But even though we know Nvidia went 28nm for Maxwell, you had better believe they had many, many people involved in the push to 20nm for GPUs.

I think some people might be amazed by how many chips don't make it to production. There are not only chips, but entire GPUs sitting on engineers' desks that get scrapped and never see the light of day.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
I think some people might be amazed by how many chips don't make it to production. There are not only chips, but entire GPUs sitting on engineers' desks that get scrapped and never see the light of day.

How many actually...?
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
Well, whether my Korean guy is correct about 20nm or it's really 28nm, as long as they get the TDP down, either process is fine by me.

TweakTown says its industry sources indicate AMD has a few surprises in store for the upcoming cards that do not involve HBM. That could be 20nm.

Can't wait to see what the 390X is about. It had better be good, because all eyes are on it.
http://www.tweaktown.com/news/44771/amd-surprises-store-upcoming-radeon-300-series/index.html
Exclusive: According to our industry sources, AMD has a few surprises in store for us when it comes to the Radeon R9 390X, and the other GPUs that will arrive with the Radeon 300 series.

Our source wouldn't elaborate, but they did say that the new Radeon R9 390X will arrive with specifications and possibly features that are different to what the rumors currently suggest


Zol.com, a Chinese site, lists 20nm for the 390X as well. Not sure if they know anything, though.
http://vga.zol.com.cn/517/5177568_param.html
 
Last edited: