DrMrLordX
Lifer
Are we certain that glofo really abandoned 20nm?
No. People are still saying Amur will be a 20nm chip (originally it was Nolan and Amur, now it's just Amur . . .)
TDP is not typical load, dude. Power measurements for AMD cards have shown that, and tons of other tests for Nvidia cards have shown it too. You didn't seem to read my post at all, where I proved it. Try reading it again.
TDP is the worst case a card will hit under realistic scenarios, not including Furmark, which is as far from reality as you can come.
OEMs don't control TDP. The chip does. They design cooling and power delivery around it. They can't take a 200W GPU and put a 150W limit on it. Well, they can, but say goodbye to any potential customers once someone dumps the vBIOS and reads a power limit of 150W. If AMD markets the card as 200W, you don't put a 150W limit on it.
PCIe can go over 75W, sure. The GTX 750 Ti miners are fresh in memory. But you are nitpicking details. Most AIBs stick to the specifications and add power connectors based on that.
That's a 12.5% increase over TDP while gaming.
Power Consumption while Gaming
GeForce GTX 980 Reference = 185.70 W
Now the biggest question is, which applications are used to characterize TDP? You test 5 games and each gives a different power usage, which do you use? For the sake of argument, say you use the game with the highest usage, what happens if in 2 months another game comes out that pushes the card even further? Further complicating the matter is the issue of boost bins and base clocks. All of this leads to the very simple conclusion that TDP is a nominal value under a certain load and frequency, not max power.
The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by the CPU that the cooling system in a computer is required to dissipate in typical operation. Rather than specifying CPU's real power dissipation, TDP serves as the nominal value for designing CPU cooling systems.[1]
The TDP is typically not the largest amount of heat the CPU could ever generate (peak power), such as by running a power virus, but rather the maximum amount of heat that it would generate when running "real applications." This ensures the computer will be able to handle essentially all applications without exceeding its thermal envelope, or requiring a cooling system for the maximum theoretical power (which would cost more but in favor of extra headroom for processing power).[2]
I read what you wrote; the problem is that you don't know what you're talking about, you've now changed your argument, and you proved absolutely nothing.
Just as one example:
Tom's Hardware tests power usage on cards by actually measuring the power delivered to the card, isolated from the rest of the system. They measured the reference GTX 980, which has a 165 W TDP, at 185.70 W while gaming.
That's a 12.5% increase over TDP while gaming.
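A minimal arithmetic check, using just the two figures cited above (165 W TDP, 185.70 W measured while gaming):

```python
# Quick check of the claim above: measured gaming draw vs. the 165 W TDP.
tdp_w = 165.0
measured_w = 185.70
increase = (measured_w - tdp_w) / tdp_w
print(f"{increase:.1%} over TDP")  # -> 12.5% over TDP
```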
Here's Wikipedia, just replace CPU with GPU
Now the biggest question is, which applications are used to characterize TDP? You test 5 games and each gives a different power usage, which do you use? For the sake of argument, say you use the game with the highest usage, what happens if in 2 months another game comes out that pushes the card even further? Further complicating the matter is the issue of boost bins and base clocks. All of this leads to the very simple conclusion that TDP is a nominal value under a certain load and frequency, not max power.
Why do you think throttling on CPUs and GPUs can become an issue if it were as easy as TDP = max power? Once again, you have proved nothing except that you don't understand what TDP is, and then you went around being condescending when others tried to inform you. During one of my internships, one of my responsibilities was characterizing power usage for digital ICs and creating a power spec for them, including TDP, so I know what I'm talking about. You're just arguing, already convinced you know how it works when you don't.
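To make the "which workload characterizes TDP?" point concrete, here is a small illustrative sketch in Python; the game names and wattages are hypothetical, only the 165 W TDP figure comes from this thread:

```python
# Illustrative only: hypothetical per-game board-power readings (watts)
# for a card specced at a 165 W TDP. The point: TDP is a nominal design
# value for the cooler, not the ceiling of what every workload can draw.
game_power_w = {
    "Game A": 158.0,
    "Game B": 171.5,
    "Game C": 185.7,  # a heavy title can land above the TDP figure
    "Game D": 149.2,
    "Game E": 176.3,
}

TDP_W = 165.0  # vendor-specified thermal design power

peak = max(game_power_w.values())
mean = sum(game_power_w.values()) / len(game_power_w)

print(f"mean gaming draw: {mean:.1f} W")
print(f"peak gaming draw: {peak:.1f} W ({(peak - TDP_W) / TDP_W:+.1%} vs TDP)")
# A game released next month could push the peak higher still, which is why
# TDP cannot be read as "maximum possible power draw".
```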
I.e., the worst-case scenario under an operation that is not Furmark (which is unrealistic):
The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by the GPU that the cooling system in a computer is required to dissipate in typical operation.
To be fair, I can claim to know what I'm talking about too. Do you have any published work or any proof that you know what you are talking about, besides what could be taken from common sense?
Maxwell is more efficient for anything except Double Precision computing. Other GPGPU tasks work just fine, usually more efficiently than with GCN and Kepler. It's true that Nvidia cards have often had lower benchmark scores in OpenCL applications than corresponding AMD cards, but that's really a driver issue (and one that Nvidia is in no hurry to fix, because they want to push proprietary CUDA). It has nothing to do with the underlying architecture.
GCN isn't as far behind as some people seem to think (the gap is exacerbated by AMD's insistence on overclocking and overvolting its chips) but it is behind Maxwell in efficiency.
Neverending stoooory, tralala tralala... Always something on forums.
GTX 670: 170W TDP, 144W peak
R9 270X: 180W TDP, 122W peak
7970 GHz: 300W TDP, 273W peak
etc etc etc
Tom's Hardware isn't the only one that can measure with an oscilloscope. There are tons of examples out there where cards stay below TDP during gaming.
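Tabulating the figures listed above (a quick sketch, using only the numbers from this post):

```python
# Peak gaming draw vs. rated TDP, using the figures quoted in this post.
cards = {
    "GTX 670":  (170, 144),  # (TDP W, measured peak W)
    "R9 270X":  (180, 122),
    "7970 GHz": (300, 273),
}

for name, (tdp, peak) in cards.items():
    print(f"{name}: peak {peak} W is {peak / tdp:.0%} of the {tdp} W TDP")
```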
Do you even read the quotes you are posting?
I.e., the worst-case scenario under an operation that is not Furmark (which is unrealistic).
I gave you examples before, and more above, where power draw during gaming is less than TDP.
Seriously, give it a rest. The card doesn't hit the TDP ceiling on every task. It's the worst case, and it's what AIBs design their power delivery around.
Now, seriously, stop replying. I'm not gonna waste any more time on discussions about TDP. Talk R9 300 and 20nm instead.
One of the things that makes me smile is how some users slag off other users when they make a comment about a rumour, and how quickly some users' "memory" seems to fade when that rumour turns out to be true.
As for the 20nm rumour, there might be a chance that it is true. How likely? Who knows. We'll probably find out very soon anyway. Personally I am agnostic on the issue: I don't believe or disbelieve the rumour, I just keep an open mind.
I'm keeping a keen eye on the benchmarks as the new gfx card is released, as I will probably want to invest in a 4K monitor and want some kind of vsync, whether it be G-Sync or FreeSync, for both monitor and gfx card. I'm not jumping yet until I get the lay of the land.
I'm out of this discussion for a while. It's taking up way too much time.
Let's wait and see if my source was right on this one.
I think these quotes are worth reposting. It seems that Nvidia spent money on 28nm and a new architecture because they didn't want to wait, while AMD seems to have waited it out for available capacity, which is what pushed the releases back from Feb/March to May/June.
To be fair, I can claim to know what I'm talking about too. Do you have any published work or any proof that you know what you are talking about, besides what could be taken from common sense?
I've seen people argue against a rumor with another rumor and want to disregard the other person's position because it's a rumor.
I'm not sure what your point is. Are you claiming what he is posting is inaccurate? TDP doesn't require a degree to understand. Cloudfire777 is simply wrong. It has nothing to do with maximum power draw, medium power draw, average power draw, etc. Also, the number of connectors doesn't always apply either. The 295X2 has 2x 8-pin (150W + 150W + 75W (PCIe) = 375W) but has a TDP of 500W.
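A quick sanity check of that connector math, using the nominal PCI-SIG limits cited above:

```python
# Nominal power budget from connectors alone (PCI-SIG figures).
PCIE_SLOT_W = 75    # x16 slot
EIGHT_PIN_W = 150   # per 8-pin PCIe power connector

r9_295x2_budget = PCIE_SLOT_W + 2 * EIGHT_PIN_W
print(r9_295x2_budget)        # 375 W on paper
print(500 > r9_295x2_budget)  # True: the card's 500 W TDP exceeds it,
# so connector count alone doesn't tell you a board's power ceiling.
```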
So now DP doesn't count? It wasn't that long ago that it made a gaming card worth 2x as much as another card that happened to outperform it in every other metric, even though almost nobody that bought the card was ever going to take advantage of its DP capability.
How long is nVidia going to be given a bye on OpenCL compute tasks? You ever think that GCN performs better because it's superior? Interesting that nVidia is supposed to be so superior with drivers, unless it's OpenCL and that's only because they don't try.
AMD has actually updated their uarch more times since GCN first came out than nVidia has. It's not old or behind in anything, like some people perpetuate. Hawaii kills GM204 in most compute tasks, and Fiji will likely do the same to GM200. We'll have to wait to find out on that one to be certain, but there's nothing pointing to any other outcome at this point.
In the Retina 5K iMac there is a full (2048 GCN cores) chip that has a 125W TDP. And it's 28nm.
Also, the Mac Pro has a Tahiti chip with 2048 GCN cores and a wider memory bus, and it also has a 129W TDP.
It's not a problem. It's only a matter of voltage and clocks.
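A first-order sketch of that voltage/clock argument; the reference operating point and the lowered clock/voltage below are made-up illustrative numbers, not Apple's or AMD's actual specs:

```python
# Dynamic power scales roughly with frequency times voltage squared:
# P_dyn ~ C * V^2 * f. Leakage is ignored, so treat this as a
# back-of-envelope estimate only.
def scaled_power(p_ref_w, f_ref_mhz, v_ref, f_new_mhz, v_new):
    """Scale a reference dynamic power to a new clock/voltage point."""
    return p_ref_w * (f_new_mhz / f_ref_mhz) * (v_new / v_ref) ** 2

# Hypothetical desktop-class point: 190 W at 1000 MHz / 1.20 V,
# binned down to 850 MHz / 1.00 V for a thin all-in-one chassis.
print(f"{scaled_power(190.0, 1000, 1.20, 850, 1.00):.0f} W")  # ~112 W
```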
Yes, FirePro W9100 seems like a reasonably competitive professional card. It's $2000 cheaper than the Quadro M6000, has about the same TDP, and does better in Double Precision and OpenCL. (M6000 has an advantage in Single Precision tasks, plus it can use CUDA, and for some people this will be important - but it also costs 66% more.) How this is actually affecting real-world sales is something I don't know. Are companies still buying Quadros even when FirePro might offer more perf/dollar?
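A back-of-envelope check of that pricing claim (the list prices are inferred for illustration, not quoted anywhere in this thread): a $2000 gap that is also a ~66% premium implies cards at roughly $3000 and $5000.

```python
# If the M6000 is both $2000 dearer and ~66% dearer than the W9100,
# the implied price points are roughly $3000 and $5000.
price_gap = 2000.0
premium = 0.66
w9100 = price_gap / premium      # ~ $3030
m6000 = w9100 * (1 + premium)    # ~ $5030
print(round(w9100), round(m6000))
```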
Successfully completed 6 tapeouts in 28/20nm technologies.
GPU ATE test program development based on Advantest T2000 and Verigy 93000 ATE platforms.
28nm/20nm/14nm process improvement.
Main Responsibilities:
Manage a team of highly skilled engineers in Physical Implementation of APUs, Discrete GPU chips. Tapeouts in 90nm, 65nm, 40nm, 32nm, 28nm, 20nm, 16nm
Malta (Dec. 2013~)
Development and Evaluation of 20nm, 14nm BEOL process, DFM rules, and PEX parameters
Test chip yield engineer for 32nm, 28nm and 20nm processes:
• Analysis and debug of design and process yield signals has led to faster bring-up of GPUs and accelerated product ramp of APUs.
• Custom analog layout for AMD Fusion APU, GPU
• Standard cell library development from 40nm to 20nm
20nm CMOS High-performance Standard Cell Development
--Transistor-level schematic and layout design on 20nm CMOS process.
Or canceled.. :sneaky:
Of course there is the possibility that what Su was predicting has been delayed.
Of course something is coming on 20nm; their CEO has already confirmed that to investors.
I think some people might be amazed by how many chips don't make it to production. There are not only chips but entire GPUs sitting on engineers' desks that get scrapped and never see the light of day.
Exclusive: According to our industry sources, AMD has a few surprises in store for us when it comes to the Radeon R9 390X, and the other GPUs that will arrive with the Radeon 300 series.
Our source wouldn't elaborate, but they did say that the new Radeon R9 390X will arrive with specifications, and possibly features, that are different to what the rumors currently suggest.
Yep, and project Skybridge. . . Amur?
