Someone already linked this nice aggregate roundup from 3dcenter
We're talking about efficiency now.
And btw, peaks as single maximum values are quite error prone. Average values are always more reliable since more information is used in their calculation, not just one data point that may or may not be a quirk.
So what are the performance estimates now? I hear some people around the interwebz are saying ~45% faster than a GTX 680.
I think you're wrong here. If Crysis 2 at 1200p with max settings isn't riding at 99% GPU load the entire time without vsync, that would only mean the limiting factor is the CPU.
My posts were never about efficiency, but about power usage. Not sure why you assumed I was talking about performance/watt.
You keep missing this: some people use their GPU at 99% load for hours/days/weeks at a time. For those people the peak is not a single error-prone value; it is the 95th percentile of their load distribution, if not higher. There is nothing wrong with saying that a GTX680 uses 166W of power on average in games from review ABCD, while an HD7970 uses 163W. However, that average included many CPU-limited games and cases where the GPU is not fully loaded. A lot of people on this forum are looking at peak load in games because some run 99% GPU-intensive programs such as distributed computing, etc. Dismissing peak as irrelevant is quite telling, because it means you are assuming this group of PC enthusiasts who use their GPUs for things other than games does not exist. Performance/watt should be looked at for peak values as well for those users.
If most of your usage patterns involve playing CPU-limited games, then sure, look at the average power usage for yourself. But you keep claiming that you love using downsampling. That generally means 99% GPU load, i.e. peak values, not averages. In that case the average power usage will approach the peak reported at TPU/HT4U, etc.
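To illustrate the difference between the two camps here, a minimal sketch with made-up power traces (the numbers are invented for illustration, not measurements from any review): for a GPU-bound workload the mean, 95th percentile and peak converge, while a CPU-limited trace drags the mean well below the peak.

```python
# Hypothetical power traces (watts, 1 sample/sec) -- illustrative numbers only,
# not measurements from any review.
import statistics

cpu_limited = [120, 135, 150, 128, 142, 190, 125, 138]   # GPU often waiting on the CPU
gpu_bound   = [184, 186, 185, 187, 186, 185, 188, 186]   # 99% load, e.g. downsampling

def summarize(name, samples):
    ordered = sorted(samples)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]  # crude 95th percentile
    print(f"{name:12s} avg={statistics.mean(samples):6.1f} W  "
          f"p95={p95} W  peak={max(samples)} W")

summarize("CPU-limited", cpu_limited)
summarize("GPU-bound", gpu_bound)
# For the GPU-bound trace, average ~ p95 ~ peak, so "average power in games"
# and "peak power" describe the same user experience.
```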
Average heat dissipation while doing heavy lifting is what defines TDP.
The precise TDP definition probably differs between AMD/Intel/NV, but it always revolves around
"What kind of cooling solution do I need"
As for computing, you're right. But I think most people will game on Titan since you can get more compute power for cheap with a 7970 or 7990.
100% incorrect.
The thermal design power (TDP), sometimes called thermal design point, refers to the maximum amount of power the cooling system in a computer is required to dissipate.
It is a cooler that is able to continuously remove an amount of heat equal to the maximum sustained chip power draw,
because essentially all of P = U*I ends up "wasted" as heat.
* In some cases the TDP has been underestimated in real world applications, as was the case with the GTX480. That was most likely NV intentionally low-balling the real world TDP of the 480 to save face. The real TDP of the 480 should have been 280W.
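For a rough sense of scale, a back-of-the-envelope sketch of the P = U*I point above; the core voltage and current figures are invented for illustration, not specs of any particular card.

```python
# Back-of-the-envelope: essentially all electrical power drawn by a GPU core
# (P = U * I) is converted to heat, so the cooler must be sized for the
# maximum sustained draw. The figures below are made up for illustration.
core_voltage_v = 1.0     # ballpark core voltage for a 28nm GPU (assumed)
core_current_a = 186.0   # hypothetical sustained current at full load

power_w = core_voltage_v * core_current_a
print(f"Sustained draw ~= heat the cooler must dissipate: {power_w:.0f} W")
# A cooler rated below this number lets the chip heat up until it throttles.
```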
One revolutionary change that allows GeForce GTX 680 to aim high, is an extremely smart self-tuning logic that fine-tunes clock speeds and voltages, on the fly, with zero user intervention, to yield the best possible combination of performance and efficiency for a given load scenario. The GTX 680 hence reshapes the definition of fixed load clock speed, with dynamic clock speeds. Think of it as a GPU-take on Intel's Turbo Boost technology, which works in conjunction with SpeedStep to produce the best performance-per-Watt for CPUs that feature it.
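To make that description concrete, here is a toy sketch of that kind of closed loop: step the clock up while estimated board power stays under a boost-target budget, step it down otherwise. The clock steps and the power model are simplified inventions (the 170W target is the figure quoted later from AnandTech); this is not NVIDIA's actual GPU Boost algorithm.

```python
# Toy model of a boost-style control loop: raise the clock while estimated
# board power stays under the boost target, back off otherwise.
# The power model and step sizes are invented for illustration.

BASE_CLOCK_MHZ = 1006
MAX_CLOCK_MHZ = 1110
STEP_MHZ = 13
BOOST_TARGET_W = 170.0

def estimated_power(clock_mhz):
    """Crude stand-in for the card's power monitoring (roughly linear in clock)."""
    return 110.0 + 0.055 * clock_mhz

clock = BASE_CLOCK_MHZ
for _ in range(20):  # one iteration per monitoring interval
    if estimated_power(clock + STEP_MHZ) <= BOOST_TARGET_W and clock + STEP_MHZ <= MAX_CLOCK_MHZ:
        clock += STEP_MHZ          # headroom available: boost
    elif estimated_power(clock) > BOOST_TARGET_W and clock - STEP_MHZ >= BASE_CLOCK_MHZ:
        clock -= STEP_MHZ          # over the target: back off

print(f"Settled clock: {clock} MHz, est. power {estimated_power(clock):.0f} W")
```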
Btw, this raises the question:
What are the real applications for graphics cards that are marketed as gaming cards under the "GeForce" or "Radeon" brands? I would say it's primarily games. Sure, you can run other stuff on them, but that is not the primary use case, so I would somewhat understand if that were not included in the TDP calculation. Do you have a source that explains how Nvidia and AMD actually do this?
But this is a slippery slope, I guess; no one can say for sure what AMD and Nvidia are thinking here. I would assume they want people to buy their professional products if you're doing this type of workload.
Nvidia has hardware and software that monitor the TDP and temperatures. Go back to the GTX 680 launch reviews. This is why people see/will see higher boost clocks in some games.
The TDP of the 680 is 225W.
NVIDIA's official TDP is 195W
It can't be primarily games, since HD7000 was already designed for HPC to begin with, which right away means those chips get used in more intensive apps than games. NV and AMD both talked about this when the whole issue of HD4870/4890 and GTX200 cards blowing up in Furmark began. They started first with software and then hardware thermal throttling for apps they felt didn't represent real world usage patterns. Other real world apps that load the GPU more than games are still considered.
The TDP of the 680 is 225W. If NV only looked at power consumption in games, they could have clocked the GPU at 1200-1300MHz. They didn't. A 1058MHz 680 peaks at about 186W in games, which leaves almost 40W of extra headroom relative to that TDP. NV clearly designed around more intensive real world applications than games when setting the 680's GPU clock speeds. The reference design can cope with 225W of power usage, but games don't even get there.
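Taking the numbers in the post above at face value (225W limit, ~186W game peak at 1058MHz), a rough sketch of that headroom arithmetic; the linear power-vs-clock scaling is my own crude approximation (it ignores the extra voltage higher clocks need), not vendor data.

```python
# Headroom arithmetic using the figures claimed in the post above.
# The linear power-vs-clock scaling is a crude approximation, not vendor data.
tdp_w = 225.0          # claimed board power limit
game_peak_w = 186.0    # claimed peak draw in games at 1058 MHz
clock_mhz = 1058

headroom_w = tdp_w - game_peak_w
# If power scaled only linearly with clock (in reality it scales worse once
# voltage rises), the clock that would just fill the budget would be roughly:
naive_clock = clock_mhz * tdp_w / game_peak_w
print(f"Headroom: {headroom_w:.0f} W ({headroom_w / game_peak_w:.0%} above game peak)")
print(f"Naive clock at the full budget: ~{naive_clock:.0f} MHz")  # ~1280 MHz
```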
http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review
"NVIDIA’s official TDP is 195W, though as with the GTX 500 series they still consider this an average number rather than a true maximum. The second number is the boost target, which is the highest power level that GPU Boost will turbo to; that number is 170W."
Too bad, $2000 is way too much for what two single GPU cards are worth for my buying habits. Will wait for the price to drop. Even 50% more than a 680 is still really impressive, it's just the price that isn't.
I've seen values of 170W and 195W for GTX680 TDP, never 225W though. 225W is just what you get when you add up the power connectors.
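For reference, that connector arithmetic is just the standard PCIe power delivery ratings added up; these are spec limits, not a measured figure or an official TDP.

```python
# Where the 225W figure comes from: the PCIe spec ratings for the GTX 680's
# power inputs added together (spec limits, not an official TDP).
pcie_slot_w = 75    # PCIe x16 slot
six_pin_w = 75      # per 6-pin PEG connector; the 680 has two
board_limit_w = pcie_slot_w + 2 * six_pin_w
print(board_limit_w)  # 225
```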
There is another thing: wear and tear. Transistors, like everything else, "age", so you can't really build a chip based on "best case scenario" loads.
And so you see again - peaks are irrelevant when it comes to TDP.
Agreed.
Notice what I said earlier in this thread about how people overhyped the GTX480/580/680's specs and real world gaming performance increases? We are seeing history repeat itself for the 4th time in a row.
