I brought it up, but the predominantly AMD crowd in this thread no longer cares about perf/W, even though they don't have the performance lead either. Now it's perf/$ that matters and nothing else (not even smoothness).
Not sure if serious. Performance/$ has always been either the most important metric for GPU buyers or one of the most important, and this isn't the first generation where it's been discussed. If it didn't matter, we'd all be buying $1-3K Titan setups; what would even be the point of researching GPUs? We'd all just jump to the next fastest card out.
Regarding performance/watt, no one seriously cares that a GTX680 draws 40-50W less in a system that's already pulling nearly 400W, when the GTX680 costs $450 and 1GHz HD7970s are going for $380 with free AAA games. Let me know what the break-even point is on electricity costs after paying an $80 premium for the slower card.
🙄
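Here's a rough back-of-envelope on that break-even point. The electricity rate and the weekly gaming hours are assumed numbers for illustration, not anything quoted in this thread:

```python
# Back-of-envelope: how long until a GTX680's lower power draw pays back
# its price premium over an HD7970? Rate and usage are assumed values.
PREMIUM_USD = 80.0        # $450 GTX680 vs $380 HD7970 (numbers from above)
POWER_SAVED_W = 45.0      # midpoint of the quoted 40-50W difference
RATE_USD_PER_KWH = 0.12   # assumed electricity rate
HOURS_PER_WEEK = 20.0     # assumed gaming time

savings_per_hour = (POWER_SAVED_W / 1000.0) * RATE_USD_PER_KWH
hours_to_break_even = PREMIUM_USD / savings_per_hour
years = hours_to_break_even / (HOURS_PER_WEEK * 52)

print(f"Savings: ${savings_per_hour:.4f} per hour of gaming")
print(f"Break-even: {hours_to_break_even:,.0f} hours (~{years:.0f} years)")
```

That works out to roughly 14,800 hours of gaming, about 14 years at 20 hours a week, so under these assumptions the $80 premium never realistically pays for itself in electricity.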
And honestly, if you care that much about heat and power costs, as you keep suggesting by referring to the HD7970 as a "Fermi" (even though a 1180MHz HD7970 uses just 10-20W more power than a stock 580 while being 60% faster), why are you running GTX470s @ 960MHz in Tri-SLI? It's interesting how people buying $500 GPUs suddenly started to care about 40-50W of extra power, yet bought Fermis over the HD5000/6000 series. Performance/watt is a nice metric, but I'd take a 600W card if it were 2x faster than the Titan for $500, and under no circumstances can it justify the GTX680's price premium, or the Titan's, unless you are running hundreds of them.
😉 An HD7950 OC for $280-290 will match or even beat a GTX680 while using only about 225W doing it. Try doing a break-even analysis on power consumption costs with that!
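Plugging that into the same sketch above: a ~$160-170 premium for the GTX680 over that 7950 and only a ~30W delta (taking the 680 at its ~195W TDP, which is my assumption, against the quoted ~225W) pushes the break-even past 45,000 hours of gaming, i.e. decades at any realistic schedule.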
I'm surprised not much has been said about Titan's perf/watt efficiency in this thread. It's pretty damn incredible, putting GK104 and Tahiti to shame.
From an engineering point of view it is indeed impressive. But most enthusiasts running overclocked i5/i7s and 185W GPUs (GTX680 and higher) don't really care about GPU performance/watt when the differences are as small as they are between the 7970 and 680. It matters more if you are going Tri-SLI/CF and are stuck with hot, loud reference blowers; with after-market cards, the noise and temperature problems go away. 40-50W of extra power is nothing in cost or heat on an annual basis unless you are gaming/using the GPU 24/7, in which case, as I said before, those people had better be running 92%-efficient Platinum PSUs.
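For scale: 45W of extra draw for 3 hours of gaming a day is about 49 kWh a year, roughly $6 at the same assumed $0.12/kWh rate from the sketch above.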
The difference in idle/load power consumption between Z77 motherboards with an IVB @ 4.6GHz is greater than the difference between a GTX680 and an HD7970, and I don't see people caring about the 40W of extra power a high-end Z77 board uses vs. a lower-end one.
http://www.xbitlabs.com/images/mainboards/asrock-z77-extreme3/power-5.png
http://www.xbitlabs.com/images/mainboards/asrock-z77-extreme3/power-7.png
Why don't people compare motherboards on a performance/watt basis too? Going from a 1st-generation Core i7 @ 3.6GHz to 4.2GHz increased power consumption by 100W, and I never saw anyone here caring. I agree, though, that from an engineering perspective it's very impressive that a 7.1B-transistor, 551mm² chip uses barely more power than an HD7970GE. The Titan's near-1.0V GPU voltage vs. 1.256V on the HD7970GE is a big factor here.
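To first order, dynamic power scales with f·V², so at equal clocks and ignoring leakage, dropping from 1.256V to ~1.0V cuts dynamic power to (1.0/1.256)² ≈ 0.63 of the 7970GE's, roughly a third less per clock, which is how a much larger die can land in a similar power envelope.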