Depends on the review, and probably the game... AnandTech found them to be quite similar using total system power:
392 W / 301 W = ~30% more power for the Titan X:
http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/6
Meanwhile, performance is 33.3% faster for the Titan X. That's the average of all results, since AT doesn't specify which game was used for the power consumption number. Worst case, the Titan X is 35% faster.
The Titan X is more than 33% faster than the 980 when you use GPU-limited games/resolutions. At 4K the Titan X is nearly 40% faster than the 980.
Per the AT chart you linked, AT specified Crysis 3. The Titan system uses 30% more power, but it's 35% faster in Crysis 3. Also, the ONLY 980 card that can exhibit that power usage per AT is the reference blower card. Not everyone buys those, which basically means the Titan X rig should end up more efficient than just about any GTX 980 setup unless one plays a lot of CPU-limited titles.
I think you're mixing up the words "effectiveness" and "efficiency." The 980 is the more power-efficient card, while the Titan X is the more effective card because it's faster. Why should I look at performance per watt? I don't spend most of my time gaming. The Titan X isn't a more efficient card, it's a less efficient card, as it pulls more power.
I never said you should care about perf/watt. I am simply stating perf/watt should be measured on a Total System basis. Stating that a reference 980 is more power efficient than a reference Titan X ignores the
actual efficiency a PC gamer "pays for" / experiences when he is gaming.
Do people drive a car's engine or the entire car? That's why a car's overall fuel economy is dictated by a combination of factors: the engine's efficiency, the coefficient of drag, the rolling resistance of the tires, etc.
In a similar fashion, comparing the efficiency of a GPU on a card basis is something an engineer might care about. As a
gamer/user, what we want is the total system efficiency because our entire rig is used to generate IQ/FPS. That makes the Titan X
rig both more effective and more power efficient.
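To make the card-level vs. system-level distinction concrete, here is a minimal sketch of the arithmetic, using AT's total system power numbers quoted above (392 W vs. 301 W, Titan X ~35% faster in Crysis 3) and, for the card-only comparison, NVIDIA's reference TDPs as an assumed stand-in for actual card power draw (250 W Titan X, 165 W GTX 980):

```python
# Card-level vs. system-level perf/watt comparison.
# Assumptions: reference TDPs (250 W / 165 W) proxy for card-only power;
# 1.35 relative performance comes from the Crysis 3 numbers in the thread.

def perf_per_watt(relative_perf, watts):
    """Performance per watt, with GTX 980 performance normalized to 1.0."""
    return relative_perf / watts

titan_perf, gtx980_perf = 1.35, 1.00  # Titan X ~35% faster

# Card-only ratio (Titan X vs. 980, reference TDPs):
card_ratio = perf_per_watt(titan_perf, 250) / perf_per_watt(gtx980_perf, 165)

# Whole-system ratio (AT's measured load power):
system_ratio = perf_per_watt(titan_perf, 392) / perf_per_watt(gtx980_perf, 301)

print(f"card-only ratio: {card_ratio:.2f}")   # below 1.0: 980 wins in isolation
print(f"system ratio:    {system_ratio:.2f}")  # above 1.0: Titan X rig wins
```

The point of the sketch: the 90 W the rest of the rig burns is a fixed cost paid either way, so the faster card can come out ahead once you measure at the wall even though it loses on a card-only basis.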
Well, in NVIDIA's presentation for the Titan X, they pretty much focused on compute usage. The idea behind the 12GB of on-board memory is to satisfy large data sets, and frankly, that benefit won't disappear even if the consumer-oriented card is just as fast. Now, if someone is fine with a 6GB memory buffer, then they might go for the cheaper card.
What I meant is that in the gaming scene, a GM200 6GB (with MSI Lightning/EVGA Classified treatments, etc.) or the R9 390X will make the Titan irrelevant. For those rendering and doing semi-professional work, sure, the Titan X will have its place.
...and we should see even higher efficiency from the rumored 980 Ti, given fewer memory chips that need power (12GB -> 6GB). Admittedly, I'm not sure how much juice those chips draw, but that would be an interesting number to know.
Agreed. That's why, once the Titan X launched, I immediately said that a 1200 MHz+ GM200 6GB with aftermarket coolers is really what we want. The Titan X is a compromise on two fronts: (1) it shoved in an arguably useless amount of VRAM for gaming at the expense of power usage and GPU clocks; (2) it limited air-cooling options to the reference blower, which compromised noise levels in overclocked states. GM200 6GB should address both of those points.
Just months after the OG Titan's launch, AIBs released factory pre-overclocked 780 cards that made the Titan obsolete for gaming. And they managed to hit 30 dBA at load (!) while operating at 76C in max overclocked states (!).
If NV doesn't neuter the consumer GM200 6GB card too much, MSI Lightning and EVGA Classified GM200 6GB cards should beat the Titan X for $200+ less.