I think it is completely relevant. Users are going to want to know the power consumed by their machine in real-world situations. I don't think the point of the power article is to discuss per-frame efficiency specifically, but rather the power consumption of a system using the card in games at 1080p.
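For anyone curious about the distinction, here's a quick toy calculation (every number below is made up for illustration, none of it is from the article) of how raw system draw and per-frame efficiency can move in opposite directions:

```python
# Toy numbers only -- nothing here comes from the article.
def joules_per_frame(system_watts, fps):
    # Per-frame efficiency: W / (frames/s) = joules per frame.
    return system_watts / fps

# A faster card also pushes the CPU/memory subsystem harder, so
# wall power can rise even while per-frame efficiency improves.
rigs = {"slower card": (280, 45), "faster card": (320, 70)}
for name, (watts, fps) in rigs.items():
    print(f"{name}: {watts}W, {joules_per_frame(watts, fps):.1f} J/frame")
# slower card: 280W, 6.2 J/frame
# faster card: 320W, 4.6 J/frame
```

Both metrics are valid; they just answer different questions.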
I tend to strongly agree with your point of view. You cannot play a game on a videocard without the rest of the system. Looking at total system power usage actually tells the user roughly what PSU they would need with a given CPU. So many times we hear online of people being afraid to buy a 290X or 780Ti because they read AIB requirements calling for a 700W-850W PSU and get all paranoid. Articles that focus on total system power usage, especially on a per-game basis, are very useful for calming those gamers' fears: you can run a top-of-the-line GPU with a top-of-the-line i7 on Socket 1150 on a solid 450-500W PSU. You do not need a 750W PSU to run a 290X/780Ti-style card!
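As a rough back-of-the-envelope sketch (every wattage figure below is an assumed ballpark for illustration, not a measured value), the math works out something like this:

```python
# Back-of-the-envelope PSU sizing. All wattages are assumed
# ballpark figures, not measurements.
def psu_estimate(cpu_w, gpu_w, rest_w=75, headroom=1.2):
    """Peak system draw plus a 20% sizing margin.

    rest_w -- motherboard, RAM, drives, fans (assumed ~75W)
    """
    peak = cpu_w + gpu_w + rest_w
    return peak, peak * headroom

# Assumed peaks: ~90W for a stock Socket 1150 i7 in games,
# ~250W for a 290X/780Ti-class card.
peak, rated = psu_estimate(cpu_w=90, gpu_w=250)
print(f"Peak system draw: ~{peak:.0f}W")   # ~415W
print(f"Suggested PSU:    ~{rated:.0f}W")  # ~498W
```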
However, since not everyone is running a 4770K/4790K, we often quote single card power usage because it would be hard to find comparable total system power numbers online for, say, an i7-2600K + 980, as such a system is rarely tested.
			Good article, but one problem: using total system power consumption for the comparison is invalid due to the massive performance deltas. Higher fps = more power use on the CPU & motherboard/memory subsystem.
I feel the opposite. It actually tells a user with a modern i7 how much power they can save by upgrading to a 980. While many on this forum disagree, to me total system power consumption is also a meaningful metric for single-GPU cases, since you cannot run a GPU on its own in a vacuum. Single GPU power measurement (TPU style) is helpful when we want to compare the efficiency of a GPU architecture/SKU. However, it has led to many users incorrectly applying a simple multiplier when discussing 2/3/4-GPU setups (i.e., they take the power usage of a single card and triple it for a 3-GPU setup, which is NOT how Tri-SLI works). For example, at the 4:37 mark:
https://www.youtube.com/watch?v=UnS0xWtoRzk
Despite how useful single GPU power measurement can be, it is exactly because of the factors you mentioned and diminished SLI/CF scaling that total system power usage is often a far better way to tell us what PSU we need and what it will actually cost to run the whole rig. For marketing performance/watt reasons it also sounds A LOT more impressive to say that a GTX980 uses 90W less power than a 780Ti than to state that one system uses 250W and another 340W.
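To make that multiplier fallacy concrete, here is a toy sketch (the base draw, per-card draw and utilization figures are made-up assumptions, not measurements) of why the naive N× math overshoots what you would actually see at the wall:

```python
# Toy illustration of the multi-GPU multiplier fallacy.
# base/card/utilization values are made-up assumptions.
def naive_total(base_w, card_w, n_cards):
    # The flawed math: every extra card adds its full single-card draw.
    return base_w + card_w * n_cards

def scaled_total(base_w, card_w, per_card_util):
    # Closer to reality: with diminished SLI/CF scaling, the 2nd and
    # 3rd cards never run at full utilization, so they draw less.
    return base_w + card_w * sum(per_card_util)

base, card = 100, 250       # assumed non-GPU draw and per-card draw (W)
util = [1.0, 0.8, 0.6]      # assumed per-card utilization in Tri-SLI

print(naive_total(base, card, 3))      # 850W -- the scare number
print(scaled_total(base, card, util))  # 700W -- closer to the wall reading
```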
Too bad they didn't include other recent games such as COD: AW, Far Cry 4, Ryse: Son of Rome and AC Unity in their test, but instead used oldies such as Bioshock Infinite, or used The Crew and then didn't even count it in the conclusion. One point I always make is how little there is to gain by buying the fastest NV/AMD card/refresh: as we can see, the 580 didn't really last any longer than a 480, and I would imagine the 670 and 680 would be similarly close on that chart.
			Are you reading the same article I am? Saying the 980 isn't an upgrade to the 780Ti?
			How can you say that when the 980 is faster? If it were slower, then it wouldn't be an upgrade, but it is faster.
Because who in their right mind would sell their $650-700 780Ti for $350-400 (their market prices) to get 12% more performance at 2560x1600, as per the review? It's doubtful a user would feel a 12% difference in gameplay on average without resorting to actual benchmarking. It is an upgrade in the sense that if you skipped the 780Ti and got the 980 you end up with a faster card, but the 980 is not a legitimate, worthwhile upgrade for 780Ti owners the way moving from a 580 to a 680 was.
Grooveriding covered this point above. My biggest gripe is the naming convention. In the past NV would never label a mid-range GeForce Ti 4200 as a 4600, a 5600 Ultra as a 5800 Ultra, a 6600GT as a 6800GT, or a GTX460 1GB as a GTX480, despite those next-gen mid-range cards outperforming the previous generation's flagships. Since the GTX680, however, they have basically been labelling x60/x60Ti-class cards as x80-series cards. It's grossly misleading since they end up marketing 2 flagship cards in what is essentially just 1 GPU architecture generation. AMD isn't much better, sometimes creating new generations out of thin air: the HD6970 was basically an HD5890, and the 3870 was really just a 2970XT, if you will.
Wish more reviews focused on inter-generational performance increases, as that data is often hard to find.