I understand his point, I just don't think it is valid. My response was mostly poking fun at him, since the example he gave was extreme. So, let me elaborate:
What's the difference between GameGPU's scoring system and TPU's overall performance? They just add up all the scores for each card and divide by the number of games.
There is a very big difference, and you're mistaken about the method TPU uses. The error from GameGPU's method may happen to be small relative to the correct result, but the add-up-the-FPS method is still wrong, no matter how close it comes to the correct answer.
I'll give you a relevant example. Let's compare two hypothetical scores (made up):
In Game 1 (sort of like DOOM Vulkan): RX 480 gets 110 FPS, while GTX 1060 gets 100. RX 480 ahead by 10%.
In Game 2 (sort of like Watch Dogs 2): RX 480 gets 30 FPS, while GTX 1060 gets 33. GTX 1060 ahead by 10%.
Using these two games, which card is faster on average? Neither - they're equal, of course. But, what if we use GameGPU's method?
RX 480 = 140 (110 + 30)
GTX 1060 = 133 (100 + 33)
RX 480 appears ahead by about 5.3% (140/133), when in fact they're equal. Games with higher FPS carry more weight in the sum, and that is incorrect.
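To make the skew concrete, here's a minimal Python sketch of the adding-up-FPS method, using the made-up numbers above:

```python
# Sum-of-FPS method, with the hypothetical numbers from the example above.
fps = {"RX 480": [110, 30], "GTX 1060": [100, 33]}

totals = {card: sum(scores) for card, scores in fps.items()}
# totals: RX 480 -> 140, GTX 1060 -> 133

skew = totals["RX 480"] / totals["GTX 1060"] - 1
print(f"Apparent RX 480 lead: {skew:.1%}")  # ~5.3%, despite equal 10% trade-offs
```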
Techpowerup uses a different method, one that correctly shows relative performance. You choose one card as a reference (the card being reviewed), compile each card's performance relative to that reference in every game, and then take the geometric mean of those relative results (which preserves correct relative performance across all cards). That paints an accurate picture of the hierarchy, not a messy, incorrect one like GameGPU's.
Going back to my example, this is how you do it correctly. We'll choose the RX 480 as reference, and we'll throw in a third and fourth card to really demonstrate what's going on:
Game 1: 1070 gets 140 FPS, RX 470 gets 95 FPS
Game 2: 1070 gets 50 FPS, RX 470 gets 27 FPS
Relative scores (RX 480 = 100):
Game 1: RX 480: 100, RX 470: ~86.36, GTX 1060: ~90.909, GTX 1070: ~127.27
Game 2: RX 480: 100, RX 470: 90, GTX 1060: 110, GTX 1070: ~166.67
Then, with the 480 fixed at 100, we take the geomean of each card's per-game values: (86.36, 90) for the 470, (90.909, 110) for the 1060, and (127.27, 166.67) for the 1070.
Relative performance:
RX 480: 100%
RX 470: 88.16%
GTX 1060: ~100% (exactly 100 with exact fractions; the geomean of 90.909 and 110 is √9999.99 ≈ 99.9999, the tiny shortfall coming only from the decimal rounding in 90.909)
GTX 1070: 145.64%
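The whole calculation can be sketched in Python (the fps table and the relative_performance helper are my own naming; the numbers are the made-up ones from the example):

```python
import math

# Per-game FPS from the made-up example above; two games per card.
fps = {
    "RX 480": [110, 30],
    "RX 470": [95, 27],
    "GTX 1060": [100, 33],
    "GTX 1070": [140, 50],
}

def relative_performance(fps, reference):
    """Each card's geometric-mean performance relative to `reference`, as a percentage."""
    ref = fps[reference]
    chart = {}
    for card, scores in fps.items():
        ratios = [s / r for s, r in zip(scores, ref)]
        chart[card] = math.prod(ratios) ** (1 / len(ratios)) * 100
    return chart

chart = relative_performance(fps, "RX 480")
for card, score in chart.items():
    print(f"{card}: {score:.2f}%")  # matches the chart above: 100.00, 88.16, 100.00, 145.64
```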
Now, let's say we chose a different reference card, the RX 470 in this case. We get this performance chart:
Relative performance (470 as reference):
RX 470: 100%
RX 480: 113.43%
GTX 1060: ~113.43% (the same as the RX 480, as expected, since the two are equal on average)
GTX 1070: 165.19%
So now, we have two references. Can we convert between them correctly? Let's try finding what the GTX 1070 is relative to the 480 using the second performance chart.
165.19 / 113.43 ≈ 145.63% ~ 145.64% - the same result as above, off only in the last digit from rounding. We get the same result whether the charted values are relative to one card or another.
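That cross-reference check can be sketched in Python too (geomean_relative is just an illustrative helper, using the same made-up numbers):

```python
import math

# Same made-up FPS numbers as above.
fps = {"RX 480": [110, 30], "RX 470": [95, 27], "GTX 1070": [140, 50]}

def geomean_relative(card, reference):
    """Geometric-mean performance of `card` relative to `reference`, as a percentage."""
    ratios = [a / b for a, b in zip(fps[card], fps[reference])]
    return math.prod(ratios) ** (1 / len(ratios)) * 100

# 1070 relative to the 480 directly...
direct = geomean_relative("GTX 1070", "RX 480")
# ...and converted from the 470-referenced chart instead:
via_470 = geomean_relative("GTX 1070", "RX 470") / geomean_relative("RX 480", "RX 470") * 100
print(f"{direct:.2f} vs {via_470:.2f}")  # both ~145.64
```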
This will not work with arithmetic or harmonic means. If I calculated average relative performance using arithmetic means and then attempted the same conversion, I would get an answer different from the one in the top chart, instead of the same one as I did with geometric means.
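Repeating the check with arithmetic means (same made-up numbers, my own helper naming) shows the disagreement:

```python
# Same made-up FPS numbers, but averaging the per-game ratios arithmetically.
fps = {"RX 480": [110, 30], "RX 470": [95, 27], "GTX 1070": [140, 50]}

def arith_relative(card, reference):
    """Arithmetic-mean performance of `card` relative to `reference`, as a percentage."""
    ratios = [a / b for a, b in zip(fps[card], fps[reference])]
    return sum(ratios) / len(ratios) * 100

direct = arith_relative("GTX 1070", "RX 480")
converted = arith_relative("GTX 1070", "RX 470") / arith_relative("RX 480", "RX 470") * 100
print(f"{direct:.2f} vs {converted:.2f}")  # ~146.97 vs ~146.56: they disagree
```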
All the small differences are due to using decimal approximations. Using rational expressions (or more precise approximations), these would be exact. This is the correct method and it works across all cases. GameGPU's method might sometimes give us a somewhat accurate picture of relative performance, sometimes it might not - i.e. it's incorrect and it doesn't actually mean much.