Actually, we take a lot of assumptions for granted when benchmarking graphics cards. Here is what we are REALLY saying.
On an <insert rig specs here> running at <insert resolution here> and with game settings at <insert game settings here>, product XYZ produces <insert average frame rate here>. The assumptions we are making are: 1) that the higher the number, the better; 2) that our test bed accurately reflects a "general" system; and 3) that the video card is actually doing what we tell it to (i.e., the whole "brilinear filtering" brouhaha).
In this case your independent variable is your game settings (resolution, AF, AA, etc.) and your dependent variable is the average frame rate. In other words, you plug in specific game settings (1600x1200, 2xAF, 4xAA) and you arrive at an average frame rate of some value.
All HardOCP is trying to do is reverse the situation. They are (or should be) making the average frame rate the independent variable and the game settings the dependent variable. In other words, you pick an average frame rate of XX and determine the maximum settings you can run your card at while still getting an average frame rate of XX.
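For the programmatically inclined, here is a minimal sketch of that reversal. The run_benchmark() helper and the candidate settings ladder are hypothetical stand-ins (not HardOCP's actual harness or procedure); the point is only to show which variable is held fixed and which one is searched for:

```python
# Hypothetical stand-in for an actual benchmark run: launch the game at the
# given settings, run a timedemo, and return the measured average frame rate.
def run_benchmark(resolution, af, aa):
    raise NotImplementedError("replace with a real benchmark harness")

# Traditional method: the settings are the independent variable and the
# average frame rate is the dependent variable.
def fps_at_settings(resolution, af, aa):
    return run_benchmark(resolution, af, aa)

# HardOCP-style reversal: the target frame rate is the independent variable
# and the settings are the dependent variable. Walk a list of candidate
# settings, ordered from most to least demanding, and return the first
# combination that still hits the target.
def max_playable_settings(target_fps, candidates):
    for resolution, af, aa in candidates:
        if run_benchmark(resolution, af, aa) >= target_fps:
            return (resolution, af, aa)
    return None  # the card can't hit the target at any tested setting

# Usage (hypothetical settings ladder, most demanding first):
# max_playable_settings(60, [("1600x1200", 2, 8), ("1280x1024", 2, 8), ("1024x768", 2, 4)])
```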
It certainly is a different way to look at it. Of course, most people's knee-jerk reaction to anything new is to bash it mercilessly with statements that don't "prove" anything (It sucks!, Kyle is an idiot, What kind of morons..., etc.) as opposed to actually THINKING about it and coming up with reasons why it's not a good way to assess video cards.
I personally think it is an interesting viewpoint. What is more useful? Knowing that the GeForce 6XXX is 2X as fast as a Radeon 9800XT at 2056x1920? Or knowing that at a minimum frame rate of 60fps, the Radeon 9800XT can handle 1600x1200, 2xAF, 8xAA, while the GeForce 6XXX can handle 2056x1920, 2xAF, 8xAA? Since I don't ever play at 1600x1200 (I don't have the monitor for it), the fact that the GeForce kicks tail is great, but totally worthless to me.
I think the major problem with the current review is that they try to reference previous stuff. You CAN'T!!! You just changed your testing methodology! None of your previous data is relevant! Furthermore, because the previous data is irrelevant, YOUR PREVIOUS CONCLUSIONS BASED UPON THAT DATA ARE ALSO IRRELEVANT!

In that article, they repeatedly reference the huge jump the 9700 Pro had. WHAT huge jump? There WAS no huge jump, because you have no test data for that jump. Your huge jump was a jump in frame rates using the old methodology. For all they know, they may find the same "underwhelming" setting changes between the Ti and the 9700 Pro as they are finding in this case. But they don't know, because all of their old data is useless for comparison purposes. So they really don't know if this is a "big" jump or not, because they have ZERO other jumps to compare against under their "new" methodology. But they try to make these types of comparisons anyway, and THAT is where they run into problems.
Whew! Hopefully, that just made sense.
<Picks up his quarter and replaces it with a dollar>
P-X