AnandTech uses outdated benchmark software on purpose(?)


Voo

Golden Member
Feb 27, 2009
1,684
0
76
Not only does Anand use outdated software, but for games they don't even show full HD results, and they don't state what video card and settings they used in the comparison bench.

And in the Sandy Bridge review Anand used low resolutions (even 1024 x 768) and no AA/AF in order to make the Phenom II look bad compared to Intel.
Yeah, obviously we want the GPU to factor heavily into a CPU benchmark. After all, the standard for GPU benchmarks is to use the weakest CPU you can find, to make sure nobody has any idea which component contributes how much. But this just shows that if someone can't understand a nice colorful picture, it'll do more harm than good - then again, AT's target audience is hopefully different from idiots with no clue.

Granted, they could note that in every article, but then - different target audience.
 

evilspoons

Senior member
Oct 17, 2005
321
0
76
And in the Sandy Bridge review Anand used low resolutions (even 1024 x 768) and no AA/AF in order to make the Phenom II look bad compared to Intel.

Never mind that nobody runs a fast CPU and video card at those settings.

Copied and pasted from another thread where I had to make this same explanation:

It's not "showing off" or "making a CPU look bad"; it's demonstrating the contribution the CPU makes to the task. Obviously both the CPU and GPU need to be fast for most games to work properly, but some games just don't care and are highly CPU- or GPU-dependent. If you bottleneck the game with GPU settings that are too high, you can't see potential CPU bottlenecks.

Remember that a benchmark result is an average over a whole run.

Let's say half a benchmark run is easy on the GPU and half is hard on it at high resolution (say, indoors and outdoors). You end up with an average frame rate of 30 FPS because indoors is getting 50 and outdoors is getting 10. That average hides the fact that the CPU might be supplying enough data for some theoretically faster video card to run at 50 FPS outdoors too. Turn the GPU-heavy detail settings down and let the game 'run free': if the outdoor section still only runs at 10 FPS, the CPU is also holding it back, and you'd need a faster CPU (and GPU) to run the game better. On the other hand, if the cheapest CPU on the market runs that 10 FPS section at 400 FPS once the GPU-dependent detail is turned down, it's safe to say a faster CPU is not needed to improve the game.

This is why you must try to isolate the components. Make a CPU decision based on CPU-isolated tests, and make a GPU decision based on GPU-isolated tests.
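
To make the arithmetic above concrete, here's a minimal sketch of how a run's average FPS is computed and why it hides a section-level bottleneck. All the frame rates below are hypothetical, not from any real benchmark:

```python
# Hypothetical benchmark: two sections of equal duration,
# one easy on the GPU (indoors) and one hard on it (outdoors).
sections = {"indoors": 50.0, "outdoors": 10.0}  # FPS per section

def average_fps(section_fps, seconds_per_section=60.0):
    """Overall FPS = total frames rendered / total elapsed time."""
    total_frames = sum(fps * seconds_per_section for fps in section_fps.values())
    total_time = seconds_per_section * len(section_fps)
    return total_frames / total_time

print(average_fps(sections))  # 30.0 -- the 10 FPS outdoor half is invisible

# Re-run with the GPU-heavy settings turned down (again, made-up numbers).
# If "outdoors" jumps to 400 FPS, the GPU was the bottleneck; if it had
# stayed near 10 FPS, the CPU would be holding the game back too.
low_detail = {"indoors": 220.0, "outdoors": 400.0}
print(average_fps(low_detail))  # 310.0 -- no CPU bottleneck at these settings
```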
 

drizek

Golden Member
Jul 7, 2005
1,410
0
71
I was going to post the same thing, but then I realized that he already knows that. The problem is that most of the people reading the reviews don't.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
I was going to post the same thing, but then I realized that he already knows that. The problem is that most of the people reading the reviews don't.
Only if we assume that AT's target audience is idiots with no idea about the matter - and really, I don't see any reason to cater to the lowest common denominator on a site such as AT. Why should we lose a useful metric just so we can be sure not to confuse anyone out there? Where do you think that should stop? No more SSD benchmarks, because someone who only does web browsing all day won't see any difference? No more overclocked CPUs in reviews, because you can't guarantee that everyone will reach the shown frequency?
 

drizek

Golden Member
Jul 7, 2005
1,410
0
71
I think there should be more of a disclaimer and, generally, less focus on games in reviews.

I'm all for doing comparisons of SC2 and Civ V and other CPU-bound games, but, really, who cares which CPU is better for L4D or Portal?
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
I think there should be more of a disclaimer and, generally, less focus on games in reviews.

I'm all for doing comparisons of SC2 and Civ V and other CPU-bound games, but, really, who cares which CPU is better for L4D or Portal?

People who play L4D or Portal? If you're building a high-end gaming machine, you can hit a CPU bottleneck.

Tomshardware said:
What concerns us, though, is that in a direct comparison to a similarly-priced platform based on Core i5-2400 and Z68 Express, the Phenom II X4 980 Black Edition hit performance ceilings in a number of benchmarks where the GeForce GTX 570s in SLI still had performance left to offer. Intel’s higher frame rates proved that the graphics cards weren’t to blame.

Link

Some might claim that not many people are able to bring to bear that much GPU power, and that is true. But it is also true that most people are happy with a Core i3 or Athlon II X2.
 

drizek

Golden Member
Jul 7, 2005
1,410
0
71
Even the slowest CPU can run Portal 2 and L4D2 at over 120 Hz. So who cares which is faster? 150 Hz versus 170 Hz is irrelevant.

If, for example, the extra cores in Bulldozer give it an edge in Civilization, letting it run at 45 fps instead of 40 fps on Intel, that is relevant. If Intel has better single-threaded performance and gets 180 fps instead of 160 fps in Half-Life 2, that is of academic interest only, and not indicative of the fitness of these chips for either modern games (which are neither GPU- nor CPU-limited, but VSync-limited) or future games (which will almost universally be highly threaded).
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
If, for example, the extra cores in Bulldozer give it an edge in Civilization, letting it run at 45 fps instead of 40 fps on Intel, that is relevant. If Intel has better single-threaded performance and gets 180 fps instead of 160 fps in Half-Life 2, that is of academic interest only, and not indicative of the fitness of these chips for either modern games (which are neither GPU- nor CPU-limited, but VSync-limited) or future games (which will almost universally be highly threaded).
Well, first of all, it gives us nice insight into how the architecture scales with typical gaming workloads. Further, it indicates how well the engine works with a specific architecture (and game engines aren't developed from scratch each year, so that's important for future games).


And if you really think that most games in the near future will be highly threaded (and why would I care about games released in '13/'14 when judging a CPU that's out in a few months?), I think you'll be sorely disappointed. Lots of games still work perfectly fine on dual cores - and the first of those were released about 7 years ago. We're only now starting to see more than a handful of games where quad cores are really useful (and even then, there's still lots of work in games that can hardly be multithreaded).
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Bump because AT added R11.5 to the benchmarks in the recent Llano mobile article, as well as R10.

http://www.anandtech.com/show/4444/amd-llano-notebook-review-a-series-fusion-apu-a8-3500m/9

Single-threaded performance in R10 is 40% of a 2500K; in R11.5 it is slightly over 50%.
Multithreaded performance goes from 64.7% to 67% (Llano vs. 2520K).
So it does look like AMD closes the gap marginally in R11.5 compared to R10, but not by a huge amount in the multi-threaded tests.
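
For anyone who wants to reproduce those percentages from raw Cinebench scores, the arithmetic is just a ratio per test. The score values in this sketch are placeholders chosen only to reproduce the ratios quoted above; substitute the real numbers from the article:

```python
# Placeholder Cinebench scores (R10 in points, R11.5 in pts) -- NOT the
# actual results; picked so the ratios match the percentages above.
scores = {
    "R10 single-thread":   {"llano": 2400, "sandy": 6000},   # -> 40.0%
    "R10 multi-thread":    {"llano": 6470, "sandy": 10000},  # -> 64.7%
    "R11.5 single-thread": {"llano": 0.55, "sandy": 1.10},   # -> 50.0%
    "R11.5 multi-thread":  {"llano": 2.01, "sandy": 3.00},   # -> 67.0%
}

for test, s in scores.items():
    print(f"{test}: Llano at {s['llano'] / s['sandy'] * 100:.1f}% of Sandy Bridge")
```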
 

Gundark

Member
May 1, 2011
85
2
71
Reading this thread, I wanted to do some tests of my own. In 3ds Max R8 I created 512 cubes and added a volumetric light with raytraced shadows. I compared this rendering speed against 3ds Max 2010 (that one is x64, so the comparison isn't entirely fair, but it's still interesting). The test was done on an Athlon 250 at stock speed, 2 GB RAM, Vista x64.
3ds Max 9: 13.86 sec.
3ds Max 2010 x64: 9.92 sec.
I don't have a 32-bit version of 3ds Max 2010 to make a fair comparison, but the difference is huge (~40%). It's certainly worth trying.
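
As a sanity check on that ~40% figure, here's the standard time-ratio arithmetic applied to the two render times from the post (nothing here is specific to 3ds Max):

```python
t_max9 = 13.86    # seconds, 3ds Max 9 (32-bit)
t_max2010 = 9.92  # seconds, 3ds Max 2010 (x64)

# "How much faster" = old_time / new_time - 1
speedup = t_max9 / t_max2010 - 1
print(f"3ds Max 2010 x64 renders {speedup:.1%} faster")  # -> 39.7% faster

# Equivalently, the newer version takes about 28% less time:
time_saved = 1 - t_max2010 / t_max9
print(f"Render time reduced by {time_saved:.1%}")  # -> 28.4%
```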