A little off-topic, but I tried to suggest that reviews be done of all current hardware (top 10 models from each mfg) for both gaming and production-type loads, as well as power levels, so I could actually find them useful, and the reply made no sense. If I am evaluating a new CPU for possible purchase, I want to see how it compares to current chips from other manufacturers and to older chips from the same manufacturer, and how it compares in power. Otherwise, it's not worth my reading.
You're more than welcome to not read. We have focused on FPS/gaming performance more than anything. Typically, CPU reviews are one-and-done at launch, compared against the relevant chips at the time (and the selection is often constrained by the launch window and what we are sampled). We go back and revisit later if we think there's enough interest in a particular chip, but quite frankly, our GPU and motherboard reviews get far better traffic and we've got bills to pay, including the staff who do the reviews.
Now, I am working on a back end that will allow for far more flexible data presentation, to the point that a user should be able to select specific products, benchmarks, and runs of a benchmark, and do a DIY comparison of any data that we have. This is a vibe-code-in-my-spare-time sort of project, but it will necessitate refreshing our dataset across many generations of products.
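For the curious, here's a minimal sketch of the shape that kind of DIY-comparison layer could take. Everything below (the `BenchmarkRun` fields, `selectRuns`, `groupByBenchmark`) is hypothetical illustration, not the actual schema or backend:

```typescript
// Hypothetical data model: each benchmark run is one flat row.
// Field names are illustrative, not an actual schema.
interface BenchmarkRun {
  product: string;          // e.g. "Some CPU Model"
  benchmark: string;        // e.g. "Game X, 1080p Ultra"
  runId: number;            // which repeat of the benchmark this was
  avgFps: number;
  onePercentLowFps: number;
}

// The user's selection: which products, benchmarks, and runs to compare.
interface Selection {
  products: string[];
  benchmarks: string[];
  runIds?: number[];        // omit to include every run
}

// Filter the full dataset down to the user's selection.
function selectRuns(data: BenchmarkRun[], sel: Selection): BenchmarkRun[] {
  return data.filter(
    (r) =>
      sel.products.includes(r.product) &&
      sel.benchmarks.includes(r.benchmark) &&
      (sel.runIds === undefined || sel.runIds.includes(r.runId))
  );
}

// Group the selected runs by benchmark so products line up side by side.
function groupByBenchmark(runs: BenchmarkRun[]): Map<string, BenchmarkRun[]> {
  const groups = new Map<string, BenchmarkRun[]>();
  for (const run of runs) {
    const bucket = groups.get(run.benchmark) ?? [];
    bucket.push(run);
    groups.set(run.benchmark, bucket);
  }
  return groups;
}

// Example: compare two products across one game, all runs.
// const table = groupByBenchmark(selectRuns(allRuns, {
//   products: ["Product A", "Product B"],
//   benchmarks: ["Game X, 1080p Ultra"],
// }));
```

The point of keeping each run as a flat row is that any slice (per-product, per-benchmark, per-run) can be recomputed on demand rather than baked into fixed review charts.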
Not sure about CPU reviews, but here's how the GPU reviews worked at HardOCP (on which fpsreview is based):
- Take 1 GPU to review
- Take a couple more nearby models from the same brand
- Take a couple of nearby models from the competitor's brand
- Now compare everything at a setting optimized for the card under review
- This concept is completely different from the AT bench, which is standardized across all cards
Pretty much. We do a lot of actual manual runthroughs of the games as opposed to automated benchmarks - the trade-off, of course, is the volume of data that we can generate.