Hey guys, I work with the team that built cpuboss, so I thought I'd chime in. One of the goals of our site is to make it easier for people to compare CPUs, so although we do try to attribute 'overall scores' to help rank processors, we don't mean for these to be a one-size-fits-all ranking of the best CPUs. We think the most value for our users is in the way we break down and compare how CPUs differ from each other, ranging from pure specs, to performance benchmarks, to reviews (we aggregate this data from other sites and try to organize it in a more user-friendly UI).
I want to emphasize that we tried to create a flexible system where someone with a particular need (multithreading, performance per dollar, brand, etc.) can use our filters to create a list like the one posted at the top of the thread, except tailored to specific criteria (the default ranking is pretty generic). For example, we've created a few pre-made lists on the home page (e.g. best gaming CPUs) that filter out some of the noise, like having server and desktop CPUs mixed in one list, as noted in this thread.
One person mentioned the contradictory results between the i7 3770T and the 3770K, and I'll admit we don't do a great job of explaining where our data comes from (we're working on this!), but here's what's happening: the 3770T benchmarks better on the performance data we have that is shared across processors (we need consistent, common benchmarks in order to have a reference point). For the in-depth comparison, however, we take a lot more factors into account (specs, various benchmarks, value, etc.) and show exactly how each CPU differs in every area where we're able to collect data.
Hope that helps provide a bit more clarity. We're still a pretty new site (launched in January), so we're always on the lookout for suggestions and feedback. I know there are a few hurdles we still have to overcome, so thanks for sharing your thoughts here.