For most projects, roughly 99% of the work is floating-point, so in theory the computer with the highest floating-point benchmark (Whetstone) should be the fastest.
But in practice this doesn't hold, since cache size, memory speed and so on can also greatly influence speed in some projects, and a particular CPU type can get an advantage or disadvantage from how the application is programmed and compiled...
Example: my computer A benchmarks 1965 MFlops but uses 60 s/TS in Seasonal Attribution, while computer B benchmarks 1508 MFlops and uses 45 s/TS.
Going by the benchmarks, A should be roughly 30% faster, but in practice B is 33.3% faster in Seasonal Attribution, largely because A is stuck with mediocre memory speed.
In some other projects that don't rely so much on memory speed, A is faster than B.
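Just to make the arithmetic explicit, here's a small Python sketch using the numbers from my example above (the variable names are mine, nothing official):

# Compare benchmark-predicted speed with measured speed,
# using the Seasonal Attribution numbers from the example above.
bench_a, bench_b = 1965.0, 1508.0      # Whetstone MFlops
time_a, time_b = 60.0, 45.0            # measured s/TS

expected = (bench_a / bench_b - 1) * 100   # A's predicted advantage: ~30%
actual = (time_a / time_b - 1) * 100       # B's real advantage: ~33.3%

print(f"Benchmarks say A should be {expected:.1f}% faster")
print(f"Runtimes say B is actually {actual:.1f}% faster")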
Many projects don't use the BOINC benchmark at all to decide credit, for example SETI, CPDN, Einstein, QMC and SIMAP. A couple, Rosetta and WCG, have their own systems where benchmark variations are largely averaged away. The rest mostly rely on the normal quorum rules, where granted credit is either the lowest claimed, or the average after the highest and lowest claims are removed...
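To show what those quorum rules mean in practice, here's a rough Python sketch (the function names and claimed-credit numbers are just made-up examples, not actual BOINC server code, and real projects differ in the details):

# Two common ways a quorum turns claimed credit into granted credit.
def lowest_claimed(claims):
    """Grant the lowest claimed credit in the quorum."""
    return min(claims)

def trimmed_average(claims):
    """Drop the highest and lowest claim, average the rest.
    Falls back to a plain average for quorums of 2 or fewer."""
    if len(claims) <= 2:
        return sum(claims) / len(claims)
    trimmed = sorted(claims)[1:-1]
    return sum(trimmed) / len(trimmed)

claims = [42.7, 55.1, 48.3, 97.9]       # hypothetical claims in one quorum
print(lowest_claimed(claims))            # 42.7
print(trimmed_average(claims))           # (48.3 + 55.1) / 2 = 51.7

Either way, one host claiming a huge amount (here 97.9) doesn't drag the granted credit up with it.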
So, depending on the projects you're running, how well or badly your computer crunches the WUs has more influence on credit/day than whatever benchmark scores your computer gets.