
Did Microsoft artificially cap WEI HD scores, just to boost SSD sales?

Just wondering. I've heard that they intentionally capped magnetic disc drives at 5.9 on the WEI HD score. If they didn't do that, do you think that perhaps a modern HD might score higher? It just seems to me to point people towards an SSD, perhaps unnecessarily.

How to read WEI scores
X.Y
X = Features
Y = Performance

Features include: capacity, DirectX version, technology generation (DDR vs DDR2 vs DDR3), SSD vs HDD, IDE vs SATA, and certain (rather low) minimums for speed, RPM (is a drive above or below 5400 RPM), MHz, etc.
Performance: actual benchmarked speed relative to others in the same feature class (i.e., a 5.5 is not necessarily the same speed as a 4.5).
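Since MS keeps the exact methodology secret, the X.Y split above can only be illustrated loosely. Here is a purely hypothetical Python sketch (function name, clamping, and scaling are all invented for illustration) of how a feature tier plus a within-class benchmark result could compose into one number:

```python
def wei_style_score(feature_class: int, relative_perf: float) -> float:
    """feature_class: the integer "X" tier, set by hardware features.
    relative_perf: 0.0-1.0, benchmarked speed relative to the fastest
    hardware in the same feature class (the ".Y" part)."""
    rp = min(max(relative_perf, 0.0), 1.0)  # clamp to [0, 1]
    # The decimal only ranks a drive within its own class, which is why
    # a 5.5 is not necessarily the same real-world speed as a 4.5.
    return feature_class + min(int(rp * 9), 9) / 10

print(wei_style_score(5, 1.0))  # 5.9: the fastest drive in tier 5
print(wei_style_score(7, 0.5))  # 7.4: a middling tier-7 part still outscores every tier-5 part
```

Again, none of this is the real formula; it only models the claim that the integer comes from features and the decimal from class-relative performance.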

Vista had a max of 5.9, while Win 7 has a max of 7.9.

A 5.9 has different values in Vista and Win 7 (which actually creates the illusion that Win 7 is a slower OS based on WEI scores).

While it is true that SSDs count as a "feature" and thus automatically get a higher overall score, to claim that MS is doing it to "cheat" to promote SSD sales is pure lunacy.
MS has no skin in the SSD market nor in the HDD market. And besides, SSDs are about 2x sequential and 50-100x random speed (except those early JMicrons, which were hundreds of times slower than HDDs in random speed). Your SSD is capped at either 6.9 or 7.9 based on its size. Your HDD is capped at 5.9 (unless it's an ancient tiny drive, which is capped lower). Going from a 5.Y to a 6.Y or a 7.Y doesn't even come close to doing justice to how much faster SSDs are. If anything, you should be accusing MS of "cheating" in order to favor HDDs and downplay SSDs. But it's obvious that they are simply incompetent rather than malicious in this case.
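A crude Python sketch of the cap behavior described above. The SSD/HDD caps (7.9, 6.9, 5.9) come from the post; the exact size thresholds are NOT published anywhere, so the cutoffs below are invented placeholders:

```python
def wei_disk_cap(is_ssd: bool, size_gb: float) -> float:
    """Return the maximum WEI disk score for a drive, per the caps
    described in the thread. Size thresholds are guesses."""
    if is_ssd:
        # Larger SSDs are said to cap at 7.9, smaller ones at 6.9;
        # the 64GB cutoff here is a placeholder, not a documented value.
        return 7.9 if size_gb >= 64 else 6.9
    # HDDs cap at 5.9 unless "ancient and tiny"; again, the 40GB
    # cutoff and the 3.9 floor are illustrative assumptions.
    return 5.9 if size_gb >= 40 else 3.9

print(wei_disk_cap(is_ssd=True, size_gb=128))   # 7.9
print(wei_disk_cap(is_ssd=False, size_gb=640))  # 5.9
```

The point of the sketch: even a top-scoring HDD (5.9) sits only one or two tiers below an SSD (6.9/7.9), nowhere near the 50-100x real-world random-speed gap.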
 
LOL.. I'm with Taltamir on this one.

And I think there should be a term similar to "planned obsolescence" for companies like MS. Maybe "planned incompetence"?
 
PS: most of the details about HOW MS calculates a score are secret, but I got some examples from interviews with MS engineers, as well as some KB articles from the MS website that gave vague, inexact explanations. That is yet another critical issue with WEI and, by itself, a reason to completely ignore it.

The reasons to completely ignore WEI:
1. Exact methodology is secret.
2. Scores are capped.
3. Scores do not increase linearly, but based on "features".
4. Tests lump completely separate scores into single "overall" score (sequential + random speed of an SSD lumped together).
5. The actual measured score (the .Y part) is normalized for hardware of the same "class".
 
Reminds me of a rig I built about a year ago... Phenom II 965, overclocked and all. It had very fast Elpida Hyper memory in it, and at the stock settings the motherboard detected (I can't remember now, but probably 1333 or something lame, at high timings) it scored like a 7.4 or 7.5 on memory speed. I later overclocked it to a much faster FSB, tightened the timings way up to 6-6-6-18, 1T at well over 1600MHz, reran the test, and the memory score dropped to a 5.9 or something 🙄. Countless other benchmarks showed increases in memory speed, though, so yeah, it's a pile of crap and very often wrong.

For another funny example of what crap it is, my Lenovo netbook with a 5400 RPM drive scores a 5.9... the same thing my Samsung F3 shows on WEI :biggrin:
 
I thought they did, too, but then the 640GB in this pre-built HP Slimline got a 7.4.
You probably had Intel write-back volume caching enabled, which means that when you benchmark the drive you partly benchmark the RAM write-back buffer cache. You can see 2000MB/s speeds on the drive because the RAM sits in between, which may trick the WEI benchmark into believing you have an SSD rather than a mechanical drive with high latencies.

For example, see this HDTune benchmark:
[HDTune benchmark screenshot: Velociraptor RAID 0]


Note the high burst throughput, which comes from repeatedly accessing the same area; that area gets cached, so the 2GB/s comes from RAM rather than from disk. The same trick can be performed with CrystalDiskMark using a 50MB test size; the random reads would be way too high for a mechanical disk.

I'm not sure this RAM write-back caching works on single disks, though.
 
Actually, I think WEI is a pretty good attempt at integrating multiple performance aspects into a single number. But that would mean a pretty long discussion I'd rather skip today. ;-)

Suffice it to say, if you want a more accurate prediction of realistic performance, you have to use more complex benchmark tools that give you multiple scores on multiple performance aspects.
 
Are you using Intel RAID? If you run HDTune, do you see high burst values like in the screenshot I posted?

No, I needed the system for an ESXi rig, so I built it with a card I got from FS/FT. The array runs off an LSI MegaRAID 8308ELP SAS PCI-E 4x card. I get high burst, but the card has a BBU, so that gives me *a lot* of cache (256MB on the card plus 32MB x 4 on the drives themselves). Works great for my fairly sequential workloads, though (VMware Workstation 8 on Windows 7).
 
Well, like Intel RST, a hardware RAID card with dedicated memory also acts as a buffer cache, so you cannot benchmark the drive directly; a memory write-back engine sits between the disk and the software. You get the same effect, and this is most likely the reason you get a higher WEI score than 5.9.

When benchmarking your controller, make sure to use a sufficiently large test size; otherwise the dedicated memory and write-back engine will artificially yield results that are too high and do not tally with real-life performance. For example, test:
- CrystalDiskMark 1 run of 50MB
- CrystalDiskMark 1 run of 5000MB (or the maximum size you can set)

You should see a difference in random reads at the very least. With a normal controller that has no write-back engine, you should NOT see a major difference between these two tests.
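To make the small-file vs large-file comparison concrete, here is a rough Python sketch of the underlying idea, not of what CrystalDiskMark actually does internally. All function names and sizes are illustrative, and note that the OS page cache also sits in the path here, so this only demonstrates the principle:

```python
import os
import random
import tempfile
import time

def make_test_file(mb: int) -> str:
    """Create a temp file of `mb` megabytes of random data."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        for _ in range(mb):
            f.write(os.urandom(1024 * 1024))
    return path

def random_read_mbps(path: str, block: int = 4096, reads: int = 200) -> float:
    """Time random 4 KiB reads across the file. A file that fits inside
    a write-back cache will look far faster than the physical disk."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        start = time.perf_counter()
        for _ in range(reads):
            f.seek(random.randrange(0, max(size - block, 1)))
            f.read(block)
        elapsed = time.perf_counter() - start
    return (reads * block) / (1024 * 1024) / elapsed

# A 50 MB file fits comfortably in a controller's RAM cache; to see the
# real disk you would repeat this with a file much larger than that RAM
# (e.g. several GB) and compare the two numbers.
small = make_test_file(50)
try:
    print(f"50 MB file, random reads: {random_read_mbps(small):.1f} MB/s")
finally:
    os.remove(small)
```

A large gap between the small-file and large-file numbers points at a write-back cache between you and the disk, which is exactly the condition that can inflate a WEI score.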
 
I can do another test when I get the chance, but I did my testing with CrystalDiskMark; I'm fairly sure I used the biggest or next-to-biggest setting, 4 or 5GB at 5 runs per test.
 
Use 1 run, not 5. CrystalDiskMark reports the highest score out of all runs, which of course is stupid and distorts the results. So run multiple benchmarks of 1 run each, not 5 runs in one benchmark.
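As a tiny illustration of why best-of-N reporting distorts things (the numbers below are invented; one run is assumed to have hit the cache):

```python
from statistics import median

# Five hypothetical single-run results in MB/s; one run got served
# from the write-back cache instead of the disk.
runs = [412.0, 405.5, 980.3, 408.8, 410.1]

print(max(runs))     # 980.3: what "best of 5" reporting shows
print(median(runs))  # 410.1: much closer to sustained disk speed
```

Taking the median (or even the minimum) of several independent single runs filters out the cache-hit outlier that best-of reporting happily keeps.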
 