RAID-10 with SATA drives is nearly unusable for ESXi. Under any sort of load, a vMotion will fail (timeout) back to vCenter. Dunno what you use your VMs for, but I can fit 8x as many VMs on a 15K SAS solution as on a 7200rpm SATA solution, and I only use flash-backed write cache controllers with either drive type.
SATA drives are okay for backup (D2D2D), but other than that they are too slow.
To put it lightly, you may have something else going on.
To get into more detail... SATA II has become the norm these days; it's basically the name the industry uses for SATA drives with 3Gb/sec transfer rates. Now that we know that, let's look at bandwidth requirements for FC and iSCSI, since those are the two most common VMware disk connections. A busy volume needs a pipe of about 600Mb/sec over iSCSI - that's why gig copper works OK for it, and why FC scales as high as it does (VM-wise) at 2Gb or 4Gb (8Gb if you have the money). In most of my DRS clusters I can run 24 VMs per server without pipe I/O issues, but certain applications are write-hungry, so I have to spread out the array access or use FastCache to deal with hotspots in the array.
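Just to make the arithmetic behind that explicit - the 600 Mb/s busy-volume figure and the 1 Gb/s gig-copper link are the numbers from above; the snippet itself is only an illustration:

```shell
# Headroom left on a single gig-copper iSCSI link under a busy volume
busy_mbps=600    # busy-volume requirement quoted above
link_mbps=1000   # 1 Gb/s copper link
echo "headroom: $(( link_mbps - busy_mbps )) Mb/s"   # prints "headroom: 400 Mb/s"
```

That ~400 Mb/s of slack is why gig copper holds up until you stack enough write-hungry VMs on one pipe.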
I've been using SATA II drives with VMware for years. It was 'cheap' storage and worked fine for things without a lot of writes. My current employer originally set their VMFS volumes up on RAID 5 (12-14 spindles, though, on an old EMC CX500). The new stuff is a mix of SATA II and 10k SAS; 15k SAS and SSD are typically reserved for FastCache and Oracle/MS SQL servers. In any case, they all vMotion just fine with SATA - but it can be really slow when client-based backups are running during a transfer.
If you're running into problems with vMotion, make sure you've got it on a private VLAN without any other traffic. If you're doing this locally you can use a crossover cable, but that won't correct for dropped packets - you really want it going through a private switch. vMotion doesn't play well with other traffic, so I typically just put it on a private, non-routed class C subnet and all's well. The few failures I've seen have been virtual-machine specific, where the VM was originally installed in an ESX 3 or ESX 3.5 environment and needed to be powered down and cold migrated to change its disk type or hardware profile (thin-provisioned vs. thick-provisioned disk, etc.).
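On newer ESXi builds you can wire that up from the CLI with esxcli - a sketch, assuming a dedicated portgroup named vMotion-PG and the 192.168.100.0/24 subnet (the portgroup name, vmk number, and addresses are assumptions, not from the post):

```shell
# Create a dedicated vmkernel interface on its own portgroup
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-PG
# Give it a static address on a private, non-routed class C
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.100.11 -N 255.255.255.0 -t static
# Tag the interface for vMotion so no other traffic shares it
esxcli network ip interface tag add -i vmk1 -t VMotion
```

Do the same on every host in the cluster (bumping the IP each time) and vMotion traffic stays off your management and VM networks.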
Some tuning pointers, though, for a low-resource VMware/Citrix environment: if you're simply running Linux, you should be good. Just size the RAM appropriately (low) on the VMs so you don't end up with a lot of swap eating your disk I/O. You can even look at non-journaling file systems like ext2, since they scale the best here - ext3/4, ReiserFS, and most of the newer ones do a lot more touching of files and take a performance hit compared to ext2.

Same goes for Windows: adjust your page file/system cache and try turning off the versioning crap where possible, since it creates too many restore points. I've actually had a lot of luck on standard systems disabling the modified time stamp, since it tends to touch a lot of files unnecessarily. It's not something you want to do on a system you need to audit, though - the modified stamp is a security feature!
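On the Linux side, the swap and file-touching advice boils down to a couple of one-liners - a sketch assuming a stock distro guest with root access (the mount point and swappiness value are illustrative, not from the post):

```shell
# Stop access-time updates from touching the disk on every file read
mount -o remount,noatime /
# Bias the kernel away from swapping so disk I/O isn't eaten by swap traffic
sysctl vm.swappiness=10
```

On the Windows side, the closest equivalent to the timestamp tweak described above is disabling NTFS last-access updates with `fsutil behavior set disablelastaccess 1` (elevated prompt, reboot to take effect) - same caveat about auditing applies.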