Long-winded response, sorry.
Running server-grade hardware is awesome; I have been doing it for years in my home lab. However, I recently switched to some good consumer-grade hardware and get better overall performance.
Noise :
Server-grade hardware is LOUD. You have to cool everything well, especially the memory, so there are lots of fans and heatsinks. ECC FB-DIMMs run hot and are not cheap in dense configurations: a single 2 GB FB-DIMM will use 20-40 watts depending on load, and with a bunch of them the bank of RAM becomes a space heater. 64 GB of ECC FB-DIMMs will cost a fortune and, depending on density, will still generate a lot of heat.
CPU :
It really doesn't matter which CPU you choose unless you are worried about VM mobility between ESX hosts. Either AMD or Intel will be fine as long as it supports the feature set you want: x64 plus VT-x and VT-d in the Intel world, and the corresponding AMD features (AMD-V, plus AMD-Vi for IOMMU).
Also, I would rather have a single CPU with four cores than two CPUs with two cores each.
Power :
My two servers were idling at around 500 watts. The current setup uses less than half of that (175 watts at the UPS). I don't use monster power supplies now, just good-quality units, and I don't have two rack-mount 3 kVA UPSes anymore; one 1.5 kW tabletop UPS is all I need for a proper shutdown and 20 or so minutes of runtime.
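If you want to sanity-check UPS runtime for yourself, here is a rough sketch. The battery watt-hours and the derating factor are assumptions I picked for illustration, not specs from my unit:

```python
def ups_runtime_minutes(battery_wh, load_watts, derate=0.6):
    """Very rough UPS runtime estimate.

    battery_wh -- nameplate battery energy (e.g. two 12 V / 9 Ah blocks ~= 216 Wh)
    load_watts -- measured load at the UPS
    derate     -- assumed fudge factor for inverter losses and high-rate
                  battery derating; 0.6 is a guess, not a manufacturer spec
    """
    return battery_wh * derate / load_watts * 60

print(ups_runtime_minutes(battery_wh=216, load_watts=175))  # new setup: optimistic ~44 min
print(ups_runtime_minutes(battery_wh=216, load_watts=500))  # old 500 W idle: ~15 min
```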
Raid :
Well, I still use hardware RAID, since BIOS-level (fake/software) RAID kinda sucks. OS-level RAID, depending on the implementation, can be okay. I guess I am a little old school on this.
Cost :
It is expensive to power all that server-grade hardware, and all the heat then has to be dealt with by your house AC. I noticed a considerable difference in my power bill: not only is the new setup more efficient, but my AC units don't run all the time anymore (even at night).
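Rough math on what the idle-power difference alone works out to over a year (the electricity rate is just an example, plug in your own):

```python
# Back-of-the-envelope yearly cost of the idle-power difference alone.
# The electricity rate is an assumed example -- use your local $/kWh.
old_idle_watts = 500
new_idle_watts = 175
rate_per_kwh = 0.12  # assumed rate

saved_kwh_per_year = (old_idle_watts - new_idle_watts) / 1000 * 24 * 365
print(f"{saved_kwh_per_year:.0f} kWh/year saved, roughly ${saved_kwh_per_year * rate_per_kwh:.0f}/year")
# ~2847 kWh/year, around $340/year, before counting the AC load it no longer creates
```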
Other considerations:
Computer case :
Server motherboards usually are not ATX, mATX, etc.; they are CEB or E-ATX (older spec), basically 12x13 inches. You need a server-grade case to handle a server-grade motherboard. This is a pain in the ass and can get really pricey, especially once you get into hot-swap bays and such. $200-$400 per case would be a decent price range for one with 6 to 8 SATA hot-swap bays. They are very nice, though...
Power supply :
You can use generic power supplies with server-grade motherboards, but they will require some additional power connector whips and/or adapters. I have seen generic PSUs blow up when attached to server boards, which is not fun when you have just invested a crapload of money into your very nice server-grade hardware. Proper server-grade power supplies, meanwhile, are a bit pricey.
Software :
I use both Microsoft and VMware ESX hypervisors. 2008 R2 with Hyper-V runs perfectly on the hardware below, no issues. ESX also runs perfectly, except during setup you have to put in a supported NIC to get it installed; you can pull the extra NIC back out once the onboard NIC drivers are in place. One neat feature I chose this hardware for is VT-d support: it is pretty cool to hand a RAID adapter or video card to a VM at the hardware level.
With dynamic hypervisor memory management there is not a direct 1:1 correlation between configured VM memory and physical RAM unless you hard-set it. You can basically oversubscribe your RAM in the hope that not all of the machines get busy at once. If they do, you will swap like mad, but overall it is a home test lab.
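To put a number on the oversubscription idea, here is a trivial sketch; the VM names and sizes are made up for illustration, not my actual lab inventory:

```python
# Quick oversubscription check: total configured VM memory vs. physical RAM.
# The VM names and sizes below are hypothetical examples.
host_ram_gb = 32
vms = {
    "dc01": 4,
    "sql01": 8,
    "web01": 4,
    "web02": 4,
    "lab-win7": 4,
    "lab-linux": 4,
    "backup": 8,
}

configured_gb = sum(vms.values())
ratio = configured_gb / host_ram_gb
print(f"{configured_gb} GB configured on a {host_ram_gb} GB host ({ratio:.2f}x oversubscribed)")
if ratio > 1:
    print("Fine until several VMs get busy at once -- then the host swaps/balloons like mad.")
```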
Current setup:
-Windows Server 2008 R2 Hyper-V server / file server
Intel Core i3-2100T
Intel Executive Series DQ67SW (B3) mATX motherboard
Antec 650 W 80 PLUS power supply
16 GB of RAM
3ware SATA RAID controller with an 8-drive RAID 5 array
-VMware ESXi 5.1 server
Intel Core i5-2500T
Intel Executive Series DQ67SW (B3) mATX motherboard
Antec 650 W 80 PLUS power supply
32 GB of RAM
No RAID; all individual drives, backed up to the RAID set on the other server and then to tape.