If you are going to try this, I think you'd best be up to date on virtualization, parallelism, the apps used for your microarray analysis, and how they are compiled.
First off, can your microarray analysis software/processes be run via distributed computing over a gigabit LAN? As exciting as a 2P/4P server with 32 GB of RAM sounds, if your software permits it, a 'desktop farm' may be faster, cheaper, and less of a hassle.
I think your best shot would be an enterprise 'nix OS on Opterons, but as OdiN noted you may not have that option depending on your apps and expertise. I hear Red Hat has been talking about paravirtualization for RHEL (it's a real term, I promise), but I've never seen it in action.
To make all this work at the level you want, you will need some serious disk I/O!
And though I think the Opterons may reign supreme, configuration is the more serious issue if you want the performance advantage. You may get near-linear scaling with AMD on certain workloads, but NUMA systems will kick yer arse with remote-memory access penalties if you don't "localize" processes to specific CPU/memory banks.
This is not an issue with the shared-northbridge Intel/Xeon architecture.
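On Linux you can see the "localize" idea in miniature: pin a process to the CPUs of one node so its work stays put (and pair that with `numactl --membind` so its memory does too). `os.sched_setaffinity` is real; the node layout below is just an assumption:

```python
import os

def pin_to_cpus(cpus):
    # Restrict this process (pid 0 = self) to the given CPU set.
    # On a NUMA box you'd pass the CPUs belonging to one node
    # (say, node 0) and launch under `numactl --membind=0` so
    # allocations land in that node's memory bank as well.
    os.sched_setaffinity(0, cpus)
    return os.sched_getaffinity(0)

if __name__ == "__main__":
    # Pin to CPU 0 only -- a stand-in for "the CPUs of node 0".
    allowed = pin_to_cpus({0})
    print(allowed)
```

Do that per analysis process and the near-linear AMD scaling is within reach; skip it and the scheduler will happily bounce your jobs across nodes.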