Need a PCI-Express RAID controller for an ESXi host

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
I have an ESXi host at home for various reasons... built on my own hardware. My motherboard has 6 onboard SATA ports, but it does not have a RAID controller, and now that I'm doing more with the VMs on the box, I'd like to replace all my hard disks with identical drives and use some level of RAID to protect against disk failures.

I'm having a hard time finding a card that is known to be compatible with ESXi 5.1. Anyone have any ideas? All the cards I'm finding on VMware's compatibility list are enterprise-grade models that cost thousands of dollars and have battery-backed cache. I don't need/want all that; I just want a 4-port card that's under $100.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
The PERC 6/i and various LSI-based cards can all be had for $100 or less, with a battery in most cases.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
I'm running HP Smart Array P800/512MB cards in my ESXi boxes and they work fantastically. They're $88 with free shipping on Amazon right now. Just be aware they are very sizable cards and you'll need a SAS-to-SATA breakout cable.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Cool, I don't know if I'll have enough room in my case for one of those. The PERC6 looks like a much shorter card.

Found one on eBay for $60 with a 4-drop cable. No battery, but the host is on a UPS, so I'm not too concerned about that. I may add a battery at some point, especially if the write cache is disabled without a battery installed.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
It disables the write cache by default without a battery. I vaguely recall that you can override that. Even with a UPS you're looking at data loss unless you figure out how to get ESXi to shut down the VMs on power loss.

You can use the LSI management tools if you need to patch its BIOS/firmware, since the Dell installers may not work.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
I have scripts that shut down the VMs and the host server before the UPS battery runs out.
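
For anyone curious, the core of it is something along these lines (a rough sketch rather than my exact script), run from the ESXi shell, which includes a Python interpreter. The vim-cmd and esxcli calls are stock ESXi; the fixed two-minute wait, the maintenance-mode step before power-off, and kicking it off over SSH from whatever box monitors the UPS are all assumptions you'd adapt.

Code:
#!/usr/bin/python
# Rough "UPS on battery" shutdown handler for the ESXi shell.
# Assumes VMware Tools is installed in each guest so power.shutdown
# performs a clean guest OS shutdown.
import re
import subprocess
import time

def vim_cmd(*args):
    """Run a vim-cmd subcommand and return its output as text."""
    proc = subprocess.Popen(("vim-cmd",) + args, stdout=subprocess.PIPE)
    return proc.communicate()[0].decode()

# vmsvc/getallvms prints a table whose first column is the numeric VM id.
vm_ids = re.findall(r"^(\d+)\s", vim_cmd("vmsvc/getallvms"), re.MULTILINE)

for vmid in vm_ids:
    if "Powered on" in vim_cmd("vmsvc/power.getstate", vmid):
        vim_cmd("vmsvc/power.shutdown", vmid)  # ask the guest OS to shut down

time.sleep(120)  # crude: give the guests a couple of minutes to finish

# Power off the host; esxcli expects the host to be in maintenance mode.
subprocess.call(["vim-cmd", "hostsvc/maintenance_mode_enter"])
subprocess.call(["esxcli", "system", "shutdown", "poweroff",
                 "--reason", "UPS battery low"])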
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Well, this is disappointing: the PERC 6/i won't enter its configuration utility. It locks up the computer.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Update your BIOS.

My ASRock Z77 was doing the same thing. It's a motherboard issue, not the card. If the board is UEFI, you may also need to add the card as a boot device; it doesn't have to be first, but the UEFI won't run the card's configuration BIOS (its option ROM) if the card isn't at least in the boot list.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
That did the trick. I considered that when I was troubleshooting but figured the BIOS wasn't THAT old; it was a year and several versions old. It seems to work great now, although I haven't connected any disks or tried to boot from one yet. Now to find a solution for how hot the PERC6 gets... maybe my new case will have enough airflow that I won't need to change the heatsink or add a fan directly to the card.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Oh, you need a fan for it (I honestly forgot to mention that, sorry). I bought a small 40 mm fan and just zip-tied it on. Let me see if I can find the fan link, though any 40 mm fan will zip-tie right on top and work. The server cards expect you to have ducting, which nearly all rackmount cases do, channeling air across the PCIe cards.

--edit--

I used something like this:
http://www.ebay.com/itm/Evercool-40mm-Low-Noise-12V-Case-Fan-3-Pin-OEM-/270975559964
or
http://www.amazon.com/CASE-40X40X10...=UTF8&qid=1371317023&sr=1-4&keywords=40mm+fan
And a Molex -> fan adapter I had sitting around.
http://www.amazon.com/Alpha-Omega-3-.../dp/B000BSJGL0

Just don't zip it on so tight that you break the mount; it only needs to hold the fan in place.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
This is the case I have on the way... no ducting, but plenty of fans. Think it'll be enough? Maybe I'll fix a temperature probe to the heatsink and see how hot it gets without ducting or a fan mounted to it.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Honestly, if you can, just pick up the fan for a few dollars. I had to put one on mine when the card crashed during backups.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Got the case and transferred the guts to it. I have the PERC6 installed as well. There's a 120 mm fan blowing right across the heatsink and my IR temperature probe says the heatsink on the PERC6 is a little above 50 degrees C with it idle, no drives attached. Think I'll find a little fan, like you suggested and see if that gets it any cooler.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
It might be OK. The Dell cases have a plastic shroud that directs a lot of airflow down over the card, but in the R610/R710 line where you see this card a lot, that air cools the CPUs and RAM first, so the higher temp might be pretty normal for that card.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
PowerPC processors have always run pretty hot, and I'm assuming the PERC uses a PowerPC processor like the HP cards do.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
The big issue is that the PERC 6/i was designed for the front-to-back ducted cooling of the servers it normally ships in, so it only has a passive heatsink yet is considered "actively cooled" because of that ducting. If you plug the card straight into a normal computer case, the passive heatsink can't handle the job on its own, since the card really does require that forced airflow.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Yeah... the only temp-related spec I've been able to find on the PERC6 is that the servers it typically comes in allow a max ambient case temp of 60 degrees C... and the heatsink isn't even that hot, so it should have some headroom. With the amount of air all the big 120 mm fans move through the case, the ambient case temp is basically room temp.
 

Ratman6161

Senior member
Mar 21, 2008
616
75
91
I'm in the same boat, and today I ordered an LSI 9260-4i RAID card. I'm using an ASRock 990FX Extreme motherboard that has 3 full-length PCIe slots. The manual says the first two are x16 and the third is x4. The first slot has a graphics card in it.

Meanwhile, the manual for the LSI card says it's an x8 card and can go in an x16 slot, so my second full-length slot, which is x16, should work. But... the LSI documentation says that on some desktop motherboards the PCIe slots only work with graphics cards. Back to the ASRock documentation: it doesn't specifically say the slots won't work with anything but graphics cards, but on the other hand it doesn't really address anything else either.

Finally, LSI does list a similar Asus board with the same 990FX/SB950 combination as mine as compatible (but doesn't mention any other AMD desktop boards).

Am I overthinking this? Can I assume that since I have an x16 slot available, it should work?
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
You're probably overthinking it. An x16 slot should, for the most part, just work. I actually had the PERC in a slot with fewer lanes (x4, I think) and it worked fine. The ASRock board I have all this in has full-length x16 slots that only have pins wired for the electrical size of the slot.

I just use the on-CPU video and the slots for other stuff.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
I have the PERC in an x4 slot now and it's working fine, saturating my GigE network while transferring 1.2 TB of files to it.

A PCIe 1.0 lane provides roughly 250 MB/s of usable bandwidth, so an x4 link tops out around 1 GB/s. If you're after maximum local throughput, x4 could become a bottleneck once you have enough spindles, but it's far more than a gigabit transfer needs.
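
To put rough numbers on that, here's a quick back-of-the-envelope sketch; the per-lane, GigE, and per-disk figures are assumptions, not measurements:

Code:
# Back-of-the-envelope bandwidth comparison (assumed figures, not measured).
PCIE1_LANE_MB_S = 250   # usable MB/s per PCIe 1.0 lane after 8b/10b encoding
GIGE_MB_S = 125         # ~1 Gbps of Ethernet expressed in MB/s
DISK_MB_S = 130         # rough sequential rate of one 7200 rpm SATA drive

lanes, disks = 4, 4     # PERC dropped into an x4 slot, four spindles behind it
print("x%d slot : ~%d MB/s" % (lanes, lanes * PCIE1_LANE_MB_S))        # ~1000
print("GigE     : ~%d MB/s" % GIGE_MB_S)                               # ~125
print("%d disks  : ~%d MB/s sequential" % (disks, disks * DISK_MB_S))  # ~520

So the x4 link has plenty of headroom for a GigE transfer and for four SATA spindles; it would take a lot more, or much faster, disks before the slot itself became the limit.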
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Something else just occurred to me, imagoon. How am I going to monitor the status of the array? If the host were a standard 'nix flavor or Windows, I could find open source software to monitor it I'm sure, but what about with ESXi being the host? It doesn't appear to be able to monitor the array with default drivers.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Install the LSI VIBs. That article covers a different card, but the same approach still applies:

http://www.tinkertry.com/lsi92xx-health-under-esxi-51/

Once you do that, you can see the array health on the host's health/hardware status tab. You should also be able to "connect" to the controller from a local copy of the LSI management tool (MegaRAID Storage Manager) to update firmware, reconfigure disks, etc.
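
If you'd rather poll it from a script (cron job on the host or over SSH) instead of eyeballing the health tab, something like this rough sketch works once a MegaCLI build is on the box; the install path and the output parsing are assumptions you'd adjust to whatever your package actually drops in:

Code:
#!/usr/bin/python
# Quick array-health poll that shells out to MegaCLI.
import subprocess

MEGACLI = "/opt/lsi/MegaCLI/MegaCli"   # assumed path; check where yours lands

proc = subprocess.Popen([MEGACLI, "-LDInfo", "-Lall", "-aALL", "-NoLog"],
                        stdout=subprocess.PIPE)
out = proc.communicate()[0].decode()

# Each logical drive section contains a line like "State : Optimal".
states = [line.split(":", 1)[1].strip()
          for line in out.splitlines()
          if line.strip().startswith("State")]

if states and all(s == "Optimal" for s in states):
    print("All logical drives Optimal")
else:
    print("Check the controller! States: %s" % states)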
 

Ratman6161

Senior member
Mar 21, 2008
616
75
91
I have the PERC in an x4 slot now and it's working fine, saturating my GigE network while transferring 1.2 TB of files to it.

A PCIe 1.0 lane provides roughly 250 MB/s of usable bandwidth, so an x4 link tops out around 1 GB/s. If you're after maximum local throughput, x4 could become a bottleneck once you have enough spindles, but it's far more than a gigabit transfer needs.

Thanks. In my case, though, performance and throughput are secondary issues, so as long as the card is properly recognized, I should be good to go.

FYI, the issue I'm trying to solve is as follows: my ESXi box has one 2 TB SATA drive and four identical 500 GB SATA drives. The 2 TB drive has ESXi on it as well as a file server. When creating virtual machines in my development and test environment (which is what this box is for), I have to decide which of the four 500 GB disks to put each VM on. I don't usually have all the VMs running at the same time, so I try to put VMs that are likely to be used at the same time on different disks.

So, if I had those four 500 GB drives in a RAID 5 array of about 1.5 TB (which is plenty of space), I could get away from managing space so closely. The RAID 5 should also handle multiple VMs accessing it simultaneously a whole lot better than a single disk would.
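
Quick sanity check on the capacity, just to show the arithmetic:

Code:
disks, size_gb = 4, 500
usable_gb = (disks - 1) * size_gb   # RAID 5 spends one disk's worth on parity
print("RAID 5 usable: ~%d GB" % usable_gb)   # ~1500 GB, i.e. the ~1.5 TB above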

PS: I have not been an AMD fan of late, but for anyone considering this kind of setup, the AMD FX-8320 I'm using works really well. For my purposes, single-threaded performance (and performance in general) isn't a big issue; I just need lots of cores. Of course an i7 such as the 2600K I have on my desktop would do the job nicely with 8 virtual cores, but the FX-8320 was only $149 at Microcenter, so it's lots of cores for little money. Then I slapped 32 GB of cheap RAM in there and it really works great. Note: as I mentioned, this is a development/test system with a fair number of VMs but a maximum of 4 people using it at any one time, which is why I say performance is not a big factor. I just need multiple VMs running at the same time, hence the RAID card to complete the package.
 