
Help with RAID config

Cr0nJ0b

Golden Member
Hey all,

I have a new (used) ARECA 1260 RAID card. It's a 16-port card and I have 11 drives on it now. I'm trying to figure out the best way to set it up for performance. The drives aren't matched, so I'm not expecting a ton, but I want to set it up in a way that's more optimal.

My first shot was to take 10 of the drives, put them all into the same RAID group, and then break out 10 separate 500GB LUNs (RAID 6). It worked fine, but I don't think it's optimal. I'm using the LUNs for my ESXi server and they seem to do a lot of thrashing. I get great peak performance, but then it drops off like a sawtooth.

My next shot today is to build a really big volume for testing only... a 10-drive RAID 6 with one LUN under Linux, just to see what bonnie gives me.

My other option would be to set up 2 x 5-drive sets and stripe those at the OS...
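For what it's worth, the capacity trade-off between those two layouts is easy to sketch. This assumes 10 equal ~500GB drives, which this array doesn't actually have; with mismatched drives, each RAID set is limited by its smallest member.

```python
# Rough usable-capacity comparison for the two layouts discussed above.
# Assumes 10 equal drives; with mismatched sizes, each set is limited
# by its smallest member.

DRIVE_GB = 500  # hypothetical per-drive size

def raid6_usable(n_drives, drive_gb):
    """RAID 6 reserves two drives' worth of parity per set."""
    return (n_drives - 2) * drive_gb

# Option A: one 10-drive RAID 6 set
option_a = raid6_usable(10, DRIVE_GB)     # 4000 GB usable

# Option B: two 5-drive RAID 6 sets, striped (RAID 0) at the OS
option_b = 2 * raid6_usable(5, DRIVE_GB)  # 3000 GB usable

print(option_a, option_b)
```

So the 2 x 5 layout gives up a full drive's worth of extra parity (1000GB here) in exchange for two independent parity groups and potentially less thrashing per set.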

I'm just not sure what the optimal would be.

any thoughts?
 
Without an expected workload there is no way to give an answer. If you tell us what you're actually doing with the storage, other than playing, then we can tell you what would be a good config.
 
Sorry, this will be general-purpose storage, mostly larger files (4MB+). I will have a few servers running on ESX, but they will mostly be appliances like a firewall, etc...

I'm just wondering in general what people do to carve things up.
 
In general people don't run ESX and server class storage at home. 🙂

Me, I run an SSD for my OS and VMs (Xen, mail/web, file server, WinXP abuse box). The servers get their own NICs, and the file server has all the HDDs assigned to it. ATM that's 3x 2TB drives and a 500GB drive. The big disks are for media, the 500GB is for backups of the mail/web server and my desktops. Nothing is RAIDed because my disks get replaced before they fail (knock on wood), and nothing is irreplaceable that isn't already backed up.
 
I'm running ESX just because I can. I got some licenses from work for free, and I have a server-class system (HP DL380) on which to run it... so I figured I could replace my other servers with one big server that has lots of VMs and just go that way.

After some experimentation, I've been able to set up a 100GB test LUN on my RAID controller with 10 x 500GB drives in RAID 6. The volume only took about 5 minutes to initialize, and I loaded the Ubuntu live CD with some added packages for testing. From Windows I'm now getting an average of 100MB/s, which I figure has to be close to the Ethernet max for the NIC. I'm happy with that (as I should be), but now I'm wondering what will happen when I put a VMDK on it and try to run it from a virtual machine... that will be the real test in my mind. If the VMDK takes too much out of the performance, I might just start over, repartition the server for Linux, and run Workstation on that. I'll let you know how it goes.
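As a sanity check, ~100MB/s really is close to the practical ceiling of a single gigabit link. A rough back-of-envelope, assuming standard 1500-byte frames (no jumbo frames) and ignoring the protocol on top (CIFS/NFS/iSCSI all add their own overhead):

```python
# Back-of-envelope throughput ceiling for one gigabit Ethernet link.
# Figures are approximate; real results depend on frame size, offloads,
# and the file-sharing protocol in use.

line_rate_bps = 1_000_000_000           # 1 Gbit/s raw line rate
raw_mb_s = line_rate_bps / 8 / 1e6      # 125 MB/s before any overhead

# A standard 1500-byte frame carries ~1460 bytes of TCP payload once
# Ethernet/IP/TCP headers, preamble, and inter-frame gap are counted
# (~1538 bytes on the wire per frame).
efficiency = 1460 / 1538                # ~0.949
practical_mb_s = raw_mb_s * efficiency  # ~118 MB/s of payload, best case

print(round(practical_mb_s))
```

So an observed 100MB/s average is within ~15% of the theoretical best for the link, before the disks are even a factor.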
 
Well, test two is done. I configured the same disk on ESX as a datastore, did the same thing with the Ubuntu desktop live CD, and ran a simple disk copy from the desktop to the server. I got 70MB/s max, so we're talking a 30% hit right out of the gate. Next I'll install the OS completely and then load VMware Tools and the VMXNET3 drivers to see if I can improve on this. It's possible that this is a network issue, since I don't think everything is fully supported until I load the tools... we'll see.
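The hit quoted above works out like this, using the rough figures from the two tests in this thread:

```python
# Throughput drop from the bare-metal test to the ESX datastore test,
# using the approximate figures quoted above.

bare_metal_mb_s = 100  # live CD writing straight to the RAID volume
esx_vm_mb_s = 70       # same copy going through an ESX VM / VMDK

hit_pct = (bare_metal_mb_s - esx_vm_mb_s) / bare_metal_mb_s * 100
print(hit_pct)  # the "30% hit right out of the gate"
```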
 
Don't forget, you are also limited by the network and the speed of the system pushing the file, not just the write speed of the server. On some systems, 70MB/s is average for the read speed of a single drive.

I myself have two 8TB RAID 6 setups, and I'm working on another 12TB server (all 8-drive setups). Internal to the server, read/write speeds are blindingly fast, but over the network, the clients' speeds are the limiting factor... Just something to think about.
 
Yeah, I'm not expecting to get much more than 100MB/s over a single Gigabit link, but the issue is that ESX is putting a ton of overhead on top of that. I can get 107MB/s from host to host with a straight Linux host, but when I put a Linux VM on ESX that drops to 75MB/s. I'm trying to figure out a way to create a raw device with my local RAID set that I can share out without having to build a VMFS disk first. I'm not having luck, though.
 