
6x 500GB RE4 or 250GB Velociratpros - RAID 10 for Virtualization datastore?

smakme7757

Golden Member
Oh damn, excuse the "Velociratpros" in the title 😉

At the moment I'm running quite literally a Redundant Array of Inexpensive Disks - a real mixed bunch of drives I've thrown together in my Hyper-V server, which has been working quite well.

Although not unexpected, I've had a drive get rejected from the RAID twice now (a 640GB WD Black), which is most likely due to poor/no RAID support.

So I figured it was time to completely swap out all of those old dying disks for some new ones. SSDs are out due to capacity constraints, but within my budget I can afford the following:

(RAID 10)
6x 500GB WD RE4 = 1500GB capacity
or
6x 250GB WD Velociraptors = 750GB capacity

Both come to more or less the same price (the Raptors are about $20 more expensive).
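
For clarity, here's the quick math behind those capacities (RAID 10 mirrors pairs of drives, so usable space is half the raw total):

```python
# Usable-capacity math for the two options above.
def raid10_usable(drive_count, drive_gb):
    """RAID 10 stripes mirrored pairs, so usable space is half the raw total."""
    assert drive_count % 2 == 0, "RAID 10 needs an even number of drives"
    return drive_count * drive_gb // 2

print(raid10_usable(6, 500))  # RE4 option          -> 1500 GB
print(raid10_usable(6, 250))  # Velociraptor option -> 750 GB
```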

Now, the first big question: does anyone know for sure that the VelociRaptor line supports RAID?

The 2nd big question is: which should I choose? Would the Raptors make a big enough difference to outweigh 750GB less storage capacity?

Again, this will be used for virtual machines - email, blog, file server, domain controller, etc. No heavy database stuff (I'd move that over to its own SSD if that were the case).

Thanks for the feedback people

🙂
 
In my experience, a drive either works or it doesn't. If it sometimes works, the drive is probably on its way out.

That said, I've run Raptors for years in RAID setups without a single issue. I still have my 74GB drives in a couple of setups.

What controller are you using? I've seen onboard Intel controllers be flaky compared to a proper standalone card.

And as to which you should choose... how much capacity do you need? Why RAID 10 vs RAID 5? With six 500GB drives, RAID 5 would give you 2500GB usable instead of 1500GB.
 
How many VMs? How large? What roles do they have? Production or lab?
You're limited on IOPS either way (rough numbers sketched below). If you're keeping it to 4-5 VMs, then both the RE4s and the VRs are more than enough space-wise. You gain some IOPS with the VRs. And since you're trying to decide between the two options, I'd guess the VRs' half-sized capacity is enough for you, so I'd go with them, BUT...!!
I think you're close, budget-wise, to a Micron/Crucial M500 960GB, and that's what I'd choose for running 4 small VMs (lab only).
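
If you want rough numbers on that IOPS ceiling, here's a back-of-the-envelope estimate. The per-drive figures are generic ballpark numbers for 7,200rpm and 10,000rpm drives (assumptions, not vendor specs):

```python
# Ballpark random-IOPS estimate for a RAID 10 array.
def raid10_iops(drives, per_drive_iops, read_fraction=0.7):
    # Reads can be served by either half of a mirror, so all spindles help;
    # each write has to land on both drives of a pair.
    reads = drives * per_drive_iops * read_fraction
    writes = (drives / 2) * per_drive_iops * (1 - read_fraction)
    return reads + writes

print(round(raid10_iops(6, 90)))   # 6x RE4 (7200rpm-class), ~70/30 mix -> ~459
print(round(raid10_iops(6, 140)))  # 6x Velociraptor (10k rpm)          -> ~714
```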
 
I'm just using the Intel controller on my motherboard at the moment, which is why I've chosen RAID 10. The drives that keep getting ejected from the RAID are WD Blacks, which don't have TLER. They were also getting rejected in my NAS before I moved over to some WD Reds. So it's simply that WD intentionally cripples the Blacks for RAID use.

RAID 10 will give nice performance and redundancy over RAID 5, seeing as I don't have a dedicated RAID controller to offload the parity calculation to.
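
Rough math on why parity hurts without a dedicated controller, using the standard write penalties (2 for RAID 10, 4 for RAID 5; the per-drive IOPS is a ballpark assumption):

```python
# Effective write IOPS under the classic RAID write penalties.
def effective_write_iops(drives, per_drive_iops, penalty):
    # Penalty 2 for RAID 10 (two mirror writes); penalty 4 for RAID 5
    # (read data, read parity, write data, write parity).
    return drives * per_drive_iops / penalty

print(effective_write_iops(6, 90, penalty=2))  # RAID 10 -> 270.0 write IOPS
print(effective_write_iops(6, 90, penalty=4))  # RAID 5  -> 135.0 write IOPS
```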

At the moment I'm using around 200GB, so I'm not in need of lots of storage capacity, but I presume I will be in a year or two. This thing will hopefully last me quite a few years. 🙂

Thanks for the feedback.

It's email, webserver, domain controller, and a few lab environments for various software packages. So nothing major.

The Crucial M500 960GB looks like a decent option. It's slightly more expensive, but I'd be able to stretch to that at the end of the summer break.

I'll have a think about it. I'd lose the redundancy, which is the only problem, but otherwise it looks like a nice idea as long as I keep recent backups.

Thanks!
 
Are the email, webserver, db and dc each a Windows VM instance? If so, how come you're only using 200GB so far (dynamic VHDs?)

To give you an idea of the performance gain:

Using a 4x VR RAID 10 will give you about 1.2x the performance of a 4x RE4 RAID 10.
Using a single M500 will give you (worst case) a 16x write and 90x read increase compared to the VR RAID 10.

Are these for that 2600K? I see RAID 0s there so no redundancy loss compared to what you have right now.
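
To show the mechanics behind multipliers like those, here's a quick back-of-the-envelope. The per-drive and SSD IOPS figures below are assumptions chosen to be roughly consistent with the worst-case claim above, not measured specs:

```python
# Reverse-engineering the comparison with assumed (illustrative) IOPS figures.
vr_per_drive = 140                   # assumed 10k rpm random IOPS
vr_read  = 4 * vr_per_drive          # 4-drive RAID 10: reads use all spindles
vr_write = (4 // 2) * vr_per_drive   # each write hits both halves of a mirror
ssd_read, ssd_write = 50_000, 4_500  # assumed worst-case SSD IOPS

print(f"read gain:  {ssd_read / vr_read:.0f}x")    # ~89x
print(f"write gain: {ssd_write / vr_write:.0f}x")  # ~16x
```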
 
Most of my VMs are running either CentOS or Ubuntu Server. The only Windows "Server" VMs I have are the domain controller and System Center. All the disks are thin provisioned, so most of them haven't grown all that much.

From a rough look at my storage utilization I'm at around 260GB at the moment.

I was referring to redundancy with regard to what I'd originally considered with 6 HDDs. However, right now I'm running most of the VMs off an iSCSI target on my NAS (RAID 5) until I fix up my storage woes. The host was initially running RAID 0 until I started getting errors, which is when I took it down and started looking at my options.

At the moment it's running on my old 2600K; however, I have a newer host which I'll be putting this new equipment into. That host is running:

Intel Xeon E3-1240V2
32GB RAM
With 3 NICs

The only VM of importance is the webserver, but I keep that backed up often with Veeam, as well as dumping the web directory and database every now and again.

The rest of the resources are used for uni, where we do a bit of Hyper-V and vSphere/vCenter, so I keep my own lab at home to experiment on my own hardware without all the slowdowns and hiccups you get sharing hardware with others 🙂.

But the M500 does seem like a good buy. It definitely ticks all the boxes. It's at the top of the list at the moment. I'll most likely pick one up in a few weeks.
 
Okay, now I've got a better picture of your setup, and that leads me to recommend a few things.

If the only always-on VM is the webserver (and from your comments I guess the db runs in this particular VM too), I'd set it up a bit differently. Get 2 of those RE4s (I don't know about Norway, but where I am there's almost no price difference between the RE4 250GB, 500GB, and 1TB), RAID 1 them, and run the webserver VM on that. Then run any other VM you think of (lab) on the M500. With the M500 you can grow the VM count (assuming low storage needs) until your 32GB of RAM bottlenecks them.
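
Quick sanity check on which resource runs out first; the per-VM RAM/disk footprints here are illustrative assumptions for small lab VMs:

```python
# Which runs out first on an M500 + 32GB host: RAM or disk?
ram_gb, ssd_gb = 32, 960
host_reserve_gb = 4             # RAM kept back for the host OS (assumed)
vm_ram_gb, vm_disk_gb = 2, 40   # assumed footprint of a small lab VM

by_ram  = (ram_gb - host_reserve_gb) // vm_ram_gb   # -> 14 VMs
by_disk = ssd_gb // vm_disk_gb                      # -> 24 VMs
print(f"RAM caps you at {by_ram} VMs, disk at {by_disk} -> RAM runs out first")
```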

Can you share a bit more about the new hardware? 1240v2, great. 32GB RAM (what kind?), maxed out. 3 NICs (what kind, bandwidth, controllers, and the layout: 3 separate PCIe x1 cards, 2 cards plus 1 onboard, one dual-port?). What chipset? SATA 3Gbps/6Gbps ports, and how many?

The 1240v2 is overkill for your blog! And I also assume it doesn't get hit like crazy 24x7, given the circumstances and the hosting location 🙂 If image serving is not a problem (and I guess it isn't in your case), the 2x RE4 RAID 1 is enough for it. This way you can keep serving it like that and use all the benefits of the M500 for your work AND the 1240v2 😉 The performance of running those VMs over iSCSI on your NAS must be terrible.
 
RAID 1 for the web server is a good idea; it's mostly reads, and that will also give it some more protection!

More details about the hardware: most of this stuff was purchased due to decent pricing, so it's not all server equipment, but it works well with Hyper-V (and vSphere).

CPU: Intel Xeon E3-1240V2
RAM: 32GB Corsair 9-9-9-24 1600MHz - non-ECC
Motherboard: MSI Z77A-GD65
SATA: 4x 6Gbps (2x Intel C216, 2x ASMedia ASM1061)
4x 3Gbps (Intel C216)
Network:
1x onboard Intel 82579V 100/1000 - single port
2x PCIe x1 Intel 82574L 100/1000 - single port
So three in total.

All the RAM slots are full at the moment (4x8GB).
 
What do you boot this server off? If you boot off a small SSD, my recommendation goes like this:

SATA0 6Gbps Intel: boot
SATA1 6Gbps Intel: M500
SATA4 3Gbps Intel: RE4
SATA5 3Gbps Intel: RE4

None of the NIC controllers are server-grade parts, but neither is the non-ECC 32GB of RAM.
I guess those 2 PCIe cards are the Intel CT desktop ones, so you can team them. How many 1Gbps ports do you have on that NAS? If you can get a 2Gbps link between the server and the NAS, you could speed up backups to it. I tested a 4x WD Red 3TB RAID 10 array and got a sustained 220 MB/s sequential write. The 1TB models are a little slower, but not by much. I also wouldn't recommend RAID 5 on your NAS.
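
To put numbers on what the teamed link buys you for backups: ~112MB/s is about the practical ceiling of a single GbE link, and 220MB/s assumes the NAS array keeps up with a teamed 2Gbps link; the ~260GB backup size is your current usage from earlier in the thread:

```python
# Time to move a full backup at each link speed.
def backup_minutes(size_gb, mb_per_s):
    return size_gb * 1024 / mb_per_s / 60

for rate in (112, 220):  # single GbE link vs. teamed 2Gbps (array-limited)
    print(f"{rate} MB/s -> {backup_minutes(260, rate):.0f} min for a 260GB backup")
```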
 

The NAS uses Netgear's X-RAID2 (4 disks), so it's kind of like RAID 5. I can sustain a single disk failure anyhow. The speed is pretty good: I get 112MB/s read and 80MB/s write, so it's not too terrible.

The NAS has 2x gigabit NICs. I'm currently using one for iSCSI and the other for general house stuff. I don't think it supports NIC teaming, though.

The disk setup looks like a good idea. I'll have to go and price out those drives!

My primary Hyper-V host currently has 2 NICs. I use one for the host and the other for the VMs.

On the new host with 3 NICs, what would be best? Team the VM network or the host network?
 
NIC teaming doesn't make much sense in this case, then. For backup transfer times it could make a difference (think backup frequency). But for the VMs, unless you want network redundancy (where teaming would help), I would balance them over the 2 remaining NICs and use the onboard one for host management.
What network gear do you use? How many GbE ports do you have?
 
I have 2x Netgear ProSafe switches with 8 ports each; each port can handle 1Gbps. One port is used up by a cable bringing the internet from the front to the back of the house, and two connect the switches to each other, so I have 12 ports available.
 
Okay, I'm not familiar with Netgear ProSafe switches, but it looks like they don't support 802.3ad.

So if you back up frequently, dedicate one NIC exclusively to the NAS subnet, one to host management, and one to VM access. Get at least 3 NICs in your day-to-day workstation and set them up the same way. All of them on the same switch.

Use the 2nd switch for general NAS file access (entertainment, etc.) and any wired machine that needs to get on the network (keep your work-related traffic on the 1st switch; you won't get the same throughput on the 2nd one since they're daisy-chained).
 
I have a GS108T, which does support 802.3ad. I actually only recently got it, so I haven't had time to play with it yet.

I actually have very little experience with link aggregation, which is one of the reasons why I got one of those switches on sale.
 
Then I think you should use this switch as your primary switch (with the Hyper-V server hosting the blog connected to it). I wouldn't use LACP for now, given the hardware you have at the moment, but I do think that if you take my advice and use the SSD I recommended for the test/lab VMs, you'll eventually need to grow your network and transfer speeds, so you'll need to aggregate bandwidth. Just keep the blog separated from the other stuff (on its dedicated array).
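
One thing worth knowing before you play with LACP: 802.3ad balances per flow, not per packet, so a single transfer (one iSCSI session, one file copy) never exceeds one member link. A simplified sketch of the idea; real switches hash MAC/IP/port fields, and the policy below is illustrative only:

```python
# Why LACP doesn't speed up a single transfer: one flow -> one member link.
import zlib

def pick_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    # Simplified layer-3 hash: every packet of the same flow hashes to the
    # same physical link, so a single flow tops out at that link's speed.
    return zlib.crc32(f"{src_ip}-{dst_ip}".encode()) % num_links

print(pick_link("192.168.1.10", "192.168.1.20", 2))  # this flow stays on one link
print(pick_link("192.168.1.11", "192.168.1.20", 2))  # another host may hash elsewhere
```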
 
I've always run VRs in RAID 0 for my local desktop storage array. I ran the 600GB ones for years without issue, then upgraded to dual 1TB. I upgraded my NAS to a nice set of hardware and storage, so I only use a single VR now. 🙂
 