15K HDD implementation through RAID

uzuncakmak

Member
Apr 2, 2012
35
0
0
I have a Dell Precision T3500 workstation. Currently there are two 7200 RPM HDDs in that system, connected via a RAID controller. I do not know much about RAID systems; however, when I check in Win7's Disk Management, both disks show up separately.

What I wish to do is replace these HDDs with a single 600 GB 15K HDD. Considering that the PC will host a network share with some data read/write traffic, which is the best option to implement:

1. A single 15K HDD bypassing RAID (direct connection to the board).
2. A single 15K HDD through the RAID controller (I don't know if that is possible!), RAID 0 or 1.
3. Two 15K HDDs (300 GB each) via RAID 0 or 1.
4. Keep the current RAID system with 2 HDDs and add the 15K HDD directly without RAID (if possible).


Performance-wise (most traffic will be reads, with fewer writes), which would be the best option? As I am new to these things (as already stated), can you also suggest alternative implementation methods if the above are not good at all?





Thanks in advance.
 
Last edited:

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
7,930
2,897
146
Have you considered using an SSD? Performance-wise, that would make the most sense. A good 512 GB SSD will be pricey, but then so are 15K RPM SAS drives. And the SSD will be much faster still.

If no SSD, then 2x 15K in RAID 0 will be the fastest. RAID with only one disk is not possible.
 

uzuncakmak

Member
Apr 2, 2012
35
0
0
Have you considered using an SSD? Performance-wise, that would make the most sense. A good 512 GB SSD will be pricey, but then so are 15K RPM SAS drives. And the SSD will be much faster still.

If no SSD, then 2x 15K in RAID 0 will be the fastest. RAID with only one disk is not possible.

Yes, I was initially considering an SSD, which is fast compared to HDDs, but on some forums I have read about people running into performance problems under load, with the drives becoming very slow... I will have around 20 PCs reading uncompressed video from that network share at the same time, and writing back there, so I decided to go back to the 15K solution.
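A quick back-of-the-envelope check may help frame that workload. The resolution, frame rate, and pixel depth below are illustrative assumptions, not figures from this thread; plug in the actual video format:

```python
# Rough bandwidth estimate for uncompressed video streams.
# Frame size, rate, and pixel depth are assumed values -- adjust to the real format.
width, height = 1920, 1080      # assumed HD frame
fps = 25                        # assumed frame rate
bytes_per_pixel = 3             # 8-bit RGB, uncompressed

stream_bps = width * height * bytes_per_pixel * fps * 8   # bits per second
stream_gbps = stream_bps / 1e9
print(f"One stream: {stream_gbps:.2f} Gb/s")              # ~1.24 Gb/s

clients = 20
print(f"{clients} streams: {stream_gbps * clients:.1f} Gb/s aggregate")
```

Even a single uncompressed HD stream at these assumed settings is on the order of a full GigE link, which is why the network side matters at least as much as the drives here.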
 

uzuncakmak

Member
Apr 2, 2012
35
0
0
I just checked my RAID controller and it seems to support up to 4 HDDs.

So the question is: will I benefit, again performance-wise, from installing 4x 150 GB 15K HDDs instead of 2x 300 GB 15K ones?

Another question that comes to mind: would I gain more performance by adding a second RAID controller?
 

NP Complete

Member
Jul 16, 2010
57
0
0
Performance-wise, SSDs will always be your fastest option.

You don't mention what the network connection between your server & PCs is. GigE? Fiber? 10GigE? And how many NICs does the server have?

If you only have a single GigE connection to your server, once you start using a single SSD or 15K drive, the NIC will be the bottleneck most of the time. In such a scenario, RAID should be used for redundancy, not speed.
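To put rough numbers on that point (the drive throughput figures below are ballpark assumptions for illustration, not measurements):

```python
# Per-client share of a single GigE link with many simultaneous readers,
# versus assumed sequential throughput of typical drives.
nic_mbps = 1000                 # single GigE link, in Mb/s
clients = 20                    # simultaneous readers (from the OP)
per_client_mbps = nic_mbps / clients
print(f"Per-client share of one GigE link: {per_client_mbps:.0f} Mb/s")

# Assumed sequential read speeds (MB/s converted to Mb/s); real drives vary.
drives_mbps = {"7200 RPM HDD": 120 * 8, "15K SAS HDD": 200 * 8, "SATA SSD": 500 * 8}
for name, mbps in drives_mbps.items():
    bound = "NIC-bound" if mbps > nic_mbps else "drive-bound"
    print(f"{name}: ~{mbps} Mb/s sequential ({bound})")
```

Under these assumptions, anything faster than a 7200 RPM drive already saturates a single GigE link, so faster storage alone buys little until the network is widened.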
 

pjkenned

Senior member
Jan 14, 2008
630
0
71
www.servethehome.com
Actually, hard drives do OK with sequential read performance. With that being said, if you can go SSD (or two), that is going to be best for this workload. A best-of-breed option is using the 7,200 RPM drives (or a 15K RPM setup) as your big storage tier, then using SSDs to serve the most frequently used videos.
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
Probably a moot question, since this board will not allow 15K drives anyway.

10K VelociRaptors, or newer 1TB-platter HDDs, will be about as fast as you can go with any HDD, short of an SSD arrangement.
 

uzuncakmak

Member
Apr 2, 2012
35
0
0
Performance-wise, SSDs will always be your fastest option.

You don't mention what the network connection between your server & PCs is. GigE? Fiber? 10GigE? And how many NICs does the server have?

If you only have a single GigE connection to your server, once you start using a single SSD or 15K drive, the NIC will be the bottleneck most of the time. In such a scenario, RAID should be used for redundancy, not speed.

A 1 Gb Ethernet connection. There is a single NIC on the server. Would having more NICs on the server help? If so, the solution becomes easier than trying to get a 10 GigE connection to handle the network traffic.

And I was thinking of using RAID 0 with 4x 15K HDDs to increase read/write speed (especially read speed). If I implement the same idea with SSDs, will I get more performance compared to the 15K ones?
 

uzuncakmak

Member
Apr 2, 2012
35
0
0
Probably a moot question, since this board will not allow 15K drives anyway.

10K VelociRaptors, or newer 1TB-platter HDDs, will be about as fast as you can go with any HDD, short of an SSD arrangement.

I just verified with Dell and it does support 15K drives.
 

uzuncakmak

Member
Apr 2, 2012
35
0
0
What about a dedicated NAS solution? I have seen ones with 2 or more GigE ports; with an add-on card they can even support 10 GigE if needed later. Assuming I will have 1 Gb Ethernet (given that around 20 PCs will be reading uncompressed video data from that NAS simultaneously), what specs should I be looking for in a NAS?
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
I just verified with Dell and it does support 15K drives.

OK, good to know. I assumed it had the specs listed at Dell, which would mean it's running the ICH10R southbridge. It must have an optional SAS-capable mobo installed then.

Anywho... read speeds will be better with 15K drives, but access times will be cumulative as you add disks and make the array wider. It won't be huge lag, but it will be evident, especially with random data such as an OS volume accesses, and during multitasking. Windows accelerators/RAM caching will help somewhat, but will not save you from that lag all the time. An SSD will eliminate that lag completely, and it will even be perceptible over the NAS, as the latency goes down to nil compared to any HDD made.

IMO (and I do have 4x 15K drives in R0 on my older workstation), I would think the money better spent on an SSD-based OS drive plus 4x fast HDDs for the storage volume. The money saved over the 15K drives will help offset the added cost of the SSD. Good luck with it.
 

uzuncakmak

Member
Apr 2, 2012
35
0
0
OK, good to know. I assumed it had the specs listed at Dell, which would mean it's running the ICH10R southbridge. It must have an optional SAS-capable mobo installed then.

Anywho... read speeds will be better with 15K drives, but access times will be cumulative as you add disks and make the array wider. It won't be huge lag, but it will be evident, especially with random data such as an OS volume accesses, and during multitasking. Windows accelerators/RAM caching will help somewhat, but will not save you from that lag all the time. An SSD will eliminate that lag completely, and it will even be perceptible over the NAS, as the latency goes down to nil compared to any HDD made.

IMO (and I do have 4x 15K drives in R0 on my older workstation), I would think the money better spent on an SSD-based OS drive plus 4x fast HDDs for the storage volume. The money saved over the 15K drives will help offset the added cost of the SSD. Good luck with it.

I know that the read/write speeds of an average SSD are much higher than regular drives (as you said, even 15K ones). And I think having multiple SSDs in a RAID system can multiply that speed as well... If so, then my only concern is: will a single 1 GigE Ethernet port be enough to feed 20 PCs with the required data?
 

DominionSeraph

Diamond Member
Jul 22, 2009
8,386
32
91
OK, good to know. I assumed it had the specs listed at Dell, which would mean it's running the ICH10R southbridge. It must have an optional SAS-capable mobo installed then.

Anywho... read speeds will be better with 15K drives, but access times will be cumulative as you add disks and make the array wider. It won't be huge lag, but it will be evident, especially with random data such as an OS volume accesses, and during multitasking. Windows accelerators/RAM caching will help somewhat, but will not save you from that lag all the time. An SSD will eliminate that lag completely, and it will even be perceptible over the NAS, as the latency goes down to nil compared to any HDD made.

IMO (and I do have 4x 15K drives in R0 on my older workstation), I would think the money better spent on an SSD-based OS drive plus 4x fast HDDs for the storage volume. The money saved over the 15K drives will help offset the added cost of the SSD. Good luck with it.

What the heck am I reading?

Access times aren't cumulative. They remain the same.
WTH is this about the OS on an SSD? He isn't after boot times. As long as his OS is on a separate drive, its I/O won't interfere with the network share.
And this is a multi-user environment. The benefits of 15K SAS over cheaper desktop drives are going to be astronomical.


If so, then my only concern is: will a single 1 GigE Ethernet port be enough to feed 20 PCs with the required data?

Well, what does your network traffic look like now?
 

uzuncakmak

Member
Apr 2, 2012
35
0
0
What the heck am I reading?

Access times aren't cumulative. They remain the same.
WTH is this about the OS on an SSD? He isn't after boot times. As long as his OS is on a separate drive, its I/O won't interfere with the network share.
And this is a multi-user environment. The benefits of 15K SAS over cheaper desktop drives are going to be astronomical.




Well, what does your network traffic look like now?


I AM NOT SEEKING ANY PERFORMANCE INCREASE ON THE COMPUTER WITH THE NETWORK SHARE. MY ONLY CONCERN IS HOW TO GET THE MAX READ SPEED TO THE PCS OVER THE GIGE PORT FROM THAT COMPUTER.

This is a system I plan on building, so I can't make any comparison now...
 
Last edited:

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
I know that the read/write speeds of an average SSD are much higher than regular drives (as you said, even 15K ones). And I think having multiple SSDs in a RAID system can multiply that speed as well... If so, then my only concern is: will a single 1 GigE Ethernet port be enough to feed 20 PCs with the required data?

I suggest you buy another NIC (fairly cheap) and then team the NICs together, which would give you a 2 Gbit connection. That will improve performance more than the drives will.
 

uzuncakmak

Member
Apr 2, 2012
35
0
0
I suggest you buy another NIC (fairly cheap) and then team the NICs together, which would give you a 2 Gbit connection. That will improve performance more than the drives will.

How many NICs exactly can I team up on a Windows 7 Pro 64-bit machine, and how?
 

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
It is easily done within the driver. I use Intel NICs, as they have the best teaming options in my opinion. I do not know how many Win7 Pro can use (at least 2, I know), but Server 2008 can use many. I even do this on my home server for my wife's photography business.
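One caveat worth keeping in mind with teaming (this is a simplified model; actual load-balancing policies differ by driver and switch): most teaming modes pin each client's traffic to one physical link, so aggregate bandwidth scales with the number of NICs, but no single client ever sees more than one link's speed.

```python
# Simplified model of hash-based NIC teaming: each client is pinned to one
# link, so total capacity grows with NIC count but per-client speed does not.
def link_for_client(client_id: int, nic_count: int) -> int:
    # Stand-in for the driver's real flow hash.
    return client_id % nic_count

nics = 2                        # teamed GigE ports
loads = [0] * nics
for client_id in range(20):     # 20 readers, as in this thread
    loads[link_for_client(client_id, nics)] += 1

print("Clients per link:", loads)                   # [10, 10]
print(f"Aggregate capacity: {nics} Gb/s")
print("Max speed for any single client: 1 Gb/s")    # still one link's worth
```

That distinction matters little for 20 readers sharing the link, but it means teaming is not a substitute for 10GbE if any one client needs more than 1 Gb/s.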
 

uzuncakmak

Member
Apr 2, 2012
35
0
0
It is easily done within the driver. I use Intel NICs, as they have the best teaming options in my opinion. I do not know how many Win7 Pro can use (at least 2, I know), but Server 2008 can use many. I even do this on my home server for my wife's photography business.

Assuming I had teamed up multiple NICs on that machine, should I still be considering a choice between SSDs and 15K drives, both through RAID 0/1/5?
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
What the heck am I reading?

Access times aren't cumulative. They remain the same.
WTH is this about the OS on an SSD? He isn't after boot times. As long as his OS is on a separate drive, its I/O won't interfere with the network share.
And this is a multi-user environment. The benefits of 15K SAS over cheaper desktop drives are going to be astronomical.

"Astronomical"?.. lol. There are so many things wrong with those statements that it's not even worth going into it with you. You're obviously much smarter than I, and you win. ^_^

As for the gains to be had from faster Ethernet? Well, it should go without saying that faster is faster, and it obviously depends on the amount of data that needs to flow at any given time. Although the host will be the main rig to have the fastest card(s), since it will be shared by all, you'll obviously need to outfit all the machines with capable enough hardware or you'll just create bottlenecks.

Some NICs have dual ports, or multiple cards can be used in arrangements that are teamed for increased bandwidth, or simply used for redundancy should one (or more) fail. This will probably be the cheapest route to take here in Gb per $ spent, unless you want to shell out the cash for a nicer 10GbE card.

Switches are another cost consideration, as they can get quite ridiculous when you want top speeds, and 20 PCs will already cause those costs to rise pretty fast as it is. It all goes back to the old hotrodding rule: speed costs money... how fast do you want to go?

Good luck on the hunt.

EDIT: lol... that's what I get for taking phone calls while trying to respond. You seem to be in capable hands now, and I can't offer much more than the above anyway.

PS: I do know that not all hardware allows teaming, so you do need to be specific with hardware choices or Windows will not give you the functionality mentioned above.
 
Last edited:

uzuncakmak

Member
Apr 2, 2012
35
0
0
"Astronomical"?.. lol. There are so many things wrong with those statements that it's not even worth going into it with you. You're obviously much smarter than I, and you win. ^_^

As for the gains to be had from faster Ethernet? Well, it should go without saying that faster is faster, and it obviously depends on the amount of data that needs to flow at any given time. Although the host will be the main rig to have the fastest card(s), since it will be shared by all, you'll obviously need to outfit all the machines with capable enough hardware or you'll just create bottlenecks.

Some NICs have dual ports, or multiple cards can be used in arrangements that are teamed for increased bandwidth, or simply used for redundancy should one (or more) fail. This will probably be the cheapest route to take here in Gb per $ spent, unless you want to shell out the cash for a nicer 10GbE card.

Switches are another cost consideration, as they can get quite ridiculous when you want top speeds, and 20 PCs will already cause those costs to rise pretty fast as it is. It all goes back to the old hotrodding rule: speed costs money... how fast do you want to go?

Good luck on the hunt.

EDIT: lol... that's what I get for taking phone calls while trying to respond. You seem to be in capable hands now, and I can't offer much more than the above anyway.

PS: I do know that not all hardware allows teaming, so you do need to be specific with hardware choices or Windows will not give you the functionality mentioned above.


Yes, as you mentioned, the main rig (the one with the network share) is at least 4 times better than the other 20 PCs, at least in terms of tech specs if not in practice...

My network will not support 10 Gb Ethernet, so the obvious choice here will be using multiple NICs.

Earlier I was talking with a Cisco rep, and I will stick with his suggestion, as their hardware is very reliable as far as I know.
 

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
Assuming I had teamed up multiple NICs on that machine, should I still be considering a choice between SSDs and 15K drives, both through RAID 0/1/5?

I would go with 15K HDDs in a RAID 5 setup. RAID 0 is the fastest, but offers no redundancy. RAID 1 offers the best redundancy, but at the cost of speed and 50% of your storage space. RAID 5 offers speed somewhere between RAID 1 and RAID 0, while giving up only 1/n of your capacity to parity (where n is the total number of drives in the array).

Keep in mind that a 15K HDD RAID 5 array (using 6 Gbps SAS) will surpass the speed of a 1 Gb NIC. Two teamed NICs would be perfect for this setup.
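The RAID tradeoffs described above can be summarized with a small capacity and fault-tolerance calculator (idealized figures: real arrays lose some speed to parity computation and controller overhead, and RAID 1 here is modeled as one n-way mirror set):

```python
# Idealized usable capacity and fault tolerance for n identical drives.
def raid_summary(level: int, n: int, size_gb: int) -> tuple[int, int]:
    if level == 0:
        return n * size_gb, 0            # striping: all capacity, no redundancy
    if level == 1:
        return size_gb, n - 1            # n-way mirror: one drive's capacity
    if level == 5:
        return (n - 1) * size_gb, 1      # 1/n of capacity goes to parity
    raise ValueError("unsupported RAID level")

for level in (0, 1, 5):
    usable, failures = raid_summary(level, n=4, size_gb=300)
    print(f"RAID {level}: {usable} GB usable, survives {failures} drive failure(s)")
```

For the 4x 300 GB case discussed in this thread, that works out to 1200 GB (RAID 0), 300 GB (RAID 1), and 900 GB (RAID 5) of usable space.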
 

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
Some NICs have dual ports, or multiple cards can be used in arrangements that are teamed for increased bandwidth, or simply used for redundancy should one (or more) fail. This will probably be the cheapest route to take here in Gb per $ spent, unless you want to shell out the cash for a nicer 10GbE card.

Intel PCIe GbE NICs cost about $39 each from Newegg. They are faster than the Realtek onboard NICs on most motherboards today. Intel's driver allows you to set up these NICs as a team for either load balancing or fault protection. This setup can be used by any Windows machine on any existing GbE network.
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
Intel PCIe GbE NICs cost about $39 each from Newegg. They are faster than the Realtek onboard NICs on most motherboards today. Intel's driver allows you to set up these NICs as a team for either load balancing or fault protection. This setup can be used by any Windows machine on any existing GbE network.


sounds like a winner to me. :thumbsup:
 

Topweasel

Diamond Member
Oct 19, 2000
5,437
1,659
136
Now, I know this has been taken care of, but keep in mind the read speeds of the drives. If you look at Newegg, you can see the difference in read speeds for almost any drive. As far as I know, there isn't a single two-drive RAID 0 configuration, even with SAS drives, that reads as fast as one decent SSD.

So if, after teaming, you one day decide to look back at changing the drives, keep SSDs in mind. Performance issues can arise from two things: WinXP/Vista, or RAIDing them. Both are addressed by making sure you get one with really good garbage collection.
 

uzuncakmak

Member
Apr 2, 2012
35
0
0
I would go with 15K HDDs in a RAID 5 setup. RAID 0 is the fastest, but offers no redundancy. RAID 1 offers the best redundancy, but at the cost of speed and 50% of your storage space. RAID 5 offers speed somewhere between RAID 1 and RAID 0, while giving up only 1/n of your capacity to parity (where n is the total number of drives in the array).

Keep in mind that a 15K HDD RAID 5 array (using 6 Gbps SAS) will surpass the speed of a 1 Gb NIC. Two teamed NICs would be perfect for this setup.

When I quickly compare these two:

http://www.newegg.com/Product/Produc...82E16820167095

http://www.newegg.com/Product/Produc...82E16822332062

Practically, the speeds will be similar, and the SSD is cheaper per GB compared to the 15K drive... So isn't it a better choice to implement SSD RAID 5 instead of 15K RAID 5?