
building a NAS (network attached storage) server

Originally posted by: tw1164

I was starting to plan for a RAID NAS solution for my HTPCs, but after looking at the article below I don't think I'm going with a RAID solution. I think a JBOD will be better for me.

You may want to look at this thread, it's about the same subject.

http://forums.snapstream.com/vb/showthread.php?t=28650

I dunno if I'd do JBOD.. if one drive fails, the whole array is still lost, no? Also, this assumes you have a copy of everything elsewhere... data isn't replaceable sometimes 😉 But it all depends on your needs really. Still wish there were a cheap reliable backup method.
 
Originally posted by: randomlinh
Originally posted by: tw1164

I was starting to plan for a RAID NAS solution for my HTPCs, but after looking at the article below I don't think I'm going with a RAID solution. I think a JBOD will be better for me.

You may want to look at this thread, it's about the same subject.

http://forums.snapstream.com/vb/showthread.php?t=28650

I dunno if I'd do JBOD.. if one drive fails, the whole array is still lost, no? Also, this assumes you have a copy of everything elsewhere... data isn't replaceable sometimes 😉 But it all depends on your needs really. Still wish there were a cheap reliable backup method.


No, you're right. I really meant to say independent drives.
 
JBOD is supposed to work well if you're dealing with an absolutely massive amount of data. Terabytes and terabytes. Tens or hundreds of terabytes. At that point simple 'RAID' breaks down and is unreliable.

The key to making it work is to keep redundant copies of data spread throughout a bunch of computers and bunches of hard drives. And have an effective manner of keeping track of them with load balancing and whatnot. Theoretically you could then waltz around your datacenter with a shotgun and a handful of shells and take out a computer at random and suffer no loss of data or service.
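To illustrate the idea (a toy Python sketch, not anything Archive.org actually runs): keep a couple of copies of each file on different machines, and any single box can die without losing anything:

```python
import random

def place_replicas(files, nodes, copies=2):
    # put each file on `copies` distinct nodes, chosen at random
    return {f: random.sample(nodes, copies) for f in files}

def still_readable(placement, dead_node):
    # a file survives as long as at least one replica sits on a live node
    return all(any(n != dead_node for n in replicas)
               for replicas in placement.values())

nodes = ["node%d" % i for i in range(10)]
placement = place_replicas(["file%d" % i for i in range(100)], nodes)

# shotgun test: take out any one node at random, the data is still all there
assert still_readable(placement, random.choice(nodes))
```

Real systems layer load balancing and re-replication on top of this, but the core fault-tolerance argument is just "every object has replicas on more than one failure domain."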

An example of this is the folks at Archive.org.
They found that they couldn't manage with RAID arrays. There was just too much data to handle. So what they do now is use a massive number of inexpensive PCs (rack-mounted Mini-ITX machines, actually) and have a terabyte or two worth of hard drives in each node.
http://www.capricorn-tech.com/tb80.html
That's about 80 terabytes of disk space per rack.

Since it's all archival, speed really isn't a concern.. massive amounts of space and redundancy are more important.

edit:
Last information I've seen on it, the entire Archive.org archive has 1.5 petabytes worth of disk space. The entire thing is managed by one full-time employee and a part-time assistant and draws 50kW worth of power...

http://linuxdevices.com/news/NS2659179152.html
 
If I were to build a computer with a RAID 5 array with a hardware RAID controller, would that use up a lot of my CPU? Would it be a worthwhile investment to get a dual core if I planned on this being my main computer and gaming while people are grabbing my files? Should I up from 1GB of RAM to 2GB of RAM?
 
Well.. one of the major attractions of hardware RAID is that it off-loads part of the I/O work from the CPU to the on-board RAID processor. This should improve CPU performance over having just a single hard drive on a machine that is very busy.

The problem is that you have to find a 'real' hardware RAID controller, that is, one that has an onboard processor. Decent ones that can do RAID 5 are going to cost you around 300 bucks.
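For what it's worth, the work that processor is off-loading is mostly just XOR parity across each stripe. A quick Python sketch of the idea (illustration only, not how any particular controller is wired):

```python
def xor_blocks(*blocks):
    # byte-wise XOR of equal-length blocks
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# one stripe across a 4-disk RAID 5: three data blocks plus one parity block
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(*data)

# the disk holding data[1] dies; rebuild its block from the survivors + parity
rebuilt = xor_blocks(data[0], data[2], parity)
assert rebuilt == data[1]
```

Doing that XOR (and the read-modify-write it forces on every small write) on the host CPU is exactly the overhead software RAID 5 adds, which is why the onboard processor helps on a busy box.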
 
Originally posted by: Nothinman
I dunno if I'd do JBOD.. if one drive fails, the whole array is still lost, no?

JBOD stands for Just a Bunch of Disks, i.e. there is no array.

Yes I know, but what I don't know is how that is implemented. My understanding is that it's just like RAID 0 in terms of fault tolerance.. one drive dies and you can say bye bye to all your data. If that's not the case.. then.. well.. I take it back 🙂
 
Yes I know, but what I don't know is how that is implemented. My understanding is that it's just like RAID 0 in terms of fault tolerance.. one drive dies and you can say bye bye to all your data. If that's not the case.. then.. well.. I take it back

No, it's not RAID 0, otherwise it would just be called RAID 0. In JBOD all of the disks should show up separately, as if they were hooked up to a non-RAID SCSI controller.

Perhaps you're talking about a linear or concatenated array, where the volumes are spanned from disk to disk in an end-to-end fashion. There would be no speed difference, but all of the disks would appear to be one big volume, and you can extend it easily since there's no striping or anything.
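The end-to-end spanning is easy to picture as plain address translation. A hypothetical sketch (Python, just for illustration) of how a linear array might map a logical offset to a disk:

```python
def locate(offset, disk_sizes):
    # walk the disks end to end until the logical offset falls inside one
    for disk, size in enumerate(disk_sizes):
        if offset < size:
            return disk, offset
        offset -= size
    raise ValueError("offset past end of array")

disks = [100, 250, 80]                 # three disks concatenated, 430 total
assert locate(0, disks) == (0, 0)      # start of disk 0
assert locate(150, disks) == (1, 50)   # 50 bytes into disk 1
assert locate(420, disks) == (2, 70)   # near the end of disk 2
```

Note each logical offset lives on exactly one disk, which is why losing a drive in a spanned setup only takes out the data that happened to sit on it, not the whole volume.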
 
Originally posted by: Nothinman
Yes I know, but what I don't know is how that is implemented. My understanding is that it's just like RAID 0 in terms of fault tolerance.. one drive dies and you can say bye bye to all your data. If that's not the case.. then.. well.. I take it back

No, it's not RAID 0, otherwise it would just be called RAID 0. In JBOD all of the disks should show up separately, as if they were hooked up to a non-RAID SCSI controller.

Perhaps you're talking about a linear or concatenated array, where the volumes are spanned from disk to disk in an end-to-end fashion. There would be no speed difference, but all of the disks would appear to be one big volume, and you can extend it easily since there's no striping or anything.

I was under the impression JBOD *did* span. I just looked it up on Wikipedia, and it seems to say so. It also says that drive failure will NOT kill the volume, just the data on the failed drive.. so that's good to know.
 