File Server Plans - Suggestions?

stncttr908

Senior member
Nov 17, 2002
243
0
76
Alright, when I get around to building a new desktop I'm also going to build a RAID 5 file server (1.5TB + parity) for my important files/future considerations. Here is what I'm proposing:

Plan #1 (Current/new parts)

This calls for a few new parts plus parts from my current system when I upgrade.

** = new part

ABIT NF7-S 2.0
Athlon XP 2500+
512MB/1GB DDR
Gigabit Ethernet PCI**
1x500GB Seagate 7200.10 IDE
3x500GB Seagate 7200.10 IDE**
PROMISE FastTrak SX4060 PCI IDE 4-Channel Adapter** (or other card, suggestions?)
Cheap ATX case, DVD-ROM, etc.
Windows Server 2003, Linux, etc.

Plan #2

All new parts here, aside from the assorted bits

Intel 945G/GZ board w/Gigabit, Intel GMA 950, ICH7 southbridge (RAID 0/1/0+1/5), Intel Matrix RAID, etc.
Core 2 Duo E4300/E6300 (new/used)
4x500GB SATAII 16MB cache (Western Digital, Seagate, etc.)
2x512MB DDR2-800
Cheap ATX case, DVD-ROM, assorted bits
OS not yet determined

Now, the questions...

1) Since the ICH7's performance is pretty solid, coupled with a C2D, would this be faster overall than a PCI card and my old CPU/mobo?
2) If it is faster, would it be worth the $200-$300 more I'd spend for option #2?
3) What is the deal with the performance dropoffs with files 256MB or larger on a RAID 5 NAS?

It would be handy to use the server as a render/encoding slave if need be, so that's another plus for #2.

Thanks for reading and any help you can provide.
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
My SAMBA server will soak up as much RAM as you throw at it...but it doesn't have to.
Its CPU is never more than (IIRC) 2% utilized...and it's on a P2 400.
 

stncttr908

Senior member
Nov 17, 2002
243
0
76
Well, that could be true, but is your server using a controller card? Perhaps if the job were offloaded to the CPU/subsystem it would be faster. If that truly is the case, I have some old Pentium III CPUs at home and some extra RAM I could use instead, but I have a feeling that if I really want to run, say, Server 2003, I'd need a little more juice.
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
Originally posted by: stncttr908
Well, that could be true, but is your server using a controller card? Perhaps if the job were offloaded to the CPU/subsystem it would be faster. If that truly is the case, I have some old Pentium III CPUs at home and some extra RAM I could use instead, but I have a feeling that if I really want to run, say, Server 2003, I'd need a little more juice.

There is your mistake.....why use a howitzer to swat a fly? (lol, swat, fileserver...I kill myself)

It depends. I have several file servers; one is an old 8-way Compaq DL8000, 8x P2500, 2GB memory, with 12 SCSI disks in some sort of RAID config (I think it's RAID 5, it's been a while since I looked). Note that I've never had SAMBA (or much else) fill even one proc on this box. It sits idle, and it has pretty dang fast disks.
 

stncttr908

Senior member
Nov 17, 2002
243
0
76
Well, that certainly is good news. It tells me that I could definitely use my current components at the very least, or even lower-end stuff if I wanted to use the remnants of this system for a LAN box or maybe an HTPC.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: nweaver
My SAMBA server will soak up as much RAM as you throw at it...but it doesn't have to.
Its CPU is never more than (IIRC) 2% utilized...and it's on a P2 400.

Does this server have gigabit? What sort of network read/write rates do you get with it for large files?
 

Zap

Elite Member
Oct 13, 1999
22,377
2
81
Originally posted by: stncttr908
** = new part

3x500GB Seagate 7200.10 IDE**
PROMISE FastTrak SX4060 PCI IDE 4-Channel Adapter** (or other card, suggestions?)

4x500GB SATAII 16MB cache (Western Digital, Seagate, etc.)

Why go with EIDE on the other config? Can't you get PCI SATA RAID controllers?
 

stncttr908

Senior member
Nov 17, 2002
243
0
76
No other reason than I already have one of the drives. It probably would be a better IDEA to go 4xSATA and sell off the IDE or keep it for backup images.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Don't forget a backup device like a DVD writer.
Promise RAID cards are generally software RAID. You might want to look into something that does it in hardware. LSI MegaRAID cards aren't bad.
 

MerlinRML

Senior member
Sep 9, 2005
207
0
71
Running a file server can need very little or quite a lot, depending on what else the box is doing and how many users will be accessing the box at one time.

For myself, I run a Win2k3 dual 1GHz P3 server with 1GB of RAM. My RAM usage and CPU load are barely a blip on the radar for the 2 users that sometimes connect to the box. Other than providing an always-on, RAID-protected storage repository, this box does nothing except download the occasional Windows update. I would say I've got way more than I really need here.

If you plan to start making this box do actual work (such as hosting apps, multimedia encoding, data processing, etc) or you increase the number of users, then you may need to start adding more resources.

The key components that will affect your ability to move files around are a fast storage system and a gigabit network. From there, you're limited to 125MB/sec by your network unless you want to invest in Fibre Channel, InfiniBand, or 10Gb Ethernet.
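
To put a rough number on that ceiling (a back-of-the-envelope sketch; the overhead percentages are assumptions for illustration, not measurements):

# Rough ceiling for file transfers over gigabit Ethernet.
LINK_BITS_PER_SEC = 1_000_000_000                          # 1 Gb/s wire rate

raw_mb_per_sec = LINK_BITS_PER_SEC / 8 / 1_000_000
print(f"Theoretical maximum: {raw_mb_per_sec:.0f} MB/s")   # ~125 MB/s

# Ethernet/IP/TCP framing plus SMB/FTP protocol overhead eats into that,
# so real-world ceilings tend to land somewhere around 100-118 MB/s.
for overhead in (0.05, 0.10, 0.20):
    print(f"with {overhead:.0%} overhead: ~{raw_mb_per_sec * (1 - overhead):.0f} MB/s")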

In terms of the hardware vs software RAID controller, many people are perfectly fine running software RAID controllers or software RAID. As long as you have the resources to spare, software RAID is not a problem.
 

stncttr908

Senior member
Nov 17, 2002
243
0
76
Well, at the most I would say it might be doing a little RAR archiving, backup compression (maybe Acronis True Image Server or something along those lines) and maybe a little bit of encoding. I'm leaning towards the 945/965 platform only because it'd be easy to drop a C2D in there should I need more computational power down the road.
 

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
Originally posted by: MerlinRML
Running a file server can need very little or quite a lot, depending on what else the box is doing and how many users will be accessing the box at one time.

For myself, I run a Win2k3 dual 1GHz P3 server with 1GB of RAM. My RAM usage and CPU load are barely a blip on the radar for the 2 users that sometimes connect to the box. Other than providing an always-on, RAID-protected storage repository, this box does nothing except download the occasional Windows update. I would say I've got way more than I really need here.

If you plan to start making this box do actual work (such as hosting apps, multimedia encoding, data processing, etc) or you increase the number of users, then you may need to start adding more resources.

The key components that will affect your ability to move files around are a fast storage system and a gigabit network. From there, you're limited to 125MB/sec by your network unless you want to invest in Fibre Channel, InfiniBand, or 10Gb Ethernet.

In terms of the hardware vs software RAID controller, many people are perfectly fine running software RAID controllers or software RAID. As long as you have the resources to spare, software RAID is not a problem.

It should use the memory up....that's what makes it faster. My SAMBA server will chew up the whole 2 gigs as cached data.

As far as file transfer speeds go, I've never checked actual numbers; it's not that important to me. It's all on an older Compaq hardware RAID 5 SCSI card. I've thought about going S/W RAID, as I think I have enough horsepower that it would go faster.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
If you don't care about performance, then any hardware will do. There's no limit to how low you can go. In that case you might as well go with a low-power, save-the-earth-and-your-hydro-bill build.

A P-II 400 isn't too bad considering the processors included in the "fake gigabit" consumer NAS boxes. And those boxes do show performance scaling with processor speed.

But the bigger performance problem with old builds is the I/O bandwidth -- unless you hunt down an older server motherboard + compatible parts (which most people won't when they're considering an old-spare-parts build), the limited and often poorly implemented I/O will significantly constrain gigabit performance.

But if high gigabit performance isn't important to you, then that's fine.
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
stncttr908, there are two variables you should consider: how much you want to spend, and how much your data is worth to you.

RAID 5 is an interesting question these days. To make RAID 5 go fast, you need a good hardware controller with a battery-backed cache, and drives that respond correctly to cache control commands. The thing is, for what that costs, you could just buy more drives and do RAID 1+0. That will run faster and be more reliable, at the expense of more drives, but you don't have to buy an expensive controller to get there.

I have a bit of personal religion about memory - I think it's basically insane to run any important system without ECC memory. One of my dislikes about Intel is that they force you to the high-end desktop chipset in order to get ECC memory support, and right now that's the 975X, which only comes with the ICH7R, and we're also now talking about much more expensive motherboards. The AMD64 platform, in contrast, supports ECC memory basically everywhere because it's a function of the CPU's on-die memory controller rather than of the chipset.

I would not recommend buying new IDE drives today. Buy SATA instead. IDE's dead. Wave goodbye to it.

The Promise controllers are a waste of money, ditto Highpoint and any other fake RAID or host RAID or other cheapo RAID. You're just asking for pain.

DO NOT get a cheap case for your file server. DO NOT get a cheap power supply for your file server. Well, that is, if you like your data. I like my data, so I get good cases with excellent vibration dampening and cooling, and I get good power supplies. If you don't care about losing drives and the data that's on them, then go ahead and use a cheap case and PS.

To address your specific questions,

1) If you are planning on doing software RAID, I do think the AHCI route is the way to go. On the Intel platform, this means ICH7R or ICH8R - you want the R-flavored south bridge to get more ports (4 on the 7R, 6 on the 8R). On the AMD64 platform, this means the ATI SB600 (four ports). If you're doing hardware RAID, you could stick with a slower main system and PCI bus. But it would seem silly to buy a $300 RAID card in PCI/PCI-X, given that the older bus is basically going away. I'd still recommend you go to a newer PCI-E system.

Either the AMD64 or the C2D platforms should be a big jump up from your current AXP.

2) worth it? Again, how much do you want to spend, and how important is your data to you?

3) RAID 5 is slow. There are several very fundamental reasons why. Most implementations are able to cheat a bit using big write-back caches (you'll notice that all the hardware RAID 5/6 boards have beefy on-board caches). So when you "write" to the RAID 5 volume, you really just write to the cache, your OS goes on its merry way, and then the board gets it to the disk as fast as it's able. That trick only works when the write size is below the cache size; once the cache fills up you can't put more in until that much data has been written out. And so you start to see the true write speed of your RAID 5 array.
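
A toy model of that cache-fill effect (made-up cache size and speeds, purely illustrative -- note how the apparent speed drops off once a burst passes the cache size, which lines up with the 256MB dropoff from your question 3, assuming the NAS cache is around that size):

# Toy write-back cache model: writes land at cache speed until the cache
# fills, then the rest waits on the array's true write speed.
def effective_write_rate(write_mb, cache_mb=256, cache_mb_s=800, array_mb_s=40):
    """Apparent MB/s for a single burst of write_mb megabytes (assumed numbers)."""
    if write_mb <= cache_mb:
        return cache_mb_s                          # fits entirely in the cache
    burst = cache_mb / cache_mb_s                  # first chunk absorbed at cache speed
    drain = (write_mb - cache_mb) / array_mb_s     # the rest is limited by the disks
    return write_mb / (burst + drain)

for size_mb in (100, 256, 512, 2048, 10_240):
    print(f"{size_mb:>6} MB burst -> ~{effective_write_rate(size_mb):4.0f} MB/s apparent")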

RAID 10 does not have this problem. Again, you pay for the extra performance in more drives.
 

stncttr908

Senior member
Nov 17, 2002
243
0
76
Thanks very much for the informative replies, they are much appreciated.

@cmetz, I dunno if I can really afford an exquisite system. Yes, my data is important to me, but with RAID 5 should that be enough security? Do I really need ECC memory, a vibration-dampening case, etc? I was definitely going to get a good PSU for the server, but that was really the extent of it. My current case isn't the best, and I've been running RAID 0 for three years with zero problems.

1) Alright, sounds good. Looks like I'll be spending a little bit more on a board then.

2) I'm going to keep it at a $900 limit, hopefully under.

3) Speed really isn't the issue; it's the space and the redundancy, and for the money it definitely seems like the best option. The server is going to be used for music, documents, and probably dumping my games to .iso format and storing them along with videos and other large files. The most important things, basically, are the redundancy and the transfer speed over my network.

Thanks very much for the help/experiences.
 

stncttr908

Senior member
Nov 17, 2002
243
0
76
Wow, board selections are frustrating. The ECS 945G-M3 has the ICH7-DH w/RAID, but only supports P4/Celeron, killing my C2D upgrade potential. However, the 3.0 version of the board includes C2D support but only has the ICH7. There's an Intel 945G board with the ICH7R, but it's $30 more. Oh well, that's the price I have to pay I suppose.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
If you really care about your data, don't spend all your money on a high-end RAID controller, vibration-damped case, etc. -- spend some of it on a backup or two. RAID is not a backup. Regardless of how good the single system is, it could still go "poof", and that's what external backups are for.

An Antec SLK3000B will be cheap, give good cooling, have enough space for at least 8 drives, and come with a cage with damping mounts for 5 drives. You need to add a 120mm fan. Of course, you could get a lot fancier with an expensive case if you really want to, but I think this is fine for home use.

If you want to get still fancier with data integrity, you could go to a RAID 6 implementation in addition to the backup. But this is the deep end, and I haven't done it yet, so I'll stop here on that.

The AMD SB600 doesn't have native RAID 5. It does have RAID 10, like Intel, which is better than nVIDIA's and others' RAID 0+1.

RAID 5 write performance for large sequential data sets is not really the problem, at least with decent implementations. The remaining problem is with smaller writes. For those, smart data caches can help -- hopefully holding the data until there's enough to make nice big sequential writes. This is generally beyond the capability of integrated and inexpensive RAID implementations without their own cache. However, it's also a role that's not really important for home file servers.
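
For anyone counting disk operations, here's a simplified sketch of why the small writes are the painful case (it ignores caching and write coalescing, which is exactly what those caches are there to fix):

# RAID 5 disk-I/O counts, ignoring caches and coalescing.
def full_stripe_write_ios(data_disks):
    # write every data block in the stripe plus one freshly computed parity block
    return data_disks + 1

def small_write_ios():
    # read-modify-write: read old data, read old parity, write new data, write new parity
    return 4

print(full_stripe_write_ios(3))   # 4 I/Os to move 3 blocks of user data (4-drive array)
print(small_write_ios())          # 4 I/Os to move just 1 block of user data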

Here's a sample data transfer, writing to a 3-drive RAID 5 array (from another RAID array), which demonstrates good sustained RAID 5 write performance.

D:\tools>xxcopy /y f:\test\test0\10.gb n:\test\test9

XXCOPY == Freeware == Ver 2.93.1 (c)1995-2006 Pixelab, Inc.
...
-------------------------------------------------------------------------------
F:\test\test0\10.gb 10,000,000,000
-------------------------------------------------------------------------------
Directories processed = 1
Total data in bytes = 10,000,000,000
Elapsed time in sec. = 72.89
Action speed (MB/min) = 8231
Files copied = 1

That's 10 GB / 72.89s ~ 137 MB/s. This is about perfect for this 3-drive SATA array, as only 2 drives hold application data, and 137 MB/s / 2 is about the max that these drives can sustain. 10 GB is also big enough to swamp any attempted caching tricks here.

If you read lots of forum posts, you'll hear that this can only be done by expensive add-on storage controllers, and certainly not an on-board nVIDIA RAID 5 implementation, which of course is wrong.

The 130+ MB/s figure is also notable in that it exceeds what a gigabit file server can do (at least for single user / simple workloads). In rough theory, this would be enough file server performance to saturate gigabit. In practice, I think you could benefit from still higher file system performance to squeeze the last bits out of gigabit. But it's pretty good, and probably about as good as you can get with current practice. Note however that this is just the performance at the beginning of the drives -- we'd need more to sustain high speeds when the drives got full. We could get this from bigger/faster drives or more of them, assuming the RAID implementation didn't cap out due to its own limitations.

I can demonstrate pretty good performance in network data transfers writing to the same RAID 5 array with this result:

ftp> get 10.gb
200 PORT command successful.
150 Opening BINARY mode data connection for 10.gb(10000000000 bytes).
226 Transfer complete.
ftp: 10000000000 bytes received in 99.42Seconds 100581.36Kbytes/sec.
ftp> bye
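
For comparison's sake, converting both results above to the same units (plain arithmetic on the numbers already shown, nothing new measured):

local_raid5_write = 10_000_000_000 / 72.89 / 1_000_000   # the xxcopy run above
gigabit_ftp_write = 100_581.36 * 1_000 / 1_000_000       # ftp's reported Kbytes/sec

print(f"local RAID 5 write: {local_raid5_write:.0f} MB/s")   # ~137 MB/s
print(f"ftp over gigabit  : {gigabit_ftp_write:.1f} MB/s")   # ~100.6 MB/s, about 80% of the 125 MB/s wire rate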

Cliffs:

1. For home use, RAID 5 doesn't have to be as complex and expensive as it's sometimes said to be.
2. No RAID alone is a backup. Always have an external backup for data you care about.
 

stncttr908

Senior member
Nov 17, 2002
243
0
76
Very comprehensive. Thanks.

Right now I have a 500GB external HDD which I originally planned to sell off after this server build, but I've since reconsidered. I'll probably end up keeping it as a backup drive for system backups and important server backups.

Out of curiosity, are these your own benchmarks? What hardware are you running?
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
stncttr908,

>but with RAID 5 should that be enough security?

Folks who have a lot of experience with enterprise-grade RAID 5 arrays pretty much always have experience with the rebuild that failed. The complexity of RAID 5 (or of any RAID, to some extent) means more things to fail. RAID 5 in particular has an Achilles' heel in that the rebuild operation itself pushes the good drives hard, and as drives get bigger (and I'm not sure they're getting more reliable...) the odds of one of them failing during the rebuild keep going up. That's why there's so much push for RAID 6 now.
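
To give a rough feel for the numbers, here's a sketch assuming the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits read -- that spec is an assumption and real drives vary, but it shows why bigger drives make rebuilds scarier:

import math

URE_PER_BIT = 1e-14   # assumed spec: ~1 unrecoverable error per 1e14 bits read

def p_rebuild_hits_ure(surviving_drives, drive_bytes):
    """Chance that at least one bit read during a full rebuild is unrecoverable."""
    bits_read = surviving_drives * drive_bytes * 8
    # P(at least one URE) = 1 - (1 - p)^n, computed in a numerically stable way
    return -math.expm1(bits_read * math.log1p(-URE_PER_BIT))

print(f"{p_rebuild_hits_ure(3, 500e9):.0%}")   # 4x500GB RAID 5 rebuild: roughly 11%
print(f"{p_rebuild_hits_ure(3, 2e12):.0%}")    # same math with 2TB drives: roughly 38%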

Also, if you do software RAID 5, an OS crash can toast your array state, especially if your OS has a poor quality software RAID implementation. The nice thing about a hardware controller is that your OS can crash and at least the controller can keep the array in a good state. Granted, the controller firmware can have array-toasting bugs, too. But this is rare in controllers not made by problem vendor A.

Do you need ECC memory? I have some religion about this question, but that's because I'm an EE and I understand statistics. But then, I have ECC memory on all my desktop boxes, too, for the same reasons. Unfortunately again, on the Intel platform especially, it drives your cost way up.

When I say "vibration dampening case" I mean "quality case". Don't go cheap. A good build-quality case will dampen vibrations somewhat, and you can also do things yourself to improve that (see silentpcreview.com). Vibration is bad for the longevity of your drives, and often overlooked. Cheap cases transmit vibrations from one drive to the next. But cooling is very important, too. Heat is also bad for the longevity of your drives. Finally, you want a case you as a human can work with, it's not a huge pain to assemble and disassemble.

>I've been running RAID 0 for three years with zero problems.

Here's an opportunity to improve, I hope you'll consider doing so. Obviously, you have to make trade-offs.

1 & 2) If you're looking at four drives and on a budget, you're not doing hardware RAID. One approach would be to get an A64 SB600 board, an EE single-core A64 CPU, 2x256MB of ECC DDR2 memory, four WD5000AAKS drives, a good quality power supply and case, and run software RAID 1+0. That will give you 1TB of fast, reliable storage. Or you can have 1.5TB of slower, less reliable storage with software RAID 5. It's your cost vs. performance/reliability trade-off to make.
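
The capacity arithmetic behind those two numbers, for reference:

drives, size_gb = 4, 500

raid10_usable = (drives // 2) * size_gb   # mirrored pairs, then striped
raid5_usable  = (drives - 1) * size_gb    # one drive's worth of capacity goes to parity

print(f"RAID 1+0: {raid10_usable} GB usable, survives any single drive failure (and some double failures)")
print(f"RAID 5  : {raid5_usable} GB usable, survives exactly one drive failure")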

The back-of-the-napkin math says you're still a bit over $1000. If you are really tight on money and have an existing 7200.10 500GB IDE drive, you could use that in place of buying one new SATA drive. I don't think that's going to be great for performance, though.

(Aside: I've heard very mixed reviews of 7200.10 reliability, which is why I'm not recommending them. I have WD5000YS drives and like them very much, but I believe that the WD5000AA* series will be cooler and more reliable in the long run because they have one fewer platter. Unfortunately, there isn't a WD5000AAYS model yet. So while I like the longer warranty and theoretically the RE2 series has better vibration handling, I think the AA has advantages that will make it last longer in practice. But that's my highly unscientific opinion - you should research it. You could get 7200.10 SATA drives for basically the same cost if you prefer Seagate)

Now, if you want more than four drives and hardware RAID isn't supported by your budget, then jumping over to an ICH8R platform might make sense. You currently must give up ECC, which I consider a deal breaker but you might not. But that gives you six AHCI ports on the 8R. It's more expensive for the CPU and motherboard, though.

Supposedly JMicron's AHCI controller works. I don't know, it makes me a little nervous, trusting a brand-new no-name company with my data.

>Wow, board selections are frustrating.

No kidding. It's not really a great time to be buying either AMD or Intel systems. Both platforms are going through major changes and there's the threat of more price wars.

You might find that waiting as long as you can is to your financial benefit, maybe more so than usual.
 

stncttr908

Senior member
Nov 17, 2002
243
0
76
Out of curiosity, what is the difference between the AAKS and AAYS models? I'd like to get the WD5000AAKS drives and save $80 total over the RE2 series, but I've heard a bit about problems with the drives being dropped from RAID arrays sporadically and in the worst cases, every other boot.

Your outlook for RAID 5 is quite unsettling. The prospect of being unable to rebuild an entire array is scary. I suppose I should do some more research, since your horror-scenarios may be little more than FUD... ;) (kidding of course)

I'm not really 100% certain I'll be doing this build soon anyway. It might wait until after this semester of college (early May), but I've recently become increasingly paranoid about my data security.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: stncttr908
Out of curiosity, are these your own benchmarks? What hardware are you running?

Yes, those are benchmarks I ran on my own hardware. The hardware and software configurations I use for benchmarks vary according to goals and, of course, what I have available.

For the numbers I recently reported, the RAID 5 was on-board nVIDIA nForce 430/6150 on an Asus A8N-VM CSM with an X2 3800+ (stock speed) and 2 GiB RAM. The RAID 5 drive configuration was a frankenstein -- just something I had around -- two 300 GB Maxtors and one 500 GB Maxtor. I've gotten similar results with the two Maxtors and one Seagate 7200.10 (and one Maxtor, one Seagate, and one Raptor...). This is not to say that you should mix drive types in this crazy way; they're just what I had handy for the tests. Soon one or more of the Maxtors are going into another array, and that'll be the end of this configuration. The OS was running on a separate (PATA) drive.

I've disabled one core and reduced the RAM to 512 MiB for some transfer tests on this machine (which were still fine), but didn't do that here with the RAID 5 as that wasn't the point -- I wanted good on-board RAID 5 numbers, not to demonstrate the cheapest performance. (I think that dropping the CPU and RAM down might still give good RAID 5 write performance, but I didn't test it, so I have no real data on that.)

This machine currently runs XP Home and a Vista RC. The numbers I provided were from Vista. XP Home also performs well, but not quite as well.

The source data for the tests came from a couple of different RAID arrays. As any decent RAID array can give read performance that exceeds the gigabit cap, I'll leave out the details for now.

I used Marvell PCIe-based NICs (SysKonnect) for the network tests given here. I've used on-board and others at various times.

As to the hardware that I run to store my own data -- times have changed and are changing; I couldn't buy the same storage controller any more, and even if I could, I'd pick something different. I'll leave out these details as I don't think they'd give you a reliable example.

High end storage controllers, when configured and used properly, are nice. If you already have a good backup and money to spend on such a controller, don't get me wrong -- you can get some value for your money in terms of features, support, and reliability. But my point is that if you have a good backup, then even some "free" on-board RAID 5 implementations can get the job done and give good performance in home servers.
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
stncttr908, the YS is the RE2 (RAID edition), the KS is the SE16 (desktop). The RE2 has a five-year warranty, has TLER (which you can disable), and is supposed to have better vibration dampening/compensation. The SE16 has a three-year warranty (OEM) or one-year (retail). They have released the AA 160GB/platter PMR drive in the KS/SE16, but not yet in the YS/RE2. It makes some sense: the RE2 is supposed to be the higher-reliability drive, so they probably give the new design a few months on the street to make sure there are no unforeseen problems before releasing the RE2 version of it. The SE16 and RE2 drives are very similar, though.

This is similar to the difference between the Barracuda and the Barracuda ES, BTW.

In the case of WD, the RE2 drives are usually not much more expensive ($20ish). So it's really not much more to spend. But right now, there's no RE2 version of the AA drive to be had, and generally speaking fewer platters means more reliability.

(I plan on buying a pile of AAYS drives for my home fileserver when they come out, so I'm not just talking hypotheticals. I basically decided to wait and see here. The Seagate 7200.11 series also might be interesting, but 7200.10 quality is reported to be widely variable, not the rock-solid consistency I came to expect from the 7200.7 days.)

>I've heard a bit about problems with the drives being dropped from RAID arrays sporadically and in the worst cases, every other boot.

I have never heard of this - do you have a source?

There was a problem with RE2 drives dropping out of arrays with some controllers very occasionally, and this was fixed with a firmware update. I'll give WD credit, when they do have these kinds of problems they (eventually?) own up to it and publicly release a firmware update to fix it. Seagate has RAID performance problems with the 7200.9, and I still haven't gotten them to actually admit it and give me the firmware update to fix it (yes, there is a fix, I know all about it, but their support people don't seem to :( ).

Maxtor has a really bad track record on ATA/SATA RAID - perhaps you're thinking of them?

>Your outlook for RAID 5 is quite unsettling. The prospect of being unable to rebuild an entire array is scary. I suppose I should do some more research,

Please do more research, knowledge is good.

RAID 5 is okay as long as you have good backups, accept the risks, and either spend $$ on expensive hardware assists or accept the slower performance. RAID 10 gives better reliability, lower risk, and better performance. It used to be that RAID 5 was a lot cheaper, because drives and I/O ports to the drives were REALLY expensive. Now that drives are cheap, it might be more cost-effective, for example, to get another drive or two and do RAID 10 versus paying for a hardware controller to do RAID 5. A four-port Areca controller is about $300, and so are two WD 500GB drives. Yes, I oversimplify the comparison, but you see how the solutions compare now. If the drives were ten times more expensive, the math would be different and RAID 5 would be more compelling, but now drives are cheap (which is exactly why RAID is on everyone's minds in the first place).
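
Putting back-of-the-napkin numbers on that comparison (the ~$300 controller and ~$150-per-500GB-drive prices are assumptions based on this post, not current quotes):

drive_cost, drive_gb, controller_cost = 150, 500, 300   # assumed street prices

# Option A: four drives plus a hardware RAID 5 controller
hw_raid5_cost,  hw_raid5_gb  = 4 * drive_cost + controller_cost, 3 * drive_gb
# Option B: six drives, software RAID 1+0, no add-on controller
sw_raid10_cost, sw_raid10_gb = 6 * drive_cost, 3 * drive_gb

print(f"HW RAID 5 : ${hw_raid5_cost}, {hw_raid5_gb} GB usable")
print(f"SW RAID 10: ${sw_raid10_cost}, {sw_raid10_gb} GB usable")
# Same money, same usable space -- cheap drives have eroded RAID 5's old cost advantage.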

>It might wait until after this semester of college

I think that would be a good idea. On the Intel side, you'll have new C2D chips, price drops, and ICH9R. I expect AMD to have to drop prices pretty significantly to compete, though I don't expect any major new products. Either platform should be much more interesting in a couple months from now, enough that waiting probably would be a good idea.

>but I've recently become increasingly paranoid about my data security.

Good for you. Losing data sucks.

Do make sure you have a backup strategy. All this stuff is totally moot if you accidentally rm -rf /, or your OS decides to really shred your filesystem metadata. RAID is not a backup solution; it's an availability solution.
 

stncttr908

Senior member
Nov 17, 2002
243
0
76
@Madwand1 - Thanks for the figures/hardware description

@cmetz - About the AAKS dropping issue, here is the thread that was brought to my attention regarding it.

Basically, I think I've decided to hold off on the file server idea for a while. It's getting too expensive and I don't really think the need is there at this point. Perhaps in a few years when I plan to have a place of my own and a network of PCs (desktop, laptop, HTPC, etc.) and more $$$ to work with. Right now this project is getting way too expensive and is severely cutting into my desktop overhaul budget! For now I'm going to snag another 500GB and another enclosure for desktop/laptop/important data nightly backups and call it a day. Thoughts?

Thanks for the help. This thread is definitely an invaluable resource in terms of file server/RAID setups for me, and I'll definitely reference it when the time comes. Hopefully it helps some others out as well. I suppose I had a lot of misconceptions about RAID and considered it pretty failsafe.

Cheers.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: stncttr908
For now I'm going to snag another 500GB and another enclosure for desktop/laptop/important data nightly backups and call it a day. Thoughts?

Good idea. You'd be well ahead of the vast majority of home users by doing just this.

 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
stncttr908, okay, first off, that's an issue with a KS, not an AAKS. Second, I think there's a lot of misinformation in that thread.

Never make your boot volume a software RAID volume; always use a single boot/system volume and then make the software array data-only. Booting from software RAID has always been a bad place to be. If you truly need your system volume to have RAID, then you truly need hardware RAID.

I have read (but have not confirmed) that the Intel ICH8R is not a six-port controller; it's a four-port controller plus a two-port controller. This creates a bunch of gotchas that have been causing people grief when trying to set up RAID arrays across all six ports. I know this causes grief with the BIOS and trying to boot off of the array, and I think this might also cause grief if you use Windows and Intel's Windows RAID drivers. I don't think it causes trouble if you use OS drivers that treat every drive as fully separate and do the RAID function fully on top of the hardware (e.g., Linux md, and probably the high-end Windows Server editions' built-in software RAID support).

Also, if you look at that thread, the poster finally ran some diagnostics, the diags said he had bad cables, and when he replaced his cheap cables things got better. I hope everyone here understands that if you use substandard cables, you're going to have problems. That's one place not to skimp when building an important server, but people often forget the little things.

Backups are good, I don't think there's any downside to getting another drive and using it for backups. Just really really do them. It's easy to slack, and that's when something will fail...
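
Since "really really do them" is the hard part, here's a minimal sketch of the sort of nightly copy-to-an-external-drive job that covers the rm -rf and shredded-filesystem cases RAID can't (the paths are placeholders, not anyone's actual setup):

import datetime
import pathlib
import shutil

SOURCES = [pathlib.Path(r"C:\Users\me\Documents"), pathlib.Path(r"D:\music")]   # placeholder paths
DEST    = pathlib.Path(r"E:\backups")                                           # the external drive

stamp = datetime.date.today().isoformat()
for src in SOURCES:
    target = DEST / stamp / src.name
    shutil.copytree(src, target, dirs_exist_ok=True)   # Python 3.8+; copies the tree into a dated folder
    print(f"copied {src} -> {target}")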