
Unable to install Linux on SATA RAID 0 drive.

SandeepKRam

Junior Member
Hello guys, this is my first post here... and as luck would have it, it's about a very big problem.

I recently upgraded my system from a 32-bit Intel 2.4GHz to an AMD Athlon 64 3200+ Winchester with the Asus A8V Deluxe mobo.

I then went ahead and bought two Redington SATA 80GB disks and configured them to run in RAID 0 mode. I then successfully installed Win XP Pro on it. The PC correctly shows them as one 160GB drive. I created 4 partitions using PowerQuest Partition Magic 8.

C: is where I have XP
D:, E: and H: are the other partitions.

All of them are in NTFS format except for D:, which I created in ext3 format hoping to use it for Linux.

This is similar to my setup before upgrading - the only difference is that I used to install the two OSes, viz. Windows and Linux, on the two different hard disks I had back then. Now of course the PC shows both SATA disks as one 160GB volume.

Problem is that while installing Linux I get an error about the SATA RAID volumes. It says that Linux could not read the partitions on this disk, and that to initialise the drive I have to format the whole disk, which will of course cause data loss. It also mentioned that the 2.4 kernel supported my setup, but that this support is no longer available in the 2.6 kernel.

So I plugged in one more hard disk - a 40GB Seagate - and set it as the primary master. It showed up in Windows all right, and I was even able to install Linux to it, but the installer failed to put the boot loader onto the SATA disks. So I asked it to install the boot loader to the first sector of the Seagate HDD - it did install it, but at bootup the boot loader doesn't appear and the machine goes straight into Windows.

I have tried everything - changing the order of the SATA volume and the Seagate disk in the boot device priorities, and also setting the Seagate disk as disk no. 1 - but it didn't work.

I need help now.

I want to install Linux and I have tried FC4, SuSE 9.3, Mandriva LE 2005, and so many other distros to no avail. Please help me out. I even tried VMware, but that didn't work either.

The only thing left now is to completely disable the SATA RAID volume in the BIOS and try... but even if that worked, I can't go into the BIOS every time I have to change OS. Please help me.
 
Originally posted by: n0cmonkey
Software or hardware raid?


Well, I am still very new to RAID configs... I set up the RAID from the BIOS. My mobo has a built-in, onboard Promise RAID controller, and there is the FastBuild setup utility to configure the RAID in the BIOS. I used this utility to set up the RAID. I didn't install any other hardware apart from the two hard disks...
 
Originally posted by: SandeepKRam
Originally posted by: n0cmonkey
Software or hardware raid?


Well, I am still very new to RAID configs... I set up the RAID from the BIOS. My mobo has a built-in, onboard Promise RAID controller, and there is the FastBuild setup utility to configure the RAID in the BIOS. I used this utility to set up the RAID. I didn't install any other hardware apart from the two hard disks...

Ok, it's software RAID. Look for support for Promise RAID configurations in your distro's documentation.
 
Originally posted by: Mesix
Windows detects the ext3 partition? I didn't know windoze could do that!

:frown: Sarcasm?

I don't remember saying Windows detects ext3... you do know that you can see most types of filesystem formats in PowerQuest Partition Magic...

 
It's "software RAID"... so I would either a) use one disk for Windows and one for Linux, or b) use software RAID with Linux (Linux SW RAID rules).
 
Originally posted by: SandeepKRam
Originally posted by: Mesix
Windows detects the ext3 partition? I didn't know windoze could do that!

:frown: Sarcasm?

I don't remember saying Windows detects ext3... you do know that you can see most types of filesystem formats in PowerQuest Partition Magic...

Actually, I was surprised. Since it was given a letter, I assumed Windows detected it.

It never detected any of my non-NTFS partitions, so I was just surprised.
 
Well, I don't care whether it detects it and adjusts the drive letter accordingly... I can always change that later in Partition Magic. I just want the damn thing to run Linux. Any help is appreciated. 🙁
 
Linux developers don't care much about this sort of configuration.

This is because those RAID 0/1 controllers you generally get with SATA chipsets are snake oil. They are designed to pump up the number of 'bullet point' features when people are shopping and comparing motherboards... but they are actually quite useless in terms of real-world performance. Even though they call themselves RAID controllers, all of the work is done in software via the drivers. This is similar to how Winmodems or software modems work compared to 'real' modems: they offload the work the hardware normally does onto the CPU.

They are useless, and there are better ways to get real performance improvements with 2 drives, such as Linux's MD 'software RAID'. Linux's software RAID is very fast for software RAID and has had a lot of work put into performance and stability compared to the drivers you'd get for this sort of 'BIOS RAID' on onboard controllers. There are also other things like Linux's LVM (logical volume management), where you can combine and dynamically resize logical volumes (think virtual partitions) across multiple drives and such.

Here is the Linux SATA RAID FAQ. Read through that and you'll notice a certain pattern to the responses. Note that the problem only happens when you're running them in a BIOS RAID configuration; they all work as normal SATA controllers with no issues in Linux.

The major problem happens when you want to dual boot with Windows and you're using an onboard 'BIOS RAID' controller. There are some drivers that will make Linux work with these types of RAID controllers, but they aren't going to be simple to set up and use for a new Linux user. Generally speaking, it's not worth the effort.
 
Originally posted by: drag
Linux developers don't care much about this sort of configuration.

This is because those RAID 0/1 controllers you generally get with SATA chipsets are snake oil. They are designed to pump up the number of 'bullet point' features when people are shopping and comparing motherboards... but they are actually quite useless in terms of real-world performance. Even though they call themselves RAID controllers, all of the work is done in software via the drivers. This is similar to how Winmodems or software modems work compared to 'real' modems: they offload the work the hardware normally does onto the CPU.

They are useless, and there are better ways to get real performance improvements with 2 drives, such as Linux's MD 'software RAID'. Linux's software RAID is very fast for software RAID and has had a lot of work put into performance and stability compared to the drivers you'd get for this sort of 'BIOS RAID' on onboard controllers. There are also other things like Linux's LVM (logical volume management), where you can combine and dynamically resize logical volumes (think virtual partitions) across multiple drives and such.

Here is the Linux SATA RAID FAQ. Read through that and you'll notice a certain pattern to the responses. Note that the problem only happens when you're running them in a BIOS RAID configuration; they all work as normal SATA controllers with no issues in Linux.

The major problem happens when you want to dual boot with Windows and you're using an onboard 'BIOS RAID' controller. There are some drivers that will make Linux work with these types of RAID controllers, but they aren't going to be simple to set up and use for a new Linux user. Generally speaking, it's not worth the effort.

I agree with most of that, except the bolded statement. There IS a big difference, at least in the things I do. I am always the first into a map in any game I play over the internet (or LAN, actually). I've noticed a HUGE increase in speed when copying files on the same disk array, and Windows went from taking 3-4 bars (the XP loading bar) to less than 1, and the RAID array was the only change.

With that aside, "software" RAID is definitely not as good as hardware RAID for one huge reason: RAID cards can massively increase the STR of drives because of the extra cache. I can copy a 300MB file in about 1 second with my array (2x 74GB Raptors) because my controller card has 128MB of cache.

Edit: sorry, didn't mean to resurrect this thread. I searched for Unix RAID because I'm about to give that a shot and found this thread.
 
If you want to "RAID" Linux, either get a supported h/w RAID card or use md (software RAID). S/W RAID in Linux is terrific, and I use it all the time, especially in lower-end boxes where I want a little bit of data redundancy with 2 different-sized drives, e.g. one 10 gig and one 20 gig. I will run 10 gigs for /, swap, and /boot, and then the 10+10 gig partitions as RAID 1 for /home or /var, depending on usage. If you tried to run RAID 1 in Windows on that setup, you would only be able to get a single 10 gig mount point.
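To make the arithmetic in that layout concrete, here is a tiny sketch (plain Python; the 10 GB + 20 GB drive pair and the sda/sdb names are just the hypothetical example from the post, not anything md-specific):

```python
# Sketch: usable capacity when mirroring partitions of unequal drives.
# Sizes are in GB; the 10 GB + 20 GB pair is the example from the post.

def mirror_capacity(partition_sizes):
    """A RAID 1 set is only as big as its smallest member."""
    return min(partition_sizes)

drives = {"sda": 20, "sdb": 10}

# Carve 10 GB off the 20 GB drive for /, swap, and /boot...
system = 10
# ...then mirror the remaining 10 GB against the whole 10 GB drive.
mirror = mirror_capacity([drives["sda"] - system, drives["sdb"]])

print(system)  # unmirrored space for /, swap, /boot
print(mirror)  # redundant space for /home or /var
```

Windows mirroring on the same pair would only ever give you the single 10 gig mount point, since it can't split the larger drive this way.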
 
Linux developers don't care much about this sort of configuration.

Not true; dmraid supports many (no idea how many there really are) of the BIOS-assisted software RAID formats.
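For what it's worth, a rough sketch of how dmraid is typically driven (the command names and flags are real, but the pdc_* set names shown are illustrative, and this needs root plus the actual Promise hardware to do anything):

```shell
# List the RAID sets dmraid recognises from on-disk metadata
dmraid -r

# Activate all discovered sets; the array then appears as a
# device-mapper node (Promise FastTrak sets are usually named pdc_*)
dmraid -ay
ls /dev/mapper/    # e.g. pdc_XXXXXXXX, pdc_XXXXXXXXp1, ...
```

Once the /dev/mapper node exists, the installer can partition and boot from it like any other disk, assuming the distro's initrd also runs dmraid at boot.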
 
Originally posted by: Bigsm00th
I agree with most of that, except the bolded statement. There IS a big difference, at least in the things I do. I am always the first into a map in any game I play over the internet (or LAN, actually). I've noticed a HUGE increase in speed when copying files on the same disk array, and Windows went from taking 3-4 bars (the XP loading bar) to less than 1, and the RAID array was the only change.

I've seen some benchmark tests where it didn't seem to affect loading times much, but it probably depends on the exact game. If it is loading data sequentially, and it is not CPU- or memory-bound during the loading operation, a RAID0 will help. If it is pulling data from a lot of different files, and then has to do a lot of work on the fly to parse/build the geometry of the level, it won't help you that much.

RAID0 will speed up copying big files around, whether software or hardware based. The amount of overhead for 'software' RAID0 is very minimal; essentially, it just maps blocks from the virtual disk to one of the actual hard drives, which is a very fast operation. With RAID5/6, hardware is significantly better than software. And a caching controller can help mitigate the write penalty of mirrored RAID levels (like RAID1 and 1+0), as well as the thrashing that you get when copying files within the same physical drive/array.
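That "maps blocks from the virtual disk to one of the actual hard drives" step is just modular arithmetic, which is why software RAID0 overhead is so low. A toy sketch (simplified to per-block round-robin; real arrays stripe in multi-KB chunks, but the mapping is the same):

```python
# Toy RAID 0 address translation: virtual block -> (disk index, block on disk).
# Real implementations stripe in chunks (e.g. 64 KB), not single blocks,
# but the translation is identical modular arithmetic.

def raid0_map(virtual_block, num_disks):
    disk = virtual_block % num_disks      # which member holds the block
    offset = virtual_block // num_disks   # where on that member it lives
    return disk, offset

# With two disks, consecutive blocks alternate between members:
assert raid0_map(0, 2) == (0, 0)
assert raid0_map(1, 2) == (1, 0)
assert raid0_map(5, 2) == (1, 2)
```

Two integer operations per request is negligible next to a disk seek, which is why there is essentially no performance argument for doing RAID0 in hardware.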

With that aside, "software" RAID is definitely not as good as hardware RAID for one huge reason: RAID cards can massively increase the STR of drives because of the extra cache. I can copy a 300MB file in about 1 second with my array (2x 74GB Raptors) because my controller card has 128MB of cache.

The cache has almost nothing to do with increasing the STR; STR (for large data transfers) is based almost entirely on how fast the drives are, and how many are striped together. What the cache does is help immensely if you request the same data multiple times, and really smart controllers will look at access patterns and try to prebuffer data if they think you are doing more-or-less-sequential reads/writes. But if you just want to read or write a big sequential chunk of data, the amount of cache on the controller is pretty much irrelevant, since you will be bottlenecked by the drives.

Writeback cache *will* dramatically improve write performance for small write operations (ones that can fit mostly or entirely in the cache), since you do not have to wait for the data to be written to the disks (and so you can transfer at the speed of the CPU->controller interface). But if you are working with large amounts of data, you'll still be working at the speed of the drives. If your files were 2GB in size rather than 300MB, you'd see almost no difference between a software RAID0 and a hardware RAID0 with 128MB of cache.
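A back-of-envelope sketch of that claim (the numbers are illustrative assumptions: a 128 MB write-back cache absorbed "for free", and drives that drain at a sustained 100 MB/s):

```python
# Rough model: a write-back cache absorbs the first min(size, cache) MB at
# bus speed (treated as instant here); the rest drains at drive speed.

CACHE_MB = 128
DRIVE_MBPS = 100  # sustained transfer rate of the array (illustrative)

def write_seconds(size_mb, cache_mb=CACHE_MB):
    uncached = max(0, size_mb - cache_mb)
    return uncached / DRIVE_MBPS

print(round(write_seconds(300), 2))             # 300 MB: cache hides ~43% of it
print(round(write_seconds(2048), 2))            # 2 GB: cache hides ~6% of it
print(round(write_seconds(2048, cache_mb=0), 2))  # 2 GB with no cache at all
```

The 300 MB copy drops from 3.0 s to 1.72 s, but the 2 GB copy only drops from 20.48 s to 19.2 s: as the transfer grows, the cache's contribution shrinks toward nothing.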
 
Well, my understanding of software RAID is that Linux 'MD' software RAID 5 is generally faster than any hardware RAID.

This is because RAID 5 cards use a general-purpose processor (as opposed to something like your video card, which is very specialised) to calculate the parity needed for RAID, and that processor is much, much slower than even the low-end general-purpose CPUs we have in our computers. Think something like a 200-400MHz processor versus a 3.0GHz Pentium 4.

When we were all running 333MHz Pentiums, getting hardware RAID would significantly lower the CPU overhead for the box while increasing reliability and disk transfer speed... but it's not like that anymore. Modern CPUs are insanely fast.
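The work those controller CPUs are doing is mostly parity: in RAID 5 the parity block is just the bytewise XOR of the data blocks in a stripe, which a modern host CPU chews through at close to memory speed. A minimal sketch of both the calculation and why it gives redundancy:

```python
# RAID 5 parity is the bytewise XOR of the data blocks in a stripe.
# If any one block is lost, XORing the survivors reproduces it.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\xff\x00"
parity = xor_blocks([d0, d1, d2])

# Simulate losing d1: the surviving data plus parity reconstruct it.
assert xor_blocks([d0, d2, parity]) == d1
```

The per-byte loop here is deliberately naive; the kernel's md driver does the same XOR over whole words (and with SSE/MMX where available), which is why a 3.0GHz CPU outruns a 200MHz controller at this job.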

See here for benchmarks where Linux software RAID is twice as fast as hardware RAID on the same machine with the same hard drives: http://www.chemistry.wustl.edu/~gelb/castle_raid.html

And that's not the only place where I see things like that... Linux MD software RAID has performed better in other benchmarks, even versus other software RAID solutions. To me it seems like one of the substantially better things Linux has going for it on the server side.

(And there isn't anything wrong with the drivers either... look towards the end of the benchmark page for more examples.)

But generally you don't run RAID for speed. What it is for is higher hardware reliability (well, actually availability, I guess)... Hardware RAID devices offer things like hot-swappable drives, hot spares, and data protection features that you don't generally get with any software RAID setup. With software RAID it's not uncommon for a failing drive to spout gibberish that gets itself 'RAID'd' across all the drives before the drive fails badly enough to crap out and get the administrator's attention.

With Linux software RAID you can combine any devices or partitions you want into any configuration you want. If you want, you can combine 3 partitions on the same disk into a 'software RAID', though obviously that is impractical. Hot swap and hot spares are theoretically supported in Linux, but the SATA support is such that you can't really do live hot swapping because of limitations in the libata drivers. Many SATA controllers/drives would freak out anyway, even if there were proper support for that sort of thing.

With RAID 5 software RAID in Linux there are also other limitations: disk activity comes with higher overhead, and it's easy to saturate your PCI bus with hard drive activity.

As time progresses, though, these limitations are smaller than they used to be. With PCI Express serial buses you can pack substantially more hard drives into a PC and still get good performance. Multicore CPUs can handle very high disk-activity load on one core while providing full single-threaded performance on another.

I think that now, and going into the future, the way to get the highest-speed disk access possible is to use an entire computer as a simple 'disk drive'. A dedicated PC with a multicore CPU and many disks on a PCIe bus, with dedicated low-latency, high-speed networking interconnects, is probably the hot ticket to fast storage, IMO.

For reliability, backups are much better than RAID anyway.

As for the 'BIOS' RAID 0 (all these really are is vendor-specific proprietary software RAID with BIOS assistance to work around Windows' built-in limitations) having better results for the OP: it's probably more subjective than you think. You're comparing your old system against a fresh Windows XP install and a fresh game install on a filesystem with no fragmentation, AFTER you just more than doubled the cost per gigabyte of disk storage for your computer. It's going to be very hard to be objective.

See here:
http://faq.storagereview.com/SingleDriveVsRaid0

Storage Review has done lots of benchmarking and shown where RAID 0 is advantageous and where it's not. They have a handful of articles on their site with benchmarks and explanations if you want to look for them.

Their conclusion, which I tend to agree with, is:
To summarize, RAID 0 offers generally minimal performance gains, significantly increased risk of data loss, and greater cost. That said, it offers the ability to have one large partition using the combined space of your identical drives, and there are situations where the benefits outweigh the disadvantages. It is your computer: the choice is up to you.

Especially the 'choice is up to you' part. There are cases where it's good, but I'd rather just buy more RAM and have the OS do aggressive disk caching... It'll be slow the first time you run stuff, but after that it'll be much faster than having a big disk array.
 
Originally posted by: drag
Well, my understanding of software RAID is that Linux 'MD' software RAID 5 is generally faster than any hardware RAID.

This is because RAID 5 cards use a general-purpose processor (as opposed to something like your video card, which is very specialised) to calculate the parity needed for RAID, and that processor is much, much slower than even the low-end general-purpose CPUs we have in our computers. Think something like a 200-400MHz processor versus a 3.0GHz Pentium 4.

When we were all running 333MHz Pentiums, getting hardware RAID would significantly lower the CPU overhead for the box while increasing reliability and disk transfer speed... but it's not like that anymore. Modern CPUs are insanely fast.

Yes and no. Most hardware RAID5 cards use some sort of general-purpose processor to do the RAID calculations (rather than custom logic), but they are generally fast enough to push as much data as the interface can take. Could your CPU do it? Sure. But your CPU and system RAM have better things to do in most situations.

See here for benchmarks where Linux software RAID is twice as fast as hardware RAID on the same machine with the same hard drives: http://www.chemistry.wustl.edu/~gelb/castle_raid.html

I don't know enough about the bench they're running (ie, was that write-only? read-write? what kind of hit ratios?), but the performance on that controller looks VERY low. It also looks like these are not caching controllers (that is, they do not have a substantial chunk of onboard memory), which can hurt performance significantly in RAID5 (since either the driver is buffering data in local memory, or you can only work in write-through mode).

And that's not the only place where I see things like that... Linux MD software RAID has performed better in other benchmarks, even versus other software RAID solutions. To me it seems like one of the substantially better things Linux has going for it on the server side.

If you have CPU time and RAM to burn, software RAID (even RAID5) is just as good. If you have a server that needs all the CPU time it can get to interface with clients, hardware storage controllers will do a lot better overall than software RAID.

But generally you don't run RAID for speed. What it is for is higher hardware reliability (well, actually availability, I guess)... Hardware RAID devices offer things like hot-swappable drives, hot spares, and data protection features that you don't generally get with any software RAID setup. With software RAID it's not uncommon for a failing drive to spout gibberish that gets itself 'RAID'd' across all the drives before the drive fails badly enough to crap out and get the administrator's attention.

You can certainly run RAID for speed -- if you know what kind of access patterns you will be facing. A RAID0 will be *way* faster than a single drive for heavy sequential read-write access, and RAID5 is almost as fast for lots of sequential reads and only occasional writes.

I think that now, and going into the future, the way to get the highest-speed disk access possible is to use an entire computer as a simple 'disk drive'. A dedicated PC with a multicore CPU and many disks on a PCIe bus, with dedicated low-latency, high-speed networking interconnects, is probably the hot ticket to fast storage, IMO.

That's basically a NAS/SAN box. In practice, once you start going that way, it quickly becomes more effective to use specialized hardware than just a normal PC acting as a fileserver.

For reliability, backups are much better than RAID anyway.

As is often repeated but rarely listened to, RAID can never replace backups. 😛 No amount of RAID, for instance, can save you from physical destruction of the fileserver (due to a natural disaster, fire, etc.)
 
Also, with any RAID array, when you "accidentally" delete the file, the SEC can't get it off your RAID array; they can, however, get it off your tapes 🙂
 
Originally posted by: Matthias99
Originally posted by: drag
Well, my understanding of software RAID is that Linux 'MD' software RAID 5 is generally faster than any hardware RAID.

This is because RAID 5 cards use a general-purpose processor (as opposed to something like your video card, which is very specialised) to calculate the parity needed for RAID, and that processor is much, much slower than even the low-end general-purpose CPUs we have in our computers. Think something like a 200-400MHz processor versus a 3.0GHz Pentium 4.

When we were all running 333MHz Pentiums, getting hardware RAID would significantly lower the CPU overhead for the box while increasing reliability and disk transfer speed... but it's not like that anymore. Modern CPUs are insanely fast.

Yes and no. Most hardware RAID5 cards use some sort of general-purpose processor to do the RAID calculations (rather than custom logic), but they are generally fast enough to push as much data as the interface can take. Could your CPU do it? Sure. But your CPU and system RAM have better things to do in most situations.

It really depends. If the PC is just for that task, then it doesn't matter; you'd use its CPU to improve the performance of other systems doing more important stuff.

How much does a good hardware RAID card cost?

I can go out and buy a 400-dollar Dell 'server' PC, add a few SATA PCI adapters, and stuff a terabyte of RAID'd disk space into it for under a thousand bucks.

It's flexible, it's fast. I can share its disk space between a dozen 'servers'. I can use GFS and EVMS/LVM2 to manage shared disk access over the network so it behaves like a local filesystem. And there are other distributed file server options for high-speed network disk access... stuff that is much nicer and has less overhead than dealing with traditional file services like NFS or CIFS/SMB.

Those servers end up smaller: less rackspace, less energy, and so on and so forth.

See here for benchmarks where Linux software RAID is twice as fast as hardware RAID on the same machine with the same hard drives: http://www.chemistry.wustl.edu/~gelb/castle_raid.html

I don't know enough about the bench they're running (ie, was that write-only? read-write? what kind of hit ratios?), but the performance on that controller looks VERY low. It also looks like these are not caching controllers (that is, they do not have a substantial chunk of onboard memory), which can hurt performance significantly in RAID5 (since either the driver is buffering data in local memory, or you can only work in write-through mode).

Look around and you'll find other benchmarks that show Linux software RAID is generally faster. These aren't the first benchmarks I've found that show this.

It's kind of a dirty little secret. It's easier for hardware vendors to just point to 'low overhead' and 'faster' than to try to tell customers that buying their products may not increase performance, but will provide other tangible benefits.

And that's not the only place where I see things like that... Linux MD software RAID has performed better in other benchmarks, even versus other software RAID solutions. To me it seems like one of the substantially better things Linux has going for it on the server side.

If you have CPU time and RAM to burn, software RAID (even RAID5) is just as good. If you have a server that needs all the CPU time it can get to interface with clients, hardware storage controllers will do a lot better overall than software RAID.

I like the idea of having dedicated file servers sharing resources to other systems that do more speed-critical tasks. Basically a NAS/SAN, like you said.

With things like InfiniBand becoming affordable, LVM making it possible to manage storage pools of PCs the same way you manage storage pools of individual drives inside a server (kind of like using software RAID), PCI Express alleviating PC bus limitations, etc., I can see PC clustering storage arrays replacing SANs for most people.

For instance, Archive.org doesn't use SANs; it doesn't even use RAID. It just uses lots and lots and lots of Mini-ITX machines with a few disk drives apiece running Linux, which they mirror across each other to ensure high availability, reliability, and decent performance.

The last time I read about Google, they were using PC clusters in a similar way. They would have a load-sharing cluster of PCs that acted as one unit, a 'node', and then multiple such nodes that would individually respond to search-engine requests. The nodes were mirrored and had load balancing and hot-spare capabilities. If a PC in a node blew up, they would shut down that node and a hot spare would take over automatically. The techs could then attend to the hardware failure at their convenience with no loss of service to anybody. They wrote their own 'GFS', the Google File System, to do all of that with (which is different from Red Hat's GFS, the Global File System).

As it is now... I can take 5 500-gig drives and stick them, with a couple of disk controllers, into a 400-600 dollar PC. Then take 3 more machines identical to that one, give them a couple of multi-port Ethernet cards, and combine them with nice switches for the 'storage fabric'.

On those dedicated storage machines I would set up GNBD, which is a way to provide direct block access to storage over a regular network. So basically those machines would appear as regular disk drives to any client - and the clients in turn would be the actual servers doing CIFS to Windows clients, or OpenAFS or NFS to Linux workstations, or running Apache or Oracle or whatever.

Basically, GNBD would expose the storage those PCs export as /dev/sd* devices. I would use CLVM to manage those block devices like I would regular disk drives. CLVM is an extended version of regular Linux LVM2 with the ability to work over a network and handle multiple machines accessing the storage pool at the same time, plus some other features.

So in actuality I would create 6+ terabytes of shared storage for my servers using commodity PCs and free software. It would be faster and have less overhead for my individual application servers than if I went with hardware RAID in each of them, and at a fraction of the cost of a SAN.

Obviously it's inferior to a 'real' SAN, but it would work if I needed the storage size but lacked the budget.

GFS and friends also allow you to do that with SANs, which is what I suppose most people use them for nowadays.

So you can go:
SAN ---> 'SAN fabric' ---> multiple servers with their filesystems managed by GFS ---> regular Ethernet ---> end user clients
or
SAN ---> 'SAN fabric' ---> Linux GNBD servers ---> Ethernet storage backbone ---> multiple regular Linux servers ---> regular Ethernet network ---> end user clients
(This lets you save costs and improve performance of the SAN without having to scale it out across your entire network... the servers can be located on, and interface with, the regular Ethernet network in different places to reduce traffic loads and isolate PC clients from one another, that sort of thing.)
or
Linux file server clusters running GNBD ---> Ethernet storage backbone ---> multiple regular Linux servers ---> regular Ethernet network ---> end user clients

The last scenario is what I was talking about.

But there is also lots of other stuff in the works. It's not as mature as GFS and CLVM (which is being worked on to expand its capabilities), which people use in the enterprise right now.

For instance, you can use ddraid... it lets you take network block devices and run actual software RAID over the network. That's definitely not something you can do with hardware RAID!
http://sources.redhat.com/cluster/ddraid/

I can use OCFS2, the Oracle Cluster File System version 2, which is useful for general-purpose stuff despite its name.
http://oss.oracle.com/projects/ocfs2/

OCFS2 should actually make it into the vanilla kernel proper one of these days.

There is Lustre, a high-performance distributed filesystem used in many Linux clusters today... including the Top500 stuff.
http://www.lustre.org/

For instance, hardware vendors are currently selling special-purpose Lustre solutions for scientific applications where very high I/O is needed. Using PCs (with hardware RAID, btw) they are able to set up a parallel network-based filesystem that has hit over 10Gb/s two-way file transfer performance on PC clusters. You can go out right now and buy pre-built systems using exotic interconnects like Quadrics that sustain file transfer performance of around 2.5Gb/s.
http://www.taborcommunications.com/hpcwire/hpcwireWWW/04/1203/108916.html

All that filesystem stuff is open source right now.

Right now this sort of thing should be possible using regular PCs on a regular budget... but it isn't. Ethernet is a big limitation, but as things like InfiniBand or Myrinet get cheaper, it will become more and more practical.

If you need this stuff RIGHT NOW, obviously SANs are the only practical solution for the vast majority of people. But I don't think it will always be like that.
 
Originally posted by: drag
<snip -- lots of good info>

If you need this stuff RIGHT NOW, obviously SANs are the only practical solution for the vast majority of people. But I don't think it will always be like that.

Pretty much sums it up.

You can get hardware controllers that suck, and 'software' RAID controllers (or even pure OS-level RAID) that are great. Just being 'hardware RAID' doesn't make a controller good (or vice versa), and I didn't mean to imply otherwise.

One problem you'll find with the '$400 PC with some controllers/disks' is that, unless you use PCIe or PCI-X, you'll be bandwidth-starved between the CPU and disks, especially if you have multiple striped arrays in one PC. Using a lot of systems (and putting only a few disks in each one, with a lot of bandwidth between them) mitigates this, but then you're paying $400-500 for each array, and you can get cheap NAS appliances in the $400-500 range (and maybe even less at this point)!
 