Old 12-21-2012, 11:47 AM   #1
TecHNooB
Diamond Member
 
 
Join Date: Sep 2005
Posts: 7,440
Want to set up a home server system

I'd like to set up a simple home server to throw files onto. Need recommendations for hardware. Not sure if I want to go prebuilt or build it myself. I want to be able to put the system in a RAID configuration (the one that uses one drive to protect N drives, i.e., RAID 5). If you have experience with this, please share any advice you have.
__________________
Sushi is good

Quote:
Originally Posted by brianmanahan View Post
smoblikat you fool, you fell victim to one of the classic blunders! the most famous of which is "never post your actual pay in an ATOT brag thread", but only slightly less well known is this: "never post your actual pay when alkemyst is online!!" hahahahaha
Old 12-21-2012, 01:33 PM   #2
DaveSimmons
Elite Member
 
Join Date: Aug 2001
Location: Bellevue, WA
Posts: 37,199

If all you want is a place to dump files, there are NAS (Network Attached Storage) appliances that connect to your router. Those can be cheaper, smaller, and lower-power than building a general-purpose PC, even a mini-ITX one.

So:
- what are the exact uses?
- how much storage do you need? With RAID 1 you only get one drive's capacity; with RAID 5 you get the total of all drives less one (four 2 TB drives = 3 × 2 TB = 6 TB usable; quick sketch below).

Remember that even with RAID you can lose files to hardware failure from fire, flood, theft, squirrels. And you can lose files to human error or a virus/trojan that alters files. RAID only stops immediate file loss from disk failure.
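A minimal sketch of that capacity arithmetic in shell; the drive count and size are purely illustrative:

Code:
# Usable capacity for N identical drives of SIZE TB (illustrative numbers).
N=4; SIZE=2
echo "RAID 1: ${SIZE} TB (a mirror gives you one drive's capacity)"
echo "RAID 5: $(( (N - 1) * SIZE )) TB (one drive's worth of parity)"
echo "RAID 6: $(( (N - 2) * SIZE )) TB (two drives' worth of parity)"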
Old 12-21-2012, 01:49 PM   #3
TecHNooB
Diamond Member
 
 
Join Date: Sep 2005
Posts: 7,440

I mainly want to use the server as a repository (for source code and documents) and for storing music/movies. I plan on using Linux/SVN/Apache. However, I have never actually used any of these things, which is why I seek advice.

Preferably, I'd want a small form factor and enough SATA ports for ~4 HDDs. Need suggestions on which parts to get. Low power is a plus.
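For the SVN-over-Apache part, a minimal sketch assuming a Debian/Ubuntu box of that era; the package names, paths, and repository name are all illustrative:

Code:
# Install Subversion plus the Apache DAV module (illustrative package names).
sudo apt-get install apache2 subversion libapache2-svn

# Create a repository where mod_dav_svn can serve it.
sudo svnadmin create /srv/svn/projects
sudo chown -R www-data:www-data /srv/svn/projects

# Minimal Apache location block (e.g. in /etc/apache2/mods-enabled/dav_svn.conf):
#   <Location /svn>
#     DAV svn
#     SVNParentPath /srv/svn
#   </Location>

# From any client on the LAN:
svn checkout http://server.local/svn/projects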
Old 12-21-2012, 02:09 PM   #4
rsutoratosu
Platinum Member
 
Join Date: Feb 2011
Posts: 2,281

How much space do you need? Basically I'm using an HP MicroServer with an HP RAID adapter and four 4 TB Hitachi drives: 16 TB unraided, 12 TB raided.

If you need to back those up, add two 6 TB USB drives, or adjust to your size.

My HP MicroServer draws around 87 W with 4 drives, tested on the Kill A Watt. It runs Windows Server 2008 R2 64-bit with 8 GB of memory. I think the setup cost me around $1,500 per server. It's pricey, but I like my performance.

I used to have those WD, Linksys, and Buffalo NAS units; they were always sluggish and not as fast as a real server. Another option is a Linux NAS. There are other, cheaper alternatives.
Old 12-21-2012, 02:15 PM   #5
TecHNooB
Diamond Member
 
 
Join Date: Sep 2005
Posts: 7,440

I probably won't need more than 4 TB. I'd rather not go with the NAS option. Would a mini-ITX case be too small?
Old 12-21-2012, 02:19 PM   #6
TheStu
Moderator
Mobile Devices & Gadgets
 
Join Date: Sep 2004
Location: New Cumberland, PA
Posts: 11,193

Quote:
Originally Posted by TecHNooB View Post
I probably won't need more than 4 TB. I'd rather not go with the NAS option. Would a mini-ITX case be too small?
There is the Fractal Design Node 304; it holds six 3.5" drives if you don't have a dedicated GPU in there (which you wouldn't need).
__________________
Quote:
Originally Posted by mfenn View Post
The 6770M can play Crysis 2, for suitably small values of play
Old 12-21-2012, 02:47 PM   #7
thepriceisright
Junior Member
 
Join Date: Dec 2003
Posts: 19

How much are you looking to spend? Do you have any existing hardware that you think you could reuse?

Home file servers are simple things, so you really shouldn't fret too much over the hardware. Picking up something used would be a good idea IMO; check out the For Sale thread (I would look for you, but my post count is too low) to see if anything catches your eye there.

I might suggest going with a dedicated RAID card, though. Once you have an array that's more complicated than a mirror, it's nice to make the thing transplantable.
Old 12-21-2012, 03:05 PM   #8
TecHNooB
Diamond Member
 
 
Join Date: Sep 2005
Posts: 7,440

Going for a cheap/compact/low-power build. I don't build enough systems to have spare parts lying around, so I'll probably have to buy everything. Taking suggestions on which motherboard to get.
Old 12-21-2012, 09:22 PM   #9
Talaii
Member
 
Join Date: Feb 2011
Posts: 31

Quote:
Originally Posted by TecHNooB View Post
Going for a cheap/compact/low-power build. I don't build enough systems to have spare parts lying around, so I'll probably have to buy everything. Taking suggestions on which motherboard to get.
I'd go with either an Intel ITX board (the Intel NIC is better than the Realtek/Broadcom ones; 4 SATA ports) or an ASUS P8H77-I (six SATA ports, Realtek NIC). Throw in a Celeron G5xx and some RAM and you're good to go. I'm using a P8H77-I with four WD Greens in RAID 5 (using mdadm) and a PCIe Intel network card, and it works wonderfully. I can easily saturate gigabit even with the slower drives; sequential reads/writes to the array do around 350/200 MB/s. The hard part is finding a nice compact case that fits as many drives as you want. I have a Chenbro ES34169, which unfortunately is no longer available (as far as I can tell).
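A minimal sketch of that mdadm RAID 5 setup; the device names, filesystem, and mount point are all illustrative:

Code:
# Build a 4-disk RAID 5 array (device names are illustrative).
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on it and mount it.
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/storage

# Record the array so it assembles on boot (config path varies by distro).
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf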

The other option is to get a MicroServer. They have a lot less CPU power (to be fair, you really don't need more CPU power), but they support ECC memory and come as a nice compact box with everything inside. Again, the integrated NIC is not ideal (a PCIe Intel card would be a nice addition), but for Linux it's not a huge difference. If you wanted to run FreeNAS or some other ZFS setup, it would be; their drivers for the integrated NIC are apparently terrible.

I'd avoid a hardware RAID card, actually; it ties you to another point of failure (if the RAID card dies, you'll usually need to buy a second identical card to recover the array). Unless you really need absolute maximum performance, software RAID is ideal: plug the disks into any other Linux computer and rebuild the array there. Rebuilding an md array is relatively easy as long as the disks are intact, whereas when the hardware RAID card I used to have died, all I could do was search eBay for a duplicate. The only real advantage of a hardware RAID card in a home server is making it simple to boot off the RAID array; I'd rather just get a small SSD/laptop drive/USB stick to run the OS.
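A sketch of that recovery path after moving the disks to a fresh Linux box; the array name is illustrative:

Code:
# Scan all attached disks for md superblocks and reassemble any arrays found.
sudo mdadm --assemble --scan

# Confirm the array came back and watch any rebuild in progress.
cat /proc/mdstat
sudo mdadm --detail /dev/md0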
Old 12-21-2012, 09:28 PM   #10
Red Squirrel
Lifer
 
 
Join Date: May 2003
Location: Canada
Posts: 27,387

If you don't mind spending more money, look at Supermicro systems. I've been wanting to get one to upgrade my current whitebox system. Get one with something like 8 drive bays, use Linux md RAID, and put the OS on an SSD (for reliability, mostly).

You can also go cheaper and build a server yourself using just desktop parts. That's what my current server is. I just got a rackmount case and stuck two 5-bay cages into it. I currently have 7 drives in RAID 5 (one is a hot spare). Ideally I'd like to go RAID 6, but my current kernel does not support live RAID 5 to RAID 6 conversions, so I'll do that whenever I upgrade my server and thus the OS (sketch of the eventual command below).
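On a newer kernel/mdadm, that live conversion is a single long-running reshape; a hedged sketch, with the device name, disk count, and backup path all illustrative:

Code:
# Reshape RAID 5 to RAID 6 in place; needs one extra disk for the second
# parity, and a backup file to protect the critical section of the reshape.
sudo mdadm --grow /dev/md0 --level=6 --raid-devices=7 \
    --backup-file=/root/md0-reshape.backup
cat /proc/mdstat   # reshape progress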

You can also go even cheaper and just buy a prebuilt PC like a Dell or eMachines or something. Even a Celeron will do fine. You'll need to somehow fit a drive cage into it, and possibly a SATA card. Personally I would not take that option, but if you are on a budget, it would work.

I've been wanting to build myself a SAN using one of the Supermicro 24-bay cases. Been looking into it, but it won't be cheap at all. I want to start working on my basement before I buy something like that, though.
__________________
~Red Squirrel~
486dx2 @66Mhz turbo, 8MB ram, 512MB HDD, sound blaster 16 + 2x cdrom, Trident 1MB video card @ 640*480, 56k high speed modem.
Old 12-22-2012, 03:57 AM   #11
lakedude
Golden Member
 
Join Date: Mar 2009
Posts: 1,410

Couple of things.

Instead of hardware RAID, consider using ZFS (a minimal sketch follows the links below).

http://www.youtube.com/watch?v=QGIwg6ye1gE

http://en.wikipedia.org/wiki/ZFS

http://www.youtube.com/watch?v=6F9bscdqRpo
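A minimal ZFS sketch in that spirit; the pool, dataset, and device names are all illustrative:

Code:
# Create a single-parity raidz pool (ZFS's RAID 5 analogue) from four disks.
sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Carve out a dataset for media and check pool health.
sudo zfs create tank/media
sudo zpool status tank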

Hardware-wise, we have had trouble streaming 1080p over wireless. You are going to need wired or really fancy wireless to stream high-bitrate 1080p...
Old 12-22-2012, 11:51 PM   #12
thepriceisright
Junior Member
 
Join Date: Dec 2003
Posts: 19

Quote:
Originally Posted by Talaii View Post
I'd avoid a hardware RAID card, actually; it ties you to another point of failure (if the RAID card dies, you'll usually need to buy a second identical card to recover the array). Unless you really need absolute maximum performance, software RAID is ideal: plug the disks into any other Linux computer and rebuild the array there. Rebuilding an md array is relatively easy as long as the disks are intact, whereas when the hardware RAID card I used to have died, all I could do was search eBay for a duplicate. The only real advantage of a hardware RAID card in a home server is making it simple to boot off the RAID array; I'd rather just get a small SSD/laptop drive/USB stick to run the OS.
Err, yeah, I overlooked the Linux part, so his OS will be able to do more complex arrays. Was stuck in Windows world for a minute there.
Old 12-24-2012, 11:36 PM   #13
thecoolnessrune
Diamond Member
 
 
Join Date: Jun 2005
Location: Stoughton, WI
Posts: 8,316

The reasons to do hardware RAID arrays are multiple.

1: A RAID card is way less likely to fail than something corrupting your OS installation, and a corrupted OS installation can take your entire storage array with it.
2: Performance. RAID 5 and its ilk have very slow write speeds. Hardware RAID cards have memory caches (which Windows 8 relies on heavily) to amplify write speeds.
3: Error proofing. RAID 0 and RAID 1 arrays are just fine in software, but I would never make a software RAID 5 array. Why is that? Traditional RAID systems don't check for disk corruption. Corrupted data on a single RAID 5 HDD will fail your entire array when you *do* lose a drive and it's trying to rebuild. The only way to prevent this is by scrubbing the data. To do this, the data must be checked and passed through ECC-supporting memory, which you're only going to get properly implemented on hardware RAID cards with built-in memory.

However, ZFS takes care of all the above issues with its L2ARC for caching, as well as its use of checksumming to prevent data corruption before it happens. Additionally, ZFS pools operate outside the operating system: you can create your storage pool, and if your OS for whatever reason goes down, you can reinstall the OS (or change the OS entirely) and simply "import" the ZFS storage pool back in. If cheap is your goal (in other words, not buying hardware RAID cards, battery packs, etc.), then ZFS is definitely the way to go.
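A sketch of that pool-portability workflow; the pool name is illustrative:

Code:
# After reinstalling (or switching) the OS, pick the pool back up.
sudo zpool import          # lists pools found on the attached disks
sudo zpool import tank     # imports one by name

# Periodic scrub: walks every block and verifies/repairs checksums.
sudo zpool scrub tank
sudo zpool status tank     # shows scrub progress and any repairs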
__________________
Sabrina Online!
Jay Naylor Illustrations! (Language and some situations NSFW)

I'm for poop.
Old 12-25-2012, 05:21 AM   #14
Talaii
Member
 
Join Date: Feb 2011
Posts: 31

Quote:
Originally Posted by thecoolnessrune View Post
1: A RAID card is way less likely to fail than something corrupting your OS installation, and a corrupted OS installation can take your entire storage array with it.
2: Performance. RAID 5 and its ilk have very slow write speeds. Hardware RAID cards have memory caches (which Windows 8 relies on heavily) to amplify write speeds.
3: Error proofing. RAID 0 and RAID 1 arrays are just fine in software, but I would never make a software RAID 5 array. Why is that? Traditional RAID systems don't check for disk corruption. Corrupted data on a single RAID 5 HDD will fail your entire array when you *do* lose a drive and it's trying to rebuild. The only way to prevent this is by scrubbing the data. To do this, the data must be checked and passed through ECC-supporting memory, which you're only going to get properly implemented on hardware RAID cards with built-in memory.
To address these:
A corrupted OS installation can and will take everything with it. It's also a lot rarer than you would think, unless you do something really stupid. Having a hardware RAID controller is no protection against someone accidentally running "rm -rf /" (or the equivalent GUI action), or against a broken operating-system install doing the same thing. It might be proof against a badly written software RAID implementation, but unless you're using a bleeding-edge untested Linux kernel or a development version of md, that's not a problem. And anyone using bleeding-edge ANYTHING for a server that needs to be reliable deserves what they get. Well-done software RAID is very portable: plug the disks into another Linux machine, run one command, and the array is again accessible. If a disk dies during the transfer, you can rebuild on the new machine.

Performance: I can easily saturate gigabit with a 4-disk RAID 5 array (using md in Linux). For sequential writes, it's almost fast enough to saturate two teamed gigabit connections. Unless you're dedicated enough to invest in 10 Gb Ethernet, it's perfectly adequate for home use. Sure, random IOPS aren't high, but if you care about that, an SSD will do much, MUCH better than any traditional disk setup. It won't be anywhere near as fast as a good hardware controller, but that sort of speed is rarely necessary in a home situation and isn't worth the extra money/hassle.

Error proofing: First off, RAID is not backup. There are far too many points of failure that will destroy the entire thing (power supply blowing up from a nasty power surge, someone breaking in and taking the box, an accidental rm -rf, etc.) to count on it alone for protecting important data. I HAVE had a computer die (an old VIA motherboard killed itself, taking almost everything with it). I moved the hard disks to a different install of Linux on a different computer, and it took one command (mdadm --assemble --scan, IIRC) and the whole thing was again accessible. I've never had problems with data corruption from software RAID, though non-ECC memory isn't perfect. If you DO want ECC memory, the MicroServer supports it.

I realise anecdotal evidence doesn't tell the whole story, but I've lost two RAID arrays to hardware RAID controllers malfunctioning (one started corrupting everything on the array; I think it was some sort of problem with the onboard memory. Another controller simply died, and I couldn't find a compatible replacement, so although the disks were fine I couldn't recover the data). Software RAID, on the other hand, has been reliable: even when the computer running the array died, I was always able to move it to a different Linux box and reassemble the array.
Old 12-25-2012, 11:09 AM   #15
Griffinhart
Golden Member
 
Join Date: Dec 2004
Posts: 1,032

I'm a big fan of the HP MicroServer:
http://www.newegg.com/Product/Produc...82E16859107052

It is what I am using. My current configuration is:
The MicroServer with 8 GB of RAM.
The included 250 GB HDD for boot/system.
The optional Remote Access Card (essentially an iLO/KVM-over-IP board).
Three 3 TB HDDs.
One 500 GB USB external HDD (doesn't require external power).

I use two of the 3 TB drives in RAID 1 (Windows software RAID) to mirror my data for redundancy. I use the remaining 3 TB drive for system backups. After three months of nightly full backups of 4 machines, I still have 1.8 TB available for backups.

The USB drive is a small portable drive that doesn't need an external power brick; I use it to back up the boot drive so I can rebuild in the event of a HDD failure.

I went with Windows Home Server 2011, though Windows Server 2012 Essentials is a good option as well.

I have my server configured with "Lights Out" software and WoL. It only turns on during the backup window in the early hours, and then hibernates itself when finished. Waking the server is a simple matter of clicking an icon on my taskbar to send a WoL magic packet. If I want to access the server remotely via Web Access, I have an app on my phone that sends a magic packet to wake the server.
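A sketch of sending such a magic packet from a Linux box on the same LAN, assuming the common wakeonlan or etherwake utilities are installed (the MAC address is illustrative):

Code:
# Broadcast a WoL magic packet to the server's NIC (illustrative MAC).
wakeonlan 00:11:22:33:44:55

# etherwake variant (needs root; -i names the outgoing interface).
sudo etherwake -i eth0 00:11:22:33:44:55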

It replaced a full-time, always-on server: a RAID 5, Windows Small Business Server 2011 Standard box. I got tired of managing my own DNS, DHCP, and Exchange server, and of worrying about lost mail during ISP problems, power failures (more frequent than they should be), or hardware problems. The bonus of significant power savings is great too; it has meant a noticeably lower monthly electric bill.

The form factor is very nice too. It's bigger than the old WHS boxes HP made, but a little smaller than the mini-ITX PC I have, and pretty quiet. It's a lot quieter than the full server I had before.
__________________
Gaming Rig:Intel i7 3770/32GB Corsair DDR3/256 GB Samsung 830 SSD/1.5TB WD Caviar Black/680 GTX/Windows 8 Pro
Media Center:Intel Q6600/4GB DDR3-1600/2TB WD Black/Ceton InfiniTV 4/Windows 7 Enterprise
Tablet:MS Surface Pro - 128GB
Old 12-25-2012, 01:44 PM   #16
dagamer34
Platinum Member
 
Join Date: Aug 2005
Location: Houston, TX
Posts: 2,545

Whatever you do, just remember that RAID is not a backup. You should have a separate 3-4 TB drive connected via USB to act as a backup in case rebuilding the RAID array fails.
Old 12-25-2012, 03:37 PM   #17
Zap
Super Moderator
Off Topic
Elite Member
 
 
Join Date: Oct 1999
Location: Somewhere Gillbot can't find me
Posts: 22,378

Quote:
Originally Posted by dagamer34 View Post
Whatever you do, just remember that RAID is not a backup.


I always try to mention this to people. Not all believe me and some want to argue.
__________________
The best way to future-proof is to save money and spend it on future products. (Ken g6)

SSD turns duds into studs. (JBT)
Old 12-25-2012, 08:18 PM   #18
thecoolnessrune
Diamond Member
 
 
Join Date: Jun 2005
Location: Stoughton, WI
Posts: 8,316

Quote:
Originally Posted by Talaii View Post
To address these:
A corrupted OS installation can and will take everything with it. It's also a lot rarer than you would think, unless you do something really stupid. Having a hardware RAID controller is no protection against someone accidentally running "rm -rf /" (or the equivalent GUI action), or against a broken operating-system install doing the same thing. It might be proof against a badly written software RAID implementation, but unless you're using a bleeding-edge untested Linux kernel or a development version of md, that's not a problem. And anyone using bleeding-edge ANYTHING for a server that needs to be reliable deserves what they get. Well-done software RAID is very portable: plug the disks into another Linux machine, run one command, and the array is again accessible. If a disk dies during the transfer, you can rebuild on the new machine.

Performance: I can easily saturate gigabit with a 4-disk RAID 5 array (using md in Linux). For sequential writes, it's almost fast enough to saturate two teamed gigabit connections. Unless you're dedicated enough to invest in 10 Gb Ethernet, it's perfectly adequate for home use. Sure, random IOPS aren't high, but if you care about that, an SSD will do much, MUCH better than any traditional disk setup. It won't be anywhere near as fast as a good hardware controller, but that sort of speed is rarely necessary in a home situation and isn't worth the extra money/hassle.

Error proofing: First off, RAID is not backup. There are far too many points of failure that will destroy the entire thing (power supply blowing up from a nasty power surge, someone breaking in and taking the box, an accidental rm -rf, etc.) to count on it alone for protecting important data. I HAVE had a computer die (an old VIA motherboard killed itself, taking almost everything with it). I moved the hard disks to a different install of Linux on a different computer, and it took one command (mdadm --assemble --scan, IIRC) and the whole thing was again accessible. I've never had problems with data corruption from software RAID, though non-ECC memory isn't perfect. If you DO want ECC memory, the MicroServer supports it.

I realise anecdotal evidence doesn't tell the whole story, but I've lost two RAID arrays to hardware RAID controllers malfunctioning (one started corrupting everything on the array; I think it was some sort of problem with the onboard memory. Another controller simply died, and I couldn't find a compatible replacement, so although the disks were fine I couldn't recover the data). Software RAID, on the other hand, has been reliable: even when the computer running the array died, I was always able to move it to a different Linux box and reassemble the array.
To address these points:

Your points about portability are valid, but they really don't address any of the reasons I gave regarding hardware RAID. As I said before, RAID 0 and 1 are fine on md and the Windows equivalents, but the corruption and write-hole issues are what make RAID 5 and higher unsuitable for software implementations.

Secondly, you are trying to reason anecdotally that UREs rarely happen, but real-world data suggests otherwise: http://www.zdnet.com/blog/storage/wh...ng-in-2009/162

The above article is based on 1 TB and 2 TB drives, which is nothing compared to the 3 TB and 4 TB drives we're using now. Just do a Google search and you will find many, many people with RAID 5 corruption issues. Why? Because they were doing software RAID 5 and not scrubbing their array. The article I linked makes the faulty claim that this will happen regardless of your RAID setup, but that is not true. The whole point of a hardware RAID card with ECC memory is that you can error-check and scrub the array regularly. Any UREs that do develop can be fixed before you lose a drive. There's always a possibility of developing yet another URE during a rebuild, but everything is risk, which brings me to my next point.

You claim that RAID is not a backup, and I honestly don't know why you would bring this up. While it is an important point, it has literally zero to do with my points. If you *really* believed what you posted, why would you do RAID at all? Why not just a giant JBOD? That way, if a drive goes down, you lose the whole thing. But you can always reclaim from backup, right?? Right? Oh wait, you mean you were trying to ensure you didn't have to restore from backups every time there is a drive failure? That's the *point*. RAID 5 in its default state, with today's high-capacity drives, is more or less unfit to rely on for data protection. RAID 5 needs scrubbing to be considered reliable, and you can do that two ways:

- ECC memory in a RAID controller, to check for bit rot and other UREs
- reading the parity information for every bit of data on all the disks

You do not need ECC memory to scrub an array, but it makes scrubbing easier and less invasive on the array. Linux md will even scrub the data regularly if you set up a cron job for it, and if the OP is going to be using Linux md RAID, he most definitely should (sketch below).
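A sketch of that periodic md scrub, assuming /dev/md0 and a Debian-style /etc/cron.d entry (both illustrative):

Code:
# Kick off a scrub by hand: md re-reads every stripe and verifies parity.
echo check | sudo tee /sys/block/md0/md/sync_action
cat /proc/mdstat                     # progress appears as a "check" resync

# Or schedule it, e.g. 3am every Sunday, via /etc/cron.d/md-scrub:
#   0 3 * * 0  root  echo check > /sys/block/md0/md/sync_action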

Another point I want to make is that VM workloads are usually not even remotely sequential. You have multiple VMs hitting the same array; it is, by definition, not sequential. An SSD would make a great addition, but traditional RAID doesn't support storage tiering, which brings me around in a circle again to my original point.

If one is not willing to use hardware controllers (and I do not believe there is much point in them), then ZFS is pretty much the de facto filesystem for doing storage cheaply. ZFS's checksumming removes concerns about UREs, supports tiered storage with SSDs, supports fast synchronous writes when using log disks, and provides the same mobility as md.
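A sketch of the tiering and log features mentioned, assuming an SSD at /dev/sdf split into two partitions (names illustrative):

Code:
# Add an SSD partition as an L2ARC read cache for the pool.
sudo zpool add tank cache /dev/sdf1

# Add another as a separate log device (SLOG) for fast synchronous writes.
sudo zpool add tank log /dev/sdf2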

One thing that is quickly becoming apparent is that RAID is obsolete. RAID is an old system that will be supported for many years to come, but it is obsolete. While everything supports RAID, no large system relies on it as the final product; rather, it is the base the system is then built on. Smart software storage is the future. EMC, NetApp, and Compellent all create software-based storage pools that are built from small RAID groups for redundancy. That way a RAID group going down doesn't wreck the whole array.

If you want to do software arrays, there are much better ways than plain RAID to do it.
Old 12-27-2012, 10:39 AM   #19
TecHNooB
Diamond Member
 
TecHNooB's Avatar
 
Join Date: Sep 2005
Posts: 7,440

Quote:
Originally Posted by Griffinhart View Post
I'm a big fan of the HP MicroServer: [...]
Looking into this... looks promising. BTW, when I said no to NAS before, I had no idea what I was talking about.
Old 12-28-2012, 08:23 AM   #20
rsutoratosu
Platinum Member
 
Join Date: Feb 2011
Posts: 2,281

I think the HP MicroServer is a great machine. I'm actually using an HP P410 SAS controller, just taking the mini-SAS connector off the motherboard and plugging it into the RAID card. This card supports the 4 TB drives, and my array is 16 TB unraided and 12 TB in RAID 5. These RAID cards are pretty cheap off eBay, $100-150 with 256/512 MB cache. It's an enterprise RAID controller, not a cheap one.

I have three of these machines, so one acts as a hardware backup, and I have the third hooked up to an LTO-3 drive for backups.
Old 12-28-2012, 06:33 PM   #21
stevech
Senior Member
 
Join Date: Jul 2010
Posts: 203

Try the online demos for Synology and QNAP.
I suggest the DS212j.
Old 01-04-2013, 09:47 AM   #22
tntech
Junior Member
 
 
Join Date: Jul 2012
Posts: 4

I've had good luck running Ubuntu Server 12.04 with the Zentyal management interface. Zentyal gives me a GUI much like a NAS unit. I installed a no-name SATA RAID card in a Dell PowerEdge SC420 (obsolete salvage) and run two 1 TB drives as a RAID 1, so even if I lost a drive or the server died, a partition might be recoverable (via Linux). The easiest route if you don't want a learning curve is to purchase a NAS. I've had a Buffalo LS-320GL (single drive) that ran without fail for almost 6 years; I've relegated it to a network backup unit at this point.
Old 01-04-2013, 11:38 AM   #23
raf051888
Member
 
Join Date: Jan 2011
Location: Columbia, MO
Posts: 144

I use a Lian Li PC-Q08B for my WHS 2011 home server. It holds six 3.5-inch HDDs and still has a spot for a 2.5-inch drive, while also accepting a full-size PSU.