Help on upgrading a small office file server

hoorah

Senior member
Dec 8, 2005
755
18
81
I set up an old workstation for my father's business a while ago. It recently went down (won't POST), so I have to set him up with something new. The data hard drives are fine: 2x 1TB SATA drives, no RAID (just daily software backups from drive A to drive B using SyncBack Pro). I will want a new OS drive, though.

I'm no stranger to building PCs, but I'm not up to date on the newest products out right now, especially when reliability and longevity are the number one criteria rather than gaming performance.

Since I'm only serving files, it doesn't have to be fast. I'd prefer it to be low power usage to minimize electricity cost and heat, but we don't have to go crazy and try to get it down to a 10 watt build or anything like that. Cool, quiet, and efficient is the name of the game.

I was thinking of a decent case (with plenty of room for hard drives + hard drive cooling), a reliable 400W or so power supply, a decent, reliable motherboard with plenty of ports + eSATA, a decent, low-power, dual-core at most CPU, 2GB (if that) of RAM, and an OS drive (something reliable, either a pro-level HDD or a solid state drive). All of these items in whatever the flavor of the month is (meaning: whatever you guys think are the current best picks).

As for an OS, we have a retail copy of XP Pro that can be moved around and that I have no qualms re-using. All we use the box for right now is file sharing. IF we were to upgrade, I'd put Windows Home Server on it.

Obviously, there is zero requirement for any sort of audio or graphic hardware, nor is any monitor, keyboard, or mouse necessary.

Thanks guys! Any follow up questions I'll be happy to answer!

1. What YOUR PC will be used for.

Mostly office file serving, with possible office-gateway functions in the future (VPN server, FTP server, etc.)

2. What YOUR budget is.

Since I'm not going high end, anything should be acceptable.

3. What country YOU will be buying YOUR parts from.
USA

4. IF YOU have a brand preference. Nope.

5. If YOU intend on using any of YOUR current parts, and if so, what those parts are.

His current server was an old-ish Dell workstation, a Precision 630 (I might be wrong on that). If I can re-use the case/power supply/CD-DVD drive, I'd love to, but I don't know if the PSU/mobo have proprietary connections. Let's assume that I can't, and we're getting a new case/power supply.

6. IF YOU have searched and/or read similar threads. Searched - no. Read similar threads - yes.

7. IF YOU plan on overclocking or run the system at default speeds. Default. Reliability is paramount.

9. WHEN do you plan to build it? Yesterday.
 

Evadman

Administrator Emeritus / Elite Member
Feb 18, 2001
30,990
5
81
More Questions:
How many users? XP Pro will only allow 10 concurrent connections IIRC. Is that an issue?

Rackmount or non-rackmount?

Why do you want eSATA? If it is for upgrade space, then just make sure there are enough internal (or hotswap) ports.

How much space do you need? You have 2 TB of non-redundant space now, is that enough? How much growth is expected per year, and what do you expect the lifetime of the device to be?

You mentioned that you don't have RAID set up now. Do you care? (you state reliability is a concern, but you don't have RAID) Do you want redundant storage?

Is this just a backup server, or being used to actually serve files? Any databases running on it (MS SQL, MySQL, MS Access, etc.)?

My initial thoughts:
An SSD would be a waste in a small file server. Use those dollars to add redundancy in a RAID setup. If power is a concern, you need a low-power CPU like an Atom and an OS that supports disk spindown.

A decent case without hot-swap bays will cost about $100, and another $100 for a good PSU. You may also want a case that supports hot-swap and has lots of bays, like the Norco 4220, but that is probably overkill since you only have 2 data drives + 1 OS drive now. The RPC2008 may be a better fit if you need a case with some room to grow.

As for an interface to disk, I would go with some kind of hardware RAID card like an Adaptec 31205 or 30805. There are tons of different types to choose from, but this is where the dollars you were planning on spending on an SSD would be better spent.

Depending on the OS, though, you may not need a hardware RAID card. FreeNAS, for example, would fit your need for easy administration (I am assuming easy admin based on your comments), and it likes the cheapie $20 cards with software RAID 1 or 10 just as well as a hardware RAID card.
 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
If there are fewer than ten users, you'd probably be best off just picking up an Acer Windows Home Server for $350. It'll do what you need and more, comes pre-configured, is easy to use, and comes with a warranty. Unless you plan to back it up, you'll probably want to add a second hard drive so it can use its built-in shared folder redundancy to help protect against a single hard drive failure.

I just noticed that you already have some disks. You will be able to use those on the WHS, but you'll have to move files around a bit since WHS will format a disk when it's added to the data pool. You can continue to use SyncBack to make backups of the shared folders on the WHS server.
 

hoorah

Senior member
Dec 8, 2005
755
18
81
More Questions:
How many users? XP Pro will only allow 10 concurrent connections IIRC. Is that an issue?
Nah, he only has 5 users at most.

Rackmount or non-rackmount? Non

Why do you want eSATA? If it is for upgrade space, then just make sure there are enough internal (or hotswap) ports.

I wanted it as an easy way of adding portable storage. See, this file server includes his on-site backup, but we also have off-site backup at my house (about an hour away). We do incremental backups overnight to my server, but if he throws 20GB of video on the box in a day, sometimes the incremental can get bogged down, and I catch it up by using a USB HDD when I visit (about once every 2 weeks). I don't have a specific reason for using eSATA vs. USB drives, but I figure it's not like the feature costs a ton of money, so we should probably add it now in case we want it later.


How much space do you need? You have 2 TB of non-redundant space now, is that enough? How much growth is expected per year, and what do you expect the lifetime of the device to be?

I have no idea. This whole setup is sort of a trial run for us, and he's growing into it. When we started, he had files scattered all over USB drives, none of them backed up. We moved everything to the server, and he went from 20GB of data in August of '09 to about 200GB of data today. Some of it is video (which I might ask him to compress), but he doesn't edit it. The video is usually pretty important and has to be kept around for a while for legal reasons. I still don't know if he has any legal requirements when it comes to handling this data (a point that worries me daily). So... short answer is yeah, we need room for expandability.


You mentioned that you don't have RAID set up now. Do you care? (you state reliability is a concern, but you don't have RAID) Do you want redundant storage?

I'd prefer not to have RAID. I don't work in IT, and I can't run down to him when he has a problem, so if something terrible happens, I need to be able to talk him through data recovery over the phone. I figured it was easier for him to just be able to yank a drive out and plug it into a USB box or an eSATA port on his laptop if he had to have the data RIGHT NOW and the server wasn't working. Having him try to figure out RAID over the phone is probably not a good idea.


Is this just a backup server, or being used to actually serve files? any databases running on it (MS SQL, MySQL, MS Acess, etc)?

Just a file server and backup, no databases. I have SyncBack do a daily copy of drive A to drive B, that's all.
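(For anyone curious, that nightly A-to-B copy is conceptually just a one-way mirror. Here's a rough Python sketch of the idea, with made-up paths and none of SyncBack's actual logic: copy anything that's new or newer on the source.)

```python
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> None:
    """One-way mirror: copy files that are new or newer on src over to dst."""
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        # Only copy if the file is missing or the source copy is newer.
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps

# Hypothetical drive letters, just for illustration:
# mirror(Path("A:/data"), Path("B:/backup"))
```

Because timestamps are preserved, re-running it is cheap: unchanged files are skipped.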

My initial thoughts:
an SSD would be a waste in a small file server. Use those dollars to add redundancy in a RAID setup. If power is a concern, you need a low power CPU like an Atom and an OS that supports disk spindown.

The reason I would want the SSD is for the reliability of not having moving parts. Is an SSD still a waste in that sense? If so, would a pro or enterprise-level drive be a better bet? I'm trying to find a link, but can't seem to find one; maybe it's no longer a relevant product. I seem to recall some drives having a 5-year warranty vs. a 1-3 year one, and they usually cost about 20% more. I want to say they were marketed as workstation-class drives. Am I making any sense? This would be for the OS, btw.

I liked the first case you posted. That looks like it has a decent number of drive bays.


As for an interface to disk, I would go with some kind of hardware RAID card like an adaptec 31205 or 30805. There are tons of different types to choose from, but this is where your dollars that you were planning on spending on an SSD would be better spent.

Like I said before, I'd like to stay away from RAID so that my father can recover the data if he needs to in an emergency. I hate to say it, but just knowing his personality, when computers go down he gets frustrated and starts "trying things". I'd prefer he have a solution where he can pull a drive out and plug it into his laptop, so as not to have downtime and NOT royally eff up a RAID setup.

Depending on the OS, though, you may not need a hardware RAID card. FreeNAS, for example, would fit your need for easy administration (I am assuming easy admin based on your comments), and it likes the cheapie $20 cards with software RAID 1 or 10 just as well as a hardware RAID card.

I've toyed with FreeNAS before, and it works. The reason I stuck with XP is that it's easy for me to RDP into for remote tinkering, and my father can somewhat get around in there with phone support if necessary. If something happened that required being at the PC and it had FreeNAS on it, he wouldn't have a clue what to do.
 

hoorah

Senior member
Dec 8, 2005
755
18
81
If there are fewer than ten users, you'd probably be best off just picking up an Acer Windows Home Server for $350. It'll do what you need and more, comes pre-configured, is easy to use, and comes with a warranty. Unless you plan to back it up, you'll probably want to add a second hard drive so it can use its built-in shared folder redundancy to help protect against a single hard drive failure.

I just noticed that you already have some disks. You will be able to use those on the WHS, but you'll have to move files around a bit since WHS will format a disk when it's added to the data pool. You can continue to use SyncBack to make backups of the shared folders on the WHS server.

Not a bad idea, and one I have considered. My only hesitation with a setup like that is that the last time I looked at them, the boxes looked a little on the small side, and I'd prefer to have better cooling on the hard drives and room for more physical drives for expansion. Otherwise, yeah, I liked that idea.
 

Evadman

Administrator Emeritus / Elite Member
Feb 18, 2001
30,990
5
81
Based on everything you stated, I think RebateMonger's suggestion of WHS is a good one. More drives can be added easily through USB or eSATA (if the eSATA supports hot-plug).

Remember though, you don't have to buy a pre-configured solution like the Acer box. There are TONS of different pre-configured types you can get. You can also get WHS and roll your own around other hardware. WHS will wipe a drive when it's added to the pool, as RebateMonger stated, so be careful when adding storage to the pool. You can also (if absolutely required) pull out the storage drives and attach them to another computer. They are just NTFS formatted.

You can use a larger case like the one I linked to so you can have extra bays for future expandability. WHS uses software to keep 2 copies of files on 2 physical disks (if you have redundancy turned on) so you are also good if a disk fails. A disk failure would be transparent to an end user, and probably your father as well. That can be both a good and bad thing.

On space, it looks like you are on the low side of about 1 TB a year maximum. 8 bays will allow roughly 8 years of expansion using 2 TB drives and WHS, so I would get an 8 bay case minimum.

Remember, with redundancy you are going to have half the raw space of the disks themselves. Definitely not a bad thing if it keeps you from making an emergency visit to restore from backup.
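The capacity math here is easy to sanity-check with a quick sketch (the bay count and drive size are just the figures from this thread; the function name is made up):

```python
def usable_tb(bays: int, drive_tb: float, duplication: bool = True) -> float:
    """Usable space in a multi-bay box; WHS-style duplication halves raw capacity."""
    raw = bays * drive_tb
    return raw / 2 if duplication else raw

# 8 bays of 2 TB drives with duplication on:
space = usable_tb(8, 2)     # 8.0 TB usable
years = space / 1.0         # at ~1 TB/year of growth, roughly 8 years of headroom
```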

WHS would also allow you to perform remote administration, which appears to be a requirement.

By the way, be careful housing data off-site for your dad, especially if there is a legal requirement. PCI can be a huge PITA, so make sure you are following the rules around securing the data if it is subject to PCI rules.

I'd prefer to have better cooling on the hard drives and room for more physical drives for expansion. Otherwise, yeah, I liked that idea.

Cooling a hard drive is overrated. In fact, keeping a hard drive cooler is bad for longevity, not to mention power (more power is used at cooler temperatures). Google has a great white paper out on this that showed lower temps did not help longevity. Keep the drive under 55C (which pretty much any case can do without trying) and you are good. Most manufacturers design their drives to operate at 60C; 55C gives a large 10% margin.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,227
126
Cooling a hard drive is overrated. In fact, keeping a hard drive cooler is bad for longevity, not to mention power (more power is used at cooler temperatures). Google has a great white paper out on this that showed lower temps did not help longevity. Keep the drive under 55C (which pretty much any case can do without trying) and you are good. Most manufacturers design their drives to operate at 60C; 55C gives a large 10% margin.

I disagree with the Google study. Cooling a HD is always important, especially if it's a warm/hot-running 7200 RPM HD or faster.

I suggest keeping HDs under 50C, preferably under 45C. Anything significantly cooler than that doesn't matter much.

I remember when my IBM 75GXP got above 50C, it started to lose sectors.
 

Evadman

Administrator Emeritus<br>Elite Member
Feb 18, 2001
30,990
5
81
I disagree with the Google study. Cooling a HD is always important, especially if it's a warm/hot-running 7200 RPM HD or faster. I suggest keeping HDs under 50C, preferably under 45C. Anything significantly cooler than that doesn't matter much. I remember when my IBM 75GXP got above 50C, it started to lose sectors.

The IBM 75GXP is a few generations old and was (arguably) one of the worst drives ever made. They had roughly a 50% failure rate within a year, and PC World ranked them as one of the worst tech products ever made. IBM even lost (or settled out of court, I forget which) a lawsuit around the drives, which cost them $100 per drive. I appreciate that you had issues with an IBM 75GXP related to temperature, but you only have data for one (or a few) hard drives, and for 1 manufacturer. Google's white paper is on tens of thousands (if not hundreds of thousands) of disks across all manufacturers. They have more data than the drive manufacturers themselves, and certainly more than the experiences you or I have had.
 

hoorah

Senior member
Dec 8, 2005
755
18
81
Remember though, you don't have to buy a pre-configured solution like the Acer box. There are TONS of different pre-configured types you can get. You can also get WHS and roll your own around other hardware.

Sounds good. I think that's what I'll do. I've been reading up on WHS and it sounds like a good platform. Picking cases is easy; now I just need to decide on the hardware to run it.

You can also (if absolutely required) pull out the storage drives and attach them to another computer. They are just NTFS formatted.

Really? I'm still a bit confused by the WHS storage pool, but I guess that's just going to take some reading, and I'll try not to bog down this thread with that stuff. If so, good news then!

You can use a larger case like the one I linked to so you can have extra bays for future expandability. WHS uses software to keep 2 copies of files on 2 physical disks (if you have redundancy turned on) so you are also good if a disk fails. A disk failure would be transparent to an end user, and probably your father as well. That can be both a good and bad thing.

Yeah, I know. Aside from the whole computer failing, his daily backup drive died about a month ago. I ordered a new one and had it shipped to his house, and told him to install it and I would remotely bring the backups online. When I went to pick up the server yesterday, guess where the drive was? Sitting in the box on his desk, of course... Since the server was still running, it wasn't important enough to him to install the drive.

On space, it looks like you are on the low side of about 1 TB a year maximum. 8 bays will allow roughly 8 years of expansion using 2 TB drives and WHS, so I would get an 8 bay case minimum.

I'm not sure what you mean - 1TB/year is on the low side for storage use? Or adding 1TB/year is on the low side for what I'm actually using?

Remember, with redundancy you are going to have 1/2 of the available space of the disks themselves. Definitely not a bad thing if it keeps you from making an emergency visit to restore from backup.

Right, that was the idea to begin with - an onsite, daily backup that he can get going immediately, and a weekly-ish remote backup for a worst-case scenario restore.

WHS would also allow you to perform remote administration, which appears to be a requirement.

By the way, be careful housing data off-site for your dad, especially if there is a legal requirement. PCI can be a huge PITA, so make sure you are following the rules around securing the data if it is subject to PCI rules.

What do you mean by PCI rules? Googling gets me results related to credit cards. It's not data like that. In any event, I can't get a straight answer out of my dad as far as what the requirements are. He just says "save it all". The data at my house is stored on a D-Link 323 NAS on its own separate drive behind my firewall. If it needs to be more secure than that, I can work on that when I find out the requirements.


Cooling a hard drive is overrated. In fact, keeping a hard drive cooler is bad for longevity, not to mention power (more power is used at cooler temperatures). Google has a great white paper out on this that showed lower temps did not help longevity. Keep the drive under 55C (which pretty much any case can do without trying) and you are good. Most manufacturers design their drives to operate at 60C; 55C gives a large margin.

Interesting, I had no idea. I still feel more comfortable if the drives aren't all jammed in there with half a millimeter between each one. It also makes them easier to physically work with. I don't mind scrapping the active cooling, though.



So, what do you think about the motherboard/platform/CPU/power supply? I want low power usage, but I'm not sure I want to take it down to the Atom level (as I still need to remote in without falling asleep waiting).
You think I should just pick some decently priced Core i3 with 8x SATA ports and 2GB of decent RAM and call it a day?

I happen to have a cheap SATA RAID card from the old system that was about $20 and has 2x SATA ports on it, FWIW.
 

Fayd

Diamond Member
Jun 28, 2001
7,970
2
76
www.manwhoring.com
The IBM 75GXP is a few generations old and was (arguably) one of the worst drives ever made. They had roughly a 50% failure rate within a year, and PC World ranked them as one of the worst tech products ever made. IBM even lost (or settled out of court, I forget which) a lawsuit around the drives, which cost them $100 per drive. I appreciate that you had issues with an IBM 75GXP related to temperature, but you only have data for one (or a few) hard drives, and for 1 manufacturer. Google's white paper is on tens of thousands (if not hundreds of thousands) of disks across all manufacturers. They have more data than the drive manufacturers themselves, and certainly more than the experiences you or I have had.

1: I agree that hard drive cooling is overrated. Most cases will adequately cool the hard drives contained in them without any special consideration.

2: Your statement that it uses more power at lower temperatures is completely wrong. Electronics get more efficient the lower in temperature they get.

3: You're reading the results of Google's study wrong. While it's clear that hard drives that are cooled "too much" have higher failure rates, temps getting too high are also an issue. There's a sweet spot between 35C and 47C. Check page 6 of Google's study for the histogram of hard drive failures as compared to temperature. While temps up to 50C are acceptable, getting beyond that isn't advisable.

4: I'm far more inclined to believe the problem with hard drive overcooling is that it doesn't cool the whole drive equally, but concentrates in "spots". That, or overcooling makes a dramatic difference between load temp and idle temp, resulting in excess expansion and contraction of the drive materials.
 

Evadman

Administrator Emeritus / Elite Member
Feb 18, 2001
30,990
5
81
3: You're reading the results of Google's study wrong. While it's clear that hard drives that are cooled "too much" have higher failure rates, temps getting too high are also an issue. There's a sweet spot between 35C and 47C. Check page 6 of Google's study for the histogram of hard drive failures as compared to temperature. While temps up to 50C are acceptable, getting beyond that isn't advisable.

True. But the failure rate is roughly the same at 50C as at 30C, and the probability of failure at 51C is about 6 times less than failure at 45C. Part of that is certainly because Google wasn't running many drives at 50C+, leading to 50C having a lower probability density than 45C. What is clear, though, is that as a drive ages, it becomes less resistant to higher temperatures.

4: i'm far more inclined to believe the problem with harddrive overcooling is that it doesn't cool the whole drive equally, but concentrates in "spots". that, or harddrive overcooling makes a dramatic difference between load temp and idle temp.. resulting in excess expansion and contracting of the drive materials.

That could be. I have no idea what cooling a drive with air does to the temps of each part of the drive. For example, I know that the hot-cold cycle of turning a CPU on and off causes migration and degradation of the silicon faster than being constantly hot (hot as in turned on), but I don't know what that does to the internals of a drive. I would bet the head moves out of sync with the platter, but there are indexing points that should put it back in line. Perhaps the magnetic medium spalls? Ya got me.

Think mechanical, not electronic.

Exactly. A hard drive is a mechanical device at its heart. Most of the energy used by a drive goes to keeping the platters rotating and the heads moving, both of which take less energy when the drive is warmer. For example, air is less dense and the grease in the bearings is lower viscosity when heated.

I'm still a bit confused by the WHS storage pool, but I guess thats just going to take some reading and I'll try not to bog down this thread with that stuff. If so, good news then!

I will greatly simplify: WHS in redundancy mode is a lot like RAID 1. RAID 1 stores an exact copy of one disk on another disk at the block level (a file may be many blocks). RAID, though, may use a different way of physically locating those blocks, along with different metadata. This metadata can make it so you cannot pull a drive out of a RAID 1 set, plug it into another machine, and have it work as a regular drive in the emergency scenario you described.

WHS, though, keeps copies at the file level (the same file in two places), and the disks are both plain NTFS. Think of it like this: when you copy a file to WHS, WHS actually puts the file in two places. If you have more than 2 disks, the second copy may go to any of the remaining disks, but it will always be in two separate locations. Again, this is simplified, but it is close enough for this explanation.
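If it helps, here is a toy sketch of that file-level idea (my own illustration, not WHS's actual implementation): each file gets written to two different disks, and every disk remains a plain filesystem you can read on any machine.

```python
import random
import shutil
from pathlib import Path

def store_duplicated(src: Path, disks: list[Path]) -> tuple:
    """Place a copy of src on two distinct disks, like WHS folder duplication."""
    primary, secondary = random.sample(disks, 2)  # two different disks, chosen at random
    copies = []
    for disk in (primary, secondary):
        target = disk / src.name
        shutil.copy2(src, target)  # each disk stays an ordinary readable filesystem
        copies.append(target)
    return tuple(copies)
```

Lose any one disk and the other copy is still there, readable without any RAID controller or metadata.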

On space, it looks like you are on the low side of about 1 TB a year maximum. 8 bays will allow roughly 8 years of expansion using 2 TB drives and WHS, so I would get an 8 bay case minimum.

I'm not sure what you mean - 1TB/year is on the low side for storage use? Or adding 1TB/year is on the low side for what I'm actually using?

Sorry, I meant that it looks like 1 TB a year will be how much you use, with some extra headroom. You are actually growing at roughly 400 GB/year, but I would suggest planning for 1 TB a year.

What do you mean by PCI rules?

I am using PCI as a 'catch-all' for several groups of laws that were enacted to protect customer data. PCI stands for Payment Card Industry and is specifically about credit card data (such as storing credit card numbers). However, there are also lots of rules around storing customer data such as phone numbers, addresses, etc.

I happen to have a cheap SATA RAID card from the old system that was about $20 and has 2x SATA ports on it, FWIW.

Is it compatible with WHS? WHS is basically Windows Server 2003, so it can be picky about hardware.
So, what do you think about the motherboard/platform/CPU/power supply? I want low power usage, but I'm not sure I want to take it down to the Atom level (as I still need to remote in without falling asleep waiting).
You think I should just pick some decently priced Core i3 with 8x SATA ports and 2GB of decent RAM and call it a day?

A dual-core Atom would probably be overkill. Without any kind of software RAID with parity (or add-ons for WHS like torrents, media transcoding, etc.), you are not going to need much of anything to run a file server. The issue with Atoms is that I don't know of a mainboard that has an Atom and several expansion slots. This wouldn't be a problem with a hardware RAID card, because you can get 256 drives onto 1 RAID card in a single PCIe slot. But you will probably use 9 drives at some point (8 + OS). You may also want a faster NIC instead of the built-in gigabit at some point down the road. I'll do some looking around for suggestions.
 

Davidh373

Platinum Member
Jun 20, 2009
2,428
0
71
What I would do is pick up a 775-chipset board with a Celeron (Conroe) and 2GB of RAM. I want to build a home file server with FreeNAS. I've heard some bad things about Windows Home Server; I'm not sure if those things are at all true, but I'd rather have something free that works than something that's $100 and doesn't work.
 

hoorah

Senior member
Dec 8, 2005
755
18
81
I will greatly simplify: WHS in redundancy mode is a lot like RAID1. This metadata can make it so you can not pull a drive out of a RAID1 set and plug it into another machine and have it work as a regular drive in the above stated emergency you had.

Awesome. I've been doing a lot of reading on WHS and the new version that was just released for testing. I'll probably wait until the next version goes to market and pick that up. In the meantime, I'll just use the XP we've been using.

I am using PCI as a 'catch all' for several groups of laws that were enacted to protect customer data. PCI stands for Payment Card Industry and is specifically around credit card data (such as storing credit cards). However, there are also lots of rules around storing customer data such as phone numbers, addresses, etc.

I see. Well, there's no customer data in there. Without getting too deep into it, I think the most important part is that he doesn't lose it, but we'll make it more secure as time goes on.


Is it compatible with WHS? WHS is basically Windows Server 2003, so it can be picky about hardware.

No idea, but probably. I just picked whatever card had the best rating on Newegg for $20, so I imagine it wouldn't get a great rating if it wasn't pretty compatible with the server OSes. Anyway, it has a Silicon Image chip on it. If it's not compatible, it's not the end of the world; I have plenty of uses for it.


A dual-core Atom would probably be overkill. Without any kind of software RAID with parity (or add-ons for WHS like torrents, media transcoding, etc.), you are not going to need much of anything to run a file server. The issue with Atoms is that I don't know of a mainboard that has an Atom and several expansion slots. This wouldn't be a problem with a hardware RAID card, because you can get 256 drives onto 1 RAID card in a single PCIe slot. But you will probably use 9 drives at some point (8 + OS). You may also want a faster NIC instead of the built-in gigabit at some point down the road. I'll do some looking around for suggestions.

Got it. We don't do anything but file serving now because we'd like to keep things simple until we get a routine going. I'm still trying to teach him that he HAS to store his files on the server, because those are the only ones that get off-site backup, and other things like that. He plans on hiring more employees eventually, and we might start adding functions like VPN access, FTP access, a security system, etc. to the server. Still not sure on our future plans.

The cost difference between an Atom and something more powerful isn't worth the downtime if we had to build a new box in 2 years to perform more functions rather than just add to this one. Sure, we could build a separate box, but I'm not going to bet against us wanting to expand the server.

Thanks for all the help. I've been looking at cases and have a few on a short list. I'm going to put a build together soon and post it up for critique.
 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
Anyway, it has a silicon image chip on it. If its not compatible, its not the end of the world, I have plenty of uses for it.
I've used several SATA disk controllers with Silicon Image chips and all have had Server 2003 drivers available.
 

hoorah

Senior member
Dec 8, 2005
755
18
81
I've used several SATA disk controllers with Silicon Image chips and all have had Server 2003 drivers available.

Sweet, good to know. It's just a 2-port SATA card, I don't know what speed, and the board I just bought has 5 ports (6 if you include eSATA), so I probably won't need it any time soon. Actually, I'm going to put it in an old Athlon 2000 and run a test setup of Windows Home Server "Vail" while the new stuff is in the mail, since my dad needs his file server up yesterday, and then we'll know for sure.
 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
I haven't installed the very latest Vail release (I just downloaded it last night), but the initial public beta has some features that worry a lot of us long-time WHS users. As of the previous (April?) release, MS was effectively "striping" the disks (as in RAID0). Files in any shared data folder that weren't being duplicated by WHS had a pretty good chance of being completely lost if just a single disk failed.

That's in sharp contrast to the original WHS where, even without folder duplication, the only files that would be lost would be files on that particular failed disk.

Assuming that MS still offers the original WHS trial version, that's what I'd install on a new WHS server. You can install it, add disks and data, and then do a Server Reinstallation when the OEM WHS software and key arrive. The Reinstallation is simple, and I've done it several times without any problems.

If you temporarily install Vail, you'll have to move any data back off it before installing the original WHS. There's no way to migrate from WHS2 to WHS1, nor, apparently, from WHS1 to WHS2, without putting the data files elsewhere.
 

Homerboy

Lifer
Mar 1, 2000
30,890
5,001
126
I haven't installed the very latest Vail release (I just downloaded it last night), but the initial public beta has some features that worry a lot of us long-time WHS users. As of the previous (April?) release, MS was effectively "striping" the disks (as in RAID0). Files in any shared data folder that weren't being duplicated by WHS had a pretty good chance of being completely lost if just a single disk failed.

That's in sharp contrast to the original WHS where, even without folder duplication, the only files that would be lost would be files on that particular failed disk.

Assuming that MS still offers the original WHS Trial version, that's what I'd install on a new WHS server. You can install it, add disks and data, and then do a Server Reinstallation when the OEM WHS software and key arrive. The Reinstallation is simple and I've done it several times without any problems.

If you temporarily install Vail, you'll have to move any data back off it before installing the original WHS. There's no way to migrate from WHS2 to WHS1, nor, apparently, from WHS1 to WHS2, without putting the data files elsewhere.

Wait, what?!?!! I wasn't that interested in Vail to start with (it offers very few new features that I even find interesting), but if what you just posted is true (I haven't kept completely up to speed with Vail, I admit) then I am not even REMOTELY interested in Vail. Wow... that simply can't be true. Why the hell would they do that?

/me pets his WHS1 disk
 

hoorah

Senior member
Dec 8, 2005
755
18
81
I think I wasn't exactly clear. I meant I was going to install Vail (or WHS1) on temporary hardware while I waited for the new hardware to come in, but I'm still waiting for WHS2 to go RTM before purchasing. So it doesn't really matter what I install now - WHS1, WHS2, or stick with XP Pro temporarily, because I'm going to have to wait awhile for WHS2 to go RTM before I purchase it.

I'm not sure if that's the best thing to do, since reliability is the name of the game. I'm sure the safe thing to do is just purchase and install WHS1 and be done with it, but you know that feeling when you know the next version is around the corner....

As for the RAID0 thing, that is a bit disconcerting, but I'm not sure I have a problem with it. I mean, either your data is important or it isn't. I can't imagine that if my data was THAT important, and I wasn't duplicating it, I would be appeased by the fact that I only lost half of it. I dunno, put me in that situation and I might answer differently, but since I plan to only work in redundant space, I don't see it being a concern.

Then again, we all know what happens with the best laid plans.

So I pulled the trigger on that build I posted earlier, and now I have an additional 1TB black drive on the way. I still haven't decided the best way to set up the system, but I now effectively have 3 x 1TB drives of varying speeds to put into this box, or I can put 2 x 1TB drives in and mirror the OS onto the 3rd drive and leave it in a box on my desk for quick mailing should the server crash.

If the server doesn't crash, and he runs out of space, the last drive can be formatted and go into the pool. At least, that's my plan.
 

Evadman

Administrator Emeritus / Elite Member
Feb 18, 2001
30,990
5
81
I haven't installed the very latest Vail release (I just downloaded it last night), but the initial public beta has some features that worry a lot of us long-time WHS users. As of the previous (April?) release, MS was effectively "striping" the disks (as in RAID0). Files in any shared data folder that weren't being duplicated by WHS had a pretty good chance of being completely lost if just a single disk failed.

That is really stupid, and negates the whole 'pull the disk and use it elsewhere in an emergency' mentality that I was commenting on above, unless I am misunderstanding (which is certainly possible). For the OP, I would recommend using WHS1. WHS1 will be around for a while, and after WHS2 gets the bugs worked out and ten thousand folks are using it and reporting concerns (say a year or two after WHS2 has been out), then I would say to investigate an upgrade.


There really isn't a reason to upgrade to WHS2 in this situation besides 'wanting to have the newest stuff'. WHS1 does everything required. Having the 'newest stuff' will generally increase speed/performance at the expense of maintenance and uptime IMHO, which is a downside for a business, especially one being supported remotely. This server just 'has to work' no matter what.

So I pulled the trigger on that build I posted earlier, and now I have an additional 1TB black drive on the way. I still haven't decided the best way to set up the system, but I now effectively have 3 x 1TB drives of varying speeds to put into this box, or I can put 2 x 1TB drives in and mirror the OS onto the 3rd drive and leave it in a box on my desk for quick mailing should the server crash. If the server doesn't crash, and he runs out of space, the last drive can be formatted and go into the pool. At least, that's my plan.

I have never actually tried adding an odd number of disks to WHS. Theoretically, it should be able to store 1.5TB of files with redundancy if the algorithm is smart (not always using one disk as a master and another as a slave, but instead using a round-robin approach), but I would guess that WHS will state 1.0TB of redundant space with three 1TB drives. Anyone have any idea? I'm kinda interested in that now :p
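Out of curiosity, the "smart algorithm" case is easy to sketch in a few lines of Python. This is a hypothetical greedy placement, not WHS's actual duplication logic: each duplicated file's two copies land on whichever two drives currently have the most free space. Run that way, three 1 TB drives hold about 1.5 TB of duplicated data, versus 1.0 TB for a fixed master/mirror pair.

```python
def redundant_capacity(drive_sizes_gb, file_size_gb=1):
    """Greedy sketch: place each duplicated file's two copies on the
    two drives with the most free space, until no pair of drives can
    hold another copy. Returns total duplicated data stored (GB)."""
    free = list(drive_sizes_gb)
    stored = 0
    while True:
        # Pick the two drives with the most free space for the two copies.
        a, b = sorted(range(len(free)), key=lambda i: free[i], reverse=True)[:2]
        if free[a] < file_size_gb or free[b] < file_size_gb:
            break
        free[a] -= file_size_gb
        free[b] -= file_size_gb
        stored += file_size_gb
    return stored

print(redundant_capacity([1000, 1000, 1000]))  # 1500 (round-robin spreads the pairs)
print(redundant_capacity([1000, 1000]))        # 1000 (plain mirror)
```

Because the greedy choice keeps the drives balanced, the usable duplicated capacity approaches half the total pool, regardless of whether the drive count is odd or even.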
 
Last edited:


Fayd

Diamond Member
Jun 28, 2001
7,970
2
76
www.manwhoring.com
True. But the failure rate is roughly the same at 50C as at 30C, and the probability of failure at 51C is about 6 times less than failure at 45C. Part of that is certainly because Google wasn't running many drives at 50C+, leading to 50C having a lower probability density than 45C. What is clear, though, is that as a drive ages, it becomes less resistant to higher temperatures.

huh?

Unless I'm reading this wrong, the black dot line tracks the AFR. It's 2% at 50 degrees, but only 1.2% or so at 45 degrees.

So you're roughly 67% more likely to have a drive failure at 50 degrees than at 45.
 

Evadman

Administrator Emeritus / Elite Member
Feb 18, 2001
30,990
5
81
Unless I'm reading this wrong, the black dot line tracks the AFR. It's 2% at 50 degrees, but only 1.2% or so at 45 degrees.

So you're roughly 67% more likely to have a drive failure at 50 degrees than at 45.

I think you are reading it correctly and interpreting it incorrectly. The black dots are annual failure rates, with error bars above and below. The probability density is the white bars behind them. There are two values (annual failure rate and probability density) graphed on the same chart.

The probability density is what we want to check, as it is a measure of the probability that a drive will fail at a given temperature point. The shorter the white bar, the lower the probability is at that temperature. The probability density of failure at 45C is about 0.01, and about 0.0015 to 0.002 at 50C, leading to my '6 times' comment. The 'sweet spot' you were talking about earlier would be where both values are at their lowest, or somewhere higher than 45C. It may be higher than 50C, but we don't have data to make a conclusion about temps above 51C, where the graph stops (and the error ranges also get much bigger). Best guess using that data is that it is right around 47C.

What I was saying about it possibly being skewed is that Google (or anyone) probably doesn't run that many drives at 50C+, which can lead to an uneven measure.