CPU for a medium server?

Page 2 of an AnandTech Forums thread.

tynopik

Diamond Member
Aug 10, 2004
If your data is important, you back it up.

backups are pointless when your data is corrupted before it gets written to disk

then you're just backing up corrupted copies of corrupted copies

This is what I meant about people who will pile on thinking ECC is better than sliced bread and will solve all your computing woes. ;)

all? no

corrupted data in memory being written to disk? yes
 

Arkaign

Lifer
Oct 27, 2006
SB Xeon. Running a FX home server, bleh.

Do the ECC, do the mirror drive, get a good/efficient PSU, do it right. It won't cost much more, and you'll have no regrets that way.
 
Feb 25, 2011
AMD 6 core for teh cheap. Or hell, even 4 cores.

Storage Subsystem: 1 boot SSD. 2nd SSD for game servers and any VM images. RAID 5 for FTP/general storage, and to back up the SSDs nightly. (If the boot SSD fails, you can install the new one, boot from a USB stick, and restore from the RAID to a new SSD.)
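The nightly SSD-to-RAID copy described above can be a one-file cron job. A minimal Python sketch (the paths and the size/mtime change check are my own illustrative choices, not from the post - a crude stand-in for `rsync -a`):

```python
import os
import shutil

def nightly_backup(src, dst):
    """Copy everything under src (an SSD mount) into dst (a RAID-backed
    directory), skipping files that look unchanged by size and mtime.
    Paths are examples; run from cron each night."""
    for dirpath, _, files in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        out_dir = os.path.join(dst, rel)
        os.makedirs(out_dir, exist_ok=True)
        for name in files:
            s = os.path.join(dirpath, name)
            d = os.path.join(out_dir, name)
            st = os.stat(s)
            if (not os.path.exists(d)
                    or os.stat(d).st_size != st.st_size
                    or os.stat(d).st_mtime < st.st_mtime):
                shutil.copy2(s, d)  # copy2 preserves mtime, so the next run skips it
```

After a boot-SSD failure you'd boot a live USB, install the new drive, and copy this tree back.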

MOAR RAM is better, yes, but given what you've indicated you're doing, you probably don't need to get a whole lot. For what you've described, anything north of 12 GB is probably unnecessary unless you're provisioning lots of VMs. (SQL will eat all the RAM you have and/or all the RAM it can, but if you're a hobbyist, learning SQL, etc., your tables probably won't be huge. Unless you're generating large datasets for practice managing large datasets, I suppose.) Assuming you're using dual-channel memory, getting a pair of 8GB DIMMs and leaving slots open for future expansion is probably best.

ECC isn't needed for a non-production system. In general, for 24/7 home server use, you can get by just fine with enthusiast-level consumer components running at stock speeds. (My laptop has been running happily as a light-use server for several months now, and most home NAS devices are built around cell-phone SoCs and are intended to be on all the time.)

Frankly, for a home server, I'd be more concerned about idle power use, noise, and heat generation than maximizing Performance/$.
 

blckgrffn

Diamond Member
May 1, 2003
www.teamjuchems.com
SB Xeon. Running a FX home server, bleh.

Do the ECC, do the mirror drive, get a good/efficient PSU, do it right. It won't cost much more, and you'll have no regrets that way.

+1. I totally agree.

Do I always follow this advice? No. Wish I did? Yes.

Honestly, I'm thinking about putting a SuperMicro board with KVM-over-LAN capability in my WHS, since my G620 is having issues with the current Z68 board, then just buying 8GB of ECC and calling it a day.

Mostly for the headless operation.

If I could move my RAID1 set and not have to re-install WHS, I'd likely do it today... (kicking myself for putting the OS and data on same drives, but I was cheap and I thought that two big drives would be "fine.")

It's getting to the point where I have much better things to do with my time than to dork around with a functional but incomplete setup.
 

Ferzerp

Diamond Member
Oct 12, 1999
SB Xeon. Running a FX home server, bleh.

Do the ECC, do the mirror drive, get a good/efficient PSU, do it right. It won't cost much more, and you'll have no regrets that way.

If you're going this far, I will agree with you.

The opinion I believe is ludicrous is that in the absence of any other RAS features, you should waste money on ECC. It's the least likely to be of benefit to a home user. It's the last thing to worry about. If you've done the others though, sure.

The "oh no, bit flip in memory" bugaboo is nothing more than someone latching on to one thing and thinking it is massively important when it's a minor piece of the RAS puzzle, and not the first one that should be addressed either. "Oh no, you'll backup a corrupted file." A backup plan is not "I keep one extra copy of everything". A backup plan is "I have a daily image of my system for the last 10 days, a weekly image for the last 2 months, and a monthly image for the last year". That is backup.
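A rotation like that is easy to automate. Here's a hedged Python sketch of the pruning rule; the 10-day/2-month/1-year windows come from the post, but the "keep Sunday images" and "keep first-of-month images" choices are my assumptions for illustration:

```python
from datetime import date

def keep_backup(backup_date, today):
    """Return True if a backup taken on backup_date should be retained
    under a rotation of: daily images for 10 days, weekly images for
    ~2 months, monthly images for a year."""
    age = (today - backup_date).days
    if age <= 10:
        return True                        # daily tier
    if age <= 60:
        return backup_date.weekday() == 6  # weekly tier: keep Sundays (assumption)
    if age <= 365:
        return backup_date.day == 1        # monthly tier: keep the 1st (assumption)
    return False                           # older than a year: expire
```

A nightly job would walk the backup directory and delete anything for which `keep_backup` returns False.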

I would rank what is feasible for a home user in the following order.

1. Backup (not copy, but an actual backup scheme). This makes all other RAS features matter only for uptime, with no bearing on data integrity. ECC, like RAID, is not any sort of substitute for backup, but backup, if all you care about is data integrity, is a substitute for all else. Ideally, use some sort of "cloud" (eyeroll at the buzzword) storage, or RAID-protected local storage, or local disk replicated to the "cloud".
2. A workable UPS ("OH NO, IF YOU LOSE POWER YOU MIGHT CORRUPT A FILE!" in ECC crusaders' terms)
3. RAID (only if you really care about uptime, because you already have a bare-metal-capable image if you've done step 1 right)

If you've implemented all that, you might want to consider ECC.


I think the reason people latch on to ECC is that they don't really see the whole picture. Dealing with that sort of thing every day, ECC is on my list, yes, and in my environment it is a must, but it is very, very far down the list in terms of what is most important.

If I went into a datacenter and they had no backup, no UPS, no mirrored or parity-protected drives, and non-ECC memory, for example, I'm not going to tell them, out of the gate, to replace all their memory with ECC (unless budget is of no concern and the hardware already supports it, because it is not a labor-intensive change). Backup is going to be the most important thing, then power redundancy (if they can only afford a UPS, then a UPS) or RAID. After all that is in place, ECC would be where money could go.
 

blckgrffn

Diamond Member
May 1, 2003
www.teamjuchems.com
Bare metal image backups are funny, because if you lose something like a motherboard (not that any of those other things are going to save you) you essentially have to hope and pray that your restore will actually work, that your drives are going to actually boot, etc.

That, and bare metal copies aren't usually free or incredibly easy to configure and keep working. This is a home setup, after all. ;)

To the OP, I would run an extremely minimal "bare metal" OS and do everything else via VMs. This way, your machine backups can, indeed, be simple copies that can be trivial to power on somewhere else.
 

Ferzerp

Diamond Member
Oct 12, 1999
Bare metal image backups are funny, because if you lose something like a motherboard (not that any of those other things are going to save you) you essentially have to hope and pray that your restore will actually work, that your drives are going to actually boot, etc.

That, and bare metal copies aren't usually free or incredibly easy to configure and keep working. This is a home setup, after all. ;)

To the OP, I would run an extremely minimal "bare metal" OS and do everything else via VMs. This way, your machine backups can, indeed, be simple copies that can be trivial to power on somewhere else.


I have a WHS 2011 server at home, so it handles all that for me. Bare Metal, or not, I can still get my data.
 

blckgrffn

Diamond Member
May 1, 2003
www.teamjuchems.com
I have a WHS 2011 server at home, so it handles all that for me. Bare Metal, or not, I can still get my data.

True, true. But that isn't going to do his FreeBSD server any good :p

Can WHS back up server OSes as well? Haven't even attempted that... I've been too cheap (recurring theme) to buy a RAID controller for my ESXi boxes, and too cheap to buy a dedicated NAS... so backing up my VMs in any fashion other than manual has been pretty much a no-go. If I could at least hit my Windows Server 2008 R2 VMs with WHS, that would be pretty much all I need...

Nothing stopping me from trying it in the next few minutes, I guess...
 
Feb 25, 2011
Bare metal image backups are funny, because if you lose something like a motherboard (not that any of those other things are going to save you) you essentially have to hope and pray that your restore will actually work, that your drives are going to actually boot, etc.

I've had a lot more drive failures than anything else. Followed by optical drives and video cards. (Neither of which you'd need on a server.)

I've never lost a motherboard except by doing something stupid. (Damaged during installation, avoidable power surge, got it wet, etc.)

For home use, where overall cost is usually a bigger factor than in corporate/enterprise IT, it's usually best to guard against the two or three most likely points of failure and not worry too much about the rest. A mobo - or RAID controller - that lasts six months of 24/7 use will probably last the life of the server, if you don't break it switching cases or fry it with a power surge or something. (And if it breaks within the warranty period, you can replace like for like and nothing bad happens.)
 

blckgrffn

Diamond Member
May 1, 2003
www.teamjuchems.com
I've had a lot more drive failures than anything else. Followed by optical drives and video cards. (Neither of which you'd need on a server.)

I've never lost a motherboard except by doing something stupid. (Damaged during installation, avoidable power surge, got it wet, etc.)

For home use, where overall cost is usually a bigger factor than in corporate/enterprise IT, it's usually best to guard against the two or three most likely points of failure and not worry too much about the rest. A mobo - or RAID controller - that lasts six months of 24/7 use will probably last the life of the server, if you don't break it switching cases or fry it with a power surge or something. (And if it breaks within the warranty period, you can replace like for like and nothing bad happens.)

Oh, absolutely. I totally agree with your points. My intention was to point out that a bare metal backup is simply not an end-all solution to the "problem" - and can be expensive, difficult to maintain, difficult to test/verify, etc. I mean, you are testing your restores, right? :D

Having a bare metal image local on the machine and up in the cloud etc. is fine, but bare metal restores as a first line of defense? Just due to their nature, in my experience... bleh. That's the old way of doing things. VMs and file copies are the way to go. What's installed on and aware of the hardware config should be as light as possible and either configured via script or very close to installed defaults. Hyper-V, XenServer, vSphere, whatever. All "IMHO."

A bit different discussion to have than what matters to the OP, probably.

Side note, I was just talking to my boss the other day about our new vCenter and DB servers. For tradition's sake, as the vCenter 4 infrastructure was built out just before I started, he wanted them as bare metal hardware servers. "Haven't had a failure yet." My position was that their availability is so crucial that having them as VMs was the only responsible business decision we could make. Obviously, there is still a lot of conflicting opinion around this.

He is unamused when I point out that I haven't died yet, either :p
 
Feb 25, 2011
Oh, absolutely. Having a bare metal image local on the machine and up in the cloud etc. is fine, but bare metal restores as a first line of defense? Just due to their nature, in my experience... bleh. That's the old way of doing things. VMs and file copies are the way to go. What's installed on and aware of the hardware config should be as light as possible and either configured via script or very close to installed defaults. Hyper-V, XenServer, vSphere, whatever. All "IMHO."

A bit different discussion to have than what matters to the OP, probably.

I'm a fan of VMs as well, but I will submit that (in my admittedly limited and not super-recent experience - my current workplace is still mostly a bare metal shop) I've never observed one with I/O and disk performance anywhere near as good as a bare metal server's. At least not out of the box.

And not on hardware that Joe "FreeNAS and Minecraft" Schmoe could easily afford.

If I'm wrong, please say so - it'll make my day.

So for me, and my cheapskate tendencies, I'd go with a "hybrid" bare metal and VM combo solution for multiple-workload servers. (You can run things like Game servers and SQL in VMs without a big penalty, and those are where your config time is spent anyway - setting up file services for a local network is usually pretty easy.) I guess if that makes me old fashioned, I'm old fashioned. :biggrin:
 

blckgrffn

Diamond Member
May 1, 2003
www.teamjuchems.com
SSDs have revolutionized the I/O situation for VMs; most people can afford a 128GB drive for a few VMs...

Also, Hyper-V, ESXi and XenServer all offer excellent local I/O, disk and network. Compared to the ESX of five years ago, the difference is pretty awesome, in my experience.

100% of the bare metal experience? Close, but no, I don't think so. Not yet, anyway.

I have local bare metal (WHS) servers too, so call me a hypocrite :p
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
If you make vCenter a VM, just don't do something asinine like sticking it on a host that it manages. :p

If you want a resilient vCenter server, you need to cluster it, and have a clustered DB, and have none of those within any host that it manages.
 

blckgrffn

Diamond Member
May 1, 2003
www.teamjuchems.com
If you make vCenter a VM, just don't do something asinine like sticking it on a host that it manages. :p

If you want a resilient vCenter server, you need to cluster it, and have a clustered DB, and have none of those within any host that it manages.

Ideally, yes.

Just having it as a VM, encapsulated on shared storage, makes the recovery playbook much simpler, makes recovery much easier to test, and lets HA do its part.

A VM is just so much more resilient than hardware. You could do those other things you mentioned, but we don't need that kind of uptime. We need a clean recovery model.
 

LeftSide

Member
Nov 17, 2003
The "oh no, bit flip in memory" bugaboo is nothing more than someone latching on to one thing and thinking it is massively important when it's a minor piece of the RAS puzzle, and not the first one that should be addressed either. "Oh no, you'll backup a corrupted file." A backup plan is not "I keep one extra copy of everything". A backup plan is "I have a daily image of my system for the last 10 days, a weekly image for the last 2 months, and a monthly image for the last year". That is backup.

The problem I have with your methodology is that I have way too much data to know whether it is corrupted. I have 6 years of digital photos and videos saved on my ZFS server with almost exactly the backup plan you mentioned, via snapshots and CrashPlan.
Here is the problem. Let's say one photo, named "11-24-2006 Disney World 0578", gets corrupted. If it happened on the hard drive, I will know through the scrub command that the file is corrupted, and it can be easily fixed. BUT what if it's corrupted in RAM? Will I notice that photo within the 1-year window I have to catch it? What if I have a bad stick of RAM and several files get corrupted throughout the 500GB of photos and videos I have?
For me, having ECC RAM with a ZFS file system in my home server is peace of mind. It's peace of mind knowing the likelihood of silent data corruption is minimal. Add CrashPlan as a backup to multiple off-site locations, and my house could burn down without a hiccup as far as my data is concerned. Snapshots protect me from accidental deletes and any other crazy anomalies.
Before I built this server, I lost 27 random photos. My backups went back 2 years, and I only saved 1 file. I had corrupted files in my backups and didn't even know it. It took FOREVER to go through all my photos to find those corrupted files. PEACE OF MIND.
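ZFS catches this with per-block checksums verified on every read and scrub. On a plain file system you can approximate the same idea in userland by recording file hashes once and re-verifying them later. A rough Python sketch (not ZFS, just the concept - and unlike ZFS it can only detect, not repair):

```python
import hashlib
import os

def snapshot_checksums(root):
    """Record a SHA-256 hash for every file under root - a userland
    stand-in for the block checksums a ZFS scrub verifies."""
    sums = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                sums[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return sums

def find_corrupted(root, old_sums):
    """Return files whose current hash no longer matches the recorded one
    (new files are ignored; they have nothing to compare against)."""
    return sorted(p for p, h in snapshot_checksums(root).items()
                  if old_sums.get(p) not in (None, h))
```

Run `snapshot_checksums` after ingesting photos and `find_corrupted` periodically; a file that changes without you touching it is exactly the silent corruption being described.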
 

Cerb

Elite Member
Aug 26, 2000
I would rank what is feasible for a home user in the following order.

1. Backup (not copy, but an actual backup scheme). This makes all other RAS features matter only for uptime, with no bearing on data integrity. ECC, like RAID, is not any sort of substitute for backup, but backup, if all you care about is data integrity, is a substitute for all else. Ideally, use some sort of "cloud" (eyeroll at the buzzword) storage, or RAID-protected local storage, or local disk replicated to the "cloud".
2. A workable UPS ("OH NO, IF YOU LOSE POWER YOU MIGHT CORRUPT A FILE!" in ECC crusaders' terms)
3. RAID (only if you really care about uptime, because you already have a bare-metal-capable image if you've done step 1 right)
1 and 3 are about uptime and downtime, and recovery of data assumed to be good.
2 is only a problem if you're stuck using a bad file system, for which there's no excuse today on PC hardware. RAID can often be fragile, too, when it comes to power loss... but that's the whole array, or a drive's worth of it, not a file.
If you've implemented all that, you might want to consider ECC.
So then, why don't SATA, PCIe, DMI, QPI, HT, etc., ditch their error checking? We don't need it if we don't have a backup plan, UPS, RAID, etc., right? No, it still has its uses. It helps assure that correct data was written or read, and if not, the transfer can be resent until it works, or at least a nasty message gets logged about it not working. Today, we need ECC RAM to get that for most data leaving or entering the CPU.
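The single-error correction ECC provides can be illustrated with the classic Hamming(7,4) code. This is only a toy (real ECC DIMMs use a 72-bit SECDED word: 64 data bits plus 8 check bits), but the mechanism - parity bits whose mismatch pattern points at the flipped bit - is the same:

```python
def hamming74_encode(nibble):
    """Encode 4 data bits as 7 bits with 3 parity bits (Hamming(7,4))."""
    d = [(nibble >> i) & 1 for i in range(4)]  # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4,5,6,7
    # codeword, positions 1..7: p1 p2 d0 p3 d1 d2 d3
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(bits):
    """Correct any single flipped bit, then return the 4 data bits."""
    b = list(bits)
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)  # 1-based position of the bad bit, 0 = clean
    if syndrome:
        b[syndrome - 1] ^= 1               # flip it back
    return b[2] | (b[4] << 1) | (b[5] << 2) | (b[6] << 3)
```

Flip any one of the seven bits and decode still recovers the original nibble; that, scaled up, is what the memory controller does on every ECC read.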