
CPU vs server CPU?

Charlie98

Diamond Member
What's the difference between a standard CPU you would have in a PC, vs a 'server CPU?' Don't they do the same thing? Or is there additional functionality a server CPU requires?
 
Often: More cores, more RAS features, SMP capability.

Rarely: Same processor, just a different identifier.
 
I regularly check eBay for motherboard/CPU bundles containing a Xeon 3440/3450, hoping somebody will sell one dirt cheap without knowing what they've got.
 
Also, often, the addition of extra cores means that the clock speed is lower to remain within TDP limits.
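The cores-vs-clock trade-off above can be sketched with a toy power model. This is illustrative only: the 95 W budget and the linear watts-per-core-per-GHz figure are made-up assumptions, not real silicon data (real dynamic power also scales with supply voltage squared).

```python
# Illustrative only: a toy model of why adding cores forces lower clocks
# under a fixed TDP. Assumes power ~ cores * frequency with voltage held
# constant, which is a simplification of P ~ C * V^2 * f.

def max_frequency_ghz(tdp_watts, cores, watts_per_core_per_ghz):
    """Highest sustainable clock for a given core count and power budget."""
    return tdp_watts / (cores * watts_per_core_per_ghz)

# Hypothetical numbers: a 95 W budget, ~6.8 W per core per GHz.
for cores in (4, 8, 12):
    f = max_frequency_ghz(95, cores, 6.8)
    print(f"{cores} cores -> ~{f:.1f} GHz")  # clock drops as cores go up
```

Under this model, tripling the core count within the same envelope costs you roughly two thirds of your clock speed, which matches the pattern of low-clocked, many-core server SKUs.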
 
Right now the major difference between Intel® desktop CPUs and server CPUs is ECC memory support. While Intel Xeon® processors like the Intel Xeon E5-2600 series can have higher clock speeds and more cores, that isn't what really sets them apart.
 

I like how you just post on the straight and narrow :thumbsup:

For the rest of the lurkers, the difference oftentimes comes down to the validation efforts that go into ensuring the server-features function and perform as intended and needed.

That takes time, and time = money; the product costs more to purchase because it costs more to "produce" it.

And there's also the staggering difference in software costs for the server versus desktop. For us desktop users, Photoshop is probably the most expensive single app any one of us would license for ourselves to use at home. (Mathematica or Gaussian 09W may compete for that title)

So desktop users are more likely to see the hardware cost of their system as a significant portion of the TCO (total cost of ownership) for their computers.

Not so with the server crowd. Their software can run anywhere from $10k to $1m per "server", and the infrastructure costs (building + people) can add an equal amount of TCO to that equation.

For that market, whether the CPU itself costs $200 or $2000, the difference in hardware cost is nearly inconsequential in the big picture. So why not charge $2k then?
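The "nearly inconsequential" claim is easy to make concrete with back-of-the-envelope arithmetic. All the dollar figures below are hypothetical placeholders in the spirit of the post above, not real pricing.

```python
# Back-of-the-envelope: how much does a CPU price premium actually move
# the TCO needle? All figures are hypothetical.

def cpu_share_of_tco(cpu_cost, other_hw, software, infrastructure):
    """Fraction of total cost of ownership attributable to the CPU."""
    tco = cpu_cost + other_hw + software + infrastructure
    return cpu_cost / tco

# Desktop: $200 CPU in a ~$1,700 all-in machine.
desktop = cpu_share_of_tco(cpu_cost=200, other_hw=800, software=700, infrastructure=0)

# Server: $2,000 CPU buried under six-figure software and facility costs.
server = cpu_share_of_tco(cpu_cost=2000, other_hw=8000, software=100_000, infrastructure=100_000)

print(f"Desktop CPU as share of TCO: {desktop:.0%}")  # a double-digit slice
print(f"Server CPU as share of TCO:  {server:.0%}")   # low single digits
```

Even with a 10x more expensive CPU, the server's processor is a rounding error in its TCO, which is exactly why the vendor can charge the premium.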
 
Unlicensed ESXi is useless to an enterprise except for extremely limited testing.

Why even bring that up?

Sorry, I didn't realize this was the ENTERPRISE PRODUCTION SOFTWARE DISCUSSION FORUM.


Why even bring up the cost of ESX?


Also, free ESXi is NOT unlicensed. Very important distinction, as unlicensed ESX and free are functionally very different.
 
When might a server need ECC? Or rather, in what scenario can non-ECC result in a catastrophic failure? I'll look into it more - I wonder if there are specific, observable criteria that may designate a need for ECC (or not). Not all successful companies can be that frivolous, despite so much evidence (as far as I've seen) to the contrary.

I have a vague idea regarding this question.. my guess is more I/O = more probability for I/O error = need. So a server tasked with something more CPU-intensive and far less I/O-intensive may not need ECC... a 3D render cluster server comes to mind. Could be way off base though. In a way, I need some ECC.
 


It depends. The more memory you are using, the higher the chance of a flipped bit (obviously). Sometimes, this will do nothing. Sometimes this will result in data corruption, sometimes this will result in system instability.

Remember, you're using what, 8 GB of RAM? We use counts like 96, 192, or 256 GB of RAM. The lost productivity when a server goes down (sadly, not enough apps are built with true HA in mind) makes the additional cost a no-brainer. Picture a VM host servicing apps that a few hundred people rely on for their productivity.

The money we spend on hardware is trivial in comparison to the cost of an outage, so we do everything we can to avoid them.
 

The formatting on this blog link is horrendous, I mean truly awful, but skip down to the bulleted conclusions and you will have your answers from a data-driven position.
 
Intel is being foolishly short-sighted to restrict ECC functionality to server chips.

Most people don't care. It only hurts them a little, if at all.

Fake numbers:

99% of people don't need, want, or care about ECC.

Of the remaining 1%:

0.3% care about ECC, but refuse to pay extra and go without.
0.3% care about ECC and buy a cheap AMD desktop part instead of Intel.
0.4% care about ECC and pay the premium to build a Xeon-based system.

As long as the extra profit gained from selling Xeons to that 0.4% exceeds the profit lost from the 0.3% who buy AMD, Intel's decision only helps them. Of course, actually obtaining completely accurate numbers in a case like this is impossible, so we will never know.
 
precisely the sort of short-sighted thinking i'm talking about

look at the bigger picture

No, not really. Until the volume of RAM commonly used by consumers is such that the bit-flip chance is unacceptably high, it's pointless to have such RAS features on consumer devices. It would raise the cost to no benefit.
 
Idontcare,

VMware costs somewhere between $6k and $8k, before any app costs at all, on a 2-socket system.

Hah.

I have 4U hosts that now would require ~15 Enterprise+ licenses to fully "utilize."

So, more like $80k for our 4U and half of that for the 2U boxes we were spec'ing under vsphere 4. Obviously we'll be re-evaluating our scale up approach.

I still find it mind boggling that VMware is putting their install base at risk by monetizing the hypervisor so...

What I am saying is that $6-$8k is on the cheap side (or very soon will be, given growing memory densities) even for VMware. That's not to mention the $$$ you dropped putting those two MS Datacenter licenses on there... or ongoing maintenance.

Anyway! Still a good example of where one piece of software can dwarf or equal the outlay on hardware.
 
No, not really. Until the volume of RAM commonly used by consumers is such that the bit-flip chance is unacceptably high

It is already unacceptably high

It is (probably) the leading cause of blue screens.

How many consumers have bad ram sticks and never know why their computer constantly locks up?

Right now Intel is losing ground to mobile devices/tablets. They need to buy time to improve their mobile offerings (which seem to be strangely lagging, but that's another story). To help with this, they need to NOT give consumers extra incentive to switch.

Buggy, unreliable hardware is ultimately costing Intel marketshare. Every time bad memory causes someone to throw out their PC in frustration, Intel is just shooting themselves in the foot.

Yes, most consumers don't know what ECC is, nor should they have to. PCs should 'JUST WORK'

It is in Intel's interest to make general PCs as reliable and bulletproof as possible. They should be leading the charge to force all PC manufacturers to implement ECC.
 
Hah.

I have 4U hosts that now would require ~15 Enterprise+ licenses to fully "utilize."

So, more like $80k for our 4U and half of that for the 2U boxes we were spec'ing under vsphere 4. Obviously we'll be re-evaluating our scale up approach.

I still find it mind boggling that VMware is putting their install base at risk by monetizing the hypervisor so...

What I am saying is that $6-$8k is on the cheap side (or very soon will be, given growing memory densities) even for VMware. That's not to mention the $$$ you dropped putting those two MS Datacenter licenses on there... or ongoing maintenance.

Anyway! Still a good example of where one piece of software can dwarf or equal the outlay on hardware.

If you're dealing with the mess that is version 5, yes.
 
This is pure conjecture on your part. You have nothing backing it up other than it is what you feel to be the case, do you?

If you read the posting earlier in the thread, you would know that's what James Hamilton (who knows far more about RAM and ECC issues than either of us) suspects.

Also:
http://www.tgdaily.com/hardware-features/24190-microsoft-to-encourage-use-of-ecc-memory-for-vista

According to Enderle, at least Jim Allchin, co-president of Microsoft's Platform Products & Services Division, believes that current mass market memory is a "serious problem". He told TG Daily that Microsoft confirmed that it has found out that a lot of "breakage" in Windows is caused by memory and that "the problem with memory has to be resolved before Vista ships."

This "breakage" apparently is caused by sub-quality memory that does not meet general specifications and can crash software. "Memory touches virtually anything in a computer and therefore has a lot of impact," Enderle said. "If memory is the problem and ECC can fix it, then it is a no-brainer to move towards ECC."
 
Shoddy ECC memory will not magically be more reliable than shoddy non-ECC memory.

Yes, cheap, low-quality RAM is an issue. That doesn't mean one needs to make the leap to high-quality ECC memory when cheaper, non-ECC parts will do just as well.

ECC is not there to make unreliable memory reliable. It is there to mitigate the risk of the random bit flips that all memory, cheap or premium, experiences at a low rate.
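To make the "mitigate risk" point concrete, here is a minimal sketch of the idea behind ECC using a Hamming(7,4) code, which can correct any single flipped bit in a 7-bit codeword. Real ECC DIMMs use a wider SECDED code (64 data bits plus 8 check bits), but the mechanism is the same; this toy version is purely for illustration.

```python
# Toy Hamming(7,4): 4 data bits protected by 3 parity bits, correcting
# any single bit flip. Real server ECC is SECDED over 64+8 bits.

def encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions p1 p2 d0 p3 d1 d2 d3)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):
    """Fix at most one flipped bit in a 7-bit codeword, return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based index of the bad bit; 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                          # a cosmic ray flips one bit in flight
assert correct(word) == data          # ...and the code repairs it transparently
```

The point: the memory cells themselves are no better, but a single random flip becomes a logged, corrected event instead of silent corruption or a crash.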

Your PC crashes due to broken memory. Note: "This 'breakage' apparently is caused by sub-quality memory that does not meet general specifications and can crash software."

Per Idontcare's link: a whopping 8% of working (that is, non-shoddy, non-broken) DIMMs suffered one or more correctable bit flips within a solid year of powered-on time (not a calendar year, but a year of powered-on time). That means that yes, there is roughly a 30% chance that you might have a bit flip in a 4-DIMM system in a year (which will likely cause no problem).
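The per-DIMM 8% figure and the ~30% system figure quoted above are consistent with simple independence arithmetic. A quick sanity check (assuming each DIMM fails independently, which is a simplification):

```python
# Sanity check: if ~8% of DIMMs see at least one correctable error per
# year of powered-on time, a 4-DIMM system sees at least one with
# probability 1 - (1 - 0.08)^4, assuming independent DIMMs.

p_dimm = 0.08                     # per-DIMM annual probability from the study
dimms = 4
p_system = 1 - (1 - p_dimm) ** dimms
print(f"{p_system:.0%}")          # prints "28%", roughly the 30% cited
```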
 