What makes a server CPU ideal for servers?

Atreus21

Lifer
Aug 21, 2007
12,001
571
126
Hey fellas. Brand spanking new to the forum.

I've been trying to ascertain this for years, with little substantive response, and a few ridiculous ones.

I'm trying to get a real answer to the question of what the difference is between server and desktop cpus. For instance, Athlon 64s and Opteron 64s.

Any explanation, technical or conceptual, would be highly valued.

Thanks.
 

KIAman

Diamond Member
Mar 7, 2001
3,342
23
81
Manufacturer's Intentions:

1. Server CPU is more expensive - Intel/AMD makes a killing
2. Server CPU goes into special more expensive server motherboards
3. Server CPU is more robust
4. Server CPU has more features (more L2 cache, etc.)


Reality:

1. Server CPU is more expensive - Intel/AMD makes a killing
2. Because of the uniform platform nowadays, a server CPU and a desktop CPU can often be used on the same socket motherboard, as long as the motherboard BIOS supports it.
3. Because of the uniform platform, server CPUs are not necessarily more robust than desktop CPUs; lots come off the same production line and are simply binned differently.
4. There might be some differences with multiplier locks or default FSB, but there isn't much feature difference between server and desktop CPUs nowadays.

There you have it. #1 reason says most of it.
 

Ika

Lifer
Mar 22, 2006
14,264
3
81
Actually, some server CPUs (like the current G0 quad-core Xeons) may be cheaper than their desktop near-equivalents...
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,392
16,236
136
Originally posted by: KIAman
Manufacturer's Intentions:

1. Server CPU is more expensive - Intel/AMD makes a killing
2. Server CPU goes into special more expensive server motherboards
3. Server CPU is more robust
4. Server CPU has more features (more L2 cache, etc.)


Reality:

1. Server CPU is more expensive - Intel/AMD makes a killing
2. Because of the uniform platform nowadays, a server CPU and a desktop CPU can often be used on the same socket motherboard, as long as the motherboard BIOS supports it.
3. Because of the uniform platform, server CPUs are not necessarily more robust than desktop CPUs; lots come off the same production line and are simply binned differently.
4. There might be some differences with multiplier locks or default FSB, but there isn't much feature difference between server and desktop CPUs nowadays.

There you have it. #1 reason says most of it.

I totally disagree. Some servers use one socket, and some of this could be opinion, but in fact MOST real servers have 3 big differences:
1) They are multi-socket servers; 2-way, 4-way and 8-way are most common. These require special motherboards, and quite often registered memory, and the CPUs often will NOT work in single-socket motherboards. So for AMD right now, 8-way is popular and relatively cheap (in server $), and dual-core gets them 16 working cores!
2) They use SCSI disks for faster speed and more dependability, with the controller quite often on the motherboard.
3) They absolutely MUST run 24/7 @ 100% load for years.

Due to the above, they are $$$$$
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: Atreus21
Hey fellas. Brand spanking new to the forum.

I've been trying to ascertain this for years, with little substantive response, and a few ridiculous ones.

I'm trying to get a real answer to the question of what the difference is between server and desktop cpus. For instance, Athlon 64s and Opteron 64s.

Any explanation, technical or conceptual, would be highly valued.

Thanks.

- Usually are capable of scaling up to multi-socket configurations due to having more communication links on the CPU package

- More extensive (and expensive) validation process

- Often have more cache
 

KIAman

Diamond Member
Mar 7, 2001
3,342
23
81
Originally posted by: Markfw900
Originally posted by: KIAman
Manufacturer's Intentions:

1. Server CPU is more expensive - Intel/AMD makes a killing
2. Server CPU goes into special more expensive server motherboards
3. Server CPU is more robust
4. Server CPU has more features (more L2 cache, etc.)


Reality:

1. Server CPU is more expensive - Intel/AMD makes a killing
2. Because of the uniform platform nowadays, a server CPU and a desktop CPU can often be used on the same socket motherboard, as long as the motherboard BIOS supports it.
3. Because of the uniform platform, server CPUs are not necessarily more robust than desktop CPUs; lots come off the same production line and are simply binned differently.
4. There might be some differences with multiplier locks or default FSB, but there isn't much feature difference between server and desktop CPUs nowadays.

There you have it. #1 reason says most of it.

I totally disagree. Some servers use one socket, and some of this could be opinion, but in fact MOST real servers have 3 big differences:
1) They are multi-socket servers; 2-way, 4-way and 8-way are most common. These require special motherboards, and quite often registered memory, and the CPUs often will NOT work in single-socket motherboards. So for AMD right now, 8-way is popular and relatively cheap (in server $), and dual-core gets them 16 working cores!
2) They use SCSI disks for faster speed and more dependability, with the controller quite often on the motherboard.
3) They absolutely MUST run 24/7 @ 100% load for years.

Due to the above, they are $$$$$


Well, I was trying to answer the OP's question, which was server CPU vs desktop CPU, which last time I checked had nothing to do with SCSI. And I think he was asking about a single desktop CPU vs a single server CPU. Finally, I am not aware of any CPU that has a requirement of "Absolutely MUST run 24/7 @ 100% load for years," but I could be wrong. If so, slap those puppies in all of our high-availability systems across the world and save money by getting rid of all those redundant servers.


 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,392
16,236
136
An Opteron 64 is not a server CPU, but a workstation CPU. The mention of SCSI is directly tied to the required motherboard, and the cost of such, not your assumption of $$$. They have to be dependable and fast, and that requires $$$$. And virtually all real servers do use multiple CPUs in multiple sockets.

If you are comparing an Athlon 64 to an Opteron 64, the only difference is binning/dependability, and quite often they are the same price, sometimes more, and sometimes less.
 

classy

Lifer
Oct 12, 1999
15,219
1
81
Originally posted by: aka1nas
Originally posted by: Atreus21
Hey fellas. Brand spanking new to the forum.

I've been trying to ascertain this for years, with little substantive response, and a few ridiculous ones.

I'm trying to get a real answer to the question of what the difference is between server and desktop cpus. For instance, Athlon 64s and Opteron 64s.

Any explanation, technical or conceptual, would be highly valued.

Thanks.

- Usually are capable of scaling up to multi-socket configurations due to having more communication links on the CPU package

- More extensive (and expensive) validation process

- Often have more cache

That's about it. They may also use an error-correcting (ECC) memory platform, but there is very little difference.
 

alaricljs

Golden Member
May 11, 2005
1,221
1
76
Both Intel and AMD also fiddle with the CPU internals to lock out multi-CPU configurations that aren't supported, so an Athlon 64 will not go multi-processor and an Opteron 2xx won't go quad-CPU.

Other than that, there are certain guarantees of availability when it comes to a server CPU (and mobo). When a manufacturer says "server," what they are also saying is "we'll support you for X years," so they will ensure that there will be CPUs to plug into that mobo you own for a long time to come (in case yours fails).
 

Atreus21

Lifer
Aug 21, 2007
12,001
571
126
I guess what I'm getting at is the following:

Excluding all external factors (motherboard, tech support, etc.), if we were to construct a benchmark subjecting a CPU to server-like tasks (and I have no idea what those might be, apart from maybe virtualization, which is now supported in almost all newer CPUs anyway), would we see any real difference in performance between, say, a Xeon and a C2D of comparable architecture?

Although, now that I think about it, I guess it's not easy to construct a server benchmark, because the load to which the CPU may be subjected is almost impossible to reduce to any single variable.

I'm frustrated about this whole topic because it just seems that upgrading the processor(s) in a server is almost a waste of time versus getting a substantial RAM upgrade.
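
(For concreteness, here's the sort of toy harness I'm picturing: measure throughput over lots of small, independent requests rather than the time-to-finish of one big job. The fake per-request work, worker counts, and request counts are all made up for illustration.)

```python
# Toy "server-like" benchmark: throughput over many small, independent
# requests (requests/second) instead of time-to-finish for one big job.
# The fake per-request work and all counts are illustrative only.
import hashlib
import time
from concurrent.futures import ProcessPoolExecutor

def handle_request(i):
    # Stand-in for per-request work: parse, hash, look something up.
    return hashlib.sha256(f"request-{i}".encode()).hexdigest()

def benchmark(workers, requests=20_000):
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Processes rather than threads so CPU-bound work actually
        # spreads across cores (Python threads share one interpreter lock).
        list(pool.map(handle_request, range(requests), chunksize=256))
    return requests / (time.perf_counter() - start)  # requests per second

if __name__ == "__main__":
    for workers in (1, 2, 4, 8):
        print(f"{workers} workers: {benchmark(workers):,.0f} req/s")
```

On comparable architectures I'd expect a Xeon and a C2D to score about the same on something like this; it seems it's the platform (more sockets, more memory) that would move the needle.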
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
Originally posted by: Atreus21
I guess what I'm getting at is the following:

Excluding all external factors (motherboard, tech support, etc.), if we were to construct a benchmark subjecting a CPU to server-like tasks (and I have no idea what those might be, apart from maybe virtualization, which is now supported in almost all newer CPUs anyway), would we see any real difference in performance between, say, a Xeon and a C2D of comparable architecture?

Although, now that I think about it, I guess it's not easy to construct a server benchmark, because the load to which the CPU may be subjected is almost impossible to reduce to any single variable.

I'm frustrated about this whole topic because it just seems that upgrading the processor(s) in a server is almost a waste of time versus getting a substantial RAM upgrade.


Server and desktop CPUs come off the exact same production lines. They are the same performance-wise (if the architecture is the same, of course). For example, Kentsfield vs. Clovertown, or Conroe vs. Woodcrest: they perform identically.

The difference is that server CPUs are binned for 2 things:

- the chips that run stock speed with the lowest voltage
- the chips that have a higher temperature tolerance

So once the chips come off the belt, they are tested in different setups until their lowest stable voltage is found and their maximum stable temperature is found. Those chips with low stock VIDs and a high thermal specification are placed on the market as Xeons/Opterons, while the rest go to the desktop market.

That is why a server chip almost always has a lower stock operating voltage than its desktop counterpart, and it always has a higher thermal specification. This doesn't mean you won't find a desktop chip with a low VID, or a desktop chip that is stable running at 80C; it's just that you won't find one that does both, because those are labeled Xeon/Opteron.
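
If it helps, here's a toy simulation of that sorting step. The voltage and temperature distributions and the cutoffs are completely invented; the point is just that two independent tests carve one production run into server and desktop bins.

```python
# Toy model of the binning described above: every chip gets a measured
# lowest stable voltage (VID) and a maximum stable temperature, and only
# chips good on BOTH axes get the server label. Distributions and
# cutoffs are invented for illustration.
import random

random.seed(42)

def fabricate_chip():
    # Process variation: each die comes off the line slightly different.
    return {
        "vid": random.gauss(1.25, 0.05),   # lowest stable voltage (V)
        "t_max": random.gauss(70.0, 5.0),  # max stable temperature (C)
    }

def bin_chip(chip):
    low_vid = chip["vid"] <= 1.20
    high_thermal = chip["t_max"] >= 75.0
    return "server (Xeon/Opteron)" if low_vid and high_thermal else "desktop"

N = 100_000
counts = {}
for _ in range(N):
    label = bin_chip(fabricate_chip())
    counts[label] = counts.get(label, 0) + 1

for label, count in sorted(counts.items()):
    print(f"{label}: {count / N:.1%}")
```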
 

rchiu

Diamond Member
Jun 8, 2002
3,846
0
0
Well, if 2 processors have comparable architecture, they should perform similarly. Outside of architecture, a server CPU may have a larger cache, which can improve performance depending on the application, and in a server environment a larger cache is usually beneficial.

Like many people have pointed out already, the benefit of a server CPU comes from the platform that uses it: ECC memory, SCSI, SMP, etc. Sometimes a desktop platform/mobo won't recognize a server CPU, and vice versa. That can be a problem sometimes.

Before you upgrade your server, think about what your applications do and what can benefit the most from the upgrade: the CPU, the I/O, the graphics, or the memory. Maybe upgrading the processor will give you good bang for the buck, and maybe it won't.
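
As a rough illustration of the cache point, here's a little pointer-chasing sketch: random hops through a small working set stay cheap, while hops through a big one fall out of cache and slow down. Sizes and timings depend entirely on your machine, and Python's interpreter overhead blunts the effect compared to C, so treat it as a sketch only.

```python
# Rough illustration of why cache matters: chase random pointers through
# working sets of different sizes and time the cost per hop. Small sets
# stay in cache; big ones spill to main memory.
import random
import time

def single_cycle(n):
    # Sattolo's algorithm: a permutation that is one full cycle, so the
    # chase below really visits the whole working set.
    nxt = list(range(n))
    for i in range(n - 1, 0, -1):
        j = random.randrange(i)
        nxt[i], nxt[j] = nxt[j], nxt[i]
    return nxt

def ns_per_hop(n, steps=2_000_000):
    nxt = single_cycle(n)
    i = 0
    start = time.perf_counter()
    for _ in range(steps):
        i = nxt[i]
    return (time.perf_counter() - start) / steps * 1e9

for n in (1_000, 100_000, 10_000_000):
    print(f"working set {n:>10,} entries: {ns_per_hop(n):6.1f} ns/hop")
```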
 

Atreus21

Lifer
Aug 21, 2007
12,001
571
126
Originally posted by: JAG87
Originally posted by: Atreus21
I guess what I'm getting at is the following:

Excluding all external factors (motherboard, tech support, etc.), if we were to construct a benchmark subjecting a CPU to server-like tasks (and I have no idea what those might be, apart from maybe virtualization, which is now supported in almost all newer CPUs anyway), would we see any real difference in performance between, say, a Xeon and a C2D of comparable architecture?

Although, now that I think about it, I guess it's not easy to construct a server benchmark, because the load to which the CPU may be subjected is almost impossible to reduce to any single variable.

I'm frustrated about this whole topic because it just seems that upgrading the processor(s) in a server is almost a waste of time versus getting a substantial RAM upgrade.


Server and desktop CPUs come off the exact same production lines. They are the same performance-wise (if the architecture is the same, of course). For example, Kentsfield vs. Clovertown, or Conroe vs. Woodcrest: they perform identically.

The difference is that server CPUs are binned for 2 things:

- the chips that run stock speed with the lowest voltage
- the chips that have a higher temperature tolerance

So once the chips come off the belt, they are tested in different setups until their lowest stable voltage is found and their maximum stable temperature is found. Those chips with low stock VIDs and a high thermal specification are placed on the market as Xeons/Opterons, while the rest go to the desktop market.

That is why a server chip almost always has a lower stock operating voltage than its desktop counterpart, and it always has a higher thermal specification. This doesn't mean you won't find a desktop chip with a low VID, or a desktop chip that is stable running at 80C; it's just that you won't find one that does both, because those are labeled Xeon/Opteron.

That confuses me. Why should one CPU coming off the belt be any different from another, short of a mistake in the manufacturing? I mean, I wouldn't think it was like baking cookies, with some being better than others.

I guess I don't understand enough about the manufacturing process. It just sounds weird to me that a server CPU is a desktop CPU that came out of the oven better. Isn't, say, an AMD Athlon X2 3800+ structurally identical, and therefore identical in performance, to another AMD Athlon X2 3800+, assuming the environment is identical? Does the manufacturing process introduce that much variance in quality between otherwise identical CPUs?
 

Anonymous Freak

Junior Member
Sep 26, 2006
7
0
0
Originally posted by: KIAman
I am not aware of any CPU that has a requirement of "Absolutely MUST run 24/7 @ 100% load for years," but I could be wrong. If so, slap those puppies in all of our high-availability systems across the world and save money by getting rid of all those redundant servers.

Back when I worked for Intel's Enterprise Server Group, we had a customer that, among other systems, had a NetWare server that had been running 24/7 for 6 years without even so much as a soft reboot.
 

Anonymous Freak

Junior Member
Sep 26, 2006
7
0
0
Originally posted by: Atreus21
I guess I don't understand enough about the manufacturing process. It just sounds weird to me that a server CPU is a desktop CPU that came out of the oven better. Isn't, say, an AMD Athlon X2 3800+ structurally identical, and therefore identical in performance, to another AMD Athlon X2 3800+, assuming the environment is identical? Does the manufacturing process introduce that much variance in quality between otherwise identical CPUs?

Yes. Because of the microscopic variations in each raw silicon disc, and the microscopic variations that occur during manufacturing, two chips, even from the same piece of silicon, can have different thermal and electrical properties. Not massive differences, mind you, but minor ones.

For example, AMD's "EE" line of CPUs likely comes from the exact same manufacturing line as the regular models. The "EE" ones just passed tests that showed them to be electrically stable at lower voltages at the same clock speeds as other chips.

In fact, the slight manufacturing differences are often responsible for the delineation between different speed grades of chips. It's not that Intel has one line that produces 2.33 GHz Core 2 Duos and another that produces 2.93 GHz Core 2 Extremes. They produce the same chip, and after production they test them to see which ones are stable at the higher speeds. Those they sell as the faster processors.

Indeed, the server Xeon 5160 3.0 GHz chip could very well have come from the exact same piece of silicon as the mobile Core 2 Duo Low Voltage L7200 1.33 GHz chip, as well as a desktop Core 2 Duo E6420 2.13 GHz chip. They test the raw silicon: the dies that perform better at higher speeds get slapped on a Xeon mount, the ones that run well at much lower voltages get put on a mobile mount, and the rest get stuck onto desktop mounts. (Mount: the green bit that attaches the raw die to the pins. This is not a technical name, it's my term.)
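
A toy version of that speed-binning step might look like the following: test each die, then sell it at the highest grade it passes. The grades echo the Core 2 clocks above, but the pass/fail model and the resulting yields are invented.

```python
# Toy speed-binning: each die gets an (invented) max stable clock and is
# sold at the highest speed grade it passes. Grades echo the Core 2
# clocks mentioned above; the distribution and yields are made up.
import random

random.seed(7)

GRADES_GHZ = [2.93, 2.66, 2.33, 2.13, 1.86]

def max_stable_ghz():
    # Invented process-variation model for a nominal ~2.4 GHz design.
    return random.gauss(2.5, 0.3)

def bin_speed(f_max):
    for grade in GRADES_GHZ:          # descending order: try fastest first
        if f_max >= grade:
            return grade
    return None                       # fails every grade: scrap or re-test

N = 100_000
counts = {}
for _ in range(N):
    g = bin_speed(max_stable_ghz())
    counts[g] = counts.get(g, 0) + 1

for grade in GRADES_GHZ + [None]:
    name = f"{grade} GHz" if grade else "below spec"
    print(f"{name:>10}: {counts.get(grade, 0) / N:.1%}")
```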
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
Originally posted by: Atreus21
Originally posted by: JAG87
Originally posted by: Atreus21
I guess what I'm getting at is the following:

Excluding all external factors (motherboard, tech support, etc.), if we were to construct a benchmark subjecting a CPU to server-like tasks (and I have no idea what those might be, apart from maybe virtualization, which is now supported in almost all newer CPUs anyway), would we see any real difference in performance between, say, a Xeon and a C2D of comparable architecture?

Although, now that I think about it, I guess it's not easy to construct a server benchmark, because the load to which the CPU may be subjected is almost impossible to reduce to any single variable.

I'm frustrated about this whole topic because it just seems that upgrading the processor(s) in a server is almost a waste of time versus getting a substantial RAM upgrade.


Server and desktop CPUs come off the exact same production lines. They are the same performance-wise (if the architecture is the same, of course). For example, Kentsfield vs. Clovertown, or Conroe vs. Woodcrest: they perform identically.

The difference is that server CPUs are binned for 2 things:

- the chips that run stock speed with the lowest voltage
- the chips that have a higher temperature tolerance

So once the chips come off the belt, they are tested in different setups until their lowest stable voltage is found and their maximum stable temperature is found. Those chips with low stock VIDs and a high thermal specification are placed on the market as Xeons/Opterons, while the rest go to the desktop market.

That is why a server chip almost always has a lower stock operating voltage than its desktop counterpart, and it always has a higher thermal specification. This doesn't mean you won't find a desktop chip with a low VID, or a desktop chip that is stable running at 80C; it's just that you won't find one that does both, because those are labeled Xeon/Opteron.

That confuses me. Why should one CPU coming off the belt be any different from another, short of a mistake in the manufacturing? I mean, I wouldn't think it was like baking cookies, with some being better than others.

I guess I don't understand enough about the manufacturing process. It just sounds weird to me that a server CPU is a desktop CPU that came out of the oven better. Isn't, say, an AMD Athlon X2 3800+ structurally identical, and therefore identical in performance, to another AMD Athlon X2 3800+, assuming the environment is identical? Does the manufacturing process introduce that much variance in quality between otherwise identical CPUs?



If life were like that, you wouldn't see people posting batches, dates of manufacture, and all that other pretty stuff. Remember, no two dice are the same. The dice are cut from huge wafers, and every die is in a different position on the wafer, and every wafer is different from the next.

http://www.dvhardware.net/news...l_300mm_45nm_wafer.jpg

To enlighten you: the best dice usually come from the center of the wafer, and the further out you go, the worse the dice get. Usually the center dice end up in server-grade CPUs and Extreme Edition desktop chips, while the outer dice go into lower-end desktop models that are clocked lower.
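
As a toy model of that center-vs-edge idea, you can lay square dice across a round wafer and grade each one by its distance from the center. The radial-quality rule and all the thresholds are just my heuristic from above, and the geometry is simplified (no edge exclusion zone, no real defect map):

```python
# Toy model of the center-vs-edge claim: square dice on a round wafer,
# graded by distance from the wafer center. The radial-quality rule and
# all thresholds are heuristic; the geometry is deliberately simplified.
import math

WAFER_RADIUS_MM = 150.0   # 300 mm wafer
DIE_SIZE_MM = 10.0

def grade(r_frac):
    # r_frac: die-center distance as a fraction of the wafer radius.
    if r_frac < 0.4:
        return "server / extreme edition"
    if r_frac < 0.8:
        return "mainstream desktop"
    return "low-end / lower clocked"

counts = {}
steps = int(WAFER_RADIUS_MM // DIE_SIZE_MM)
for ix in range(-steps, steps):
    for iy in range(-steps, steps):
        x = (ix + 0.5) * DIE_SIZE_MM   # die-center coordinates
        y = (iy + 0.5) * DIE_SIZE_MM
        r = math.hypot(x, y)
        if r + DIE_SIZE_MM / 2 > WAFER_RADIUS_MM:
            continue                   # die would hang off the wafer
        g = grade(r / WAFER_RADIUS_MM)
        counts[g] = counts.get(g, 0) + 1

for g, n in counts.items():
    print(f"{g}: {n} dice")
```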