Originally posted by: JAG87
Originally posted by: Atreus21
I guess what I'm getting at is the following:
Excluding all external factors (motherboard, tech support, etc.), if we were to construct a benchmark subjecting a CPU to server-like tasks (and I have no idea what those might be, apart from maybe virtualization, which is now supported in almost all newer CPUs anyway), would we see any real difference in performance between, say, a Xeon and a C2D of comparable architecture?
Although, now that I think about it, I guess it's not easy to construct a server benchmark, because the load the CPU may be subjected to is almost impossible to reduce to any single variable.
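Just to make that "impossible to reduce to a single variable" point concrete, here is a rough Python sketch of the kind of synthetic all-cores load test I have in mind. The workload (SHA-256 hashing over a per-worker buffer, one process per core) is purely my own invention for illustration; a real server benchmark would also mix in I/O, networking, and much larger memory footprints.

# Toy "server-ish" load generator -- a sketch, not a real benchmark.
# The workload mix is invented purely to illustrate that server load
# is several variables (cores, cache, memory) at once.
import hashlib
import os
import time
from concurrent.futures import ProcessPoolExecutor

def worker(n_iters):
    """CPU- and memory-bound busywork standing in for 'server-like' requests."""
    buf = os.urandom(1 << 20)              # 1 MiB working set per "request"
    digest = b""
    for _ in range(n_iters):
        digest = hashlib.sha256(buf + digest).digest()
        buf = digest + buf[len(digest):]   # rotate the buffer so it isn't pure cache hits
    return digest

if __name__ == "__main__":
    n_procs = os.cpu_count() or 4          # saturate every core, like a loaded server
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=n_procs) as pool:
        list(pool.map(worker, [200] * (n_procs * 4)))
    print(f"{n_procs} workers finished in {time.perf_counter() - start:.2f}s")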
I'm frustrated by this whole topic because it just seems that upgrading the processor(s) in a server is almost a waste of time versus getting a substantial RAM upgrade.
Server and desktop CPUs come off the exact same production lines. They are the same performance-wise (if the architecture is the same, of course). For example, Kentsfield vs. Clovertown, or Conroe vs. Woodcrest: they perform identically.
The difference is that server CPUs are binned for two things:
- the lowest voltage needed to run stable at stock speed
- the highest temperature tolerance
So once the chips come off the belt, they are tested in different setups until their lowest stable voltage and maximum stable temperature are found. The chips with a low stock VID and a high thermal specification go to market as Xeons/Opterons, while the rest go to the desktop market.
That is why a server chip almost always has a lower stock operating voltage than its desktop counterpart, and it always has a higher thermal specification. This doesn't mean you won't find a desktop chip with a low VID, or a desktop chip that is stable running at 80C; it's just that you won't find one that does both, because those are labeled Xeon/Opteron.
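To put that sorting into a toy example: the bin decision is basically a two-condition test, something like the Python below. The cut-offs (1.20 V VID, 75C) are numbers I made up for illustration; the real binning criteria and thresholds aren't public.

# Illustrative only -- the thresholds are invented; real binning criteria aren't public.
def bin_chip(stock_vid_volts, max_stable_temp_c,
             vid_cutoff=1.20, temp_cutoff=75.0):
    """Low stock VID *and* high thermal headroom -> server part; otherwise desktop."""
    if stock_vid_volts <= vid_cutoff and max_stable_temp_c >= temp_cutoff:
        return "Xeon/Opteron"
    return "desktop"

# A chip needing 1.15 V and stable at 82C gets the server label;
# one needing 1.15 V but only stable to 65C goes to the desktop bin.
print(bin_chip(1.15, 82.0))   # Xeon/Opteron
print(bin_chip(1.15, 65.0))   # desktop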