32nm quad core Xeons


cbn

Lifer
Mar 27, 2009
12,968
221
106
So, who actually uses those top high performance parts? Typically workstations and reviewers doing benchmarks.

That makes sense to me.

P.S. Speaking of power consumption.....I do hope AMD decreases their lag time on node transitions. As it stands now, it looks like you guys are a year behind Intel in getting to 32nm.

In the past, was the time difference (between Intel and AMD) in moving to smaller nodes always this long?
 

JFAMD

Senior member
May 16, 2009
565
0
0
Well, let's look at this from a different perspective. 5 years from now (2015), what is going to be the predominant architecture?

Most people would agree that it will be CPU/GPU.

So, based on that, what is the most important technology to be focused on right now? Is it getting to the next node, or is it the integration of a high performance CPU with a high performance GPU?

Based on the recent changes with Larrabee, I would venture to say that the competitor has fallen behind AMD, who currently has BOTH a world class CPU and a world class GPU.

From that aspect, when you look at what will matter with IT technologies in the next 5 years, we are actually ahead.

I have yet to see a customer buy a nanometer, but I have plenty of customers who can't get to a 5-series GPU and a bulldozer-type core fast enough.

Process node was a great way for our competitor to try to shift the focus and shift the discussion. Now we have a partner that is fully capitalized to drive us to the next node, and we have a singular focus on the designs, including fusion of CPU and GPU.

To get wrapped up in whether it is 32nm or 22nm or 16nm is completely missing the point; the point is fusion of CPU and GPU, and there, the advantage is decidedly AMD's.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
To get wrapped up in whether it is 32nm or 22nm or 16nm is completely missing the point; the point is fusion of CPU and GPU, and there, the advantage is decidedly AMD's.

In the past we have seen 90nm AMD CPUs beating 65nm Intels...

...But all things being equal, a smaller node is always better, right? Less energy usage. Less heat. Sounds good to me.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I have yet to see a customer buy a nanometer, but I have plenty of customers who can't get to a 5-series GPU and a bulldozer-type core fast enough.

With respect to the HD 5xxx, isn't that because TSMC's yields are so bad?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Well, let's look at this from a different perspective. 5 years from now (2015), what is going to be the predominant architecture?

Most people would agree that it will be CPU/GPU.

So, based on that, what is the most important technology to be focused on right now? Is it getting to the next node, or is it the integration of a high performance CPU with a high performance GPU?

I see your point. Integrating CPU and GPU seems to be the smartest use of resources.

Still I would hope after that technology is fully implemented there will be money left over (in the form of profits) to remove other bottlenecks at AMD. Whether that is process technology or developing something else only you guys know. I am just a regular consumer who likes cool and efficient products.
 

21stHermit

Senior member
Dec 16, 2003
927
1
81
Well, let's look at this from a different perspective. 5 years from now (2015), what is going to be the predominant architecture?

Most people would agree that it will be CPU/GPU.

So, based on that, what is the most important technology to be focused on right now? Is it getting to the next node, or is it the integration of a high performance CPU with a high performance GPU?

Based on the recent changes with Larrabee, I would venture to say that the competitor has fallen behind AMD, who currently has BOTH a world class CPU and a world class GPU.

From that aspect, when you look at what will matter with IT technologies in the next 5 years, we are actually ahead.

All you've said is AMD's vaporware is ahead of Intel's vaporware. Next month Intel ships a working CPU/GPU, admittedly not a world class GPU, but shipping parts win over road map vaporware any day.

I for one do buy a node; I am very power conscious. Perhaps you'd care to discuss AMD TDP vs. Intel TDP, or even better, power at the wall. I don't have a clear picture and I can be swayed.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,254
16,110
136
Well, let's look at this from a different perspective. 5 years from now (2015), what is going to be the predominant architecture?

Most people would agree that it will be CPU/GPU.

So, based on that, what is the most important technology to be focused on right now? Is it getting to the next node, or is it the integration of a high performance CPU with a high performance GPU?

Based on the recent changes with Larrabee, I would venture to say that the competitor has fallen behind AMD, who currently has BOTH a world class CPU and a world class GPU.

From that aspect, when you look at what will matter with IT technologies in the next 5 years, we are actually ahead.

I have yet to see a customer buy a nanometer, but I have plenty of customers who can't get to a 5-series GPU and a bulldozer-type core fast enough.

Process node was a great way for our competitor to try to shift the focus and shift the discussion. Now we have a partner that is fully capitalized to drive us to the next node, and we have a singular focus on the designs, including fusion of CPU and GPU.

To get wrapped up in whether it is 32nm or 22nm or 16nm is completely missing the point; the point is fusion of CPU and GPU, and there, the advantage is decidedly AMD's.

I could be wrong, but I don't think so. I certainly do NOT agree. Both server CPUs and many general-usage PCs only need a very tiny video card, and the wimpiest GPU around (that we have today) will suffice. The wide and varied needs for CPUs have to be considered too: some just need 2 cores, others can't get enough, servers included.

Then you have the gamers, with dual or triple GPU cards, some with 2 chips on each, and a quad-core CPU or more is what they want. Numbers-wise, the dual-core with a wimpy video card is what wins, but there are so many combinations. I don't see things changing a lot, although for the dual-core-plus-video segment I can see one market.
 

JFAMD

Senior member
May 16, 2009
565
0
0
I could be wrong, but I don't think so. I certainly do NOT agree. Both server CPUs and many general-usage PCs only need a very tiny video card, and the wimpiest GPU around (that we have today) will suffice.

You are 100% correct. But the combination of CPU and GPU for servers has nothing to do with video output and has everything to do with parallel processing.

The world of x86 CPUs solving all of the large HPC-style problems is drawing to a close. A heterogeneous environment with GPU and CPU will take over much of the technical computing world.

Your email will still run on an x86 CPU for the most part, but even there you may see things like security algorithms and scanning being taken over by CPU/GPU computing, because there are a lot of parallel tasks that can be handed off to a parallel processor with far more cores (paths) to handle the task.

The landscape of the server world is changing; GPU will add some additional horsepower as we start to get to the point of diminishing returns on cores.

Look at DirectCompute on the client side, or some of the Adobe rendering that can be done as math on the GPU (vs. display graphics). It is very impressive.
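The hand-off being described here is essentially a data-parallel map: the same small, independent operation applied to many items at once. A rough CPU-side sketch of the idea (pure Python with process-level parallelism standing in for the thousands of lanes a GPU would use; the "signature scanning" kernel is a made-up example, not any real product's algorithm):

```python
from multiprocessing import Pool

# A toy "scanning" kernel: flag payloads containing a signature.
# On a GPU, each payload would be handled by its own thread/lane.
def scan(payload: bytes) -> bool:
    return b"EVIL" in payload

# 3000 independent payloads; 1000 of them contain the signature.
payloads = [b"hello", b"xxEVILxx", b"data"] * 1000

if __name__ == "__main__":
    with Pool() as pool:
        # Independent tasks fan out across workers in parallel.
        flags = pool.map(scan, payloads)
    print(sum(flags))  # number of flagged payloads -> 1000
```

The point of the sketch is only that each item is processed with no dependence on the others, which is exactly the shape of work that moves well from a handful of CPU cores to a wide parallel processor.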
 

Accord99

Platinum Member
Jul 2, 2001
2,259
172
106
Price:
Intel 5570 = $1386
AMD 2435 = $989

Intel is 40% more expensive.

Anandtech has already shown that they consume more power

But AnandTech has also shown that Nehalem's performance/watt is superior:

http://it.anandtech.com/IT/showdoc.aspx?i=3606&p=11

While SPECpower shows Nehalem to be better in both power use and performance/watt. Particularly impressive are the low-voltage Nehalems.

http://www.spec.org/power_ssj2008/results/power_ssj2008.html

One can also step down to a X5550 for $999, or a E5540 or L5530 for $780.

So, if servers are sitting at low utilization for a large percentage of the day, does it make sense to spend a 40% premium to get more performance for the small areas of peak utilization?

Given the costs of everything else related to the server, the premium will be much less than 40%. And if servers are sitting at low utilization for a large percentage of the day, what's the point of buying additional ones that merely add to the costs of software licensing, servicing and space?

Economics and power drive the decisions in the data center; performance does not. There will be specialty bids where performance comes into play, but those are the exception, not the rule.

You can't ignore performance; economics and power merely determine how you best try to achieve that performance. Otherwise, why would the German ISP have any reason to add more servers? Or better yet, they should just replace all the servers with Mac mini servers, where 6 of them can fit into the power budget of a single 240W server.

Chevy sells relatively few Corvettes. It's mostly because the overwhelming bulk of car buyers are making decisions based on factors other than raw performance. For the few that do, there is a shiny Corvette waiting on the lot.

If you need to carry 7 people often, will it be more efficient financially, logistically, and gas-wise to buy two compact cars or one minivan?
 

JFAMD

Senior member
May 16, 2009
565
0
0
Yes, but the difference is whether you are driving 7 people to the same destination or 7 people to 7 different places all around town.

Things that can be clustered (i.e. HPC) lend themselves well to the idea of consolidating workloads.

But look around the typical data center and you will see a lot of heterogeneous workloads: database, web, Java, network infrastructure, order entry, etc. That is why we end up with hundreds or thousands of servers. Virtualization is helping, but we still have a long way to go.

In the case of the ISP, they sell dedicated servers to their hosting customers, so each one has to be a discrete server (and in many countries there are laws about putting your data on a server with data from another company.)

In the future, when we get to clustered file systems for standard business application and a fabric/cloud environment where you can just partition and scale out the capacity that you need, you'll find your scenario more realistic. But the challenge there is that the folks truly doing cloud today are, in many cases, actually downclocking the processors in order to lock out the top P states. This reduces power consumption. As we get to a cloud/fabric-like environment, power consumption will probably trump performance even more than today.

In a stand-alone environment like a workstation or a desktop for a power user, I can't argue that performance will be a key criterion. But if you ask IT people today, you'll be amazed to find that 95%+ of what they buy is not the highest performance. So those who put out the statement that one vendor has an advantage over the other based on raw performance are barking up the wrong tree. That would be like me telling gamers that power savings is the true metric for their world.
 

JFAMD

Senior member
May 16, 2009
565
0
0
Given the costs of everything else related to the server, the premium will be much less than 40%.

I actually went back to do a little math on this.

I went to Dell's site and configured 3 servers (PowerEdge 710 and PowerEdge 805). I held all components, warranty and other options equal.

The 3 servers were:
One with 2 Xeon 5570's and 36GB of RAM (3 channels) - $7408
One with 2 Xeon 5570's and 32GB of RAM (only 2 channels, but who would do that?) - $6978
One with 2 Opteron 2435's - $4565

The more expensive one is 62% more expensive; the lower-performing dual-channel box was 52% more expensive. So both premiums are larger than the 40% delta on processor pricing.

For fun I configured an E5540 and that was still 20% more expensive. So I started thinking, where do you get to price parity? Basically that is an E5520. So, from a price perspective, you can get 8 2.26GHz Intel cores for roughly the same cost as AMD's most expensive 12 cores @ 2.6GHz.

Knowing that my most popular SKU is the 2431 (2.4GHz), to get to the same price for an Intel system, you're at the bottom, a 1.86GHz DUAL CORE, so 4 total cores of Nehalem vs. 12 2.4GHz Opteron cores. Oh, and you are still paying a price premium for those 4 cores (~4%).

So, yes, you can point to all types of benchmarks to show how the top-speed Xeon parts are fast, but when it comes to pure economics, your choice is a 50-60% premium on the top bin vs. our 2435 (to match those benchmark charts), or you can try to match the price of our most popular dual-socket six-core platform; but in trying to match budget dollars, you are going to end up with a pair of 1.86GHz dual-core processors. No turbo, no HT, 4.8GT/s QPI and only 800MHz memory.

If you check Gartner and IDC you'll see that the ASP for 2-socket servers is ~$3600. That tells me that there are far more servers being bought in that price band than any other. And at $3600, look at what you get with Intel and what you get with AMD. It's all about value in the data center. Benchmarks are nice, but that is not how the overwhelming majority of servers are bought. I've been in this business for close to 20 years and I have never seen more of a focus on value. We're in exactly the right spot.
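The premiums quoted above are simple ratios over the AMD box. A quick sketch (using only the point-in-time Dell and CPU list prices stated in this post; nothing else is assumed) shows how they fall out:

```python
# Price premium of one configuration over a baseline, in percent.
def premium(price: float, baseline: float) -> float:
    return (price / baseline - 1) * 100

opteron_2435_box = 4565   # 2x Opteron 2435, as configured in the post
xeon_5570_36gb   = 7408   # 2x Xeon 5570, 36GB (3 memory channels)
xeon_5570_32gb   = 6978   # 2x Xeon 5570, 32GB (2 memory channels)

print(f"36GB Xeon box premium: {premium(xeon_5570_36gb, opteron_2435_box):.0f}%")  # 62%
print(f"32GB Xeon box premium: {premium(xeon_5570_32gb, opteron_2435_box):.0f}%")  # 53%
print(f"CPU-only premium (5570 vs 2435): {premium(1386, 989):.0f}%")               # 40%
```

(The 32GB box works out to ~52.9%, which the post rounds down to 52%.)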
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
You are 100% correct. But the combination of CPU and GPU for servers has nothing to do with video output and has everything to do with parallel processing.

The world of x86 CPUs solving all of the large HPC-style problems is drawing to a close. A heterogeneous environment with GPU and CPU will take over much of the technical computing world.

What sort of impact does GPU distance from the CPU have on the types of computations you are talking about?

Would a discrete video card be at a disadvantage compared to a close-proximity "Fusion" set-up?
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,067
3,574
126
Yes, but the difference is whether you are driving 7 people to the same destination or 7 people to 7 different places all around town.

That's not the point he's trying to make.

You guys make a great point though, so I don't want to knock AMD.

It's like this: if you know EXACTLY how much you will need and it's under a set production scale, then I can see a lower-wattage, lower-power AMD being better.

In some cases, my Sammy is better than Lind is, and Sammy is an Intel Sossaman (dual laptop processor 2.0 SMP) vs. Lind (full-blown Gainestown X5580 @ 3.07).

Yes, I'm being serious: on NAS duty, for example, Lind is just a running faucet at 200W+ idle vs. Sammy at 75W idle.

But in a scalable environment I think AMD will lose hard, because a dual-Gainestown 8c/16t system will outproduce AMD's 12c/12t system ANY given day of the week.
(However, does the IT department need that much power?)

So you really need to look at the spot you're competing in.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Great, we've gone from nRollo marketing Nvidia, to actual paid AMD employees marketing in the forums.

JF, speaking for myself, your presence here is appreciated, the constant marketing is not.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,067
3,574
126
There is nothing wrong with an AMD system, though.

I just built a 955 this weekend for a friend who was on tight budget constraints.

The machine runs great, she has no issues with it, and she doesn't even overclock to the extreme.

It was priced about 300 dollars CHEAPER than an i7 system after I built it. So it was a win in her case, because she mostly lives on MySpace, and the most GPU-dependent game she really plays is CS: Source zombie mod.

Once again, it's about the spot you're trying to fill. There is no one system that can handle ALL spots in the IT field.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Great, we've gone from nRollo marketing Nvidia, to actual paid AMD employees marketing in the forums.

JF, speaking for myself, your presence here is appreciated, the constant marketing is not.

My understanding of the issues surrounding that which was nRollo was not so much his real-life allegiances but his demeanor and disposition with which he carried himself and the aggressiveness with which he impressed his opinions on others in the forums.

I don't care who JFAMD is or who he works for so long as he is sociable, congenial, and treats others with the same degree of good-spirited respect we all expect others to treat us.

So far I haven't seen any posts that would suggest I have anything to be concerned about. He's adding alternative viewpoints and backing them up with data.

That's an opportunity right there for you to engage in tactful debate over the merits of his assertions...where else do you get the opportunity (outside of work) to do that? And who knows, you may just come to see things from a new perspective out of the process and in turn find value in the experience in hindsight.
 

Cattykit

Senior member
Nov 3, 2009
521
0
0
Intel has ALWAYS been "Consumer unfriendly".

Edit: Just thinking of how "UN-green" Intel products are, because they disable SpeedStep on Celerons. Millions of laptops in use sucking down and wasting more power than they need, shortening battery life and inconveniencing people unnecessarily, just to prop up Intel's profit margins.

AMD really deserves a gold star here, because ALL of their chips, from the top to the bottom of their line-up, have power-saving features.

Wow...just Wow...
This is fanboyism at its worst.
 

JFAMD

Senior member
May 16, 2009
565
0
0
With respect to HD5xxxx isn't that because TSMCs yields are so bad?

I was saying they can't get there fast enough, meaning they want a high-performance CPU/GPU combo ASAP. Since I am not in GPUs, I can't comment on yields. I know demand has been extremely strong, but I don't have visibility into our ability to deliver to that demand.
 

JFAMD

Senior member
May 16, 2009
565
0
0
Great, we've gone from nRollo marketing Nvidia, to actual paid AMD employees marketing in the forums.

JF, speaking for myself, your presence here is appreciated, the constant marketing is not.

Not sure if you are referring to me in terms of marketing, but I shy away from marketing on forums. All I am trying to do is answer questions and clear up misconceptions. I tend to stay clear of all of the "intel vs. amd" threads.
 
Dec 30, 2004
12,553
2
76
My understanding of the issues surrounding that which was nRollo was not so much his real-life allegiances but his demeanor and disposition with which he carried himself and the aggressiveness with which he impressed his opinions on others in the forums.

I don't care who JFAMD is or who he works for so long as he is sociable, congenial, and treats others with the same degree of good-spirited respect we all expect others to treat us.

So far I haven't seen any posts that would suggest I have anything to be concerned about. He's adding alternative viewpoints and backing them up with data.

That's an opportunity right there for you to engage in tactful debate over the merits of his assertions...where else do you get the opportunity (outside of work) to do that? And who knows, you may just come to see things from a new perspective out of the process and in turn find value in the experience in hindsight.

Not to mention, AMD doesn't have the money to pay people to market on forums.
Those here that do support AMD, myself included, support them any time it's a tossup or if the user can get the same results (gaming rig, say, at real world resolutions not 800x600) for a cheaper CPU price, because we know AMD is hurting and without them producing affordable CPUs to keep Intel in check, we'll _all_ be hurting a lot more.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Not to mention, AMD doesn't have the money to pay people to market on forums.
Those here that do support AMD, myself included, support them any time it's a tossup or if the user can get the same results (gaming rig, say, at real world resolutions not 800x600) for a cheaper CPU price, because we know AMD is hurting and without them producing affordable CPUs to keep Intel in check, we'll _all_ be hurting a lot more.

I might be completely wrong, but doesn't the price of operating systems affect the viability of AMD's product markdowns?

Lots of times I say to myself, "Why even bother buying a cheap mobo/CPU when a non-transferable OEM license costs $110 and up?"

In some ways I think Microsoft's effective monopoly is what hurts AMD the most. This is why I would like to see Google (or someone else) come through with a new operating system.
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
4.61GHz at 1.344V; this seems representative of what we can expect from 32nm Nehalem :)

Call me crazy, but I have to roll my eyes at this. 5GHz or BUST! What is so hard about 5? Actually I know the real answer(s), but I'm being sarcastic.

GPU and CPU performance has kind of flatlined. Sure, we have more cores, but 95% of my applications show no improvement. That said, outside of specialized crunching this Gainestown does nothing for me.

Getting back on topic, perhaps the extra cache on quads will help, but I'm certainly not getting my panties in knots over it! The smoke is in layers and we need some wind (or someone to turn on a big fan!) to blow it out!
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
I actually went back to do a little math on this.[..]

Not saying that your math is wrong, but you're neglecting some things. Depending on the industry, the price of the hardware isn't what you should worry about.

If you pay 10k+ US$ for a license, the cost of the hardware suddenly becomes a negligible factor and you'll want the strongest CPU you can get your hands on (at least if you can virtualize the workload or something similar). For other industries performance/watt is much more important than the up-front costs, and for others you're completely right and they are way better off with a cheaper CPU.
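To make the licensing point concrete, here is a toy comparison. The license fee is hypothetical, chosen only to illustrate the effect; the CPU prices are the 5570/2435 list prices quoted earlier in the thread:

```python
# Toy total-cost comparison when software is licensed per CPU socket.
# The $10,000 per-socket license fee is made up for illustration.
def total_cost(cpu_price: int, sockets: int, license_per_socket: int) -> int:
    return cpu_price * sockets + license_per_socket * sockets

license_per_socket = 10_000
cheap = total_cost(989,  2, license_per_socket)   # cheaper CPU, 2 sockets
fast  = total_cost(1386, 2, license_per_socket)   # pricier CPU, 2 sockets

# The 40% CPU premium shrinks to a few percent of the total bill.
print(f"cheap: ${cheap}")                                   # $21978
print(f"fast:  ${fast}")                                    # $22772
print(f"total premium: {(fast / cheap - 1) * 100:.1f}%")    # 3.6%
```

Once the per-socket license dominates, the buyer's rational move flips from "cheapest adequate CPU" to "fastest CPU per licensed socket", which is exactly the carve-out being described.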
 

JFAMD

Senior member
May 16, 2009
565
0
0
Yes, you are right that there are a lot of variables, and it is impossible to boil it down to one vector like so many reviews try to do. However, the "you'll want to get the strongest CPU you can get your hands on" statement does not square with the industry data. We ship ~1% of our server parts in the top bin. I don't know my competitor's numbers off the top of my head, but it has traditionally been single digit. With 90%+ not buying the top bin, raw performance tends not to be the vector that drives the buying decision.

As an example, in the desktop world, who here is running an i7-975? It is the fastest processor out there, right? But who buys $1000 desktop processors? Very few people.

I can get pretty much any processor I want for free, and even I don't have the fastest processors.

I am not denying that there are people who want the highest performance, but in the server world that is actually a minority.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Yes, you are right that there are a lot of variables and it is impossible to boil it down to one vector like so many reviews will try to do. However the "you'll want to get the strongest CPU you can get your hands on" statement does not square off with the industry data. We ship ~1% of our server parts in the top bin. I don't know my competitor's numbers off the top of my head, but it has traditionally been single digit. With 90%+ not buying the top bin, the raw performance vector tends to not be the one that drives the buying decision.
The "you want the strongest CPU you can get your hands on" part is only true if you have to buy extremely expensive licenses, which are usually sold per CPU or per core or something like that - so I don't think your data conflicts with this; after all, that's only a small portion of the whole market, though probably a rather lucrative one.


I just wanted to add that the start-up costs may or may not be the most important part of the whole calculation - there are lots of businesses that don't buy 30k+$ licenses and don't have data centers full of CPUs, but do have a small IT budget, so neither raw performance nor performance/watt will be that important to them. And luckily for AMD, these are the majority of the market.