Xbit Labs: Dell starts to test ARM microprocessors in servers


Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
DEC failed because it stopped innovating and other companies managed to create faster processors at lower prices.

I am completely baffled by this statement :confused:

The DEC Alpha was second to none, and the phoenix that rose from its ashes was the Athlon K7, also second to none for a while.

DEC failed because while it had the best in production and under development, the bloat in its cost structure was also second to none.

I remember there was a market analysis that went out right around the time of their implosion which highlighted that their sales team carried something like 4x the headcount per revenue dollar of the industry average at the time. (In other words, every one of their competitors was leaner and meaner, and DEC lost out because it couldn't downsize fast enough.)

Heck, Intel already showed off a 48-core single-chip cloud computer in 2009. If it had any merit, they would have already started selling it by now.

Intel wouldn't sell it unless it had margins comparable to the products they were intending to sell.

Why sell a 48-core chip with 40% GM if you can sell a 10-core chip with the same yields but fetching a 60% GM and four-figure ASP?
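To put toy numbers on that (the ASPs below are invented purely for illustration; only the margin percentages come from the argument above):

```python
# Toy gross-margin math; every dollar figure here is a made-up assumption.
# Gross profit per die = ASP x gross margin. With identical yields, the
# per-wafer comparison scales the same way, so per-die numbers suffice.

asp_10core = 1500   # hypothetical four-figure ASP for the 10-core part
asp_48core = 800    # hypothetical ASP for the 48-core part

profit_10core = asp_10core * 0.60   # 60% GM -> $900 gross profit per die
profit_48core = asp_48core * 0.40   # 40% GM -> $320 gross profit per die

print(profit_10core, profit_48core)  # 900.0 320.0
```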

Remember, Intel's main product is INTC; everything they do is done to inflate the market valuation of their primary product (INTC). All this other stuff, these CPUs and chips and such, is all merely a means to that end. Those means will only change if some other business forces them to.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
So true; provided the maximum latency for an individual transaction is satisfied on the human timescale (say 10ms), the TPM/$ metric is all that is going to matter to a majority of the market.

I see the same thing, through a different analogy, with my SSDs.

In benchmarks, the difference between my 160GB G2 and my 240GB V3 is astounding, but in reality their performance is the same to my perception, and both are vastly superior to the spindle-drives they replaced.

Just as x86 merely needed to be good enough 15yrs ago to supplant the incumbents in the server market, so too for ARM. It need merely be good enough, and economically advantageous, and that will be all she wrote.

In 10yrs time, x86 in servers could be as commonplace as DEC servers were in 2002.

Intel is spending money on dense computing research.
AMD bought a dense computing outfit.
Dell is starting to test ARM in servers.

You seem to be throwing up walls that don't exist. 64-bit will matter more for memory capacity than for computing efficiency. We are talking armies of low-cost CPUs serving up web services programmed at an abstracted layer in some rapid-deployment framework like Ruby on Rails or Django. This isn't high-performance computing at all. Economies of scale will have a much greater impact.

It will be interesting to see how these HP Project Moonshot ARM Cortex A9 CPU servers pan out.

If Cortex A9 ends up being "fast enough", where does that leave a more power-hungry (but faster) architecture like Atom or Bobcat in the grand scheme of these micro servers?
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Are you being sarcastic or not?
Kind of, but more hyperbolic. In many ways, ARM is stuck, having roused the interest of Intel. Out-innovating Intel would involve taking risks that might be just barely on this side of insane for a non-startup. They will need to be clever in how they sell what they make, more than making anything grand. Intel loves margin, and if RR is a CEO suitable for getting AMD into profit, he won't let them bleed more than he has to. Since potential cost advantages could disappear by the time there is a working chip in a working device, it's not so straightforward to me how they can take advantage of the situation, save maybe to develop enough in-house for a bare working SoC, to reduce total costs to licensees.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
With China (as the emerging market) investing in MIPS for Servers I just have to wonder if the MIPS CPU (rather than the likely more expensive ARM) is the one AMD wants to eventually pursue.
China wanted to be able to develop a computer platform that had no questions about IP rights, and they did it with MIPS. China wanted to make computers for their people, but also wanted proprietary, affordable, serious number crunchers. x86 would be stuck with Intel/AMD IP crap, even if they could build the chips themselves, and if they officially ignored IP laws elsewhere, that would be bad for business. ARM has yet to prove itself capable (ARM just got SMP in the ISA, just got good vector extensions into the ISA, quality ECC is kind of unknown, just announced 64-bit support... MIPS had this stuff in real servers aeons ago, and China began working with MIPS ~10 years ago, at which point ARM looked even less capable than it does today). Oracle would probably be willing to license out SPARC, but they'd likely either want too much money, or put major catches in the deal, so as not to give China everything (I can imagine Ellison lying awake at night scheming how to get rights away from Fujitsu, just to control it completely :)). I could see PPC being another option that could be fairly compared to MIPS; I'm not sure whether IBM would be against such a deal or not.

China using MIPS does not make MIPS king any more than anyone using ARM or x86. Today, what's more true is that outside of high-performance areas, MIPS, ARM, and x86 in final products will be largely similar, and their differences will be less ISA-specific than SoC-vendor-specific (x86 has an edge as far as a nice compatible platform, but ARM is working on that). Once Intel gets a process or two farther down the line (i.e., good low-end power/performance), consider it Intel+vendors v. AMD+vendors v. MIPS+vendors[+licensees] v. ARM+vendors[+licensees]. x86 will remain unbeatable in our PCs and notebooks, but elsewhere, where some of that performance may be secondary to costs, things might change, and the ISA will be an artifact of one of the companies involved, not a driving reason for any technical decision.
 

CPUarchitect

Senior member
Jun 7, 2011
223
0
0
Intel is spending money on dense computing research.
AMD bought a dense computing outfit.
Dell is starting to test ARM in servers.

You seem to be throwing up walls that don't exist.
What wall are you talking about? Either way, take a close look at those three statements. Intel and AMD are spending big bucks on dense servers (SeaMicro was worth $334 million). In contrast, Dell has only "started" testing ARM in servers. In fact they've been playing with it for over a year. And the message they're sending out now is that others are welcome to play too. :p
64-bit will matter more for memory capacity than for computing efficiency. We are talking armies of low-cost CPUs serving up web services programmed at an abstracted layer in some rapid-deployment framework like Ruby on Rails or Django. This isn't high-performance computing at all. Economies of scale will have a much greater impact.
It doesn't matter if the services are high performance computing ones or not. With lighter ones you simply run more of them per CPU. But the 32-bit addressing space of ARM is rapidly exhausted when you add in virtualization and memory fragmentation. Also, workloads change, and you don't want to risk losing money because you can't run more demanding services. So there just is no future for 32-bit.
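For the casual reader, the 32-bit ceiling in concrete terms (a minimal sketch; the 64 GiB node below is a hypothetical example, and the 2**32 limit is the only hard number here):

```python
# A 32-bit pointer can address at most 2**32 bytes = 4 GiB per address space.
limit_bytes = 2**32
print(limit_bytes / 2**30)   # 4.0 GiB

# Hypothetical dense-server node with 64 GiB of RAM running 32-bit guests:
node_ram_gib = 64
print(node_ram_gib / 4)      # at least 16 separate 4 GiB spaces needed just
                             # to touch all of it, and no single service can
                             # ever grow past 4 GiB (less, once fragmentation
                             # and hypervisor overhead are added in)
```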

And besides, ARM has recently presented the ARMv8 64-bit ISA. So it's just a matter of time before they do have chips suitable for the server market. Nobody in their right mind will buy a 32-bit chip from then on. But my point was that ARM will also have to sacrifice performance/Watt for this transition, making it even harder to outperform Intel on any metric.
 

CPUarchitect

Senior member
Jun 7, 2011
223
0
0
Remember, Intel's main product is INTC; everything they do is done to inflate the market valuation of their primary product (INTC). All this other stuff, these CPUs and chips and such, is all merely a means to that end. Those means will only change if some other business forces them to.
Sure, but the point was that they have the ability to produce something that will outperform anything ARM can design. And Intel has proven before that it's willing to cut deep into its margins if their market share is being threatened.

So I don't see ARM making a name for itself in the server market any time soon. They can't live up to the hype.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
It doesn't matter if the services are high performance computing ones or not. With lighter ones you simply run more of them per CPU. But the 32-bit addressing space of ARM is rapidly exhausted when you add in virtualization and memory fragmentation. Also, workloads change, and you don't want to risk losing money because you can't run more demanding services. So there just is no future for 32-bit.

Thank you very much for the post.

So when does a "Physicalized Server" make more sense than a "Virtualized Server"?

I have read that going with "Physicalized Servers" makes more sense when I/O is at a premium.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
For the more casual readers (like myself), here is the definition of "Physicalization" from Wikipedia:

http://en.wikipedia.org/wiki/Physicalization

Physicalization, the opposite of virtualization, is a way to place multiple physical machines in a rack unit.[1] It can be a way to reduce hardware costs, since in some cases, server processors cost more per core than energy efficient laptop processors, which may make up for added cost of board level integration.[2] While Moore's law makes increasing integration less expensive, some jobs require lots of I/O bandwidth, which may be less expensive to provide using many less integrated processors.

Applications and services that are I/O-bound are likely to benefit from such physicalized environments. This ensures that each operating system instance is running on a processor that has its own network interface card, host bus, and I/O sub-system, unlike the case of a multi-core server where a single I/O sub-system is shared between all the cores / VMs.
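To make that I/O argument concrete, a toy comparison (the bandwidth figures are illustrative assumptions, not measurements):

```python
# Shared vs. dedicated I/O, the core of the physicalization pitch.

vms = 32
shared_nic_mbps = 10_000          # one 10 GbE NIC shared by all VMs on a
                                  # big multi-core box
per_vm = shared_nic_mbps / vms    # ~312 Mb/s each, before any contention on
                                  # the shared host bus and I/O subsystem

per_node_mbps = 1_000             # physicalized: each small node owns its
                                  # own 1 GbE NIC, uncontended

print(per_vm, per_node_mbps)      # 312.5 vs 1000
```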

So with this definition out of the way, how will Calxeda's Physicalized solution compare to AMD/SeaMicro's Physicalized solution as far as efficiency goes?

How does the Virtualized Network and Storage via AMD/SeaMicro "Freedom SuperComputer Fabric" affect absolute I/O performance? (Hopefully I am getting these terms and concepts straight :))
 

CPUarchitect

Senior member
Jun 7, 2011
223
0
0
Physicalization was a response to trying to cram ever more virtual services onto increasingly expensive CPUs. We simply reached the point where the hardware and operating cost is sometimes lower when you have more CPUs which each consume less power. But that has to compensate for the cost of duplicated hardware and decreased load-balancing efficiency.

Basically, virtualization and integration are a good thing, but you can have too much of a good thing. Two strong oxen are better than 1024 chickens, but an elephant won't increase your productivity.

So I don't think physicalization with weak ARM processors is a solution. Having strong CPUs with virtualization does reduce the total hardware cost; it just shouldn't overshoot the mark and become the problem itself. And Intel got that message and started to focus on performance/Watt instead of absolute performance only.
 

formulav8

Diamond Member
Sep 18, 2000
7,004
522
126
We look at Intel today and conclude they are too big to fail, that 10yrs from now they will simply be bigger and even more untouchable. But I'm not so convinced.

Ding Ding Ding

They could at the very least lose certain valuable markets. If that happens they deserve it, in a certain light, for choking off anyone wanting to make x86 CPUs. I wonder how many times NVIDIA tried to get an x86 license? Now they've gone the ARM route. ARM has staying power.
 

Dravic

Senior member
May 18, 2000
892
0
76
What wall are you talking about? Either way, take a close look at those three statements. Intel and AMD are spending big bucks on dense servers (SeaMicro was worth $334 million). In contrast, Dell has only "started" testing ARM in servers. In fact they've been playing with it for over a year. And the message they're sending out now is that others are welcome to play too. :p

It doesn't matter if the services are high performance computing ones or not. With lighter ones you simply run more of them per CPU. But the 32-bit addressing space of ARM is rapidly exhausted when you add in virtualization and memory fragmentation. Also, workloads change, and you don't want to risk losing money because you can't run more demanding services. So there just is no future for 32-bit.

And besides, ARM has recently presented the ARMv8 64-bit ISA. So it's just a matter of time before they do have chips suitable for the server market. Nobody in their right mind will buy a 32-bit chip from then on. But my point was that ARM will also have to sacrifice performance/Watt for this transition, making it even harder to outperform Intel on any metric.

None of this is going to happen immediately; ARM won't be 20% of the server market next year. But for Intel, AMD, HP, and Dell to all look into the market segment means there is something there. Will Intel shift its R&D and come up with a suitable alternative to offset the new perf/watt paradigm? Of course.

But it won't be easy. They are already pouring money into mobile chips, and I don't exactly see the big wins there yet. ARM is an established low-watt solution already; it's just not used in data centers currently... but


From HP, November of last year..

http://techinsidr.com/hps-unleashes-quad-core-arm-servers-into-the-cloud/

Using technology from an Austin, Texas based startup, Calxeda, HP ($HPQ) managed to cram a whopping 288 quad-core ARM processors into a single 4U server. I’ll do the math for you – that is a mindblowing 1152 cores in a single server!

Plain and simple – this announcement is a big deal because it’s a radical shift in HP’s datacenter strategy. HP’s new Calxeda-based servers are focused on highly specialized cloud customers who home brew their own software solutions.


As I said, an army of cores/servers running custom web-service applications... if the next Facebook writes their apps on low-power ARM servers because it was CHEAPER and scales up, do you think the end users will care?

Intel themselves think it will be 6 to 10% of the market. My guess is that's a downplay, and as cloud computing gets out of diapers it may be much bigger over the next couple of decades.
 

Dravic

Senior member
May 18, 2000
892
0
76
So I don't think physicalization with weak ARM processors is a solution. Having strong CPUs with virtualization does reduce the total hardware cost; it just shouldn't overshoot the mark and become the problem itself. And Intel got that message and started to focus on performance/Watt instead of absolute performance only.


I agree, but will it come from the top down, from Intel, or from the bottom up, from ARM (or some other low-power CPU)?
 

BenchPress

Senior member
Nov 8, 2011
392
0
0
They could at the very least lose certain valuable markets. If that happens they deserve it, in a certain light, for choking off anyone wanting to make x86 CPUs. I wonder how many times NVIDIA tried to get an x86 license? Now they've gone the ARM route. ARM has staying power.
"In 10 years, we should be bigger than Intel" - NVIDIA, 2002

It's quite ironic that they ended up begging Intel for an x86 license and didn't get one (at least not for the money they were offering). So ARM is really their plan B, and a pretty poor one at that. Current designs are nowhere near the performance of Intel's processors, which is the performance level expected for serious gaming. And they have to single-handedly convince all major game developers to support yet another platform. And in the meantime Intel (and AMD) are destroying NVIDIA's low-end market with integrated GPUs. And to top things off, they've got some fierce competition in the ARM market.

Sucks to be NVIDIA right now. Hopefully they'll go for broke and invest everything they've got into developing an ARM processor which can compete with Intel.
 

CPUarchitect

Senior member
Jun 7, 2011
223
0
0
Hopefully they'll go for broke and invest everything they've got into developing an ARM processor which can compete with Intel.
Looks like that's the plan:

“The combination of NVIDIA’s leadership in energy-efficient, high-performance processing and the new ARMv8 architecture will enable game-shifting breakthroughs in devices across the full range of computing – from smartphones through to supercomputers,” said Dan Vivoli, senior vice president, NVIDIA.

Of course, once you look past the marketing speak, it seems quite unrealistic that they'll be able to compete against Intel any time soon. We're talking 2014 for the first ARMv8 processors at the soonest, and by then Intel will have Haswell parts and will be readying things for 14 nm. And NVIDIA hasn't got any experience designing high-performance CPUs, so before blowing hundreds of millions of dollars on it they'll probably take things slow. But they're running out of time...
 

MaxPayne63

Senior member
Dec 19, 2011
682
0
0
"In 10 years, we should be bigger than Intel" - NVIDIA, 2002

12/31/01 NVDA: $21.30
12/31/02 NVDA: $3.84

Even though we can split the atom and could land a man on the moon, I'd bet that slaughtering a goat and examining its liver is still a strong contender for the most accurate way to predict what will happen in ten years.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Physicalization was a response to trying to cram ever more virtual services onto increasingly expensive CPUs. We simply reached the point where the hardware and operating cost is sometimes lower when you have more CPUs which each consume less power. But that has to compensate for the cost of duplicated hardware and decreased load-balancing efficiency.

Basically, virtualization and integration are a good thing, but you can have too much of a good thing. Two strong oxen are better than 1024 chickens, but an elephant won't increase your productivity.

So I don't think physicalization with weak ARM processors is a solution. Having strong CPUs with virtualization does reduce the total hardware cost; it just shouldn't overshoot the mark and become the problem itself. And Intel got that message and started to focus on performance/Watt instead of absolute performance only.

Based on the link you provided, it sounds like one aspect of the "Physicalization" strategy could bring disruption to server CPU price points (despite the cost of duplicate hardware).


From CPUarchitect's link:
Intel Responds to Calxeda/HP ARM Server News: Xeon Still Wins for Big Data

Jon Stokes
November 3, 2011 6:01 pm

Intel’s Radek Walcyzk, head of PR for the chipmaker’s server division, called Wired today with Intel’s official response to the ARM-based microserver news from Tuesday. In a nutshell, Intel would like the public to know that the microserver phenomenon is indeed real, and that Intel will own it with Xeon, and to a lesser extent with Atom.

Now, you’re probably thinking, isn’t Xeon the exact opposite of the kind of extreme low-power computing envisioned by HP with Project Moonshot? Surely this is just crazy talk from Intel? Maybe, but Walcyzk raised some valid points that are worth airing.
Intel is way way ahead of ARM in this segment

The first thing Walcyzk was keen to emphasize was that Intel has officially been on the microserver bandwagon since 2009, when the company announced support for the idea and began talking about it as a segment. Of course, what the term meant then was lower-power Xeons (Lynnfield, to be specific). It was only this past March that Intel began talking about Atom in the server space, well after SeaMicro had paved the way by launching an Atom-based server product that the startup had been working on since 2007. Intel also came around on the Atom issue well after SuperMicro, SGI, and other system vendors had been shipping Atom-based server products since 2009. But Intel's lateness to this game aside, what is clear is that the chipmaker is still very far ahead of ARM in this market by every conceivable metric.

When you combine the vast x86-based software ecosystem with the fact that multiple companies have been shipping Atom-based micro-server products for over two years now, the nascent ARM-based microserver movement is playing catch-up and will be for quite a while. Certainly there has been quite a bit of hype around ARM in the cloud for the past two years or so, but only now are we seeing prototypes like HP’s Redstone materialize, and real products won’t even ship until next year.
Xeon still wins at Big Data

Related to the point above is the fact that all of the ARM microserver numbers for efficiency gains that are being stuffed into slide decks and fed to the press are based on simulations (often with FPGAs), and not on shipping systems. Meanwhile, Intel has had two years to gather data from live cloud workloads running on x86-based production systems, and the company claims that this data suggests something surprising about which chip is the best choice for the majority of these types of power-sensitive, I/O-bound workloads.

Walcyzk told Wired that Intel definitely sees a huge demand for microservers. “This is a category that we think may be up to 6 to 10 percent of the entire x86 market by 2015.” And for two thirds of this growing microserver segment, Intel estimates that Xeon will actually be the best performance per watt per dollar choice, with Atom serving the remaining third.

The idea that Intel’s giant Xeon processors could be a better fit for Hadoop-style, “slow Big Data”, I/O-bound workloads will sound like heresy to anyone who’s watching this space closely right now, but I’m not so quick to dismiss the idea. This is because something has been bothering me about this microserver, “physicalization” trend since it first gained traction two years ago, and that’s the fact that higher levels of die-level integration (i.e. Xeon) should trump board- and chassis-level integration (i.e. a bunch of ARM chips in a rack) every time in terms of cost, efficiency, and performance.

I raised this point two years ago at Ars Technica, and I’ve yet to read a good response to it from the physicalization crowd:

In the final accounting, it still seems that if you’re going to buy, say, 32 cores worth of… processing power, then you’re better off in terms of both cost and wattage with eight quad-core sockets than you are with 32 single-core sockets, no matter how low-power the single-core parts are individually. But if this is true, then why would any vendor use board-level integration to produce a “physicalized” server solution? The likely answer has to do with how processor vendors like Intel price their products.

I ultimately answered the question above by attributing physicalization’s success to the vagaries of Intel’s market segmentation strategy:

In conclusion, Moore’s Law overwhelmingly favors die-level integration, and in theory this should give the price advantage to multicore products. But the real-world pricing structure of [chip] vendors in the multicore era makes it cheaper to buy cores individually. The rationale for physicalization, then, is that it exploits this margin difference in order to pit [Intel's] low-end parts against its high-end parts, in spite of the fact that the vendor’s entire pricing structure depends on the idea that these parts don’t compete with one another.

I then concluded that physicalization was a fad, and that a multicore plus virtualization combo would beat it in the long run. So I predicted what Intel now claims its data is showing, i.e., that a single, properly priced, low-power, many-core Xeon socket can still beat a fistful of ARM sockets for non-compute-bound, highly parallel workloads.

I have to admit that I’ve recently moved a bit away from this position and toward the point-of-view of the ARM camp on this question. I now think that the optimal architecture for Hadoop is a bunch of RAM banks with a cheap CPU core attached to each bank, which is essentially what Calxeda has in the EnergyCard prototype, and what SeaMicro is selling. So for batch Big Data workloads, the real power consumption is going to be on the storage and I/O side of the machine—i.e., in moving data into RAM and keeping it there while a lightweight core grinds through it.

But the jury is definitely still out on what the best architecture is for these workloads. All we know for sure is that Intel's current Xeon is definitely not what the Hadoop crowd wants, hence the excitement around ARM and Atom. This doesn't mean that a future Xeon iteration couldn't slide into this space comfortably, but it would have to be designed for a much lower set of power and price points than the current Xeon family.

Intel seems to be getting this message, because this past March the chipmaker updated its microserver roadmap to include Intel Xeon parts that range from 45W down to 20W. The company will also have a sub-10W Atom part available next year.

In the long run, I agree with Intel's Justin Rattner, who said over a year ago that something like the company's Single-Chip Cloud Computer prototype represents the best architecture for these kinds of workloads. Instead of Xeon with virtualization, I could easily see a many-core Atom or ARM cluster-on-a-chip emerging as the best way to tackle batch-oriented Big Data workloads. Until then, though, it's clear that Intel isn't going to roll over and let ARM just take over one of the hottest emerging markets for compute power.
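To make the article's pricing point concrete, here is a toy version of its 32-cores-two-ways comparison (the chip prices are invented; only the 8-quad-vs-32-single framing comes from the quote):

```python
# Market segmentation in miniature: same core count, very different bills.

quad_price = 400      # hypothetical server-class quad-core chip price
single_price = 60     # hypothetical low-end single-core chip price

cost_integrated = 8 * quad_price       # $3200 for 32 cores in 8 sockets
cost_physicalized = 32 * single_price  # $1920 for 32 cores in 32 sockets

print(cost_integrated, cost_physicalized)
# Die-level integration loses on sticker price here only because the vendor
# prices the two segments so differently; that gap is what physicalization
# exploits.
```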
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
China using MIPS does not make MIPS king any more than anyone using ARM or x86. Today, what's more true is that outside of high-performance areas, MIPS, ARM, and x86 in final products will be largely similar, and their differences will be less ISA-specific than SoC-vendor-specific (x86 has an edge as far as a nice compatible platform, but ARM is working on that). Once Intel gets a process or two farther down the line (i.e., good low-end power/performance), consider it Intel+vendors v. AMD+vendors v. MIPS+vendors[+licensees] v. ARM+vendors[+licensees]. x86 will remain unbeatable in our PCs and notebooks, but elsewhere, where some of that performance may be secondary to costs, things might change, and the ISA will be an artifact of one of the companies involved, not a driving reason for any technical decision.

A few questions I have:

1. How will die size factor into decisions of x86 vs MIPS or ARM?

The way I see things, the smaller the die, the cheaper the CPU vendor can sell the product while keeping the appropriate profit margins intact.

2. How much does FPU factor into the workloads seen by Physicalized Servers?

If not much FPU is needed, why not use a CPU core other than x86?

3. Let's say an AMD MIPS-based "Physicalized Server" takes away some Opteron sales. Does this really matter, provided the AMD-adopted MIPS design is able to keep an appropriate profit margin?

The way I see things, a shift of sales to another ISA (e.g., AMD MIPS) is not necessarily a bad thing, provided the revenue doesn't shift from high profit to low profit at the same time.
 

wlee15

Senior member
Jan 7, 2009
313
31
91
1. How will die size factor into decisions of x86 vs MIPS or ARM?

The Atom (3.7mm2, 45nm), Cortex A9 (6.5mm2, 40nm), and Bobcat (4.6mm2, 40nm) all have really tiny cores, so I would say very little.

3. Let's say an AMD MIPS-based "Physicalized Server" takes away some Opteron sales. Does this really matter, provided the AMD-adopted MIPS design is able to keep an appropriate profit margin?

The way I see things, a shift of sales to another ISA (e.g., AMD MIPS) is not necessarily a bad thing, provided the revenue doesn't shift from high profit to low profit at the same time.

AMD's greatest strengths lie mainly in their ability to design x86 processors and GPUs, qualities that don't necessarily translate well to ARM or MIPS designs.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
A few questions I have:

1. How will die size factor into decisions of x86 vs MIPS or ARM?

The way I see things, the smaller the die, the cheaper the CPU vendor can sell the product while keeping the appropriate profit margins intact.
Yes, but here's an Intel advantage: Intel owns everything, or damn near it. Calxeda had to license ARM cores, implement them, and then pay royalties to ARM as chips are made or sold. I'm sure they had to license something for their network, maybe they went somewhere else for their RAM controller, and who knows what else.

If ARM develops more and more in-house, with minimal increases in royalties, they could make that work well, even for devices with many features. x86 has problems keeping power low enough, even though space isn't as much of an issue, but the margins aspect is an advantage specifically for Intel, as their costs are nearly only R&D, materials, and time. They don't have nearly the number of middlemen to deal with.

There's also an equalizing factor: low-end processors are so small that the pins are starting to take up most of the needed room, so the total cost differences aren't going to be quite so great for a teeny weeny chip.

2. How much does FPU factor into the workloads seen by Physicalized Servers?
Not much, and well enough. The FPU advantage of "physicalized" servers (people used to call servers on cards blades, when I was a youngin'!) is that each CPU can have its own dedicated RAM channel(s). For the most part, current cloud computing can get away with emulated FP, narrow FP, or even SIMD-only (high-latency) FP.

If not much FPU is needed, why not use a CPU core other than x86?
It would be the other way around. Only recently have they finally gotten something together that will be competitive. While AVX2 will certainly best it, ARM's NEON is no slouch, given the clock speeds, and MIPS has been saturating buses and RAM with arithmetic for 15+ years. Also, such cloud-computing-type uses, if they need FP, will either be able to utilize tuned vector math libraries, or be stuck with plain low-ILP scalar FP, with very little in between. Either way, available FP implementations on x86, ARM, and MIPS are enough to saturate a RAM channel or two, so it shouldn't matter either way.

For a $1000 CPU running at 3GHz and 50-100W, needing 3-4 RAM channels to saturate it, yeah, picky FP details will really matter. At low power and high density, the situation looks very different, because for the same amount of wall power, you can just use many more whole CPUs, not sharing RAM bandwidth with each other. So, even if each CPU can more than saturate its own RAM, your task has been divided up between many of them. There is an added cost of more physical RAM, but the idea is that the rest of the system's cost savings and aggregate performance gains will make up for that.

If you can scale your task out near-infinitely, which is what MapReduce and Hadoop attempt to enable you to do for a suitable workload (SQL has hobbled most previous mass-market attempts; SQL is all kinds of wrong, and we could have been doing this years ago if something better had become popular), you end up with performance limiters that aren't like the ones in your desktop. If you can throw a task at 100 servers as easily as at 1 server, and actually implement it more easily than on 1 server, then once memory and network latencies are your main bottleneck, your CPU is fast enough. Faster CPUs, beyond that point, will offer less in performance returns than adding more weak CPUs to the network. Now, where "fast enough" sits will vary, but it won't generally be anywhere near a 3GHz Sandy Bridge.
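A minimal single-machine sketch of that map/reduce pattern (word counting as the stand-in workload; this mimics the idea, not any real Hadoop API):

```python
# Split the job into independent chunks; in a real cluster each map_chunk()
# call would run on a separate cheap node, and because the merge step is
# associative, placement and ordering don't matter.

from collections import Counter
from functools import reduce

def map_chunk(text):
    """Independent per-node work: count the words in one chunk of input."""
    return Counter(text.split())

def merge(a, b):
    """Combine partial results from any two nodes."""
    return a + b

chunks = ["the quick brown fox", "the lazy dog", "the fox again"]
partials = [map_chunk(c) for c in chunks]   # farm these out to N weak CPUs
total = reduce(merge, partials)
print(total["the"])                         # 3
```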

3. Let's say an AMD MIPS-based "Physicalized Server" takes away some Opteron sales. Does this really matter, provided the AMD-adopted MIPS design is able to keep an appropriate profit margin?
First, AMD has no offerings yet, and will likely get beaten to market by Intel on that front. Second, ARM is more likely, due to being well known, regardless of whether any given MIPS implementation is as good as or better than any given licensable ARM implementation. Third, MIPS, like ARM, will sit back and manage sales, support, and future R&D. If AMD adopted a MIPS design, they would be paying MIPS for it, and would then either have nothing to differentiate themselves with, or have to spend years designing a custom MIPS CPU. AMD's lifeblood is x86. Maybe that wouldn't be true if things had gone differently years ago, such as acquiring a better CEO than Ruiz. But today, their choices are success with x86, or turning into the next VIA.

IoW, looking at the one we know is being produced, Calxeda and HP would be reducing sales of Opterons, not of ARM chips. The same would go for MIPS. If AMD adopted ARM or MIPS, and the same thing happened, they would still be losing sales. It would be stupid for them, compared to making a better Bobcat. The other side of the coin is also true: better Bobcats and Atoms, in systems with quality glue, could very well take the thunder away from Calxeda. With ARMv8 still vaporware, there is time for that to happen.

The way I see things, a shift of sales to another ISA (e.g., AMD MIPS) is not necessarily a bad thing, provided the revenue doesn't shift from high profit to low profit at the same time.
True, but doesn't margin always compress, absent extenuating circumstances? The primary advantage ARM has, IMO, is that it won't be any easier for Intel to compete with ARM than for ARM to compete with Intel, and those margins play a role in that.

ISA will matter very little for the final product, at least once ARMv8 comes out. But, that's a customer-visible issue. ISA can and does matter to Intel and AMD. ISA can and does matter to ARM and MIPS. ISA can and does matter to licensees of ARM or MIPS ISAs and cores. It just won't matter to your average CTO (though ARM has the advantage of getting lots of good press lately), since all of them run the same software, or close enough to the same software.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
The Atom (3.7mm2, 45nm), Cortex A9 (6.5mm2, 40nm), and Bobcat (4.6mm2, 40nm) all have really tiny cores, so I would say very little.

According to this image and the following links: http://www.arm.com/products/processors/cortex-a/cortex-a9.php (click under the Performance tab) and http://www.mips.com/products/cores/32-64-bit-cores/mips32-1074k/#specifications

[Image: AMD_Ontario_Bobcat_vs_Intel_Pineview_Atom.jpg]


The die sizes are:

Atom: 9.7mm2 (Intel 45nm)

Bobcat: 4.6mm2 (TSMC 40nm)

Cortex A9 (without FPU): 1.5mm2 on TSMC 65nm

MIPS 1074Kf (with FPU): 4.1mm2 (TSMC 40G) for the dual core. This puts the single MIPS core at 2.05mm2 for TSMC's 40nm.

Now, with those specs out of the way, I wondered how the finished die sizes would compare if L2 caches were figured in.

As an example, each Calxeda Cortex A9 server quad core comes with 4MB L2 cache.

If AMD were to design either Bobcat or MIPS to compete with that for a "Physicalized Server", here is what I come up with (using the spec in the chip architect diagram of 3mm2 for each 512K of L2 cache):

MIPS 1074Kf quad core with 4MB L2 cache: 8.2mm2 for the CPU cores and 24mm2 for the 4MB L2 cache = 32.2mm2 total die size

Bobcat quad core with 4MB L2 cache: 18.4mm2 for the CPU cores and 24mm2 for the 4MB L2 cache = 42.4mm2 total die size

Going by those numbers, I come up with the MIPS 1074Kf quad core having 76% of the die size of the Bobcat quad core.
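For anyone who wants to check the arithmetic, here it is as a quick script (same assumptions as above: 3mm2 per 512K of L2, the listed core sizes, everything on ~40nm):

```python
# Back-of-envelope die size totals for a quad core plus 4MB of L2.

l2_mm2 = (4096 / 512) * 3.0          # 4MB of L2 at 3mm2 per 512K -> 24.0 mm2

mips_quad   = 4 * 2.05 + l2_mm2      # 8.2 + 24.0 = 32.2 mm2
bobcat_quad = 4 * 4.6  + l2_mm2      # 18.4 + 24.0 = 42.4 mm2

print(mips_quad, bobcat_quad)        # 32.2 42.4
print(mips_quad / bobcat_quad)       # ~0.76, i.e. the 76% figure
```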
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
First, AMD has no offerings yet, and will likely get beaten to market by Intel on that front. Second, ARM is more likely, due to being well known, regardless of whether any given MIPS implementation is as good as or better than any given licensable ARM implementation. Third, MIPS, like ARM, will sit back and manage sales, support, and future R&D. If AMD adopted a MIPS design, they would be paying MIPS for it, and would then either have nothing to differentiate themselves with, or have to spend years designing a custom MIPS CPU. AMD's lifeblood is x86. Maybe that wouldn't be true if things had gone differently years ago, such as acquiring a better CEO than Ruiz. But today, their choices are success with x86, or turning into the next VIA.

I have been wondering if AMD could just buy MIPS... the entire company. (I looked it up, and it is nowhere near the price of ARM.)

With Rory Read's (AMD's CEO) past connections to Lenovo and his strategy of "Emerging Markets", "Cloud", and "Low Power", I just have to wonder if the company can somehow connect the dots with MIPS for one of their server product categories.

IoW, looking at the one we know is being produced, Calxeda and HP would be reducing sales of Opterons, not of ARM chips. The same would go for MIPS. If AMD adopted ARM or MIPS, and the same thing happened, they would still be losing sales. It would be stupid for them, compared to making a better Bobcat. The other side of the coin is also true: better Bobcats and Atoms, in systems with quality glue, could very well take the thunder away from Calxeda. With ARMv8 still vaporware, there is time for that to happen.

For whatever reason, AMD seems pretty adamant about not putting RAS features on Bobcat. I suspect this has to do with sales of lower-priced Bobcats potentially eating into sales of more expensive Opterons.
 

wlee15

Senior member
Jan 7, 2009
313
31
91
The older Diamondville Atoms were quite a bit more compact (an entire die including L2 cache, DMI, and FSB is only 26mm2). Also remember that these CPUs have a fair amount of I/O, so you could get into a pad-limited scenario. For example, the Calxeda SoC has the 72-bit DDR3 interface, the flash interface, USB, SATA, and the I/O for the fabric. A SeaMicro CPU or SoC would be a bit simpler, with only the memory and a PCI-E interface needed.
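A toy sketch of what "pad limited" means (the pad count and pitch below are assumed ballpark figures, not Calxeda's real numbers): with perimeter pads, the I/O connections set a floor on die size no matter how small the logic shrinks.

```python
# Perimeter-pad floor on die size.

pads = 400          # assumed: 72-bit DDR3 + SATA + USB + fabric I/O adds up
pitch_mm = 0.06     # assumed ~60 um pad pitch

edge_mm = (pads / 4) * pitch_mm   # pads spread over four edges -> 6.0 mm
min_die_mm2 = edge_mm ** 2        # 36 mm2 floor from I/O alone

print(min_die_mm2)  # the cores could total under 10 mm2 and the die would
                    # still need ~36 mm2 just to fit the pads
```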
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Also remember that these CPUs have a fair amount of I/O, so you could get into a pad-limited scenario.

Sounds very interesting. Is there any way you could explain what this means in layman's terms?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
For example, the Calxeda SoC has the 72-bit DDR3 interface, the flash interface, USB, SATA, and the I/O for the fabric. A SeaMicro CPU or SoC would be a bit simpler, with only the memory and a PCI-E interface needed.

So this would provide not only some die savings, but could it maybe also help with the overall die layout? (helping efficiency in other ways?)
 

Blades

Senior member
Oct 9, 1999
856
0
0
How many of you have an ARM device (other than whatever phone/tab)... something like a PandaBoard? ARM has a looooooonng way to go... Plus, there seems to be more of a demand for end-user multimedia/etc. devices from their licensees... are they in line with the needs of a server environment?