Review Raspberry PI (ARM A72) vs EPYC for DC study. Interesting results.


Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,478
14,434
136
OK, now this is a very specific test, but as disparate as the two systems are, I don't think other scenarios would be that different.

So I got an 8 GB Raspberry Pi (8 GB RAM, 32 GB SD card, quad-core Cortex-A72, ARMv8, 1.5 GHz). It runs the OpenPandemics - COVID-19 WCG units: 4 cores at about 7.5 hours per unit.

My EPYC 7742 does 128 of these same units at 2 GHz in 2.5 hours.

So it would take 32 Pis to do the same work, but 3 times slower. And 32 Pis at 5 watts each is 160 watts, vs the 7742 at 250 watts (motherboard, memory and all). Lower power draw but 3 times longer for the same work means roughly 2 times the energy (1,200 Wh total vs 625 Wh).

Cost... 32 Pis at about $60 each (including power supplies and cables, probably more) would be at least $1,920.

The EPYC is about $4,000 + $480 + $580, or $5,060. (I paid about $3,000 less than that retail price, on eBay.)

So the EPYC effectively costs less in the long run, because it uses about half the electricity for the same work.
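
For anyone who wants to check the arithmetic, here is a minimal back-of-the-envelope script using the figures above (the 5 W per Pi and the rough prices are this post's estimates, not measurements):

```python
# Back-of-envelope check of the Pi-vs-EPYC figures quoted above.
# All inputs are the rough estimates from this post, not measured values.

pi_cores, pi_hours_per_unit, pi_watts = 4, 7.5, 5.0       # one Raspberry Pi 4 (assumed ~5 W under load)
epyc_units, epyc_hours, epyc_watts = 128, 2.5, 250.0      # EPYC 7742 system (board, memory and all)

pis_needed = epyc_units // pi_cores                        # 128 units / 4 cores = 32 Pis
pi_energy_wh = pis_needed * pi_watts * pi_hours_per_unit   # 32 * 5 W * 7.5 h = 1200 Wh
epyc_energy_wh = epyc_watts * epyc_hours                   # 250 W * 2.5 h = 625 Wh

print(f"Pis needed to match 128 units: {pis_needed}")
print(f"Energy per batch: {pi_energy_wh:.0f} Wh (Pi cluster) vs {epyc_energy_wh:.0f} Wh (EPYC)")
print(f"Energy ratio: {pi_energy_wh / epyc_energy_wh:.2f}x")

# Rough hardware cost, again using the prices from this post.
pi_cluster_cost = pis_needed * 60        # ~$60 per Pi incl. PSU and cables
epyc_system_cost = 4000 + 480 + 580      # CPU + board + memory, at retail
print(f"Hardware: ~${pi_cluster_cost} (Pi cluster) vs ~${epyc_system_cost} (EPYC, retail)")
```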

This seems WAY different from what I see when people talk about ARM and efficiency and power usage. Please tell me where I am mistaken in my math, but don't be a jerk if you find the errors of my ways. The run times are real. The Pi is at 7.5 hours and hasn't actually finished a unit yet, with no other tasks running on the Pi. And I am looking at several units on the 7742; one is at 97% after 2:21.


Edit: The first unit finished in 7:50, almost 8 hours. The next 2 units are at 8 hours and still running at 96 and 98%.

And my comment on "wins by a large margin" has to do with total power usage, which is a big deal in data centers, and for me.
 
Last edited:

piokos

Senior member
Nov 2, 2018
554
206
86
Sure, but I mean manufacturers would publish this kind of benchmark result themselves if their claims were true. Ampere Computing published nice charts showing "estimated" performance and power metrics of their ARM Altra server platform in March: https://www.nextplatform.com/2020/03/03/ampere-aims-for-the-clouds-with-altra-arm-server-chip/
Why would AWS post this kind of data? They aren't selling you the chip nor the server. It's a trade secret.
The only reason I can think of is that the results are too underwhelming to be openly published.
Server OEMs share these data willingly, because it's a differentiating factor. It's not about the actual chip. It's about why you should buy Dell over Lenovo etc.

It's slightly different with ARM-based servers, because... no one buys them, no one cares, and only a handful of companies even make anything.
ARM absolutely makes sense in cloud and that's probably where it should stay (at least until most of us use ARM PCs).
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
ThunderX3 is not shipping in volume at this point, and Marvell's recent announcements indicate it may only ever ship in custom bespoke chips for specific large customers.

Exactly. That's why I thought he was talking about ThunderX3, since nobody he knows had even seen one.

I actually thought X2 launched last year, not in 2018. Which is even worse, because I've honestly never heard anyone even mention considering them. Maybe they just aren't easy to get here. Or no one is interested.
Because seriously, who buys experimental stuff like that? You either really need a local server - in which case you probably want something that can be quickly fixed/replaced... or you go cloud (ARM or not).

Cavium was selling ThunderX2 systems for a little while (at a very high price) through a partner website I think? In any case it appears to have been lack-of-interest more than scarcity.
 

piokos

Senior member
Nov 2, 2018
554
206
86
Are you sure about this? macOS and Linux don't share a common source or kernel; not sure how you would get Linux support on Mac without some kind of translation layer, but then that wouldn't really be native...
I didn't say they share a kernel. I said Linux workflows work: same shell (zsh, bash...), similar (or same) software etc.

By comparison, developing for Linux on Windows is still quite an uncomfortable mess. You spend most of the time in some kind of VM - be it Docker, WSL or something else.
At least today we have things like Conda and modern IDEs that let you directly target a remote/virtualized terminal - so at least for a moment you can forget about the cmd horror and backslashes.
Cavium was selling ThunderX2 systems for a little while (at a very high price) through a partner website I think? In any case it appears to have been lack-of-interest more than scarcity.
I don't differentiate between the two unless there's a supply problem.
 

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,628
136
I didn't say they share a kernel. I said Linux workflows work: same shell (zsh, bash...), similar (or same) software etc.

By comparison, developing for Linux on Windows is still quite an uncomfortable mess. You spend most of the time in some kind of VM - be it Docker, WSL or something else.
At least today we have things like Conda and modern IDEs that let you directly target a remote/virtualized terminal - so at least for a moment you can forget about the cmd horror and backslashes.

I don't differentiate between the two unless there's a supply problem.

Sharing a similar terminal/shell seems to be a minor detail in the grand scheme of software development, but I understand what you meant now.
 

blckgrffn

Diamond Member
May 1, 2003
9,110
3,028
136
www.teamjuchems.com
Sharing a similar terminal/shell seems to be a minor detail in the grand scheme of software development, but I understand what you meant now.

As the DevOps resource for our small company (and having been DevOps for a company where I managed infrastructure for thousands, if not tens of thousands, of Windows Server VMs on 1 TB+ RAM dense servers back in 2012), this always gets my goat.

Local development environment should be an IDE and source control. Push a branch, have it get built/go live on a managed VM somewhere that is crazy close to production spec. Run your tests, confirm it works, do the rest. What are you doing with those crazy local servers/containers and garbage? :sob:

In my perfect world the local architecture of your coding devices matters not so long as the tools work on it. The runtime architecture is somewhere else.

*Of course I know there are corner cases where this is less desirable/is impossible* but seriously. Have pity on us that are managing the infrastructure and actually expect your code to work on our standardized (OS/security/performance/dependencies/etc.) platforms :tearsofjoy:
 

xblax

Member
Feb 20, 2017
54
70
61
Why would AWS post this kind of data? They aren't selling you the chip nor the server. It's a trade secret.

No, AWS probably has no incentive to publish such data. But Ampere themselves does. They want others to choose their platform (vs Intel or AMD offerings) and they also offer complete server systems themselves: https://amperecomputing.com/altra/

AMD and Intel are not so secretive about the performance when they launch a new platform. Sure their benchmarks are cherry-picked, but they still provide hard data and not only estimates.

ARM absolutely makes sense in cloud and that's probably where it should stay (at least until most of us use ARM PCs).

Does it really make sense in cloud though?
I think most of the general ARM perception comes from low power devices that have great idle power consumption and standby time. But unlike these low power devices which idle 99% of their time, cloud platforms are under load 99% of their time. I know no credible data that backs up ARM's efficiency in that scenario.

Software development for ARM isn't really a deal-breaker. For many popular programming languages used for server applications it doesn't even matter, because they are platform agnostic (like Java/Python/JavaScript). I'd say even for native languages, software is rarely so hardware dependent that it really matters whether you build it for ARM or x86.

Where ARM really lacks is in terms of platform standardization. Porting the Linux kernel from one ARM system to another is a PITA, because there usually is no way to automatically detect the system configuration/devices, and every system needs a custom bootloader. If Apple switches to ARM it will be interesting to see if they will try to establish a standard (like ARM UEFI) or if they will keep their system closed and locked up like current smartphones.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
I think most of the general ARM perception comes from low power devices that have great idle power consumption and standby time. But unlike these low power devices which idle 99% of their time, cloud platforms are under load 99% of their time. I know no credible data that backs up ARM's efficiency in that scenario.
Actually Anandtech measured performance and power:
  • ARM Apple A13 @ 2.65 GHz has about 1330 pts in GB5 ST consuming 4.6 W
  • x86 AMD Ryzen 3950X @ 4.6 GHz has 1350 pts in GB5 ST consuming 19.4 W

So similar performance at more than 4x lower power consumption. In theory an ARM server could have 4x more cores than an x86 system within the same server TDP; realistically only 2x more cores, to compensate for the lack of SMT, but with much better performance per thread.



[Image: 3950X Power]

[Image: SPEC2006 S865]
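
To make the perf-per-watt math behind those two data points explicit, here is a small sketch (the wattages are the single-thread figures quoted above, and the 225 W socket budget is just an assumed number for illustration):

```python
# Perf-per-watt sketch from the Geekbench 5 single-thread figures quoted above.
# Note these are single-thread power estimates, not all-core load figures.

a13_score, a13_watts = 1330, 4.6      # Apple A13 @ 2.65 GHz (GB5 ST)
zen2_score, zen2_watts = 1350, 19.4   # Ryzen 3950X @ 4.6 GHz (GB5 ST, boosted single core)

print(f"A13:   {a13_score / a13_watts:.0f} GB5 pts/W")
print(f"3950X: {zen2_score / zen2_watts:.0f} GB5 pts/W")
print(f"Power ratio at similar ST score: {zen2_watts / a13_watts:.1f}x")

# Naive core count inside an assumed 225 W socket budget. This ignores the
# uncore, interconnect and memory controllers, and assumes every core runs at
# full single-thread boost power, so it is an upper bound at best.
tdp_budget = 225.0
print(f"Naive cores in {tdp_budget:.0f} W: "
      f"{tdp_budget / a13_watts:.0f} A13-class vs {tdp_budget / zen2_watts:.0f} Zen2-class")
```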
 

blckgrffn

Diamond Member
May 1, 2003
9,110
3,028
136
www.teamjuchems.com
Actually Anandtech measured performance and power:
  • ARM Apple A13 @ 2.65 GHz has about 1330 pts in GB5 ST consuming 4.6 W
  • x86 AMD Ryzen 3950X @ 4.6 GHz has 1350 pts in GB5 ST consuming 19.4 W

So similar performance at more than 4x lower power consumption. In theory an ARM server could have 4x more cores than an x86 system within the same server TDP; realistically only 2x more cores, to compensate for the lack of SMT, but with much better performance per thread.




Well, with some perfect scaling going on. Interconnecting cores is "expensive" - it takes silicon and power.

Where are the Apple 32+ core server CPUs that we can test to see how they operate under full load? That's a rhetorical question :p

Until then, it will be just Apples and Oranges.

:tearsofjoy: I'll show myself out.
 
  • Like
Reactions: Tlh97

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,628
136
Actually Anandtech measured performance and power:
  • ARM Apple A13 @ 2.65 GHz has about 1330 pts in GB5 ST consuming 4.6 W
  • x86 AMD Ryzen 3950X @ 4.6 GHz has 1350 pts in GB5 ST consuming 19.4 W

So similar performance at more than 4x lower power consumption. In theory an ARM server could have 4x more cores than an x86 system within the same server TDP; realistically only 2x more cores, to compensate for the lack of SMT, but with much better performance per thread.




You're comparing apples and pumpkins when we're talking about trees. This post is completely irrelevant.
 
  • Like
Reactions: Tlh97

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,478
14,434
136
Actually Anandtech measured performance and power:
  • ARM Apple A13 @ 2.65 GHz has about 1330 pts in GB5 ST consuming 4.6 W
  • x86 AMD Ryzen 3950X @ 4.6 GHz has 1350 pts in GB5 ST consuming 19.4 W

So similar performance at more than 4x lower power consumption. In theory an ARM server could have 4x more cores than an x86 system within the same server TDP; realistically only 2x more cores, to compensate for the lack of SMT, but with much better performance per thread.



Talk about cherry picking... First, you should use an EPYC, a server CPU. Next, as said, you have to compare against an ARM CPU with 32 cores or more. Where is that?
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
Well, with some perfect scaling going on. Interconnecting cores is "expensive" - it takes silicon and power.

Where are the Apple 32+ core server CPUs that we can test to see how they operate under full load? That's a rhetorical question :p

Until then, it will be just Apples and Oranges.

:tearsofjoy: I'll show myself out.
We do have the 64-core ARM Graviton2, which destroyed the 32c/64t AMD Zen1 EPYC Naples - putting aside power consumption due to 14nm vs 7nm, to be fair. The raw performance of ARM cores is dangerous already. I have to admit that Zen2 Rome is better than the N1-core-based G2. But Zen1 was destroyed. Quite a big achievement for the ARM ecosystem.

And don't forget that the 64-core G2 is a monolithic CPU, the biggest in the world. And the 80-core Altra or 128-core Altra MAX will take the crown, though both are based on the Neoverse N1 ARM core. Pretty interesting achievement too. AMD cannot make a monolithic 64-core Zen2 server chip due to the ~1005 mm² total area it would need - probably only a 32-core Zen2 monolith. An N1 core is 1.4 mm² vs 3.6 mm² for a Zen2 core.
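
As a rough illustration of that area argument, here is a core-area-only sketch using the per-core figures above (caches, interconnect and I/O dominate real dies, so this is nowhere near a real floorplan):

```python
# Core-area-only sketch of the 64-core argument, using the figures quoted above.
# Real dies are dominated by caches, interconnect and I/O, so these numbers are
# illustrative only.

n1_core_mm2 = 1.4    # Neoverse N1 core area, as quoted above
zen2_core_mm2 = 3.6  # Zen 2 core area, as quoted above
cores = 64

print(f"{cores} N1 cores:    {cores * n1_core_mm2:.0f} mm^2 of core area")
print(f"{cores} Zen 2 cores: {cores * zen2_core_mm2:.0f} mm^2 of core area")

# For reference, AMD's actual 64-core Rome spreads its cores across eight
# ~74 mm^2 chiplets plus a separate 14 nm I/O die, rather than using one
# monolithic ~1000 mm^2 die.
```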

Remember that Neoverse N1 lacks the TME instructions for better multi-threaded workloads like SQL, etc. This can speed up some specific server loads by anywhere from +40% up to +500% (maybe that's why it lost so much in some Phoronix tests?). x86 has had these instructions since Haswell IIRC (there it's called TSX). There was no info about TME for Neoverse V1, so let's see. But I guess the V1 core is for HPC and not for SQL (N2, which might have TME, will handle that better).

So yes, ARM Neoverse N1 is not there yet. But it's baiting.
And the V1 Neoverse with a +50% IPC uplift looks super dangerous on paper. ARM said it will support a chiplet architecture similar to AMD's, DDR5 and HBM2E memory, up to 192 cores, and SVE matrix multiplication. V1 has all the parameters on paper to be the server No. 1 for 2021. If all this comes true then V1 can take the performance crown in servers and beat Zen3 Milan and Intel's Sapphire Rapids. That's why Nvidia bought ARM outright, IMHO. Let's see next year's battle among V1, Zen3 and SPR. And maybe Nuvia will leak some ES performance too. But definitely a pretty bad situation for x86 in servers.
 

piokos

Senior member
Nov 2, 2018
554
206
86
No, AWS probably has no incentive to publish such data. But Ampere themselves does. They want others to choose their platform (vs Intel or AMD offerings) and they also offer complete server systems themselves: https://amperecomputing.com/altra/
But ARM chips aren't really aimed at small businesses that might be interested in such comparisons, benchmarks, public power data and so on.
Big buyers, especially large datacenters, get this data from the manufacturer. Or measure everything they need during platform testing.

You have to understand that enterprises and consumers buy hardware differently.
For example: you want to set up a new database. You call a company that does this. They tell you what hardware is recommended, what they can provide and what you'd have to get yourself. They create a testing environment and help you try your workflows. And they tweak the system for your needs, so it doesn't have to reflect some figures from the datasheet.
In other words: you get most of the data yourself anyway.

Does it really make sense in cloud though?
I think most of the general ARM perception comes from low power devices that have great idle power consumption and standby time. But unlike these low power devices which idle 99% of their time, cloud platforms are under load 99% of their time. I know no credible data that backs up ARM's efficiency in that scenario.
The only data you need is the fact that AWS offers ARM instances that match x86 for 2/3 of the price.

It's cloud. You don't pay the electricity bills, you don't fix the servers, you don't have to take care of the heat. You pay for usage. Nothing else matters.
Software development for ARM isn't really a deal-breaker. For many popular programming languages used for server applications it doesn't even matter, because they are platform agnostic (like Java/Python/JavaScript). I'd say even for native languages, software is rarely so hardware dependent that it really matters whether you build it for ARM or x86.
Out of the top 3 databases:
- Oracle and MS SQL Server are x86-only,
- MySQL started supporting ARM in the last release, but only on RH/CentOS/Oracle Linux

I could find other examples. Yes, you can develop whatever you want. But some things just don't work (yet).
Where ARM really lacks is in terms of platform standardization.
Which is precisely why it makes most sense in cloud services. Because the end user doesn't care what hardware is underneath.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
Are you for real?
You didn't see the V1 core presentation slides?
ARM takes servers seriously (development of the X1/V1 core started back in 2016 when SoftBank bought ARM).

It looks like V1 is ARM's main frontal attack in servers while N1 was only pre-warming the guns:
  • 1st slide: +50% IPC over N1 (remember N1 core has IPC roughly similar to Zen2)
  • 2nd slide: +82% FPU IPC uplift over N1 (bfloat16 support allows double throughput for ML)
  • 3rd slide: coherent in-package chiplets (copy of AMD chiplet architecture + NPU accelerators on top of that)
  • 4th slide: N2 platform up to 192-cores and 350W TDP (water cooled like Fujitsu?)
  • 5th slide: Comparison to AMD Zen2 (traditional 64c) and Intel Cascade Lake (traditional 28c) shows DOUBLE performance per thread and almost double performance per socket compared to 64-core AMD Zen2 Rome, despite having only 96 cores and no SMT.
  • 1st slide: you can see DDR5, HBM2E and PCIe5.0 for V1


It looks very real to me. BTW Graviton3 based on this V1 core is probably taped out already; they will boot first chip samples in December and are aiming for mass production in H2'21. I expect Amazon to use a 5nm process.



[Images: Neoverse V1 presentation slides - Neoverse-crop-10, Neoverse-crop-12, Arm-Neoverse-2020-Roadmap-CCIX-and-CXL, Arm-Neoverse-N2-Platform-2020-Update, Neoverse-crop-15]
 
Last edited:

beginner99

Diamond Member
Jun 2, 2009
5,208
1,580
136
Have pity on us that are managing the infrastructure and actually expect your code to work on our standardized (OS/security/performance/dependencies/etc.) platforms

Lol. Our IT always hides behind standards and policy, but when you look closer it's just hiding so they don't have to admit how clueless they are and don't have to do any work. (Not saying you are, but this "standardized platform" BS triggered me.)

- We can't have any Linux servers - Windows-only policy (because they need a GUI, because they're clueless)
- Can't have a non-standard PC like a workstation for doing the actual compute work I need to do, not even a Windows one
On the plus side I can simply shift any blame for non-delivery onto IT, and my management knows IT is bad.
- Not allowed to run VMs on your laptop (learned this after needing to upgrade to a new version of VMware Workstation, which was denied for this reason)
- Windows 10 is a mess - there are different versions out in the wild within the company, so it's not standardized at all
This is an issue because you get a UAT machine and everything works, but then it doesn't work for some users because the app has a known bug on the specific Win10 subversion those users are running. So the tests were a waste of time and you now have 10s to 100s of users with a broken new version of a piece of software.

/end rant
 

Brunnis

Senior member
Nov 15, 2004
506
71
91

piokos

Senior member
Nov 2, 2018
554
206
86
Lol. Our IT always hides behinds standards and policy
If you don't like how your company runs the business, tell them. Or quit. Why mock them here? What next? Your mum burnt the toast?

Standards and policies are what hold enterprise IT together. In fact, working in IT is mostly about standards and policies (and risk management).
- We can't have any Linux servers - Windows only policy (because they need a gui because clueless)
Windows-only policy is pretty standard in smaller companies - just easier to manage privileges and get support. It happens in larger companies as well. Essentially, they get a Linux server only when some solution can't run on Windows or is MUCH better on Linux.

If you think a Linux server would be a much better choice for something, tell your employer.
- Can't have a non-standard PC like a workstation for doing actual compute work I need to do, not even a windows one
Non-standard PC as in DIY? In a company? You're 17 or what?


Seriously, WTF?
That whole comment looked like you've been working for 2 months and you're shocked by all the formalism compared to building PCs in a basement.
And I don't know why it appears here.
Why don't you open an appropriate discussion, like "I don't understand IT". I may be able to help...



Insults aren't allowed.


esquared
Anandtech Forum Director
 
Last edited by a moderator:

teejee

Senior member
Jul 4, 2013
361
199
116
If you don't like how your company runs the business, tell them. Or quit. Why mock them here? What next? Your mum burnt the toast?

Standards and policies are what hold enterprise IT together. In fact, working in IT is mostly about standards and policies (and risk management).

Windows-only policy is pretty standard in smaller companies - just easier to manage privileges and get support. It happens in larger companies as well. Essentially, they get a Linux server only when some solution can't run on Windows or is MUCH better on Linux.

If you think a Linux server would be a much better choice for something, tell your employer.

Non-standard PC as in DIY? In a company? You're 17 or what?


Seriously, WTF?
That whole comment looked like you've been working for 2 months and you're shocked by all the formalism compared to building PCs in a basement.
And I don't know why it appears here.
Why don't you open an appropriate discussion, like "I don't understand IT". I may be able to help...

Please stop with your insults.
 
  • Like
Reactions: beginner99

LightningZ71

Golden Member
Mar 10, 2017
1,627
1,898
136
Lol. Our IT always hides behind standards and policy, but when you look closer it's just hiding so they don't have to admit how clueless they are and don't have to do any work. (Not saying you are, but this "standardized platform" BS triggered me.)

- We can't have any Linux servers - Windows-only policy (because they need a GUI, because they're clueless)
- Can't have a non-standard PC like a workstation for doing the actual compute work I need to do, not even a Windows one
On the plus side I can simply shift any blame for non-delivery onto IT, and my management knows IT is bad.
- Not allowed to run VMs on your laptop (learned this after needing to upgrade to a new version of VMware Workstation, which was denied for this reason)
- Windows 10 is a mess - there are different versions out in the wild within the company, so it's not standardized at all
This is an issue because you get a UAT machine and everything works, but then it doesn't work for some users because the app has a known bug on the specific Win10 subversion those users are running. So the tests were a waste of time and you now have 10s to 100s of users with a broken new version of a piece of software.

/end rant
Having worked in IT for much of my professional career, I can understand your frustrations with things that seem just blindingly obvious to you, but, let me assure you, things are the way that they are because of lessons learned over decades of trying to make that hunk of sand, plastic, and metal reliably function, day in and day out, and so that we can get another one for you ASAP when the day comes that something in it goes sideways.

Our budgets have been slashed to the bone in the name of quarterly profits. We can't hire seasoned professionals in most cases, as management is convinced that this stuff is so simple any old water-filled meat sack with an A+ cert can pull it off, so, in most places, you've got a couple of guys that know how everything works, and a variable-sized group of grunts that can maybe pull off throwing an image on bare metal and plunking it on your desk. No one wants to pay the price for an admin that can speak Linux from a command prompt and can diagnose why your ext3 boot volume won't mount on your super critical server at 2:00 AM. They don't want to pay the real price for a few one-off high-end workstations when they know that your bog-standard OptiPlex with an i7 and maybe an add-in card can still complete the job, just not as quickly as you just know that a quad-video-card Threadripper with 128GB of fast RAM could do it in.

We keep things running and largely safe by having a fleet of as few different machines as possible running as few varieties of Windows 10 as we can manage (after all, deity forbid that the eight-year-old DB program that was hand-written by an intern that was a friend of the CFO's kid will actually execute on anything more recent than build 1709), so that we can use as few bodies as possible to push out patches, hotfixes, BIOS updates, etc. the instant they survive Steve taking an example machine and testing it there for a few minutes. After all, we've got an exec suite of post-operative brain donors that, while squeezing us for manpower, and shipping off anything and everything to contractors and third world countries, are dead set on opening every blasted email that they receive from an unknown sender and rushing to click on every link possible in a race to see who can get every machine locked with ransomware the quickest.


So, yeah, please whine to me about how you can't get a fancy ten-thousand-dollar Ferrari of a computer on your desk so that you can do whatever it is that you think is the absolute most important thing happening at the company twice as fast, so that you can play on your phone for twice as long as you already do at your desk, and tell me how it's IT and their flaming incompetence that's holding you back from curing cancer while happening to invent a faster-than-light engine for NASA on the side...

/RANT
 
  • Like
Reactions: NTMBK

beginner99

Diamond Member
Jun 2, 2009
5,208
1,580
136
Non-standard PC as in DIY? In a company? You're 17 or what?

When the standard is at best a quad-core ULV laptop, it's hard to get anything done besides PowerPoints.

The issue is that standards are fine, but some flexibility is needed, and not knowing when and where just screams incompetence and rigidity.
 

NTMBK

Lifer
Nov 14, 2011
10,208
4,940
136
It looks like V1 is ARM's main frontal attack in servers while N1 was only pre-warming the guns:

What's with all this violent rhetoric? I've noticed this in some of your other posts, too, talking about one chip "destroying" another. We're talking about two corporations designing computer chips. There are no guns, and no destruction. One company might sell more chips than the other - cool. But seriously? Chill out.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,248
136
Interesting thread at times.

Apple = Apple who? They're dead to me.
ARM = It's cool in my phone and tablet.
Server = I could care less what's in them.

Carry on as it's somewhat interesting as a boredom buster.
 

soresu

Platinum Member
Dec 19, 2014
2,612
1,812
136
BTW Graviton3 based on this V1 core is probably taped out already
That's assuming a lot.

N2 will provide a very substantial increase in performance while having a significantly lower power draw and lower area per core.

In business segments that serve cloud uses they may prefer a higher number of cores/threads to higher absolute IPC when that IPC comes at the cost of significantly increased power draw and silicon area.

Though businesses that are pushing for absolute IPC will probably pick V1 for their solutions, i.e. off-the-shelf HPC like SiPearl, which announced the first use of Zeus.
 

blckgrffn

Diamond Member
May 1, 2003
9,110
3,028
136
www.teamjuchems.com
What I wanted to point out, without massive quoting, is that I was talking about DevOps. This may or may not be part of IT.

This usually means the people who standardize and automate the infrastructure resources (usually cloud: on-prem, off-prem or hybrid) so developers can spend their time coding, architecting and doing the brain/creative stuff instead of worrying about why the code they wrote doesn't work for some other guy in the group on his local instance, only to find out it was some trivial configuration error.

Maybe setting up vms and servers is a fun distraction from the day to day development practices but you get paid too much for that. Get back to work 😛

(seriously though it helps scale solutions so that you don’t solve trivial problems n-many times, where n is the number of development and QA folks working)
 
Apr 30, 2015
131
10
81
There is some ARM Mobile road-map information appearing from Devcon 2020. Apparently AArch32 will not be present in 'Makalu', which is due in 2022. Maybe Anandtech can glean more from the event.
[Image: Arm mobile roadmap slide (1602104224949.png)]