Discussion AWS Graviton2 64 vCPU Arm CPU Heightens War of Intel Betrayal

Status
Not open for further replies.

Nothingness

Diamond Member
Jul 3, 2013
3,063
2,047
136
Wut?

Neoverse N1/Ares is BASED on Cortex A76/Enyo, but it is not exactly the same.

They used the same execution logic, but the uncore elements in N1 are architected for server/datacenter use.
That's why I explicitly used the term "core". That's very similar to what Intel does with cores that are almost identical across their product range while the interconnect, cache sizes, etc. change. Same thing, really. So yes, both Intel and ARM use almost the same core from tablet to server. And I can't see why Apple could not achieve that.
 

Nothingness

Diamond Member
Jul 3, 2013
3,063
2,047
136
Once again the issue is why do you even CARE about these numbers? Is the goal understanding or redacted?

Why do people say that IPC (and IPC equivalents) can't be compared across frequencies? Because the comparison is misleading IF you are trying to use IPC to gauge some aspect of the micro-architecture.
If I want to compare two branch predictors, I want to keep *everything* else identical to see which predictor delivers higher IPC. If I run one core at twice the frequency, now I can't tell if the lower IPC of the faster core is because the branch predictor is not as good, or if it's because the faster core is simply spending more cycles waiting on RAM.
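A toy model (my own construction, not from the post, with hypothetical numbers) makes the memory-wall point concrete: hold the core and the DRAM latency in nanoseconds fixed, and the stall cost in *cycles* grows with frequency, so measured IPC falls even though the microarchitecture is identical.

```python
# Hypothetical numbers throughout: `work` retired instructions costing
# `compute_cycles` of core time plus `misses` DRAM accesses. DRAM latency
# is fixed in nanoseconds, so its cost in cycles scales with the clock.
def ipc(freq_ghz, work=1_000_000, compute_cycles=500_000,
        misses=10_000, dram_ns=80.0):
    stall_cycles = misses * dram_ns * freq_ghz  # ns -> cycles at this clock
    return work / (compute_cycles + stall_cycles)

print(ipc(2.6))  # identical core at 2.6 GHz: higher apparent IPC
print(ipc(5.0))  # same core at 5 GHz: lower apparent IPC, same predictor
```

So a lower IPC at the higher clock tells you nothing about the branch predictor by itself, which is the point being made.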

BUT
- that's not what we are doing here AND
- the comparison doesn't go the way you want.

The comparison here is ultimately: what is a better design direction? Speed demon or brainiac? Of course "better" is a flexible word, but we're treating it as some combination of
- smaller core
- lower power
- higher performance (on GB, SPEC, browser, ...)

So what we ACTUALLY have is two cores that get more or less equal results across a wide range of code, one achieving that by
- 5GHz
- much higher power
- core ~twice as large (subject to quibbling about uncore, process, ...),
one achieving that at
- 2.6GHz.

Arguments about "exact" IPC are moronic in this context, demonstrating an utter inability to pick up on what is important, namely that core A achieves essentially the same results as core I through very different means.
So what do you do with that info?

At a business level, it suggests that core A has a bright future ahead of it.
At the DESIGN level, it is interesting to consider the various mechanisms by which core A manages to achieve such a spectacular degree of "work done per cycle".

Saying that core I is hampered by running faster is completely missing the point. Well, duh, OF COURSE core I is hampered by running faster! That's why team A put all their effort into a brainiac design, not a speed demon design. Team I is welcome to go back to the drawing board and run their core at 2.6 or 3 or 3.5GHz.
But there's something insane about simultaneously saying
- of course A can do well because they only have to run at low frequencies; everyone knows that at higher frequencies you spend ever more time waiting on DRAM AND
- therefore what team I should do is reach for ever higher frequencies...

The discussion the adults here are having is not about rah rah team A vs team I. It is about this: given the realities of power, transistor size (high frequencies mean larger transistors and cells), frequency scaling (both transistors and metal) and likely smaller reticles going forward, how much should future CPUs push on the speed side vs the brainiac side?
You're not helping if your contribution to that is tribal double-speak along the lines of "sure A does really well --- but they're cheating by using large caches [or smarter design or lower frequency or whatever]".
There's no such thing as "cheating". There is design that is more or less fit for the purpose and the future of technology. You're not helping team I by convincing their marketing team to double down on even higher frequencies in spite of how those have proved a dead end over the past five years!

Maynard, you don't seem to be able to learn from your previous mistakes: don't point your gun at me. I try to be as factual as possible, and I'm defending the ARM chips with data against some x86 fanatics. But I guess an Apple fanatic has a hard time swallowing that.
 
Last edited by a moderator:
  • Haha
Reactions: Tlh97 and lobz

Nothingness

Diamond Member
Jul 3, 2013
3,063
2,047
136
You should be ashamed to call yourself a moderator here. You're calling a troll one of the actual people designing these chips - it's utter insanity.
He really called me a troll? How ironic and funny. Isn't it a violation of forum rules to call someone a troll? Calling someone a fanboi is. I've put him on ignore, the first time I've had to put a moderator on ignore in more than 20 years on various forums.

For the love of god, stop the incessant bickering and idiotic comments and denial and trolling. All you're achieving is driving the actual people who have knowledge and can give some insight on the topic away from the site in sheer disgust.
Though I've considered leaving the forum, at the moment I only play with the ignore button. There are still many people that have contradicting and/or interesting points of view :)
 
  • Like
Reactions: CHADBOGA and teejee

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
I'm defending the ARM chips with data against some x86 fanatics. But I guess an Apple fanatic has a hard time swallowing that.
Isn't it a violation of forum rules to call someone a troll? Calling someone a fanboi is.
@Nothingness

This is a very interesting set of quotes.

I noticed you said you're "defending with data". Did I miss one of your posts by accident that shows the data? Or can you explain what you mean?
 
  • Like
Reactions: Tlh97 and Hitman928

naukkis

Senior member
Jun 5, 2002
895
773
136
There is a big difference between SAYING score/GHz correlates to IPC, and PROVING it. Unless there is proof, it is nothing better than anecdote.


We have source code with defined instructions to do whatever it will do. Machine-level instructions don't matter, as there are multiple possible ways to translate those source-code instructions even for the same ISA. The whole point of SPEC is to offer a source-code-based benchmark that can be translated into CPU-specific instructions freely.

Score/GHz is that benchmark's IPC. There's zero point in making a machine-instruction-level comparison - in that race, just adding the desired number of NOPs to the instruction flow makes your IPC rise, exactly as high as you want, if the CPU hardware is made to execute NOPs fast.
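The distinction being drawn can be sketched in a few lines (numbers are illustrative, not measurements): "benchmark IPC" is score per GHz, which NOP padding cannot improve, while hardware instructions-per-cycle can be inflated by NOPs the core executes essentially for free.

```python
# "Benchmark IPC": useful work completed per GHz of clock.
def score_per_ghz(score: float, freq_ghz: float) -> float:
    return score / freq_ghz

# Hardware IPC: raw retired instructions per cycle, gameable with NOPs.
def hw_ipc(instructions: int, cycles: int) -> float:
    return instructions / cycles

base = hw_ipc(1_000_000, 400_000)     # 2.5 -- useful work only
padded = hw_ipc(1_600_000, 400_000)   # 4.0 -- 600k NOPs eliminated for free
assert padded > base                  # hardware IPC inflated by padding
# The benchmark score (and thus score/GHz) is untouched by the NOPs.
print(score_per_ghz(40.0, 2.6))
```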
 
Last edited:

coercitiv

Diamond Member
Jan 24, 2014
6,626
14,036
136
Turns out that rebuking someone's absurd views on the topic does not help the poor state of the thread after all, general desire for absurd conflict still remains, and the result is pretty much the same.

Lesson learned, again, for the nth time, until the next time.
 
  • Like
Reactions: Tlh97 and Elfear

insertcarehere

Senior member
Jan 17, 2013
639
607
136
Wall after wall of text, and some of you guys explain nothing.

I already said this here and I'll say it again, LOUD:
YOU CAN NO LONGER PROPERLY MEASURE IPC ON NEW CPUS IN THE NEW MULTICORE ERA!!!!!

These days, even in single-core, single-thread applications, multicore CPUs share resources like L2/L3 caches in order to improve performance; some do it better than others, some share more than others, and this affects single-thread performance tremendously.
In multi-core with multi-threading, the CPU giving the best throughput, which best balances the resources across cores, will also be the more efficient in multithreaded apps.
Finally, trying to measure multicore performance based on the result of a single core/thread test is totally flawed because of all that.

So now we are moving the goalposts to saying that single-threaded performance/IPC doesn't matter anymore and overall multi-threaded throughput is king....

If only the ARM ISA had small, power- and area-efficient cores that could be stacked wholesale into chips at low cost. Oh wait, that describes every single one of the Cortex cores they offer.

Neoverse E1 is touted to be under 0.5mm^2 on 7nm, including SMT; nothing from x86 really comes close at this point in time.
 
Last edited:

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
We have source code with defined instructions to do whatever it will do. Machine-level instructions don't matter, as there are multiple possible ways to translate those source-code instructions even for the same ISA. The whole point of SPEC is to offer a source-code-based benchmark that can be translated into CPU-specific instructions freely.

Score/GHz is that benchmark's IPC. There's zero point in making a machine-instruction-level comparison - in that race, just adding the desired number of NOPs to the instruction flow makes your IPC rise, exactly as high as you want, if the CPU hardware is made to execute NOPs fast.
1) Agreed, pure IPC evaluation is purely scientific and has little application to the real world, where raw SPECint, SPECfp, and other benchmark results are all we care about, and normalizing to GHz doesn't matter one lick. However, I didn't make the IPC claims; someone else did. I'm a curious person, so I'm curious about it, and I haven't been able to verify their claims, and they haven't backed them up with proof. That's all this is.

2) I agree that the instructions given by SPECint are to achieve a certain task, but there are many steps along the way that can cause variances in results. So when someone says SPEC/GHz is IPC, I want to see the proof. The reason I ask this is:

- Shouldn't we consider the purported (but not verified) 8 or 10% difference in instructions retired between the two ISAs?
- Shouldn't we consider differences in benchmark results depending on compiler? Do we actually know the differences between Clang/LLVM as part of Xcode, and whether it has any benefit or detriment compared to Clang/LLVM on Ubuntu on the same machine? I ask because as I understand, Xcode compiles in a hardware/device-specific fashion to optimize the application for that device. To the best of my knowledge Clang/LLVM on Ubuntu and Windows doesn't necessarily do so. Isn't this a potential source of variance?
- Specific to SPEC/GHz, shouldn't we also consider differences between reported boost score and average clock speed actually seen during testing?

This is not a comprehensive list, and again, these are just questions I'm asking of those who claim there is no real difference between SPEC/GHz and IPC. While individually small, such differences do compound. When we introduce a bunch of small (and easily dismissed, it seems) sources of error, we end up with the unverified (and possibly wrong) assumption that IPC scales with SPEC score.
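The compounding worry can be made concrete with a tiny sketch (the bias factors below are hypothetical, chosen only to show the multiplicative effect):

```python
# Hypothetical per-source biases: instruction-count disparity between
# ISAs, compiler/flag differences, reported-vs-actual clock. Each looks
# dismissible alone; multiplied together they distort a SPEC/GHz-as-IPC
# reading noticeably.
def compound(factors):
    total = 1.0
    for f in factors:
        total *= f
    return total

biases = [1.08, 1.03, 1.04]
print(f"combined distortion: {compound(biases):.3f}x")
```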

Please understand, this is all just me being inquisitive. I get the sense that there is no way the A13's IPC lead is anything but substantial. I am just curious how big it is. And that requires showing that the above factors have been controlled for, which wasn't done before, as best I can tell.

And also when people started making very specific claims ("+83%", "+80%", etc.) about how big the IPC lead is, I got excited and intrigued that they actually had the proof, but it seems that they don't actually have the data.
 

name99

Senior member
Sep 11, 2010
496
382
136
https://www.anandtech.com/show/14892/the-apple-iphone-11-pro-and-max-review/2

Shows the A13 as a separate unit. An on-board memory controller or cache can be shared by more than one type of compute unit. A core has one bus that loads in and out through the cache hierarchy. Now, if the AMX system can transfer data directly register to register, AMX is in the CPU. I'll even accept it if it's on the same L1, maybe even L2, but then we're returning to Bulldozer land, where two quasi-cores with independent L1s share an FPU and L2.

Not the best source but...(bloomberg)

How can you talk about 'kids who understand the real issues', only to go ahead the next minute and equate 'in the core' and 'on the CPU'?

OMG!!! That's all I'll say.
You kids have fun.
 

RetroZombie

Senior member
Nov 5, 2019
464
386
96
So now we are moving the goalposts to saying that single-threaded performance/IPC doesn't matter anymore and overall multi-threaded throughput is king....
If it's 2020 and it still isn't, oh well...

Neoverse E1 is touted to be under 0.5mm^2 on 7nm, including SMT; nothing from x86 really comes close at this point in time.
I was trying to measure something with my post. Let's try to measure another:
From the Vega release: Second of all, we have a formal die size and transistor count for Vega 10. The GPU is officially 486mm2, containing 12.5B transistors therein. That amounts to 3.9B more transistors than Fiji – an especially apt comparison since Fiji is also a 64 CU/64 ROP card – ...
Talking to AMD’s engineers, what especially surprised me is where the bulk of those transistors went; the single largest consumer of the additional 3.9B transistors was spent on designing the chip to clock much higher than Fiji. Vega 10 can reach 1.7GHz, whereas Fiji couldn’t do much more than 1.05GHz...


How big would the ARM core grow if it were designed to run at 2x the clock (instead of the ~2.0GHz most ARM chips run at), do you know?
 

insertcarehere

Senior member
Jan 17, 2013
639
607
136
If it's 2020 and it still isn't, oh well...


I was trying to measure something with my post. Let's try to measure another:
From the Vega release: Second of all, we have a formal die size and transistor count for Vega 10. The GPU is officially 486mm2, containing 12.5B transistors therein. That amounts to 3.9B more transistors than Fiji – an especially apt comparison since Fiji is also a 64 CU/64 ROP card – ...
Talking to AMD’s engineers, what especially surprised me is where the bulk of those transistors went; the single largest consumer of the additional 3.9B transistors was spent on designing the chip to clock much higher than Fiji. Vega 10 can reach 1.7GHz, whereas Fiji couldn’t do much more than 1.05GHz...


How big would the ARM core grow if it were designed to run at 2x the clock (instead of the ~2.0GHz most ARM chips run at), do you know?

If multi-threaded throughput is king (as you insinuate), why design a core for higher clock speeds? Far more efficient to put more cores in for throughput instead.

For all the grumbling about how Zen 2 and Intel cores are designed for higher clocks, server parts with those cores largely run at 2.5-3.5 GHz anyway, within the ballpark of contemporary ARM designs.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
No matter how much I would have liked to get a K12, AMD did the right thing back then, they were too weak to track two targets, and x86 still is the preferred solution for servers and end user machines.

Edit: note AMD also failed terribly with their Cortex-A57 based chip which reinforces their decision to cancel K12.
As Steve Jobs said: "Go where the puck is going and not where it is right now (Wayne Gretzky)"
And this is exactly what Amazon is doing - they think long term.
Now AMD has a bunch of money and is still betting on a dying horse. Not sure how long that is going to work. With new cores based on ARMv9 and SVE2, the situation will become much harder for x86 than it is now with the relatively small and weak A76.
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
...that no OEM wants to sell and no customer has an interest in. AMD has trouble getting their EPYC x86 into OEMs. What do you think their chance of success with an ARM CPU would be? Let me help you: ZERO.
I partly agree. But instead of beating Intel by 10% better efficiency in the x86 yard, AMD would have been able to smash Intel with 2x the efficiency with K12. Such a huge advantage could bring many more customers than the x86 ISA legacy alone (and this will only become stronger over time). But who knows, maybe Zen3 will bring double IPC with double efficiency and x86 will enter the smartphone market :)
 

Richie Rich

Senior member
Jul 28, 2019
470
229
76
@everybody: bla bla bla .... Apple will never move into servers .... bla bla bla...
Reality: ex-Apple architects started Nuvia

@everybody: bla bla bla .... Cortex cores cannot scale up, and if they did they would consume the same power as x86 .... bla bla bla...
Reality: Graviton2

There will be much more crying, sobbing and whining from guys like Thunder57, lobz, coercitiv etc. in 2021 when Ampere Mystique and Graviton3 based on the A77 arrive. Yeah, prepare your tissues, boys, because I'm gonna pull out some older posts of yours. This is gonna be fun :D
Just because the majority agreed on a certain opinion does not mean they are right. It could just mean they are sheep....



Member callouts not allowed.
Dial back your rhetoric. It is not productive here.


esquared
Anandtech Forum Director
 
Last edited by a moderator:
  • Haha
Reactions: CHADBOGA

Richie Rich

Senior member
Jul 28, 2019
470
229
76
specint2006, single-thread, normalized to GHz
Graviton2 +14% over Rome

specint2006, single-thread, raw
Rome +16.2% over Graviton2

It appears that there likely is an IPC advantage for Arm in those given setups, which doesn't matter much since Rome can clock higher, giving it a 16% lead in raw single-threaded specint2006 score.
Graviton2 is beating Zen2 Rome in IPC by 14%, nice.
Rome's higher clock for ST is about +32% (1.14 x 1.16 = 1.32; 1.32 x 2.5 GHz = 3.3 GHz ST Rome turbo clock, which checks out), which leads to 2.3x higher power consumption than Rome at 2.5 GHz (and approx 5x compared to Graviton2, a big price for the turbo clock).
In other words, the Rome system consumes 5x more electricity for just a 16% performance advantage. Rome's turbo helps in some HPC tasks, but is it good for cloud-service economics? Not sure.
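The clock arithmetic above reproduces in a couple of lines (the power figures are the post's claims and are not derived here):

```python
# Figures as given in the post above.
g2_clock = 2.5          # GHz, Graviton2
per_ghz_lead = 1.14     # Graviton2 over Rome, specint2006 per GHz
raw_lead = 1.16         # Rome over Graviton2, raw specint2006

clock_ratio = per_ghz_lead * raw_lead  # combined clock advantage, ~1.32
rome_turbo = g2_clock * clock_ratio    # implied Rome ST turbo clock
print(f"implied Rome ST turbo: {rome_turbo:.2f} GHz")
```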


Multi-thread
specint2006, multi-thread, raw

Graviton2 - 100%
7601 - 115%
7742 - 250%


Granted, none of this is apples to apples until they do a direct head-to-head comparison. Arm is doing something, but it's still many steps behind in the server market in my estimation. It's slower than the 7601 in 2P (32 x 2 = 64 cores), and it's so much slower than the 7742 that any amount of SMT isn't going to get it over the hump.

So the brand new 2020 Graviton2 is 40% the speed of 2019's 7742. And AMD still have Milan to release this year.
That's strange, because Graviton2 was 1.4x faster than Zen1:
Graviton2 AnandTech test
I would expect 64-core Rome to be 1.5x faster than Graviton2. But 2.5x looks like too much.

Don't forget die size:
  • 64-core Rome is 1004mm2 total die size
  • 64-core Graviton is around 250-300mm2 estimated (N1/A76 is 1.4mm2 x 64 = 90mm2, 32 MB L3 cache is approx 34mm2, so 124mm2 + MEM CTRL and I/O)
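The area estimate in the second bullet can be reproduced directly (all figures are the post's own rough numbers, not die-shot measurements):

```python
core_mm2 = 1.4   # claimed N1/A76 core area on 7nm
cores = 64
l3_mm2 = 34      # claimed area for 32 MB of L3
logic_mm2 = cores * core_mm2 + l3_mm2  # cores + L3, before uncore
print(f"cores + L3: {logic_mm2:.1f} mm2; memory controllers and I/O "
      f"bring the estimate to ~250-300 mm2 total")
```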

So a 2P Rome system is a 2008mm2 Goliath compared to a very tiny and cheap ~300mm2 piece of Graviton2 silicon (roughly 8x smaller and cheaper). Not a bad result for Graviton2 at all (while being crippled by a tiny 32MB L3).
Especially when we take into account power consumption, which is typically half that of x86. What was the 2P Rome TDP? 2x180W?

Summary: Amazon can sell Graviton2 cloud service for a much lower price than Rome (expensive to buy, higher power consumption). Let's see Amazon's Rome prices. But it looks like the x86 world cannot win in cloud services, and Zen3 won't make that better. A Graviton3 based on a 128-core A78 at 5nm, with a larger IPC advantage over Zen3, will be cheaper and also faster than 2P Milan systems. I'm afraid x86 is done.
 
Last edited:
  • Haha
Reactions: CHADBOGA

Richie Rich

Senior member
Jul 28, 2019
470
229
76
Your "per thread" metric is completely meaningless. The only way Graviton2 makes any sense is running large clusters of small VM instances. Make the entire chip try to do something at once on a larger workload and it chokes. It's clearly not suitable for a lot of workloads that Rome handles with aplomb. And as I mentioned, Milan is on the way (read: actually already here). Never mind the costs involved moving your software over to the platform, and that's if it can be moved. Proprietary software vendors may not make the switch unless they see a lot of green.
For some types of workloads that are hard to parallelize, it matters a lot. There G2 will have a significant advantage.
Performance per thread is especially important for desktops, laptops and gaming. Imagine ARM competition for AMD's Renoir 8c/16t in laptops: they could use 16 A77 cores (an ARM core has half the area, giving the same total area), run them at 2.5GHz, and still get performance similar to a Renoir clocked at 5GHz (which isn't even possible) while having 4x lower TDP. That's the ARM power for which x86 has no answer.

The A77 is a big step forward (20% int IPC and 35% FPU IPC), probably bigger than Zen3. A hypothetical G3 with 64 A77 cores paired with 128MB of L3 cache would be very dangerous for a Milan system, especially with the ability to put 2 or 4 dies into one socket (like Xeon or Naples) to boost performance per socket. Your assumption that all ARM server CPUs have to stay monolithic with a small L3$ is wrong.
 

Nothingness

Diamond Member
Jul 3, 2013
3,063
2,047
136
Look at it this way- the ARM fanaticism makes a nice change from the old days, where Intel fans would explain how their fab advantage made it 100% impossible that AMD would ever catch up. I think AMD were six months from bankruptcy for about 5 years, according to this forum.
I know you were being sarcastic. But fanaticism is never a good thing. The ARM fanatics here attracted some x86 (both Intel and, even more, AMD) fanatics who are no better. That makes sensible discussion a pain.
 

Nothingness

Diamond Member
Jul 3, 2013
3,063
2,047
136
Yea, I am really getting sick of the ARM advocates, as many of them have no clue about reality.
I have the same issue with AMD fanatics you know. Or Intel ones.

Just ignore ARM threads, like many ignore threads where AMD advocates make claims that prove they have no clue about reality.
 

soresu

Diamond Member
Dec 19, 2014
3,208
2,480
136
armnuke's crazy conclusion: SMT benefit is 43% :D :D :D
You really should try reading things properly.

He said relative benefit - as in, the performance from SMT relative to the performance from doubling the cores is 43%, NOT performance relative to the non-SMT score at the same core count (which is 12%, as he said, and as you even quoted him saying, oddly enough).
 
  • Love
Reactions: spursindonesia

Andrei.

Senior member
Jan 26, 2015
316
386
136
First, I believe the data Anandtech posted; this crap about denial is trolling.
Second, I said desktops; I meant desktop, laptop, server, HEDT, etc. I don't have to name every variant. Again with the trolling.
You should be ashamed to call yourself a moderator here. You're calling a troll one of the actual people designing these chips - it's utter insanity.

Edit: To the moderators: Then ban me already. The fact that a moderator is openly trolling in this forum while wearing a big yellow Super Moderator tag, yet somehow claiming he's not posting as a moderator (what a super convenient rule), is absurd. Get your act together.

And also, stop claiming you "warned me before" when you do stealth edits on previous posts with no notification whatsoever. I don't randomly go check old posts. I wonder how long it will take before you notice my edit here.


We noticed your edit hours ago. Just discussing the repercussions.
Administrator allisolm



And saying AMD/Intel cores for tablets are the same as for servers is again trolling; we all know that even a 3600 CPU is far different from a 7742 EPYC. Again with the trolling.
AMD and Intel use the exact same microarchitecture across their product lines. The CPU core of a 3600 IS IDENTICAL to that of a 7742. IT'S EVEN THE EXACT SAME SILICON DIE. Do you realise how you sound when you're spouting such utterly incorrect nonsense?

You're the one trolling here out of sheer idiocy. I don't even know who I would report to you to at this point - just utter and complete shame on you.

Andrei: "Apple" has +80% or more IPC over "Intel"
- problem - "Apple" is generic and "Intel" is generic, does he mean that all Apple chips averaged have +80% IPC over all Intel chips? A13 over Intel 3930K? A6 over 9900K? Who knows?!?!? The only IPC data he provided showed A12 has +60% IPC over 6700K. Hence the statement, given the data provided, is wrong. What he should have said was that the A12 has a 60.9% IPC lead over the 6700K, because that's all the data that was presented.
- to correct this - 1) define "Apple" and define "Intel", 2) provide the IPC data showing +80% or more IPC of Apple over Intel
I've already stated that the architectural instructions retired between x86 and AArch64 are within 10%. The data I've published on the chips has been out for months*, and the A13 has an 83% PPC lead over the 9900K. That 83% figure, at the worst-case disparity in retired instruction count between the ISAs, goes down to 75%. Your whole circus here is arguing about whether Apple is 83% or 75% ahead. It's an utterly and completely meaningless discussion with absolutely no bearing on the competitive positioning of these micro-architectures in the industry, which is what this whole thread was started about.

* Please stop pulling numbers out of random places. Your 6700K SPEC figure is crap. I actually bothered to run the figures across the same compilers with the same flags on all the platforms. There's a freaking article on the homepage right now with the latest figures: https://images.anandtech.com/doci/15603/SPEC-2006.png

For the love of god, stop the incessant bickering and idiotic comments and denial and trolling. All you're achieving is driving the actual people who have knowledge and can give some insight on the topic away from the site in sheer disgust.
 
Last edited by a moderator:

Nothingness

Diamond Member
Jul 3, 2013
3,063
2,047
136
Turns out that rebuking someone's absurd views on the topic does not help the poor state of the thread after all, general desire for absurd conflict still remains, and the result is pretty much the same.
The problem is when some of the people who want to rebuke these absurd views also write absurd things. They were numerous and stubborn enough to make it a mess and render the discussion pointless.

Lesson learned, again, for the nth time, until the next time.
:)
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
26,129
15,274
136
Look, I'm a big fan of SMT4 on a super-wide core. I know that some very low-ILP code can benefit a lot from SMT4. But....

But at what cost? There is always some trade off.
  • If SMT2 costs 10% more transistors and brings an average of 28% (50% in SQL)..... that's a good deal.
  • If SMT4 costs 20% more transistors and brings an average of 35% (80% in SQL)..... that's a lower performance gain per transistor, but still a good deal.
  • But if you can fit two A77 cores in the same area as one Zen2 core, that's a game changer. Especially when that small A77 is wider and has 8% higher IPC. That's a great deal for Amazon and the other ARM server contenders.

To sum up: SMT helps x86 be less garbage. Apple is able to extract 83% more IPC without any SMT. ARM with its Cortex cores tries to follow the number one in the CPU business, which is currently Apple. A Nuvia CPU could have SMT2 or SMT4; that would make the number one even better in some server workloads.
Apple does not get 83% more IPC. Also, you can't just calculate it "per GHz": since the Apple core is designed for low clock speed, it appears to have more IPC, but it won't clock to 4 GHz.

The overall throughput of Ryzen is better than Apple's, but each chip was designed for a different use and optimized for that use.
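The trade-off in the quoted SMT bullets can be restated as gain per unit of added transistor budget (using the quote's own illustrative numbers):

```python
# (extra transistor cost, average performance gain) from the quote above
options = {
    "SMT2": (0.10, 0.28),
    "SMT4": (0.20, 0.35),
}
for name, (cost, gain) in options.items():
    print(f"{name}: {gain / cost:.2f}x average gain per unit of cost")
# SMT4 still pays off, but with visibly diminishing returns per transistor.
```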
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
26,129
15,274
136
But the Anandtech article shows the following, and took the advice of the paper Andrei linked to (since Andrei of course wrote the article), and compared only SPECfp scores for A13 vs 9900K and 3900X, since one cannot, per those authors, use SPECint to compare different ISAs:
9900K @ 5.0 GHz gets 75.15, which is 15.03 pts/GHz
3900X @ 4.6 GHz gets 73.66, which is 16.01 pts/GHz
A13 @ 2.66 GHz gets 52.82, which is 19.86 pts/GHz
Meaning A13 has a 32% lead over 9900K and a 24% lead over 3900X.
Unless I'm reading this wrong.
The 32% and 24%? I can believe that. But the A13 will never scale up to the speeds and number of cores that AMD/Intel have.

The A13 was designed as a smartphone CPU, and it does that job quite well. The Intel/AMD CPUs are for desktops, and they do their job well (to differing degrees, of course).

Trying to compare the two is insane, IMO. It's like comparing an ultralight with a jet fighter; they have completely different purposes.
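For what it's worth, the per-GHz arithmetic in the quoted post reproduces cleanly (scores and clocks exactly as quoted):

```python
chips = {  # (SPECfp score, clock in GHz), as quoted above
    "9900K": (75.15, 5.0),
    "3900X": (73.66, 4.6),
    "A13":   (52.82, 2.66),
}
per_ghz = {name: score / ghz for name, (score, ghz) in chips.items()}
for name, pts in per_ghz.items():
    print(f"{name}: {pts:.2f} pts/GHz")

lead_9900k = per_ghz["A13"] / per_ghz["9900K"] - 1
lead_3900x = per_ghz["A13"] / per_ghz["3900X"] - 1
print(f"A13 lead: +{lead_9900k:.0%} vs 9900K, +{lead_3900x:.0%} vs 3900X")
```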
 