Samsung Exynos Thread (big.LITTLE Octa-core)

Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
IDC said:
OK, so you do reviews in a way that really makes me want to read them and enjoy reading them.

Please continue!

Thanks IDC. Your posts are very interesting and insightful, I'm glad someone who contributes a lot to this forum enjoyed my tests. ;)

Andrei. said:
The TechReport piece has some mistakes in it. I hope to have my article up here on Monday.

Looking forward to your article Andrei, especially the Exynos vs Snapdragon comparison.

Lepton87 said:
There are still going to be people who will claim it is inferior to Apple's SoC because the Apple chip has a better ST score, and that any result using more than X threads doesn't matter and is pure BS.

X being the number of threads an Apple AX chip can currently handle, which is 2 for a phone and 3 for a tablet. When Apple releases a chip with more threads this will change, because software will only be ready when Apple is.

Well, Apple doesn't really need stellar MT performance thanks to iOS's pseudo-multitasking. No joke, I probably multitask more on my phone than on my PC, and coming from an S801-based Galaxy S5 I can definitely tell this Exynos Galaxy Note 4 is a lot faster/smoother (especially browsing web pages while running 2-3 apps in the background). TouchWiz got a lot better, but it still requires powerful hardware.

lopri said:
My Fire Phone gets 74 for that Volumark thing and my Z Ultra gets 63. Consistently on both, so I thought the scores are affected by resolution (Fire Phone = 720p, Z Ultra = 1080p). Both phones are running an S800 @ 2.20 GHz.

Not too bad for Krait cores.

I'll post "Part 2" of the new/updated tests tomorrow or Monday. There's a chance I might have an LG G3 to test later this month; Cortex A57 vs Krait should be interesting.
 

NTMBK

Lifer
Nov 14, 2011
10,483
5,902
136
I expect the A53 to be the default for midrange phones- the A7 is already very competent, and it's the logical step up.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
AnandTech: ARM A53/A57/T760 investigated - Samsung Galaxy Note 4 Exynos Review


With the significant performance improvements of the Exynos 5433's ARMv8 cores needing at least an AArch32-compiled code-base to be unleashed, it's natural to stick to Samsung's own solutions until the Android ecosystem gets updated. Curiously, the Snapdragon 805 also benefits from the stock browser, meaning Samsung is extensively optimizing for that platform as well.

The scores on the Exynos version of the Note 4 are outstanding, beating all existing devices in our more complex benchmarks. SunSpider is the exception here, but seeing the vast discrepancy between browsers, and even the Snapdragon version matching Apple's new A8, has led us to conclude that this benchmark has run its course as a valid test case, and we're therefore dropping it from our 2015 test suite.


We're comparing the A7 in the Exynos 5430 against the A53 in the Exynos 5433. Here we see an overall increase of 30% for the A53 cores. Both SoCs run the little clusters at the same frequency, which gives us a direct IPC comparison between the two architectures. I'd also like to mention that we're still working with an ARMv7 build of the benchmark (SPECint2000), so it doesn't fully take advantage of the Exynos 5433's ARMv8 cores, which are in any case limited to AArch32 by the software for now.

...There's not much to say here - the IPC improvements of the A57 bring an average 20-30% gain on a per-clock basis. The pure integer benchmarks shouldn't change too much with AArch64, as most of the A57's advantages lie in FP workloads thanks to the wider FP units.

Overall, the Exynos's CPU is well ahead of the Snapdragon's. This is seen not only in benchmarks but also in real-world usage, as the device is snappier and more fluid. Some people might notice microstutters on the Snapdragon version: Qualcomm still relies on CPU hot-plugging for its power management. Hot-plugging is a Linux kernel operation that takes a CPU out of coherency; it is slow and expensive, and forces the device to stall for a certain time. This overhead has been vastly reduced over the years as the Linux kernel was optimized, but it is still very much an unfavorable mechanism to rely on nowadays. Modern ARM Cortex CPUs instead use their power-collapse states via the CPUIdle framework, which avoids such issues.

The Note 4 with the Exynos 5433 is the first of a new generation, taking advantage of ARM's new ARMv8 cores. On the CPU side, there's no contest. The A53 and A57 architectures don't hold back in terms of performance, and routinely outperform the Snapdragon 805 by a considerable amount. This gap could widen further as the ecosystem adopts native ARMv8 applications, and if Samsung decides to update the phone's software to an AArch64 stack. I still think the A57 is a tad too power hungry in this device, but as long as thermal management is able to keep the phone's temperatures in check, which it seems to do, there's no real disadvantage to running them at such high clocks.

...On the GPU side, things are not as clear. The Mali T760 made a lot of advancements towards catching up with the Adreno 420 but stopped just short of achieving that, leaving the Qualcomm chip a very small advantage. I still find it quite amazing that the Mali is able to keep up while having only half the available memory bandwidth; things will get interesting once LPDDR4 devices arrive in the next few months to equalize things again between competing SoCs. ARM also surprised us with quite a boost in GPU driver efficiency - something I didn't expect, and which may have real-world performance implications that we might not see in our synthetic benchmarks.

So the question is: is it still worth trying to get an Exynos variant over the Snapdragon one? I definitely think so. In everyday usage the Exynos variant is faster, and the small battery disadvantage is more than outweighed by the increased performance of the new ARM cores.

http://anandtech.com/show/8718/the-samsung-galaxy-note-4-exynos-review
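As an aside on the hot-plugging point in the quoted conclusion: the mechanism is plainly visible from userspace through sysfs. A minimal Linux sketch using the kernel's standard hotplug and CPUIdle paths (actually offlining a core needs root, so that line is left commented):

```python
# Inspect the Linux CPU-hotplug and CPUIdle state the review refers to.
from pathlib import Path

# Which CPUs are currently online, e.g. "0-3":
online = Path("/sys/devices/system/cpu/online").read_text().strip()
print(f"online CPUs: {online}")

# Hot-unplugging cpu1 -- the slow, coherency-breaking operation a hotplug
# governor performs -- would be (root only):
#   Path("/sys/devices/system/cpu/cpu1/online").write_text("0")

# CPUIdle power-collapse states, by contrast, are entered automatically by
# the kernel; the available states are listed per core:
for name in sorted(Path("/sys/devices/system/cpu/cpu0/cpuidle").glob("state*/name")):
    print(name.read_text().strip())
```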

Reading it right now, thanks for the great article Andrei and Ryan.
In my opinion the CPU performance advantage over the Snapdragon 805 far outweighs the slight 5-10% GPU deficit in synthetic benchmarks. Battery life is similar; I'm a power user, so I'd have to recharge either version every night.

Also, there have been reports that the Exynos actually delivers better gaming performance (perhaps because of Mali's far lower driver overhead and the faster Cortex A57 CPU?). Asphalt 8 is an example:

www.gamebench.net/blog/samsung-galaxy-note-4-beats-every-other-phone-asphalt-8

The Cortex A57/A53 vs Cortex A15/A7 IPC comparison is in line with my test results too; most of the time the Cortex A57 is 20-30% faster than an equally clocked Cortex A15 in 32-bit mode.
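For anyone curious, an "equally clocked" comparison is just clock normalization; a sketch of the arithmetic (the scores and clocks below are placeholders, not my actual results):

```python
# Per-clock ("IPC-proxy") comparison: divide each score by its clock, then
# take the ratio. Numbers below are hypothetical, for illustration only.
def per_clock_speedup(score_new, clock_new, score_old, clock_old):
    """Ratio of score-per-GHz between two cores."""
    return (score_new / clock_new) / (score_old / clock_old)

# e.g. a hypothetical A57 result at 1.9GHz vs an A15 result at 1.8GHz:
s = per_clock_speedup(score_new=1250, clock_new=1.9,
                      score_old=980, clock_old=1.8)
print(f"per-clock advantage: {s - 1:.0%}")  # lands in the 20-30% band
```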
 

Nothingness

Diamond Member
Jul 3, 2013
3,337
2,427
136
Interesting review indeed.

Thing to note: if one sets the AES and SHA1 scores aside, the Geekbench single-thread speedup is about 25%, not that far from the 17% of SPEC. I don't mean that Geekbench is enough to give a full picture, but it looks like a good first approximation.
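The AES/SHA1 effect is worth spelling out: Geekbench's composite is a geometric mean, so a couple of enormous crypto speedups (ARMv8 adds hardware AES/SHA instructions) inflate the overall ratio. A quick sketch with made-up subtest speedups, purely illustrative:

```python
from math import prod

def geomean(xs):
    # Geometric mean, the aggregation Geekbench uses for composite scores.
    return prod(xs) ** (1 / len(xs))

# Hypothetical per-subtest speedup ratios (new score / old score):
speedups = {
    "AES": 6.0,      # hardware crypto instructions on ARMv8...
    "SHA1": 4.0,     # ...dwarf the gains everywhere else
    "Dijkstra": 1.22,
    "Lua": 1.28,
    "JPEG": 1.25,
}

crypto = {"AES", "SHA1"}
with_crypto = geomean(list(speedups.values()))
without = geomean([v for k, v in speedups.items() if k not in crypto])
print(f"with crypto: {with_crypto:.2f}x, without: {without:.2f}x")
```

With these made-up numbers the composite claims roughly a 2.2x speedup while the non-crypto subtests average about 1.25x, which is why setting AES/SHA1 aside gives a figure much closer to SPEC's.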
 
Mar 10, 2006
11,715
2,012
126
Even more interesting to note: Apple's Enhanced Cyclone at 1.4GHz is quite a bit faster in SPECint2k than the A57 @ 1.9GHz practically across the board.
 

Nothingness

Diamond Member
Jul 3, 2013
3,337
2,427
136
Even more interesting to note: Apple's Enhanced Cyclone at 1.4GHz is quite a bit faster in SPECint2k than the A57 @ 1.9GHz practically across the board.
Note that the compilers are different, so this might play a significant part in the difference.

That being said, the Apple A8 is at about 1000 SPEC/GHz vs 650 for the A57.
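Turning those per-GHz figures back into absolute numbers at each chip's shipping clock (1.4GHz for the A8 and 1.9GHz for the A57, as quoted above; rough forum numbers, not exact SPECint2k scores):

```python
# Back-of-the-envelope totals from the per-GHz figures above.
a8_per_ghz, a8_ghz = 1000, 1.4     # Apple A8 (Enhanced Cyclone)
a57_per_ghz, a57_ghz = 650, 1.9    # Exynos 5433 Cortex-A57

a8_total = a8_per_ghz * a8_ghz     # ~1400
a57_total = a57_per_ghz * a57_ghz  # ~1235

print(f"A8 ~{a8_total:.0f} vs A57 ~{a57_total:.0f}")
print(f"A8 lead at device clocks: {a8_total / a57_total - 1:.0%}")
```

So even with a roughly 26% clock deficit, the A8 stays about 13% ahead in absolute terms.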
 

Andrei.

Senior member
Jan 26, 2015
316
386
136
Forgot the most interesting page, didn't you?

http://anandtech.com/show/8718/the-samsung-galaxy-note-4-exynos-review/2

Is that power consumption for 1 core or the entire CPU?

Power_model20nm_575px.png


The 25% power reduction is rather meager. I wonder what IDC's opinion on 20nm is.

Also, the at-most-1.64x scaling is very low, to say the least, when Intel is doing 2.2x across the board.
They are for 1 core. Please remember that that graph represents modelled power consumption and may not in fact represent reality. The A53/A57 graphs are actual measured power figures.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
They are for 1 core. Please remember that that graph represents modelled power consumption and may not in fact represent reality. The A53/A57 graphs are actual measured power figures.
The graph seems to confirm A15's miserable efficiency.

big-cluster_575px.png

And the A57 doesn't really improve on that, or is even worse. For reference: four Silvermont cores running Cinebench (2.4GHz) consume less than 2.5W, but I'm not sure how apples-to-apples this comparison is.

A57-power-curve_575px.png

From this graph, one can also see that SVM beats the A57 in single-core efficiency: SVM sustains 2.4GHz at 0.85W, while the A57 uses the same power at 1.3GHz.

I'm a bit surprised to find out that the A15/A57 has 18 pipeline stages. It seems it should clock higher.

Great article.
 
Nothingness

Diamond Member
Jul 3, 2013
3,337
2,427
136
And the A57 doesn't really improve on that, or is even worse. For reference: four Silvermont cores running Cinebench (2.4GHz) consume less than 2.5W, but I'm not sure how apples-to-apples this comparison is.
That's hard to say given that no one knows how those 2.5W were measured. If they come from an Intel marketing presentation, they should be treated with the same caution as any other marketing figure.

From this graph, one can also see that SVM beats the A57 in single-core efficiency: SVM sustains 2.4GHz at 0.85W, while the A57 uses the same power at 1.3GHz.
See above: where does the 0.85W come from?

Andreï, what kind of test did you run to extract A53/A57 power consumption?
 
Mar 10, 2006
11,715
2,012
126
That's hard to say given that no one knows how those 2.5W were measured. If they come from an Intel marketing presentation, they should be treated with the same caution as any other marketing figure.

See above: where does the 0.85W come from?

IDF 2013. It was an Intel demo, but they showed the real-time power consumption of Silvermont (in Bay Trail) to a bunch of folks. At full tilt, according to the demo Intel did, the Silvermont core used about 0.85W. If I remember correctly, they used Cinebench.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
That's hard to say given that no one knows how those 2.5W were measured. If they come from an Intel marketing presentation, they should be treated with the same caution as any other marketing figure.

See above: where does the 0.85W come from?

Andreï, what kind of test did you run to extract A53/A57 power consumption?

Nonsense.

Interestingly enough, when I was at IDF, I saw a very sophisticated power demonstration of Intel's Bay Trail. Running Cinebench, a very intense PC CPU benchmark, the Z3770 did not exceed 2.5W. And in Intel's presentation at IDF, Silvermont's lead architect claimed that at 2.4GHz (max turbo), Silvermont consumed "less than 1 watt":

1095245-1384851605853495-Ashraf-Eassa.png


From what I saw measured at IDF, "less than 1 watt" meant about 0.850W.

http://seekingalpha.com/article/1848061-intel-vindicated-very-competitive-with-apples-a7

This is not marketing; there's a difference between a lead architect's claim backed by a live power demonstration, and marketing.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
25% is damn good node-on-node. Density scaling is interesting here...

"Damn good"? Intel's pre-qualified 14nm silicon already reduced power by 30%, 22nm was a up to a 50% reduction and TSMC and Samsung claim similar things for their 20+FinFET node. 20nm reduces power by only 23% at the highest clock speed and 20% at 1.5GHz. 20nm clearly suffers from the lack of FinFET.
 
Mar 10, 2006
11,715
2,012
126
"Damn good"? Intel's pre-qualified 14nm silicon already reduced power by 30%, 22nm was a up to a 50% reduction and TSMC and Samsung claim similar things for their 20+FinFET node. 20nm reduces power by only 23% at the highest clock speed and 20% at 1.5GHz. 20nm clearly suffers from the lack of FinFET.

Obviously FinFETs will improve that, but given all of the comments to the effect of "20nm is worthless" it clearly wasn't :)
 

dawheat

Diamond Member
Sep 14, 2000
3,132
93
91
While the AnandTech writeup was great as usual, it's a little less interesting coming over 4 months after the phone was introduced, with the Exynos 7420 now being the shiny new thing everyone is talking about.

I really really hope they can do a follow-up with the 7420 not too long after the S6 is released.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Looks like thermal effects are causing a big compounding effect on the power consumption numbers for three and four cores past 1.6GHz. In practice, the OS probably wouldn't let you run all four cores near peak clock speeds for very long even if this weren't the case.

I suspect that 5433's A57 and A53 cores have an immature level of design optimization for Samsung's 20nm process. This isn't very surprising, since no one expected A57 to be out in a 2014 SoC. I really doubt the power consumption/MHz is supposed to take such a big nosedive with similar design efforts, especially since ARM's estimates are much less severe (and their performance estimates have proven to be pretty accurate). Here's hoping that S810 will have a complete power consumption test too, although there's no TSMC 20nm A15/A7 pair to compare to. Tegra K1-32 would be a good comparison for A15 power consumption on TSMC 28HPm.

The graph seems to confirm A15's miserable efficiency.

It's about 0.93W per core at 1.8GHz with four cores active, although it's only a model. If accurate, that doesn't seem so miserable to me. It's hard to compare performance with Silvermont; it's really all over the place, but the A15 generally at least has higher perf/MHz. Of course Intel still has the more power-efficient process with FinFETs, so it's inevitable that they'd do better, but that doesn't lay all the blame on the uarch.
 

Nothingness

Diamond Member
Jul 3, 2013
3,337
2,427
136
Nonsense.
Do you really have to be insulting when I ask a simple question?

http://seekingalpha.com/article/1848061-intel-vindicated-very-competitive-with-apples-a7

This is not marketing; there's a difference between a lead architect's claim backed by a live power demonstration, and marketing.
So it's both "not marketing" and "marketing"? :D

Anyway, this is Intel material, not something coming from a third party.

IDF 2013. It was an Intel demo, but they showed the real-time power consumption of Silvermont (in Bay Trail) to a bunch of folks. At full tilt, according to the demo Intel did, the Silvermont core used about 0.85W. If I remember correctly, they used Cinebench.
So there were no third-party measurements done? No real power virus run?