Samsung outs Exynos 9 Series 9810


dark zero

Platinum Member
Jun 2, 2015
2,655
140
106
Actually, it seems Qualcomm made their little cores much stronger than expected. The Kryo Silver doesn't appear to be on the ARM A55 or even A57 tier, but maybe closer to the A72 tier. That might explain why it has a much better MT score but a worse ST one compared to the Exynos, which uses custom big cores and stock ARM A55 ones.

Even AnTuTu points to that.

So I am waiting for the Snapdragon 470 to see how strong Kryo Silver really is.
 

Nothingness

Diamond Member
Jul 3, 2013
3,294
2,362
136
Actually, it seems Qualcomm made their little cores much stronger than expected. The Kryo Silver doesn't appear to be on the ARM A55 or even A57 tier, but maybe closer to the A72 tier. That might explain why it has a much better MT score but a worse ST one compared to the Exynos, which uses custom big cores and stock ARM A55 ones.

Even AnTuTu points to that.

So I am waiting for the Snapdragon 470 to see how strong Kryo Silver really is.
The explanation is that the big cores of 9810 have to dramatically lower their frequency when heavy multithreading is going on, rather than the smaller cores being stronger on SD845.
 
  • Like
Reactions: CatMerc

dark zero

Platinum Member
Jun 2, 2015
2,655
140
106
The explanation is that the big cores of 9810 have to dramatically lower their frequency when heavy multithreading is going on, rather than the smaller cores being stronger on SD845.
It could be possible, except that the Exynos MT score is still higher than the Snapdragon's.

Also, we haven't accounted for the GPU still being the weak point of the Exynos.
 

Lodix

Senior member
Jun 24, 2016
340
116
116
It could be possible, except that the Exynos MT score is still higher than the Snapdragon's.

Also, we haven't accounted for the GPU still being the weak point of the Exynos.
It is not just possible, it is how it works. With one core active it reaches 2.7 GHz, with two cores it drops to 2.3 GHz, and with three or four cores to just 1.8 GHz.
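As a purely illustrative sketch (not actual kernel or DVFS driver code), the per-core-count clock capping described above amounts to a simple lookup from the number of active big cores to a maximum frequency, using the figures quoted in this post:

    /* Illustrative only: maps active Exynos 9810 big cores to the max clock
       quoted in this thread; not real driver code. */
    #include <stdio.h>

    static double max_big_clock_ghz(int active_big_cores)
    {
        if (active_big_cores <= 1) return 2.7;  /* single core active */
        if (active_big_cores == 2) return 2.3;  /* two cores active */
        return 1.8;                             /* three or four cores active */
    }

    int main(void)
    {
        for (int n = 1; n <= 4; n++)
            printf("%d big core(s) active -> %.1f GHz\n", n, max_big_clock_ghz(n));
        return 0;
    }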
 

eastofeastside

Junior Member
Nov 19, 2011
17
3
81
Sorry for being 'that guy' asking this question, but, performance-wise, what might 8 M3 cores @ 2.9 GHz in a single octa-core configuration hypothetically be like in, say, a next-gen game console? ;)

Would such a configuration even be possible within thermal design limitations?
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,055
3,862
136
Sorry for being 'that guy' asking this question, but, performance-wise, what might 8 M3 cores @ 2.9 GHz in a single octa-core configuration hypothetically be like in, say, a next-gen game console? ;)

Would such a configuration even be possible within thermal design limitations?
No, you're not; you have been pushing this agenda for months across multiple forums.

You're missing the forest for the tree you want to focus on. Game consoles are about the sum of their parts: your M3s in this configuration will have less performance per core without major infrastructure improvements, and they will be using much slower but bigger memory (no soldered LPDDR here). Then you have everything else to worry about: the 12+ TFLOPS GPU, the potential mixed memory configurations, SATA and other interfaces.

The next Xbox and PS5 will be Zen + GCN; it is a simple risk/reward/cost analysis.
 

LightningZ71

Platinum Member
Mar 10, 2017
2,374
3,000
136
I can only see one advantage to moving a "big" and "stationary" console to an ARM architecture, and that's making cross-development with mobile games easier. That is a very low priority there because mobile games have rather major differences in interface. The Switch (and the Shield portable that shares much of its hardware) is an odd situation: it needed to be good both in a fixed installation and as a mobile device. For them, the ARM-based architecture made sense, as the implementations that were available were highly focused on the mobile market and featured aggressive power-efficiency capabilities while still maintaining good performance.

Can an ARM architecture that's optimized from the ground up, through the process, all the way to the board, be an effective main console? Yes, it's possible. The problem is getting the whole platform optimized for performance first, and then getting game developers on board. Having Nintendo lay the foundations for developer houses to gain those capabilities is a big deal to be sure, but Nintendo also has a long history of wanting exclusivity and of focusing on their own in-house properties over third-party ones. I'm sure that, for the next major generation of consoles, there will be rumors going around that MS and Sony are nosing around ARM-based designs. I also don't think it will amount to much more than keeping AMD and Intel (should they choose to bid for the project, which they might with a renewed focus on GPU performance) competitive on their pricing. I do think that Nvidia, in a continuing effort to diversify their portfolio to protect against a renewed focus on APU performance from both AMD and Intel destroying their volume low-end business, will present an updated Shield at some point with impressive performance capabilities for a modest platform price. In combination with Nintendo, they will attract a few developers to put together nice games for them. Where that goes long term is anyone's guess.
 
  • Like
Reactions: eastofeastside

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Exynos 9810 has a similar GB single-thread score (approx. 3700) to Ryzen Mobile despite a 2.8 GHz vs 3.6 GHz max clock difference?

Geekbench can vary quite a lot if you look at the scores.

For example, the best 2700U with a 3.8 GHz top clock can get 4400 with Android 64-bit. Android tends to score the best, with Linux and macOS behind, then Windows.

It also seems to scale very well with clocks, which makes it questionable whether it's more of a mobile benchmark whose gains wouldn't translate into desktop application performance if it ever came to that.
 
  • Like
Reactions: eastofeastside

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Geekbench can vary quite a lot if you look at the scores.

For example, the best 2700U with a 3.8 GHz top clock can get 4400 with Android 64-bit. Android tends to score the best, with Linux and macOS behind, then Windows.

It also seems to scale very well with clocks, which makes it questionable whether it's more of a mobile benchmark whose gains wouldn't translate into desktop application performance if it ever came to that.

The OS is irrelevant. The only relevant part is the compiler, which happens to be different when targeting Android/Linux. So even if we equalize the compiler, the Exynos 9810 is ahead of Ryzen clock for clock. And of course a low-level CPU benchmark never translates linearly into application performance - but qualitatively it will translate. In other words, assuming that Ryzen suddenly becomes faster than the Exynos in other applications is not very reasonable.
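As a rough sanity check on the clock-for-clock claim, dividing the Geekbench scores quoted earlier in the thread by their top clocks (a back-of-the-envelope ratio, not a rigorous IPC measurement):

    Exynos 9810: 3700 pts / 2.8 GHz ≈ 1320 pts per GHz
    Ryzen 2700U: 4400 pts / 3.8 GHz ≈ 1160 pts per GHz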
 

eastofeastside

Junior Member
Nov 19, 2011
17
3
81
No your not you have been pushing this agenda for months across multiple forums.

Your missing the forest for the tree you want to focus on. Game consoles are about the sum of parts, your m3s in this configuration will have less performance per core without major infrastructure improvements and will be using much slower but bigger memory (no soldered lpddr here). Then you have everything else to worry about. The 12+tf gpu, the potenical mixed memory configurations , SATA etc interfaces.

The next xbox and ps5 will be Zen+gcn, it is simple risk, reward,cost analysis.

It's the 'forest' of the larger, long-term, strategic business case for ARM that I want to bring into focus. The 'tree' of the immediately apparent technical choice of Ryzen is too obvious.

Every next-gen anticipation period, the message boards rally around some apparently obvious assumption, only to have the rug pulled out from under them by the inevitable twist of development.
 
Last edited:

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Libraries play a role in some of the benchmarks of Geekbench too (libm and libc).

Oh OK, I did not know that. Is this documented somewhere?
I mean, when developing a benchmark I would try to keep two things constant: source code and compiler. When using third-party libraries that are only available as binaries, I would not even be running the same source code on different platforms.
 

Nothingness

Diamond Member
Jul 3, 2013
3,294
2,362
136
Oh OK, I did not know that. Is this documented somewhere?
I mean, when developing a benchmark I would try to keep two things constant: source code and compiler. When using third-party libraries that are only available as binaries, I would not even be running the same source code on different platforms.
I don't think it's documented, but both HTML5 tests show significant use of string functions, while the LLVM test does lots of memory management.
 

Andrei.

Senior member
Jan 26, 2015
316
386
136
Oh OK, I did not know that. Is this documented somewhere?
I mean, when developing a benchmark I would try to keep two things constant: source code and compiler. When using third-party libraries that are only available as binaries, I would not even be running the same source code on different platforms.
libc and libm are low level enough that I don't think it matters. In an ideal world those functions are specifically tuned to the µarch of that system. Nobody in their right mind implements their own memcpy or memmove. I'm also not compiling SPEC against a static libc/libm, but using the system's.
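A minimal sketch of why this matters: in a hypothetical micro-benchmark like the one below, the measured throughput comes from whatever memcpy the platform's libc ships, so identical source code can score differently across runtime environments:

    /* Hypothetical micro-benchmark: times the system libc's memcpy. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(void)
    {
        const size_t size = 16 * 1024 * 1024;    /* 16 MiB working set */
        const int iters = 100;
        char *src = malloc(size), *dst = malloc(size);
        if (!src || !dst) return 1;
        memset(src, 0xA5, size);

        clock_t start = clock();
        for (int i = 0; i < iters; i++)
            memcpy(dst, src, size);              /* resolves to the platform's libc */
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("memcpy throughput: %.2f GB/s\n",
               (double)size * iters / secs / 1e9);
        printf("check byte: %d\n", (int)dst[size - 1]);  /* keep dst live */
        free(src); free(dst);
        return 0;
    }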
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
libc and libm are low level enough that I don't think it matters. In an ideal world those functions are specifically tuned to the µarch of that system. Nobody in their right mind implements their own memcpy or memmove. I'm also not compiling SPEC against a static libc/libm, but using the system's.

My point is rather that no low-level CPU benchmark should use libc at all - it is just one more uncertainty in the equation when compiling for different runtime environments, which IMO is avoidable. Of course you have to link against the libraries, because they typically set up the runtime environment (e.g. main() is called from there) and all the I/O-related stuff, like outputting the results, is libc based.
 

Nothingness

Diamond Member
Jul 3, 2013
3,294
2,362
136
My point is rather that no low-level CPU benchmark should use libc at all - it is just one more uncertainty in the equation when compiling for different runtime environments, which IMO is avoidable. Of course you have to link against the libraries, because they typically set up the runtime environment (e.g. main() is called from there) and all the I/O-related stuff, like outputting the results, is libc based.
Why do you think Geekbench is only low level? When you have tests named HTML5 Parse and LLVM, that should ring a bell :)
 

Andrei.

Senior member
Jan 26, 2015
316
386
136
My point is rather that no low-level CPU benchmark should use libc at all - it is just one more uncertainty in the equation when compiling for different runtime environments, which IMO is avoidable. Of course you have to link against the libraries, because they typically set up the runtime environment (e.g. main() is called from there) and all the I/O-related stuff, like outputting the results, is libc based.
I don't think that's reasonable at all. Asking benchmark developers to not use standard C libraries seems utterly senseless.
Why do you think Geekbench is only low level? When you have tests named HTML5 Parse and LLVM, that should ring a bell :)
It's not like they are using a system WebView or anything, those libraries are compiled and delivered within the GB4 shared lib. Again using standard C libs seems perfectly fine to me.
 

Nothingness

Diamond Member
Jul 3, 2013
3,294
2,362
136
It's not like they are using a system WebView or anything, those libraries are compiled and delivered within the GB4 shared lib. Again using standard C libs seems perfectly fine to me.
That's not fine, that's even mandatory: most apps use standard C libs, so benchmarks should do the same.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
That's not fine, that's even mandatory: most apps use standard C libs, so benchmarks should do the same.

It would be fine for an application benchmark, but not for a benchmark that is supposed to test CPU performance across different runtime environments. I am not sure why everyone seems to be fine with benchmarks behaving so differently depending on the environment they run in.

I don't think that's reasonable at all. Asking benchmark developers to not use standard C libraries seems utterly senseless.

But that's precisely what I would ask of benchmark developers. Why do you think it's senseless or unreasonable?

Why do you think Geekbench is only low level? When you have tests named HTML5 Parse and LLVM, that should ring a bell

There is nothing high level in a parser. The fact that you happen to be parsing with an HTML5 grammar does not make it any more high level. Parsing is done once you have constructed an abstract syntax tree, or simply a sequence of derivations of the grammar rules. A similar argument applies to LLVM.
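For illustration of what "deriving the grammar rules into a tree" means, here is a toy recursive-descent parser (a hypothetical sketch, unrelated to Geekbench's actual code) for "1+2*3"-style expressions; note how the work is dominated by branches and pointer chasing rather than arithmetic:

    /* Toy parser: builds an AST for expressions like "1+2*3+4" and evaluates it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <ctype.h>

    typedef struct Node { char op; int value; struct Node *lhs, *rhs; } Node;
    static const char *p;                       /* cursor into the input string */

    static Node *new_node(char op, int value, Node *l, Node *r)
    {
        Node *n = malloc(sizeof *n);
        n->op = op; n->value = value; n->lhs = l; n->rhs = r;
        return n;
    }

    static Node *parse_number(void)             /* number := [0-9]+ */
    {
        int v = 0;
        while (isdigit((unsigned char)*p)) v = v * 10 + (*p++ - '0');
        return new_node(0, v, NULL, NULL);
    }

    static Node *parse_term(void)               /* term := number ('*' number)* */
    {
        Node *n = parse_number();
        while (*p == '*') { p++; n = new_node('*', 0, n, parse_number()); }
        return n;
    }

    static Node *parse_expr(void)               /* expr := term ('+' term)* */
    {
        Node *n = parse_term();
        while (*p == '+') { p++; n = new_node('+', 0, n, parse_term()); }
        return n;
    }

    static int eval(const Node *n)              /* walk the AST */
    {
        if (!n->op) return n->value;
        return n->op == '+' ? eval(n->lhs) + eval(n->rhs)
                            : eval(n->lhs) * eval(n->rhs);
    }

    int main(void)
    {
        p = "1+2*3+4";
        printf("1+2*3+4 = %d\n", eval(parse_expr()));   /* prints 11 */
        return 0;
    }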
 
Last edited:

Nothingness

Diamond Member
Jul 3, 2013
3,294
2,362
136
There is nothing high level in a parser. The fact that you happen to be parsing with an HTML5 grammar does not make it any more high level. Parsing is done once you have constructed an abstract syntax tree, or simply a sequence of derivations of the grammar rules. A similar argument applies to LLVM.
In the world I live in (compilers), parsing is the process in which you build the syntax tree... And all of these tasks are in any case higher level than GEMM or SFFT or AES or Dijkstra. If you want to get a glimpse of how these tasks behave, you can always hack the executable and run perf to see how the icache and branch predictors are impacted. You'll see these are not low level.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
In the world I live in (compilers), parsing is the process in which you build the syntax tree... And all of these tasks are in any case higher level than GEMM or SFFT or AES or Dijkstra. If you want to get a glimpse of how these tasks behave, you can always hack the executable and run perf to see how the icache and branch predictors are impacted. You'll see these are not low level.

I guess we have a different view of what high and low level means. Sure, a parser is more complex and less regular than, say, GEMM, but that is not what qualifies as high level in my opinion. You can also stress your cache hierarchy with GEMM if you choose the dataset size accordingly. And branch predictors can impact the performance of very small-footprint benchmarks like Dhrystone significantly. In fact, when I once forgot to enable branch prediction on a Cortex-R7, the Dhrystone result was almost half the expected value.
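A minimal naive GEMM sketch of that point: the kernel itself is perfectly regular, yet simply choosing n large enough pushes the working set past the last-level cache and turns it into a memory benchmark (the sizes here are assumptions for illustration):

    /* Naive n x n double GEMM; at n = 1024 the three matrices total 24 MiB. */
    #include <stdio.h>
    #include <stdlib.h>

    static void gemm(int n, const double *a, const double *b, double *c)
    {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double sum = 0.0;
                for (int k = 0; k < n; k++)
                    sum += a[i * n + k] * b[k * n + j];  /* column walk of b misses for large n */
                c[i * n + j] = sum;
            }
    }

    int main(void)
    {
        int n = 1024;                            /* 3 * n*n * 8 bytes = 24 MiB working set */
        double *a = calloc((size_t)n * n, sizeof *a);
        double *b = calloc((size_t)n * n, sizeof *b);
        double *c = calloc((size_t)n * n, sizeof *c);
        if (!a || !b || !c) return 1;
        gemm(n, a, b, c);
        printf("c[0] = %f\n", c[0]);
        free(a); free(b); free(c);
        return 0;
    }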
 

Nothingness

Diamond Member
Jul 3, 2013
3,294
2,362
136
I guess we have a different view of what high and low level means. Sure, a parser is more complex and less regular than, say, GEMM, but that is not what qualifies as high level in my opinion. You can also stress your cache hierarchy with GEMM if you choose the dataset size accordingly. And branch predictors can impact the performance of very small-footprint benchmarks like Dhrystone significantly. In fact, when I once forgot to enable branch prediction on a Cortex-R7, the Dhrystone result was almost half the expected value.
I was talking about the icache, aka instruction cache, sorry if that wasn't clear. High icache miss rates are typically a sign of complex programs.

And when I talk about branch predictors being hammered, I'm talking about advanced ones, Intel-class :) The Cortex-R7 doesn't have an indirect branch predictor IIRC, and anyway Dhrystone's indirect branches always go to the same address, so a standard PC->target-PC table (called a BTAC or BTB) is enough. Basically every pipelined CPU will suffer a lot when there's no branch prediction, even in-order CPUs.
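A hypothetical sketch of that indirect-branch point: a call through a function pointer that always hits the same target is trivially captured by a BTB/BTAC, whereas one whose target keeps changing needs a real indirect branch predictor to avoid mispredicting (compilers may devirtualize the fixed case, so this is illustrative only):

    #include <stdio.h>

    static int add_one(int x) { return x + 1; }
    static int add_two(int x) { return x + 2; }

    int main(void)
    {
        int (*fixed)(int) = add_one;                 /* target never changes */
        int (*targets[2])(int) = { add_one, add_two };
        int acc = 0;

        for (int i = 0; i < 1000000; i++)
            acc = fixed(acc);                        /* BTB-friendly indirect call */

        for (int i = 0; i < 1000000; i++)
            acc = targets[i & 1](acc);               /* target alternates every iteration */

        printf("%d\n", acc);
        return 0;
    }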
 

NTMBK

Lifer
Nov 14, 2011
10,423
5,727
136
Wow, the 9810 is a trainwreck.

[Image: gpu-crash_575px.png]


The issue that did actually shock me was that while the prompt forcibly quit GFXBench, running 3DMark Sling Shot while the device was warm kept crashing and rebooting the phone. The last time I’ve had a phone crash on me like that was Huawei’s Mate 7 on its release firmware, and that was because it was removing the thermal limits through benchmark detection. Samsung doesn’t seem to do any benchmark detection here and it’s throttling – but still having the thermal drivers insufficiently calibrated and allowing the SoC to reach thermal panic is quite embarrassing.

https://www.anandtech.com/show/12520/the-galaxy-s9-review