Samsung Exynos Thread (big.LITTLE Octa-core)


lopri

Elite Member
Jul 27, 2002
13,310
687
126
Hmm.. It seems like Samsung really hit a jackpot with its 14nm. And the S810 looks worse by the day (as if that were possible)..

Thank you for the clarification, Andrei.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
I don't have them yet. I'll get back to you on that. I think David is wrong in his assumption though; I'm sure the CPUs are hitting the 650mV range at the low end. The GPU on the unit I played with at MWC had this voltage curve:

MHz     mV
772     825
700     787
600     743
544     706
420     668
350     662
266     656

Thanks, very enlightening. I'm sure you already saw the thread then, including the reply where I cited your response.
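
For perspective, here is a rough sketch of what that quoted GPU curve implies for relative dynamic power, assuming the usual P ~ f*V^2 relation and ignoring leakage (both simplifications on my part, not anything stated above):

Code:
# Rough estimate: relative dynamic power along the quoted GPU DVFS curve,
# assuming P ~ f * V^2 and ignoring leakage (both simplifying assumptions).
curve_mhz_mv = [(772, 825), (700, 787), (600, 743), (544, 706),
                (420, 668), (350, 662), (266, 656)]

peak = curve_mhz_mv[0][0] * curve_mhz_mv[0][1] ** 2
for mhz, mv in curve_mhz_mv:
    rel = mhz * mv ** 2 / peak
    print(f"{mhz:>3} MHz @ {mv} mV -> ~{rel:.0%} of peak dynamic power")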
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
Galaxy Note 5 will reportedly tout Ultra HD Super AMOLED display with record ppi

According to a new report, Samsung is going to introduce the UHD display with the Galaxy Note 5, which will arrive later this year, in the fall to be precise. Production of the UHD panels is expected to start by August, which would make sense as Galaxy Note handsets are traditionally unveiled at IFA in September. The Galaxy Note 5 is said to have a 5.89-inch display with 748 pixels per inch. Apparently a dual-edge variant will be offered as well; its 5.78-inch curved display will tout a record-high 762 ppi. Keep in mind that these are unofficial claims and should be taken with a grain of salt for now.

www.sammobile.com/2015/04/09/galaxy...ultra-hd-super-amoled-display-with-record-ppi
www.phonearena.com/news/Note-5-coul...-dual-edge-version-with-record-762ppi_id68088

4K brings more than twice the pixels of QHD. The next Exynos is going to need a powerful GPU to power that gorgeous AMOLED display.
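
For what it's worth, the rumored figures hang together; a quick sanity check, assuming 16:9 UHD (3840x2160) panels:

Code:
# Sanity check on the rumored ppi figures, assuming 16:9 UHD (3840x2160) panels.
from math import hypot

diag_px = hypot(3840, 2160)            # diagonal resolution in pixels
print(round(diag_px / 5.89))           # ~748 ppi on the 5.89" flat panel
print(round(diag_px / 5.78))           # ~762 ppi on the 5.78" dual-edge panel
print(3840 * 2160 / (2560 * 1440))     # 2.25x the pixels of QHD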
 
Last edited:

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
VR needs like 1500ppi to be viable. The GearVR with Note 4 was a pixelated mess.
A phone is not a VR device. VR is a rarely used application that doesn't justify all the wasted pixels and energy.
 

kpkp

Senior member
Oct 11, 2012
468
0
76
A phone is not a VR device. VR is a rarely used application that doesn't justify all the wasted pixels and energy.

But VR for the masses is most likely going to be a phone.
However, I agree with you that at this point display density development is outpacing the other components needed for a balanced device.

@Andrei
Do you have any data suggesting that an AMOLED display with one pixel covering an area of x^2 consumes less than one with two pixels covering the same x^2 area?
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
Makes as useful a distinguishing feature as anything else you'd think? They've been trying some awfully contrived things to make top end phones stand out in recent years!
 

Andrei.

Senior member
Jan 26, 2015
316
386
136
@Andrei
Do you have any data suggesting that an AMOLED display with one pixel covering an area of x^2 consumes less than one with two pixels covering the same x^2 area?
From the active matrix's point of view, yes, it consumes less, but I don't know what behaviour the emission layer has or how it scales with area.

From what I know, density actually plays no real role in power. It's the fill factor that is essential and dictates the overall efficiency of the display. Whether higher density helps or hurts fill factor, I don't know either.
 
Last edited:

Andrei.

Senior member
Jan 26, 2015
316
386
136
Thanks, very enlightening. I'm sure you already saw the thread then, including the reply where I cited your response.
For the A57, the median bin runs from 700mV (800MHz) to 1062mV (2100MHz); the best bin from 625 to 987mV.

For the A53, it's 668mV at 400MHz to 1062mV at 1500MHz for the median bin; the best bin is 606 to 1000mV.
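
To put that binning spread into perspective, here's a rough comparison at the top frequencies, assuming dynamic power goes with V^2 at a fixed clock and ignoring leakage:

Code:
# Best vs. median bin at max frequency, assuming dynamic power ~ V^2
# at a fixed clock and ignoring leakage (simplifying assumptions).
bins_mv = {
    "A57 @ 2.1GHz": (1062, 987),    # (median bin mV, best bin mV)
    "A53 @ 1.5GHz": (1062, 1000),
}
for core, (median, best) in bins_mv.items():
    saving = 1 - (best / median) ** 2
    print(f"{core}: best bin ~{saving:.0%} less dynamic power than median bin")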
 

Andrei.

Senior member
Jan 26, 2015
316
386
136
Thanks, I updated on RWT too. We'll see if anyone responds this time.
I still think it's a bit misleading to compare desktop silicon with mobile silicon; there's much more restriction on possible voltages in mobile due to the wide range of environmental factors. A phone SoC needs to work down to -20°C or so. I see that Samsung's earlier voltage tables went a good 100mV lower than what the device runs at on release. It went down to 550mV!
 

lopri

Elite Member
Jul 27, 2002
13,310
687
126
Cortex-A72 Geekbench result in from Mediatek's MT8173

ST: 1559
MT: 3216

Judging from the scores, it looks like a 2+2 configuration! (it is identified as a quad-core by Geekbench)

http://browser.primatelabs.com/geekbench3/2282514

I am guessing it is manufactured on TSMC's 28nm or 20nm. 28nm is more likely, and definitely not 14/16nm. Its clock frequency is unknown, but it is presumably similar to the A57's, and my educated guess puts it around 2.0 GHz. Geekbench reads the A53s running @ 1.4 GHz.

v. A57 (Exynos 5433, 20nm) <- A57 here is running 32-bit
v. A57 (Exynos 7420, 14nm)
v. A8 (Apple A8, 20nm)

It looks promising for mid-to-high-end phones until Qualcomm gets its act together. Per-clock performance is still not at Cyclone level, but given the design philosophy and die size differences, it comes surprisingly close. The bulk of the gains compared to the A57 comes from FP and memory performance. It looks like a more complete and refined A57. It will be interesting to see whether Samsung will follow suit with its own version of the A72 or go the custom route in the future.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Interesting. The much higher single-threaded floating point scores at least sort of match expectations set by David Lutz's LinkedIn profile, where he claimed a 30% higher SPECfp score (I believe this has since been scrubbed). You can see at least some of the ST FP tests performing better on the MT8173 than the 7420, while few of the integer tests do.

I doubt this is really at 2GHz. That'd make many of the tests lower IPC than the A57; although it's possible that other system or configuration factors could account for this, I doubt it'd be so widespread. I'd expect the clock speed to be closer to 1.8GHz, which would make the worst cases about the same IPC. And it'd put the highest IPC improvement around 40% for integer and 50% for FP, which falls vaguely in line with Peter Greenhalgh's description of improved performance. This also assumes performance scales linearly with clock, but I think over these clock differences and for these benchmarks it should be pretty close.

This is also assuming both are running in 64-bit.
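
To make the clock-speed sensitivity explicit, a small sketch normalizing the linked ST score by the two clock guesses (nothing more than score divided by clock):

Code:
# How much the assumed clock shifts the implied per-clock performance of the
# MT8173's Geekbench ST score (1559, from the linked result).
st_score = 1559
for ghz in (2.0, 1.8):                 # the two clock guesses discussed above
    print(f"@ {ghz} GHz -> {st_score / ghz:.0f} ST points per GHz")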
 

lopri

Elite Member
Jul 27, 2002
13,310
687
126
Is there an optimal distance between the frequencies of little cores and big cores in a big.LITTLE configuration? Or is it strictly governed by power/thermals?
 

Andrei.

Senior member
Jan 26, 2015
316
386
136
Is there an optimal distance between the frequencies of little cores and big cores in a big.LITTLE configuration? Or is it strictly governed by power/thermals?
The highest little-core frequency should deliver less performance than the lowest big-core frequency.

Also the MT8173 should be 1807MHz.
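
In other words, the planning check is roughly the sketch below; the per-clock ratio is a hypothetical placeholder, not a measured A53/A57 figure:

Code:
# big.LITTLE frequency planning rule of thumb: the little cluster's top operating
# point should stay below the big cluster's lowest one in delivered performance.
def crossover_ok(little_max_mhz, big_min_mhz, little_per_clock_ratio):
    # little_per_clock_ratio: little-core performance per MHz relative to the big core
    return little_max_mhz * little_per_clock_ratio < big_min_mhz

# Purely illustrative numbers; the 0.5 ratio is a placeholder assumption.
print(crossover_ok(little_max_mhz=1500, big_min_mhz=800, little_per_clock_ratio=0.5))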
 

geoxile

Senior member
Sep 23, 2014
327
25
91
Cortex-A72 Geekbench result in from Mediatek's MT8173

ST: 1559
MT: 3216

Judging from the scores, it looks like a 2+2 configuration! (it is identified as a quad-core by Geekbench)

http://browser.primatelabs.com/geekbench3/2282514

I am guessing it is manufactured on TSMC's 28nm or 20nm. 28nm is more likely, and definitely not 14/16nm. Its clock frequency is unknown, but it is presumably similar to the A57's, and my educated guess puts it around 2.0 GHz. Geekbench reads the A53s running @ 1.4 GHz.

v. A57 (Exynos 5433, 20nm) <- A57 here is running 32-bit
v. A57 (Exynos 7420, 14nm)
v. A8 (Apple A8, 20nm)

It looks promising for mid-to-high-end phones until Qualcomm gets its act together. Per-clock performance is still not at Cyclone level, but given the design philosophy and die size differences, it comes surprisingly close. The bulk of the gains compared to the A57 comes from FP and memory performance. It looks like a more complete and refined A57. It will be interesting to see whether Samsung will follow suit with its own version of the A72 or go the custom route in the future.

Didn't they already say they had a custom microarch on the roadmap to succeed the A57?
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
AnandTech's Galaxy S6 Review out, some Exynos 7420 performance bits:

[Benchmark charts from the linked review omitted]


Exynos 7420 reigns supreme as the fastest phone SoC right now.
Gotta say the Exynos 5433/7410 holds its own quite nicely in the CPU tests, but the slightly better-than-expected GPU scaling going from MP6 to MP8 makes me wonder if Sammy intentionally capped its performance so as not to make the S805 look too bad in comparison - by forcing a BW bottleneck on the GPU side and not enabling 64-bit support on the CPU side.
Thank god they stuck with the Exynos 7420 for all S6 variants.

* I'll add my results later.
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91

GF's yield issues stem from the fact that they refuse to do a copy-exact port of Samsung's 14nm. They are trying to bring it in and ramp it up on the cheap, converting as much of it as they can to run on existing, antiquated 32nm equipment and chemicals.

Of course it can be done, but it comes at the expense of time (your fab engineer's time) and lower yield ramp rate (including the possibility of being capped at a lower yield limit).

On the flip-side, the industry doesn't need yet-another-higher-cost-node right now. It needs a foundry that can figure out how to deliver lower cost/xtor on these advanced nodes.
 
Mar 10, 2006
11,715
2,012
126
GF's yield issues stem from the fact that they refuse to do a copy-exact port of Samsung's 14nm. They are trying to bring it in and ramp it up on the cheap, converting as much of it as they can to run on existing, antiquated 32nm equipment and chemicals.

Of course it can be done, but it comes at the expense of time (your fab engineer's time) and lower yield ramp rate (including the possibility of being capped at a lower yield limit).

On the flip-side, the industry doesn't need yet-another-higher-cost-node right now. It needs a foundry that can figure out how to deliver lower cost/xtor on these advanced nodes.

That's unexpected, especially since GloFo/Samsung implied that they were "copying exact" in the presentation materials they gave when this deal was announced!

The deal is unprecedented in modern foundry history, with GF essentially acknowledging the two companies will use a "copy-smart" approach that involves synchronizing materials, process recipes, and tools.

http://www.extremetech.com/computin...uddy-up-for-14nm-while-ibm-heads-for-the-exit
 
Last edited:

Sweepr

Diamond Member
May 12, 2006
5,148
1,143
136
Reading the review now, some very interesting stuff.

We previously mentioned that Samsung's 14nm process in general will lack any significant die shrink due to an almost unchanged metal interconnect pitch, but this assumption was in comparison to their 20nm LPM process, from which the 14nm LPE process borrows its BEOL (back end of line). Contrary to what we thought, the Exynos 5433 was manufactured on the 20LPE process, which uses a considerably larger metal layer. The result is that one can see a significant die shrink for the 7420: according to Chipworks it is only 78mm², roughly a 31% reduction from the Exynos 5433's 113mm². This is considerable even when factoring in that the new SoC has two added GPU shader cores.

As one can see in the table, we can achieve up to a 250mV voltage drop at some frequencies on the A57s and the GPU. As a reminder, power scales quadratically with voltage, so a drop from 1287.50mV to 1056.25mV, as seen at the worst-bin 1.9GHz A57 frequency, should for example result in a considerable 33% drop in dynamic power. The Exynos 7420 uses this headroom to go slightly higher in clocks compared to the 5433 - but we expect the end power to still be quite a bit lower than what we've seen on the Note 4.

This new GPU is clocked a bit higher as well, at 772 MHz compared to the 700 MHz of the GPU in the Exynos 5433. We see the same two-stage maximum frequency scaling mechanism as discovered in our Note 4 Exynos review, with less ALU-biased loads being limited to 700MHz as opposed to the 5433's 600MHz. There's also a suspicion that Samsung was ready to go higher to compete with other vendors, as we can see evidence of an 852 MHz clock state that is unused. Unfortunately, testing this SoC in depth isn't possible at this time as doing so would require disassembling the phone.

www.anandtech.com/show/9146/the-samsung-galaxy-s6-and-s6-edge-review

Apparently they could have pushed the GPU to 852MHz, which they didn't do because the S810 was underwhelming.
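
Both headline figures are easy to sanity-check from the numbers quoted above, assuming the usual V^2 dependence for dynamic power:

Code:
# Sanity check on the review's figures: die shrink and the dynamic power drop
# implied by the worst-bin A57 voltage reduction (dynamic power ~ V^2).
print(f"{1 - 78 / 113:.0%} smaller die (78mm^2 vs. 113mm^2)")              # ~31%
print(f"{1 - (1056.25 / 1287.50) ** 2:.0%} less dynamic power at 1.9GHz")  # ~33%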
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
That's unexpected, especially since GloFo/Samsung implied that they were "copying exact" in the presentation materials they gave when this deal was announced!

http://www.extremetech.com/computin...uddy-up-for-14nm-while-ibm-heads-for-the-exit

Copy-smart is not called copy-exact for good reason. The word "smart" was invoked to imply (truthfully it turns out) that the process would be copied exactly for only those steps and conditions for which it is easy, low-cost, or critically gating for them to do so.

This is pretty much standard operating procedure for node fanouts in the industry; Intel is the only outlier there with their much truer copy-exact policy.

If you tell a fab engineer that their job is to do nothing more than copy-paste some other engineer's recipes, onto the exact same tool and with the exact same materials and chemistries, then you have an engineer who will rightly recognize there is very little value-add left in the equation for themselves to claim when it comes around to assessment and bonus time.

So there is a personal career-path motivation involved for engineers to convince their boss that they can craft a different recipe on an existing tool with less expensive materials, saving the company money (and that becomes their value-add hook) and thus deserve a better assessment/raise/bonus/etc.

The allure of saving money is irresistible to most managers, as they too need to show their bosses that they (as management) have added value in managing said engineers. And so the whole thing rolls up on itself from the bottom-up.

Intel avoids this inherent conflict-of-interest situation by tying fab monies to the ramp timeline with copy-exact: the value added by the engineers (and their managers) is in how quickly they can ramp with copy-exact, not in trying to find corners to cut from a cost-savings angle.

Anyways, what GF is doing is not new to the industry or to anyone who has worked in the industry for any fab-owning company outside of Intel. But it does smack of penny-wise/pound-foolish nonsense given that the very point of having bleeding-edge technology is to sell it to customers who are willing to pay for it...only you aren't going to be selling much if your yields are so bad that no one will trust their global product rollout timeline to you and your fab production line.