Samsung Exynos Thread (big.LITTLE Octa-core)


Yakk

Golden Member
May 28, 2016
1,574
275
81
http://www.anandtech.com/show/7083/nvidia-to-license-kepler-and-future-gpu-ip-to-3rd-parties

Nvidia so far hasn't managed to license the IP out, but this could be the open door they've been after. I'd put Nvidia at 50/50 and AMD at a 10% chance; the safest bet would probably be PowerVR.

Oh, and I forgot about Nvidia suing Samsung in court for years, which Samsung fought vehemently.

I would also say PowerVR, if Apple doesn't force their hand against it. Or in-house: given their monstrous budgets and experience, Samsung could allocate more than enough resources if it wants, like they did for their Exynos CPU cores. That's pretty much what Samsung would consider.
 

jpiniero

Lifer
Oct 1, 2010
14,600
5,221
136
That MT result would be pretty good if it can be sustained. 5W is a lot for a phone though.
 

Andrei.

Senior member
Jan 26, 2015
316
386
136
8895
http://m.gsmarena.com/exynos_8895_w..._image_processing_speeds_by_70-news-20511.php

"The Exynos 8895 is also said to be Samsung’s first chip manufactured on the 10nm process and run at a max power draw of just 5 watts. The chip also allegedly scored 2301 for single-core and 7019 for multi-core in Geekbench."
If they do 5W at the rumored 3.4 GHz target on all cores, then yeah, OK; but if it's again throttled and that's really just 3 GHz, then it's disappointing and no real improvement over the 8890.
 

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
If they do 5W at the rumored 3.4 GHz target on all cores, then yeah, OK; but if it's again throttled and that's really just 3 GHz, then it's disappointing and no real improvement over the 8890.
It looks the same to me. Why don't they go A73?
Smaller, leaner, and probably far more efficient.
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
Probably because they're using their own custom core (or an evolution of it) again. They could use the stock ARM cores, but the risk of Qualcomm's custom cores having a huge edge like they did with some of the early Krait-based designs wouldn't be good for Samsung if Exynos were stuck using stock ARM cores.
 
Mar 10, 2006
11,715
2,012
126
Probably because they're using their own custom core (or an evolution of it) again. They could use the stock ARM cores, but the risk of Qualcomm's custom cores having a huge edge like they did with some of the early Krait-based designs wouldn't be good for Samsung if Exynos were stuck using stock ARM cores.

Lol, Qualcomm's custom cores lately have been lackluster; just look at the GB4 results. It's game over for them with custom cores, IMO.
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
As a business you can't assume that your competitors will flounder in every endeavor. Qualcomm had demonstrated that in the past they were capable of designing a custom core that had better performance than stock offerings, so if you're Samsung, you'd better assume that it could happen again and plan accordingly, which is why they have started doing their own custom designs as well. If their own design is better than the stock ARM core, it works out even better for Samsung.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
Ok...
About the GPU Issue from Samsung:
- Forget nVIDIA, they have a dispute with Samsung (as well with Qualcomm)
- Power VR is a close friend of Apple and the best design goes for them. Even Mediatek could only get the same GPU from the previous year Apple.
- AMD could have a chance. Samsung is helping them with Vega and Zen. That doesn't come for free. So AMD is forced to help Samsung.
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
MediaTek is also more of a budget SoC maker, so it may not have been a question of whether they could get the newest PowerVR GPU, but that it was much cheaper to use an older design. Also, Apple only owns about 10% of the company, so I don't think they could outright prevent Samsung from using a PowerVR GPU.
 

F-Rex

Junior Member
Aug 11, 2016
19
5
81
Ok...
About the GPU Issue from Samsung:
- Forget nVIDIA, they have a dispute with Samsung (as well with Qualcomm)
- Power VR is a close friend of Apple and the best design goes for them. Even Mediatek could only get the same GPU from the previous year Apple.
- AMD could have a chance. Samsung is helping them with Vega and Zen. That doesn't come for free. So AMD is forced to help Samsung.

The dispute between NVIDIA and Samsung could be solved by a strategic partnership between them.
 

Yakk

Golden Member
May 28, 2016
1,574
275
81
The dispute between NVIDIA and Samsung could be solved by a strategic partnership between them.

Or the dispute started because they couldn't negotiate something upfront to begin with and relations broke down, forcing Samsung to go to Qualcomm instead.

Hence why Nvidia would try to block them both once it lost the opportunity. But Qualcomm, having bought AMD's handheld GPU IP, wasn't worried.
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
The dispute between NVIDIA and Samsung could be solved by a strategic partnership between them.

Neither has much reason to work with the other: Samsung would rather build its own SoC, and Nvidia would rather sell Samsung its own SoC. Neither is in a particularly weak position such that it would be likely to give in, so it's unlikely they could reach terms agreeable to both parties.
 

lopri

Elite Member
Jul 27, 2002
13,209
594
126
8895
http://m.gsmarena.com/exynos_8895_w..._image_processing_speeds_by_70-news-20511.php

"The Exynos 8895 is also said to be Samsung’s first chip manufactured on the 10nm process and run at a max power draw of just 5 watts. The chip also allegedly scored 2301 for single-core and 7019 for multi-core in Geekbench."
According to Andrei's measurements the Kirin 950 (4xA72, 2.3 GHz) consumes 3.7W while beating the Exynos 7420 (4xA57, 2.1 GHz), which consumes 5.4W. The former also does a lot better in sustained performance. 3.0 GHz for 5W on 10nm does not sound very impressive unless there is a huge architectural change to the existing M1 core (which is a slightly modified A57 core, iirc).

http://www.anandtech.com/show/9878/the-huawei-mate-8-review/3

Using the same power-virus at varying thread-counts across all devices and SoCs, we end up with the above table of power consumption. Immediately the Kirin 950’s power figures stand out as being much better than what we’ve come to be used to from ARM’s big cores. At a maximum system load power (total power consumption during a scenario minus average idle power) we can see that the Kirin 950 only reaches 3.7W at full frequency on all big cores. The same scenario on the Exynos 7420 for example reaches a much higher 5.4W. When looking at the per-core increases we see that it seems that the Kirin 950 uses only about 900-700mW of power. The diminishing power with thread count is something that I’ve observed in the past with some SoCs and CPU microarchitectures, so it seems to be a characteristic of the power virus I’m using that starves the cluster of resources and makes each additional thread/core become more bottle-necked.
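The methodology in Andrei's quote boils down to simple subtraction: load power is total draw minus idle, and each core's marginal cost is the delta between successive thread counts. A minimal sketch, using made-up wattages chosen only to mirror the diminishing ~900-700mW per-core pattern described above (not actual review data):

```python
# Load power = total system draw minus average idle draw; each extra
# core's cost is the delta between successive thread counts.
# All wattage figures below are illustrative, not review measurements.

idle_w = 0.6  # assumed average idle draw of the whole device

# total system draw while the power virus runs on 1..4 big cores
total_w = {1: 1.5, 2: 2.4, 3: 3.2, 4: 3.9}

load_w = {n: round(w - idle_w, 2) for n, w in total_w.items()}

# marginal cost of each additional core (diminishing, as Andrei notes)
per_core_w = {n: round(load_w[n] - load_w.get(n - 1, 0.0), 2)
              for n in sorted(load_w)}

print(load_w)      # cumulative load power per thread count
print(per_core_w)  # ~0.9 W for the first core, shrinking after that
```

Note how the marginal per-core cost shrinks with thread count, matching the bottlenecking effect Andrei describes with his power virus.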
 

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
According to Andrei's measurement the Kirin 950 (4xA72, 2.3 GHz) consumes 3.7W while beating the Exynos 7420 (4xA57, 2.1 GHz) which consumes 5.4W. The former also does a lot better in sustained performance as well. 3.0 GHz for 5W on 10nm does not sound very impressive unless there is a huge architectural change to the existing M1 core (which is a slightly modified A57 core iirc).

http://www.anandtech.com/show/9878/the-huawei-mate-8-review/3

Yep. And if we take Andreis a73 roundup
http://www.anandtech.com/show/10347/arm-cortex-a73-artemis-unveiled/3

A73 vs a72 on same freq and process is:

10% faster
20% more power efficient and
25% smaller
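Those three deltas compound into a sizeable gap. A quick back-of-envelope combination, assuming "20% more power efficient" means 20% lower power draw at the same performance (one plausible reading of ARM's ambiguous wording):

```python
# Combine ARM's quoted A73-vs-A72 deltas (same frequency, same process)
# into perf/W and perf/area ratios. Reading "20% more power efficient"
# as 20% lower power draw is an assumption, not ARM's exact definition.
perf = 1.10   # 10% faster
power = 0.80  # 20% lower power (assumed reading)
area = 0.75   # 25% smaller

perf_per_watt = perf / power
perf_per_mm2 = perf / area

print(round(perf_per_watt, 2))  # ~1.38x perf per watt
print(round(perf_per_mm2, 2))   # ~1.47x perf per mm^2
```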

Now, selecting an SoC is often about everything but the hardware itself, but the new Exynos needs an enormous boost to make even the slightest sense, especially as Samsung is cutting capex.

But imo it also indicates Samsung is more traditionally vertically integrated than what is presented. Or let's say they have a very wide interpretation of what should be included in cost.
 

dark zero

Platinum Member
Jun 2, 2015
2,655
138
106
According to Andrei's measurement the Kirin 950 (4xA72, 2.3 GHz) consumes 3.7W while beating the Exynos 7420 (4xA57, 2.1 GHz) which consumes 5.4W. The former also does a lot better in sustained performance as well. 3.0 GHz for 5W on 10nm does not sound very impressive unless there is a huge architectural change to the existing M1 core (which is a slightly modified A57 core iirc).

http://www.anandtech.com/show/9878/the-huawei-mate-8-review/3
Remember something... the GPU in the Samsung is a Mali T880 MP12, while Huawei's is only an MP4; the difference is abysmal.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
According to Andrei's measurement the Kirin 950 (4xA72, 2.3 GHz) consumes 3.7W while beating the Exynos 7420 (4xA57, 2.1 GHz) which consumes 5.4W. The former also does a lot better in sustained performance as well. 3.0 GHz for 5W on 10nm does not sound very impressive unless there is a huge architectural change to the existing M1 core (which is a slightly modified A57 core iirc).

http://www.anandtech.com/show/9878/the-huawei-mate-8-review/3

Those power numbers were from tests that only exercised the big CPU cluster, with idle consumption subtracted out. In practice the numbers would also include PMIC efficiency loss and some amount of dynamic RAM power. But if the SoC were allowed to run full tilt, with both CPU clusters, the GPU, and all other peripherals enabled at maximum clock speed, it would use far more than 5W. That makes Samsung's 5W number meaningless without context, since we don't know how fast it'll allow everything to run. 5W is more of an upper limit on how much power an SoC like this could be allowed to draw in a realistic implementation.
 

lopri

Elite Member
Jul 27, 2002
13,209
594
126
Remember something... the GPU in the Samsung is a Mali T880 MP12, while Huawei's is only an MP4; the difference is abysmal.
As Exophase explained above, the power numbers are drawn from the CPU, specifically the big cores, so the GPU does not come into play. BTW, the Kirin 950 has a lot fewer GPU cores (4 vs. 12) but runs them at a higher clock (900 MHz vs. 600 MHz).

They are probably configured for their target products (the 950 mostly drives 1080p, while the Exynos has to drive 1440p).
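A rough peak-throughput comparison of the two configurations (shader cores × clock), assuming identical per-core, per-clock throughput since both use the same Mali T880 IP:

```python
# Peak-throughput comparison of the two Mali T880 configurations:
# shader cores x clock, assuming each core does the same work per clock
# (both SoCs use the same T880 IP, so this is a reasonable first cut).
exynos_8890 = {"cores": 12, "mhz": 600}  # Galaxy S7: T880 MP12
kirin_950 = {"cores": 4, "mhz": 900}     # Mate 8: T880 MP4

def peak(cfg):
    # relative units (core-MHz), not absolute FLOPS
    return cfg["cores"] * cfg["mhz"]

ratio = peak(exynos_8890) / peak(kirin_950)
print(ratio)  # 2.0 -> the MP12 has twice the peak throughput on paper
```

So the higher clock claws back only part of the MP4's core-count deficit; it still lands at half the Exynos's paper throughput, consistent with its 1080p target.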
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
BenchLife @ Translate said:
Samsung Electronics mentioned in a press release that, compared to 14nm FinFET, the new 10nm FinFET process delivers 27% higher performance, 40% lower power consumption, and a 30% improvement in area efficiency.

Samsung's current first-generation 10nm FinFET process is 10LPE; the second-generation 10LPP process launches in the second half of 2017.

https://benchlife.info/samsung-mass-production-10nm-soc-with-10lpp-10172016
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
I so hope the 30% area improvement number is an embarrassing typo. Else, Samsung would achieve Intel 14nm density with triple patterning and 2D interconnect.

I hope Ashraf doesn't write another Fool article about how much Intel's advantage has shrunk, because with these numbers that Samsung has now sent to the media, Samsung is 2 years behind Intel 14nm with a significantly costlier node :eek:.

But I will assume that is not the case and Samsung is mistaken, because even Intel's competitive analysis and Samsung's own published SRAM numbers show almost 40% scaling, or 1.6x density.

EDIT: It's Samsung's regular SRAM that scales minus 30% or 1.4x. It goes from 0.070µm² to 0.049µm².

So not Intel 14nm density, but very close to Intel's 0.0588µm².
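The cell-area arithmetic in the EDIT can be reproduced directly from the figures quoted in the post:

```python
# Reproduce the SRAM bitcell scaling arithmetic from the post above,
# using the cell areas quoted there (um^2 per bitcell).
samsung_14nm = 0.070
samsung_10nm = 0.049
intel_14nm = 0.0588

shrink = 1 - samsung_10nm / samsung_14nm    # fractional area reduction
density_gain = samsung_14nm / samsung_10nm  # cells per unit area

print(round(shrink, 2))        # 0.3  -> the "minus 30%" / 1.4x figure
print(round(density_gain, 2))  # 1.43
print(round(samsung_10nm / intel_14nm, 2))  # 0.83 of Intel's 14nm cell
```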
 
Last edited:

Snafuh

Member
Mar 16, 2015
115
0
16
witeken said:
TSMC marketing team: we're 5 years behind Intel,
witeken said:
In any case, this strongly suggests that TSMC is indeed going to be 4 years behind Intel.
witeken said:
Samsung might be a bit earlier, but I'm not sure in what kind of volumes, but they will still be 3 years behind.
witeken said:
Let's forget that Intel has been shipping *hundreds* of millions of FinFETs since early 2012, which puts Samsung 3 years behind Intel.

From 5 years to 2 years. So they are catching up according to you ;)