Qualcomm s820 news


shady28

Platinum Member
Apr 11, 2004
2,520
397
126
1484 at 1.57 GHz is extremely impressive, considering the targeted boost frequency of the A72 (and as you mentioned, some of the results on Geekbench even go as high as 1698), but can we be certain that Geekbench is reporting the frequency correctly for this chip?

Always questionable; not just whether Geekbench is reporting accurately, but the reports themselves. There are many outlier reports for this chip on Geekbench's site; I chose that particular comparison because both the A7 and MT8173 scores seemed in line with the bulk of reports.

Anyway, I suspect that this time next month we'll have some real products with the A72 in them to talk about. The initial benchmarks look to me like it will have A8 levels of single-thread performance. Of course, multi-thread perf is already terrific on these high-core-count phones, but that's useless for normal users.

What I find most interesting, though, is how fast the A72 appears to be going from announcement to product launch. In the past it seems to have always taken more than 18 months from ARM announcing a new architecture to actual shipping products. The A57, for example, was announced in October 2012 and didn't make it into a product until June 2014.

If we actually see products with an A72 next month, that will be just 8 months. Shrinking the lag from announcement to shipping products may actually be more important to ARM than this particular design.
 

Bryf50

Golden Member
Nov 11, 2006
1,429
51
91
ARM cores have always been competitive with Qualcomm's CPUs on performance.

A9 was higher performance than the original Kraits, and A15 was easily Krait 400 level.

CPU performance is just one part of the mobile SoC equation.
A9 wasn't even close to the original Krait. http://www.anandtech.com/show/5559/...mance-preview-msm8960-adreno-225-benchmarks/2

My Galaxy S3 with Krait was the first time an Android device really felt right performance-wise. At the time, the Cortex-A9 SoCs always felt sluggish, and it would be much longer before Cortex-A15 was in any products.
 
Last edited:

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
MediaTek already has an 8173 out with 2x A72 and 2x A53. It's been demo'd in tablet and Chromebook form.


Looks like at 1.6 GHz the 8173 is getting 1300-1700 single-thread. It's spec'd to clock up to 2.4 GHz on a 14/16nm process, though (the 8173 is on 20nm).

It's beating an Apple A7 in single-thread, and at 2 GHz+ it should beat an A8 as well. I tend to ignore multi-core results on phones, as long as it has 2 cores.

And this is a low-end part. Huawei is supposed to release a 2.4 GHz quad-A72 + quad-A53 on Sept 2.

Also Qualcomm 618 and 620.

http://news.softpedia.com/news/huaw...eiled-at-ifa-2015-on-september-2-487603.shtml

https://browser.primatelabs.com/geekbench3/compare/3177220?baseline=2705894
Isn't the 8173 on 28nm TSMC?
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
Isn't the 8173 on 28nm TSMC?

Actually yes. It was initially thought to be 14 or 20nm, but it is actually 28nm. However, ARM has targeted the 14/16nm nodes for full performance (that's in the AT release article from February).

This is supposedly an early Snapdragon 620:

[Image: Snapdragon 620 test device results]



For comparison:

[Image: Geekbench Snapdragon 810 comparison]
 
Apr 30, 2015
131
10
81
ARM have stated that the A72 will operate at up to 3 GHz sustained in larger form factors (presumably larger tablets, or perhaps Windows 10 all-in-ones), and at up to 2.5 GHz sustained in a mobile phone.

They have also claimed that their new tools allow very rapid development of new SoCs; used in conjunction with TSMC's 16nm process and ARM's POP IP for that node, this may enable a new generation of A72 SoCs next year.

They have further stated that processor performance is approximately doubling every year:
A15 2014: 1.0
A57 2015: 1.9
A72 2016: 3.5 at 2.5 GHz in a mobile phone.
The latter implies about 4.2x as fast at 3 GHz.
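To make the arithmetic behind that last figure explicit, here's a quick sketch; it simply assumes the relative score scales linearly with clock speed, which is the assumption ARM's own comparison seems to make:

    # ARM's claimed relative performance figures (A15 = 1.0 baseline).
    relative_perf = {"A15 (2014)": 1.0, "A57 (2015)": 1.9, "A72 (2016)": 3.5}

    # The A72 figure is quoted at 2.5 GHz in a phone; assuming the score
    # scales linearly with clock, the 3 GHz sustained figure would be:
    a72_at_3ghz = relative_perf["A72 (2016)"] * (3.0 / 2.5)
    print(round(a72_at_3ghz, 1))  # -> 4.2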
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
They have further stated that processor performance is approximately doubling every year:
A15 2014: 1.0
A57 2015: 1.9
A72 2016: 3.5 at 2.5 GHz in a mobile phone.
The latter implies about 4.2x as fast at 3 GHz.

What SoC had the A15 running at just 1.0 GHz?

Also, there were A15 SoCs running at 1.7 GHz all the way back in 2012 (Nexus 10 and the Samsung Chromebook).

Edit: Sorry, just realized you weren't referring to frequencies with the above numbers; just ignore this post.
 
Last edited:

Mondozei

Golden Member
Jul 7, 2013
1,043
41
86
What I find most interesting, though, is how fast the A72 appears to be going from announcement to product launch. In the past it seems to have always taken more than 18 months from ARM announcing a new architecture to actual shipping products. The A57, for example, was announced in October 2012 and didn't make it into a product until June 2014.

If we actually see products with an A72 next month, that will be just 8 months. Shrinking the lag from announcement to shipping products may actually be more important to ARM than this particular design.

That is not thanks to ARM. There was an interview with one of the ARM guys over at PCPer a few months ago where he mentioned, without specifying which company or which architecture, that the tech companies have become terrifyingly fast at adopting their newest products. He said that what used to take half a year took just 3 weeks in one of their latest launches. He meant the initial adoption into silicon, not the full "we get access" to "we have a consumer product out" cycle.

I took that to mean the A72, but the interviewer didn't press him on it. I also assume he meant MediaTek, but again, he refused to disclose details. The recent A72 news from MediaTek does seem to confirm my initial guess.

I also think QC has been taken by surprise by this stunning speed of late. If the A72 is competitive with the 820, then by spring 2016 Samsung could again be ditching QC in favor of the A72 on 14nm and seeing similar performance.

Ofc, as QC likes to point out, the SoC is a total package: modems, DSPs, sensors and so on. In these areas QC still has a large advantage. Not to mention 3G, an area many competitors have stopped investing in. You get a lot fewer dropped calls on Snapdragons than on MediaTek SoCs, which many users mistake for problems with the network when it is actually the phone (I can source this, but I'm too lazy to Google the link right now).

3G is going to be most relevant for markets like India, ASEAN, and Latin America for the next 5 years at least, possibly longer. Network quality will matter a lot more to people than synthetic benchmarks in Geekbench, and the OEMs know it.

Still, always fun to see QC getting a run for its money on the more vain aspects of the SoC space.
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
I know it's not entirely fair to compare the advances of desktop vs. mobile chips, but if you look at the speed increases here it's quite stunning (for both CPU and GPU). Just compare the last five years: how fast the new mobile SoCs have become, versus the gains from when the 2600K was released to the latest Skylake processors. It's almost embarrassing.

I have a QC Snapdragon 805 in my Note 4, and with CM12.1 installed this phone is really fast. I thought SoC development for smartphones was slowing down, but apparently not, judging by these latest scores. I'm guessing 4K screens and 4-8GB of RAM will be common in flagship smartphones in a few years.

Crazy.
 
Last edited:

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
I know it's not entirely fair to compare the advances of desktop vs. mobile chips, but if you look at the speed increases here it's quite stunning (for both CPU and GPU). Just compare the last five years: how fast the new mobile SoCs have become, versus the gains from when the 2600K was released to the latest Skylake processors. It's almost embarrassing.

I have a QC Snapdragon 805 in my Note 4, and with CM12.1 installed this phone is really fast. I thought SoC development for smartphones was slowing down, but apparently not, judging by these latest scores. I'm guessing 4K screens and 4-8GB of RAM will be common in flagship smartphones in a few years.

Crazy.

Recently I read The Pentium Chronicles, about the development of the P5 architecture, the first x86 OoO architecture. It was completely new; nothing was settled, it all had to be pioneered. My point: that's not the case today. Out-of-order is old and mainstream. "We" know how to make such superscalar processors.

It's not embarrassing. It's as if the phone space were on 45nm and catching up to 14nm. 45nm was terrifying a decade ago; it isn't anymore.
 

J Rock

Banned
Jul 20, 2015
17
0
0
I know it's not entirely fair to compare the advances of desktop vs. mobile chips, but if you look at the speed increases here it's quite stunning (for both CPU and GPU). Just compare the last five years: how fast the new mobile SoCs have become, versus the gains from when the 2600K was released to the latest Skylake processors. It's almost embarrassing.

I have a QC Snapdragon 805 in my Note 4, and with CM12.1 installed this phone is really fast. I thought SoC development for smartphones was slowing down, but apparently not, judging by these latest scores. I'm guessing 4K screens and 4-8GB of RAM will be common in flagship smartphones in a few years.

Crazy.



I would have to agree that it is starting to get embarrassing for Intel, with continuous 5%/yr improvements while ARM and its licensees are doing closer to 50%/yr and haven't slowed down.
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
Recently I read The Pentium Chronicles, about the development of the P5 architecture, the first x86 OoO architecture. It was completely new; nothing was settled, it all had to be pioneered. My point: that's not the case today. Out-of-order is old and mainstream. "We" know how to make such superscalar processors.

It's not embarrassing. It's as if the phone space were on 45nm and catching up to 14nm. 45nm was terrifying a decade ago; it isn't anymore.

I disagree. Phones have had OoO chips for a while now, and some of the chips are already at 14nm (sure, not Intel's 14nm, but you get the point).

Keep in mind that even with these massive YoY speed increases they still have to take battery life into consideration; there's no pulling from the mains here. Perhaps it's more of an x86 vs. ARM thing, but Intel is clearly falling behind where the real big innovation is taking place: the smartphone market.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Recently I read The Pentium Chronicles, about the development of the P5 architecture, the first x86 OoO architecture. It was completely new; nothing was settled, it all had to be pioneered. My point: that's not the case today. Out-of-order is old and mainstream. "We" know how to make such superscalar processors.

It's not embarrassing. It's as if the phone space were on 45nm and catching up to 14nm. 45nm was terrifying a decade ago; it isn't anymore.

P5 was in-order. And what you say now about mobile processors could apply to P5 and P6 too, where the technologies Intel was releasing with successive CPU generations (pipelining, superscalar, OoOE) had already been pioneered in various mainframe/minicomputer/workstation-class CPUs. Intel was bringing them to a new, cheaper, lower-power, and more mainstream application, just like mobile SoCs have been.

It's NetBurst where Intel really first started applying a lot of novel techniques to CPU uarch. A lot of it didn't go so well.
 

Guest1

Member
Aug 11, 2014
28
0
0
I would have to agree that it is starting to get embarrassing for Intel, with continuous 5%/yr improvements while ARM and its licensees are doing closer to 50%/yr and haven't slowed down.

Yet in most reviews the reviewers noted that the Intel-powered Bay Trail Chromebooks were much zippier than the ARM variants, which were arguably "more advanced" than Bay Trail. Somebody has to invent the new methodologies before the foundries copy them ;-) ARM will soon hit that wall as well, between frequency limits and foundry capabilities; then they would be thrilled with "only" a 5% improvement. When Windows 10 comes out there will inevitably be comparisons between ARM-powered and Intel-powered machines. Let's see how the reviewers rate their performance. I have a feeling the Intel-powered variants will feel snappier than the ARM variants.
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
I am kinda tired of people comparing Intel and ARM on performance increases.

Intel is taking a big architecture and scaling it down so it uses less power. You are getting the same or better performance in smaller form factors.

ARM is taking a small architecture and scaling it up. Furthermore, the top performance you see is under bursty software; with ARM, any sustained use will cause the CPU to throttle.

When you compare just the total CPU performance, whether the chips use 2 watts, 4 watts, or 8 watts is a big deal. One metric is total performance and the other is performance per watt. Both ARM and Intel are dramatically improving performance per watt, but you only see large increases in total performance from ARM because one is scaling down while the other is scaling up.


Both companies are doing a tremendous job increasing performance per watt but there are completely different problems scaling up vs scaling down, and thus comparing just performance and not performance per watt is stupid.
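As a rough illustration of how the two metrics tell different stories (the scores and wattages below are made-up numbers for the example, not real benchmark results):

    # Made-up example: absolute performance vs. performance per watt.
    chips = {
        "desktop CPU": {"score": 4000, "watts": 95},
        "phone SoC":   {"score": 1200, "watts": 3},
    }

    for name, c in chips.items():
        print(f"{name}: score={c['score']}, perf/W={c['score'] / c['watts']:.0f}")

    # The desktop chip wins on total performance by about 3.3x,
    # while the phone SoC wins on performance per watt by about 9.5x.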

Your i7 2600K overclocked to 4.6-5 GHz is probably using 200 watts of power just for the CPU under load, since it is not at stock speeds but overclocked with voltages between 1.4 and 1.5 volts. It is only at much lower voltages that you stay within the intended 95 watt TDP.

The new 22nm Haswells and 14nm Broadwells and Skylakes do the same work in the 47 to 70 watt range that the overclocked 2011 i7 does at 200 watts.
 
Apr 30, 2015
131
10
81
I read somewhere on an ARM website, in 2013 I think, that their cores would be doubling in performance year by year. If true, the A72 will be succeeded by an even more powerful core in 2017, maybe on 10nm; maybe that will be Ares.
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
I am kinda tired of people comparing Intel and ARM on performance increases.

Intel is taking a big architecture and scaling it down so it uses less power. You are getting the same or better performance in smaller form factors.

ARM is taking a small architecture and scaling it up. Furthermore, the top performance you see is under bursty software; with ARM, any sustained use will cause the CPU to throttle.

When you compare just the total CPU performance, whether the chips use 2 watts, 4 watts, or 8 watts is a big deal. One metric is total performance and the other is performance per watt. Both ARM and Intel are dramatically improving performance per watt, but you only see large increases in total performance from ARM because one is scaling down while the other is scaling up.


Both companies are doing a tremendous job increasing performance per watt but there are completely different problems scaling up vs scaling down, and thus comparing just performance and not performance per watt is stupid.

Your i7 2600K overclocked to 4.6-5 GHz is probably using 200 watts of power just for the CPU under load, since it is not at stock speeds but overclocked with voltages between 1.4 and 1.5 volts. It is only at much lower voltages that you stay within the intended 95 watt TDP.

The new 22nm Haswells and 14nm Broadwells and Skylakes do the same work in the 47 to 70 watt range that the overclocked 2011 i7 does at 200 watts.

No, Intel messed up at 14nm, and it's obvious when looking at performance-per-watt metrics. Just look at Haswell vs. Skylake. Although none of the reviews really pointed this out, it's embarrassing that the Skylake K-series chips have a higher TDP than their predecessors while offering virtually no improvement in overall performance. They clearly messed up, and I suspect that's why they launched the leakiest K-series parts first instead of where they traditionally launch: notebooks, iMacs, etc.

I would like to understand your logic behind the whole "scaling down is harder than scaling up" statement. This is nonsense. Chip manufacturers have a transistor budget, target a specific TDP, and build the fastest and most efficient chips they can within those limitations. Almost all mobile chips are "bursty" because they quickly hit thermal limits due to form factor, battery constraints, and fabrication limitations, but the advances shown by the OP show they are tackling this problem while still dramatically improving performance.

The big-vs-small design statement is a cop-out for Intel apologetics. They messed up and should be called out for it. A ~50 percent increase in performance over 5 years is embarrassing. Maybe this is just a reflection of no real competition, but they can't get away with this crap in the mobile market.

That being said, Intel does have multiple chip designs, some of which directly address mobile device markets, and with new phones like the Asus ZenFone 2 they are clearly making progress. They still have a long way to go before catching up to QC, Apple, or Samsung (SoC development is a lot more than efficient chip design), but with enough guerrilla marketing they could make a small dent with a few more design wins.
 

shady28

Platinum Member
Apr 11, 2004
2,520
397
126
...
I have a QC Snapdragon 805 in my Note 4, and with CM12.1 installed this phone is really fast. I thought SoC development for smartphones was slowing down, but apparently not, judging by these latest scores. I'm guessing 4K screens and 4-8GB of RAM will be common in flagship smartphones in a few years.

Crazy.

I think 2014 through early 2015 was very slow as far as performance increases go. The Snapdragon 805, circa 2014, is not really much faster than the Snapdragon 800 that came out in 2013. On top of that, the faster Snapdragon 810 turned out to have heat and hence throttling issues that nullified any performance advantage it might have had.


These A72-based chips look to rectify that and bring some pretty significant single-thread performance gains (like 30-40% over the Krait 450 in the 805).
 

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
No, Intel messed up at 14nm, and it's obvious when looking at performance-per-watt metrics. Just look at Haswell vs. Skylake. Although none of the reviews really pointed this out, it's embarrassing that the Skylake K-series chips have a higher TDP than their predecessors while offering virtually no improvement in overall performance. They clearly messed up, and I suspect that's why they launched the leakiest K-series parts first instead of where they traditionally launch: notebooks, iMacs, etc.

I would like to understand your logic behind the whole "scaling down is harder than scaling up" statement. This is nonsense. Chip manufacturers have a transistor budget, target a specific TDP, and build the fastest and most efficient chips they can within those limitations. Almost all mobile chips are "bursty" because they quickly hit thermal limits due to form factor, battery constraints, and fabrication limitations, but the advances shown by the OP show they are tackling this problem while still dramatically improving performance.

The big-vs-small design statement is a cop-out for Intel apologetics. They messed up and should be called out for it. A ~50 percent increase in performance over 5 years is embarrassing. Maybe this is just a reflection of no real competition, but they can't get away with this crap in the mobile market.

That being said, Intel does have multiple chip designs, some of which directly address mobile device markets, and with new phones like the Asus ZenFone 2 they are clearly making progress. They still have a long way to go before catching up to QC, Apple, or Samsung (SoC development is a lot more than efficient chip design), but with enough guerrilla marketing they could make a small dent with a few more design wins.

First, I never said "scaling down is harder than scaling up"; that is you putting words into my mouth. You are not understanding what I am saying if you think my comments can be summed up that way. Scaling up in performance is hard when your best chips are already running at 4 GHz, but if they were only running at 1 GHz it would be far easier. Scaling down in power consumption is hard if your high-end chips are already running at 0.7 volts and you want to go even lower, but it is easier when your high-end chips are at 1.2, 1.3, or 1.4 volts and you want to bring the voltage down from there.

I can go very in-depth on this if you want, but I'd rather not if you are already familiar with the physics and math. Explaining it in full gets rambling, and the fewer words I use the easier it is for people to follow. (Note: this is me trying to be short :p)

Dynamic power consumption is governed by this formula:

P = C * V^2 * F

Power = Capacitance × Voltage² × Frequency

When you have a 4 GHz chip, it is harder to increase its performance by 25% than it is to take a 2 GHz chip and increase its performance by 25%, because that formula is not linear. Each time you increase the frequency you also need to increase the voltage, since the higher the frequency, the faster you have to charge and discharge the gate capacitance (clearing the circuit so it is ready to switch on or off for whatever it needs to do next). That is where you hit a power wall: besides power rising linearly with frequency, you also have to add far more voltage, and voltage contributes quadratically, not linearly, to power consumption.
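A quick sketch of that compounding effect; the voltage bump here (10% more voltage for a 25% higher clock) is purely illustrative, not a measured figure for any real chip:

    # Dynamic power: P = C * V^2 * f
    def dynamic_power(c, v, f):
        return c * v ** 2 * f

    C = 1.0  # arbitrary capacitance; it cancels out in the ratio

    base  = dynamic_power(C, v=1.00, f=2.0e9)   # 2.0 GHz at nominal voltage
    boost = dynamic_power(C, v=1.10, f=2.5e9)   # 25% more clock, ~10% more voltage

    print(f"performance gain: {2.5 / 2.0:.2f}x")    # 1.25x
    print(f"power cost:       {boost / base:.2f}x")  # ~1.51x, well above 1.25x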

For example, desktop chips run their high stock non-turbo clocks at 1.2 to 1.3 volts even on Broadwell and Skylake. Core M, by contrast, targets 0.7 volts and actually runs below that at its non-turbo base speed of 1100 or 1200 MHz depending on the model. Let's pretend there is a 14nm desktop Broadwell Core i3 on the market; currently there is not, and there probably never will be with the skip directly to Skylake, but let's pretend the Core i3 4370 @ 3.8 GHz Haswell chip were really a 14nm Broadwell.

So take that identical desktop i3, downclock it to the Core M's 1.2 GHz, but keep it at the 1.2 volts it would need to run at 3.8 GHz, versus the 0.7 volts of the 1.2 GHz Core M, with turbo off. The desktop chip will use:

(1.2 × 1.2) / (0.7 × 0.7) ≈ 2.94 times more power, even at the same clock speed, purely because of the heat wasted to the higher voltage. The number gets worse once you restore that chip to 3.8 GHz, because you now also burn 3.8 / 1.2 ≈ 3.17 times as much power from the frequency alone. And it goes higher still, because increasing the voltage also increases the temperature due to resistance, which degrades the electrical properties of the material and makes the chip even less efficient. So 2.94 × 3.17 ≈ 9.3, meaning you are using more than 9 times as much power, plus whatever is lost to temperature, to gain only about 3.2x more speed.
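Plugging the post's own numbers (1.2 V vs 0.7 V, 3.8 GHz vs 1.2 GHz) into the same formula, and ignoring the temperature term:

    # P = C * V^2 * f applied to the desktop-i3 vs Core M example above.
    def dynamic_power(c, v, f):
        return c * v ** 2 * f

    C = 1.0  # capacitance cancels out when comparing the two configurations

    core_m          = dynamic_power(C, v=0.7, f=1.2e9)
    desktop_downclk = dynamic_power(C, v=1.2, f=1.2e9)  # desktop voltage, Core M clock
    desktop_full    = dynamic_power(C, v=1.2, f=3.8e9)  # desktop voltage and clock

    print(f"voltage penalty alone:     {desktop_downclk / core_m:.2f}x")        # ~2.94x
    print(f"frequency on top of that:  {desktop_full / desktop_downclk:.2f}x")  # ~3.17x
    print(f"total (before temp. loss): {desktop_full / core_m:.1f}x power for ~3.2x speed")  # ~9.3x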

Here is a great forum post by a user here called I Don't Care; he is an Elite member of these forums and works as a CPU designer by profession, and he did a great hobby post showing the power consumption of an i7 2600K and an i7 3770K at various temperatures and voltages, illustrating the fundamental equation above under real-life conditions.

It has wonderful graphs

i7-3770K vs. i7-2600K: Temperature, Voltage, GHz and Power-Consumption Analysis

http://forums.anandtech.com/showthread.php?t=2281195

-----

Thus it is harder for Intel to make a desktop processor faster than it is for them to make a mobile processor more power efficient. It is easier for ARM because they are starting from a smaller chip and scaling up.

And even though it is harder for Intel to increase total performance on a desktop chip, it is very easy for them to increase performance per watt by scaling down and making the design more efficient: tricks so it uses less voltage, better transistors such as FinFETs, reduced leakage, and so on, just as it is easy for ARM to scale up.

But at the same time it is harder for ARM to improve, not on performance, but on minimum power consumption, because they have been optimizing for that for so long.

Both companies have challenges, but they are purposefully encroaching on each other's turf: you cannot fight physics, and it is harder for each to get even better at what already made it famous, so the low-hanging fruit is to do the things they have not specialized in.

-----

I am sorry, but this is not opinion; it is flat-out physics, electrical properties, chemistry, and math, and there is a limit to how far you can out-engineer the fundamental rules of the universe. Looking at performance per watt is therefore a more equal way of judging CPU improvements across generations, even if you could not care less about performance per watt on your desktop. Mine does not care whether it saves 10 or 20 watts; it is a 200 to 300 watt machine inside a case, with a power supply that could handle 1000 watts if I really wanted to push it. I purposely picked not a 1000W PSU but a 620W Seasonic that stays fanless unless I play a video game, yet my case can handle 3 or 4 GPUs; I chose it for quietness and ease of assembly and tinkering even though I only use a single GTX 680. It sucks, but it is harder to make a desktop CPU faster in single-thread. A desktop GPU is different, adding more cores is different, adding a faster SSD or faster RAM is different, but single-thread is very hard to improve today in desktop form factors. It sucks, but it is true.

Now, getting the CPU performance we used to have in a 2008-2011 desktop into a cell phone / 7-inch tablet form factor is far easier, though even that is not easy. I am glad we are now starting to see this from both Intel and ARM, and if AMD had better fabs and more engineers to spend on fine-tuning their CPUs, we would see it from them as well.
 
Last edited: