Qualcomm moves Cortex A72 to the mid-range


Nothingness

Diamond Member
Jul 3, 2013
3,309
2,382
136
Are people still having trouble discerning the difference between marketing slides and presentations given to developers (hence: IDF)?
Yeah, sure, Intel, like others, doesn't lie to devs. I never trust any figure found in slides from a company about its own products.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Yeah, sure, Intel, like others, doesn't lie to devs. I never trust any figure found in slides from a company about its own products.

Crystalwell's been on the market for almost 2 years now. I think AnandTech has tested it..
 

imported_ats

Senior member
Mar 21, 2008
422
64
86
Considering that most companies will go for a mass market product, why build the software when the hardware hasn't been there? And advising people to never move away from 2-3 cores? Damn, Apple had better stick with 3 cores forever!

We've had large multi-context machines for decades. The issue hasn't been and isn't the hardware to run the software. The issue is, and will continue to be, that genuinely useful multi-threaded programming is extremely difficult, and it's only made more difficult by the vast majority of application areas lacking extreme data sets and data independence. We've had consumer-level multi-context hardware in widespread use for a decade.

The reality is that multi-context software generally doesn't provide any significant end user performance benefit for the majority of application spaces.
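
To put rough numbers on that, here's a quick Amdahl's law sketch (the fractions and core counts are illustrative assumptions, not measurements of any real application):

```python
# Amdahl's law: overall speedup = 1 / ((1 - p) + p / n), where
# p is the parallelizable fraction of the work and n the core count.
# The values of p below are illustrative assumptions only.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9):            # 50% vs 90% of the work parallelizable
    for n in (2, 4, 8):         # number of cores
        print(f"p={p:.0%}, n={n}: {speedup(p, n):.2f}x")
# With only half the work parallelizable, 8 cores top out around 1.78x,
# which is roughly why most consumer apps see little gain past 2-3 cores.
```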
 

jdubs03

Golden Member
Oct 1, 2013
1,291
904
136
Yeah, sure, Intel, like others, doesn't lie to devs. I never trust any figure found in slides from a company about its own products.

I use the slides as a baseline so I can compare actual results when independent reviews come out. I admit I don't keep track of all the nuances, cataloging claimed vs. actual data; I don't care enough. But from what I saw with Broadwell's Gen 8 GPU being claimed at 25-50% faster, going from the Gen 7.5 HD 4400 to the Gen 8 HD 5500 there were some benchmark increases within that range, some a little below, and some higher.

Obviously there will always be marketing spin, but it's doubtful that a company is going to egregiously lie in their slides. There has to be some acceptance that the actual results won't deviate too much. Their goal is to be as trustworthy as possible (while also portraying themselves as making significant improvements), so I think you might be going a bit far by not putting any trust in them.
 

MisterLilBig

Senior member
Apr 15, 2014
291
0
76
Are people still having trouble discerning the difference between marketing slides and presentations given to developers (hence: IDF)?

I'll remember you saying that.

You were actually supposed to copy and paste that line into Google, but I guess you didn't bother.

It got me to "Design with Intel Iris Graphics: Great Performance on Innovative Form Factors", and there was nothing there on Crystalwell on Page 13. Or anywhere about a 30% increase from Crystalwell.

You can also just take my word for it that there are a bunch of benchmarks showing a 1.2 to 1.7x or so improvement.

Why? No. Proof.

While I tend to agree with Nothingness that taking Intel's word for something isn't always the best way to go (independent verification is helpful), I did find that slide deck useful.

At least the slides I saw were pure marketing. What slides was he talking about? Because, you know, it was a reference we had to chase down by Google searching.


Crystalwell's been on the market for almost 2 years now. I think AnandTech has tested it..

And they had this to say, "I would also like to see an increase in bandwidth to Crystalwell. While the 50GB/s bi-directional link is clearly enough in many situations, that's not always the case."

The reality is that multi-context software generally doesn't provide any significant end user performance benefit for the majority of application spaces.

Believe what you wish; if something like LibreOffice can get a multi-fold performance boost by going heterogeneous, I'm sure most applications will benefit in some form.
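
For what it's worth, the LibreOffice case works because spreadsheet column math is data-parallel: every cell's result is independent of the others. A toy sketch of that shape in Python (the per-cell formula is made up; nothing here is LibreOffice's actual code):

```python
# Toy illustration of why a spreadsheet-style column operation maps well
# onto parallel/heterogeneous hardware: each output cell depends only on
# its own inputs, so the work splits cleanly across workers.
from concurrent.futures import ProcessPoolExecutor

def cell_formula(ab):
    a, b = ab
    return a * b + 1.0           # hypothetical per-cell formula

if __name__ == "__main__":
    col_a = [float(i) for i in range(200_000)]
    col_b = [2.0] * 200_000
    with ProcessPoolExecutor() as pool:
        # chunksize keeps per-task overhead low on long columns
        out = list(pool.map(cell_formula, zip(col_a, col_b), chunksize=10_000))
    print(out[:3])
```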
 

Nothingness

Diamond Member
Jul 3, 2013
3,309
2,382
136
I use the slides as a baseline so I can compare actual results when independent reviews come out. I admit I don't keep track of all the nuances, cataloging claimed vs. actual data; I don't care enough. But from what I saw with Broadwell's Gen 8 GPU being claimed at 25-50% faster, going from the Gen 7.5 HD 4400 to the Gen 8 HD 5500 there were some benchmark increases within that range, some a little below, and some higher.

Obviously there will always be marketing spin, but it's doubtful that a company is going to egregiously lie in their slides. There has to be some acceptance that the actual results won't deviate too much. Their goal is to be as trustworthy as possible (while also portraying themselves as making significant improvements), so I think you might be going a bit far by not putting any trust in them.
When I said "lie" I was exaggerating :) What I mean is that a company will always show what makes its products look good, no matter who it is talking to, investors or devs. In fact, I'm not sure I could show a slide where Intel lied, but I certainly could show some where the benchmark selection made a product look better than it was (cf. some slides about early Atom vs ARM). That's why I don't trust these slides unless external reviews prove them right (and that's what you say you do in that case).
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
It got me to "Design with Intel Iris Graphics: Great Performance on Innovative Form Factors", and there was nothing there on Crystalwell on Page 13. Or anywhere about a 30% increase from Crystalwell.

Why? No. Proof.

At least the slides I saw were pure marketing. What slides was he talking about? Because, you know, it was a reference we had to chase down by Google searching.
Sorry, my mistake. I had already downloaded the PDF earlier, and the PDF you're referring to has a very slightly different name, which I didn't catch.
Here's the right one...

SZ14_ARCS001_100_ENGf filetype:pdf
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
That's why I don't trust these slides unless external reviews prove them right (and that's what you say you do in that case).

Don't forget Core M slides.

But when they are as specific as that slide shows, it's hard to say they are wrong. Maybe it will only perform like that in the tested cases, but it's nevertheless true for what they are showing.

Also, 30% gains seem right to me. Iris Pro on mobile is 50-70% faster than the HD 4600, which is a desktop part. We can see that DOUBLING the execution units makes little difference, looking at how HD 4400 vs HD 5000 performs (and even at double the TDP, like Iris 5100). The rest HAS to be the eDRAM. I'd say in this case the real world is probably even better than the marketing.
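
Rough arithmetic behind that reading (both factors are eyeballed from the comparisons above, so treat the result as a ballpark, not a measurement):

```python
# If Iris Pro 5200 is ~1.6x an HD 4600 overall, and doubling the EU count
# alone only buys ~1.15x (per the HD 4400 vs HD 5000 comparison), the
# leftover factor has to come from elsewhere -- mostly the 128MB eDRAM.
total_speedup = 1.60    # assumed midpoint of the 50-70% range above
eu_doubling   = 1.15    # assumed gain from 2x EUs alone
residual = total_speedup / eu_doubling
print(f"residual factor (eDRAM + TDP headroom): ~{residual:.2f}x")  # ~1.39x
```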

A question. I thought this thread was for talking about Qualcomm chips?
 

MisterLilBig

Senior member
Apr 15, 2014
291
0
76
Sorry, my mistake.

Forgiven. I checked the slides; I wish the information weren't as buried as it is. But it basically says that Iris Pro 5200 is 20%-30% faster than Iris 5100. I don't think we have seen that proven anywhere, have we?

Also, 30% gains seem right to me. Iris Pro on mobile is 50-70% faster than the HD 4600, which is a desktop part.

The gain is attributed to the Crystalwell platform, so the number of EUs is not an issue. Meaning, it's the gain over the Iris 5100 iGPU.




On topic, why should Qualcomm make their own custom chips? What is the A72 lacking?
 

Nothingness

Diamond Member
Jul 3, 2013
3,309
2,382
136
I did some forum archaeology, and it seems a little comment from someone made the discussion move on to another topic:

http://forums.anandtech.com/showpost.php?p=37188887&postcount=68
I like digressions :)

BTW, it can be seen from the benchmarks that the comparison is CRW 128MB vs 0MB.
Yeah, I saw that in the slide deck; that's definitely interesting, but I'd like some third-party confirmation. I wonder if the L4 cache/memory can be disabled in the BIOS...

To get back on subject, I wonder what will remain of the 3.5x efficiency-improvement claim ARM made for the A72, since Qualcomm will be using 28nm and not some smaller process. Anyway, what matters to me is the supposed 10-50% IPC improvement, which will be there no matter what the process is.
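
A quick decomposition of why the 3.5x shrinks on 28nm; the process/microarchitecture split below is my own assumption, not ARM's breakdown:

```python
# ARM's headline 3.5x (A72 on a new node vs A15 on 28nm) bundles process
# scaling with microarchitecture gains. On the same 28nm process the
# process factor drops out, leaving only the uarch part. The split is an
# assumed illustration, not a figure from ARM.
claimed_total  = 3.5
process_factor = 2.3            # assumed contribution of the node shrink
uarch_factor   = claimed_total / process_factor
print(f"uarch-only gain on the same node: ~{uarch_factor:.2f}x")  # ~1.52x
# ...which lands in the same ballpark as the 10-50% IPC figure.
```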
 

Nothingness

Diamond Member
Jul 3, 2013
3,309
2,382
136
On topic, why should Qualcomm make their own custom chips? What is the A72 lacking?
There are at least two reasons:

  • probably lower royalties to pay to ARM
  • being different from the crowd of other SoCs (for instance, they can clock their cores at different speeds, something all previous ARM cores have lacked; since this wasn't mentioned for the A72, it probably doesn't have it either)
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
The A72 on 28nm should be at about Apple A8 performance level. Now, 28nm has the lowest cost per transistor, so the cost for consumers will be modest. But it's still interesting that they seem to believe that kind of power and IPC in a phone will soon be midrange!
 

NTMBK

Lifer
Nov 14, 2011
10,455
5,841
136
I wonder if they might be using one of the SOI 28nm processes? Might help keep power consumption under control, which seems necessary when the A15 was already borderline.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
I wonder if they might be using one of the SOI 28nm processes? Might help keep power consumption under control, which seems necessary when the A15 was already borderline.

For the small vendors it perhaps makes sense, but Qualcomm ships huge volumes, no? Do we have any estimates of what SOI adds to wafer cost?
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
For the small vendors it perhaps makes sense, but Qualcomm ships huge volumes, no? Do we have any estimates of what SOI adds to wafer cost?

10%, but if I remember correctly, the density is also worse.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
10%, but if I remember correctly, the density is also worse.

And even if the power reduction is, say, approximately 30% for load and idle, is it then worth it? On marginal cost, yes, big time, but the transition is also costly, and it probably also depends on what 28nm can be sold for without SOI. SOI is perhaps better viewed as a means to lengthen the depreciation period for the equipment. I guess the new 14nm is extremely costly.
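
Framing that trade with the toy numbers from this thread (the 10% adder and ~30% saving are the figures floated above, not vendor data):

```python
# Crude SOI cost/benefit framing. Whether the trade is worth it also
# depends on die size, yield, and what the extra battery life is worth
# to the phone maker -- none of which is modeled here.
bulk_wafer_cost = 1.00
soi_wafer_cost  = bulk_wafer_cost * 1.10    # ~10% substrate cost adder
power_saving    = 0.30                      # ~30% lower load/idle power
print(f"wafer cost adder: {soi_wafer_cost / bulk_wafer_cost - 1:.0%}, "
      f"power saved: ~{power_saving:.0%}")
# If density also differs from bulk (debated below), cost per die shifts
# accordingly, which matters most for a high-volume vendor like Qualcomm.
```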

But anyway, the A72 must be far better than even the A57 at power management, as big.LITTLE will at best not work optimally at A72 launch time. The A15 was kind of buggy on its own. As the recent AT Snapdragon 810 review said, it took 3 iterations to fix it, so there is plenty of room for improvement even on 28nm.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
Worse than 20nm, but I thought it was a little better than 28nm bulk? Could be wrong, I don't obsess over it like Seronx ;)

Haha.
Yeah, it should be a bit better for density, not worse.
Seronx is a bit quiet btw.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
I wonder if they might be using one of the SOI 28nm processes? Might help keep power consumption under control, which seems necessary when the A15 was already borderline.

Worse than 20nm, but I thought it was a little better than 28nm bulk? Could be wrong, I don't obsess over it like Seronx ;)

Remember that, outside of IBM's SOI process, the sub-32nm SOI process tech you are going to be reading about and thinking of in terms of foundry offerings and access for fabless entities is the FD-SOI variant developed by STM and licensed to anyone who'd take a license, because STM's internal market volume doesn't/can't justify continued R&D investments.

And for STM's FD-SOI, they decided to take a page from the foundry market in terms of node-label marketing and have labeled their 28nm FDSOI node as their "20nm node".

So if you want to think about comparisons between a 28nm HPC node and a "20nm" FDSOI node, just be aware that the 20nm FDSOI node is essentially a 28nm node with an FDSOI substrate.

Begun these node labeling wars have.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
Remember that, outside of IBM's SOI process, the sub-32nm SOI process tech you are going to be reading about and thinking of in terms of foundry offerings and access for fabless entities is the FD-SOI variant developed by STM and licensed to anyone who'd take a license, because STM's internal market volume doesn't/can't justify continued R&D investments.

And for STM's FD-SOI, they decided to take a page from the foundry market in terms of node-label marketing and have labeled their 28nm FDSOI node as their "20nm node".

So if you want to think about comparisons between a 28nm HPC node and a "20nm" FDSOI node, just be aware that the 20nm FDSOI node is essentially a 28nm node with an FDSOI substrate.

Begun these node labeling wars have.

What's your guess then? :) Will some A72 be on SOI?
 
Mar 10, 2006
11,715
2,012
126
What's your guess then? :) Will some A72 be on SOI?

No. These are mass market chips that need to have a competitive cost structure (read: high yields) and significant reuse of IP blocks that have been tested and proven on current TSMC 28nm.

These will be on 28nm TSMC process; the only question is whether it'll be HPm or HPC.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
No. These are mass market chips that need to have a competitive cost structure (read: high yields) and significant reuse of IP blocks that have been tested and proven on current TSMC 28nm.

These will be on 28nm TSMC process; the only question is whether it'll be HPm or HPC.

I agree here. One can add that you can get power down by just lowering the frequency; there is plenty of room for that. Even, e.g., 1.4 GHz is going to be crazy fast for midrange anyway.
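
Rough numbers on why lowering the clock pays off more than linearly; dynamic power scales roughly with f·V², and the voltage values here are assumed for illustration only:

```python
# Dynamic CPU power scales roughly as P ~ f * V^2, and a lower clock
# usually allows a lower voltage, so frequency cuts save power
# superlinearly. Operating points below are hypothetical.
f_hi, v_hi = 1.8, 1.00          # assumed "full speed" point (GHz, norm. V)
f_lo, v_lo = 1.4, 0.90          # assumed midrange point
ratio = (f_lo * v_lo**2) / (f_hi * v_hi**2)
print(f"dynamic power at 1.4 GHz: ~{ratio:.0%} of the 1.8 GHz point")  # ~63%
```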