I'm not sure that comparison is apt from a performance perspective, but maybe the control allows them to iterate faster, as they know what's coming on the software side. Intel has to wait for vendors.
Intel is also executing pretty horribly. If they had had Ice Lake out last year, it would have been faster and used less power than Amber Lake/Whiskey Lake.
Amber Lake may be pretty comparable to the A12 in ST performance. What Apple's execution advantages afford them is the ability to put that performance in a smartphone. That is a substantial advantage, but not as out-of-this-world as the "performs like a desktop chip" quotes suggest.
Don't get me wrong, it's great performance in such a small TDP, but if you look through the design, i.e., only dual cores but very wide ones, you can tell it's been developed for mobile operation, at lower clockspeeds, on efficiency-orientated, lower-leakage nodes. Looking at the "desktop" cores from AMD and Intel, they probably give up potential per-core IPC just so they can clock higher, scale to more cores, etc. Even the process nodes they use, which are deemed "high performance", probably mean more leakage, etc.
Can I see a desktop-equivalent ARM-based core? Sure, but I don't think it will be as easy, or as small a chip, as people think it will be, even if x86 is a shoe-horned thing.
In fact, I would rather see more coverage of RISC-V, and it is a shame that Linus pays more attention to it than mainstream tech websites do:
He even looked at the first production RISC-V chip.
Bit off topic, but Intel is moving rapidly away from them. The XMM 7560 LTE modem in the latest iPhone uses an x86 core. Earlier LinkedIn reports had it pegged as an Atom core. I'd bet it's something like a 400MHz Silvermont. Their server chipset has used an embedded Quark core since Skylake. Their modem division uses Atom cores too.
The fact that they can use Silvermont on an LTE modem while being competitive on power usage is big - at least they know how to make efficient cores, so this can be proliferated further. They should have done this a long time ago. The point of x86 everywhere is that they get to learn from it and apply it to their main cores.
I didn't know that.
As it is, the A12 is very competitive with top-of-the-line desktop Core/Zen parts in a lot of the browser benchmarks and Geekbench. I imagine it's very competitive in a lot of tasks. Is 4+ GHz really needed?
I'm actually kind of interested in how they are connecting the big and small cores, and whether an A12X comes along with more CPU cores.
They are kind of synthetic benchmarks, and again, very few and very wide cores, which probably need a lot of resources to keep them at peak utilisation, and again all done under a closed OS with tweaks. This is why they needed to rejig the memory subsystem. Wide cores also mean they want to stay in a lower clockspeed range - ever thought about what pushing higher clockspeeds means (IPC given up for longer pipelines and use of leakier high-performance processes)? But then very wide designs have their own problems too. It's not the first time designers have tried to go wider and lower clocked.
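The wide-and-slow vs narrow-and-fast trade-off above can be sketched as simple arithmetic: single-thread throughput is roughly IPC times clock, so two very different designs can land in the same place. The IPC and clock figures below are made-up illustrative assumptions, not measured numbers for any real chip.

```python
# Back-of-envelope sketch of the wide/low-clock vs narrow/high-clock trade-off.
# All figures here are illustrative assumptions, not data for any real CPU.

def throughput_gips(ipc, clock_ghz):
    """Rough single-thread throughput in billions of instructions per second."""
    return ipc * clock_ghz

# Hypothetical wide mobile core: high IPC, modest clock, low-leakage process.
wide_core = throughput_gips(ipc=5.0, clock_ghz=2.5)

# Hypothetical narrow desktop core: lower IPC, high clock (bought with a
# longer pipeline and a leakier high-performance process).
narrow_core = throughput_gips(ipc=2.6, clock_ghz=4.8)

# Similar throughput from very different design points.
print(f"wide: {wide_core:.2f} GIPS, narrow: {narrow_core:.2f} GIPS")
```

Under these assumed numbers the two cores come out within a fraction of a percent of each other, which is the point: neither approach is free, they just pay in different currencies (pipeline depth and leakage vs per-cycle resources).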
Will a supercomputer have an A12 core? Last time I checked, IBM is supplying the CPUs for ORNL. You might as well compare a console to a gaming desktop. One runs games very efficiently with what it has, and the other brute-forces it.
As usual at EVERY phone launch, more and more hype. Also, for web browsing even a cheapo Android phone or an old computer is fine. Most Android phones sold worldwide are not higher-end devices, and Android makes up 77% of the smartphone OS share. Things like web browsing and word processing have not needed much power for a very long time. It's why smartphones are reaching 3-year lifespans, and laptops/desktops are being kept longer and longer.
People are instantly calling Intel and AMD doomed, just because Apple made a 7-billion-transistor dual core on the most efficient node in the world. All great, but they are just throwing transistors at the problem.
Is it going to run something like Fallout 4 fine, which is very single-core bottlenecked, with 100s of NPCs, massive draw calls, etc.? Will it be able to do software 4K transcoding on only two cores, etc.?
This is all hype TBH, as Apple and Samsung need to find new ways to increase phone prices to increase margins, as more and more people are holding off upgrading. What happened to the PC market is now happening to the phone market.
In reality, what would be more useful is not all these pointless increases in processing power for a phone, but phones which can have functions like GPS, data, etc. used a lot and still last a week between charges.
Things like trickle solar recharging through the screen, which I believe Apple and Samsung looked at, but nothing has come out of it.
I would imagine that for most phone users, their battery running out at an awkward time is more of an issue than X amount more performance when ordering some trainers on the Amazon app, or watching Taylor Swift etc. on YouTube.
Dropping power draw more and more makes more sense in terms of battery lifespan, as more charge/discharge cycles are what destroy batteries - some of the phones and tablets with liquid cooling are really a step backwards. They should not need it.
Battery tech is the limiting factor nowadays, and wearing down batteries quicker with largish power draws to push benchmark figures is great for the companies, as built-in obsolescence sells more phones (worse with Android). Most of the batteries are dumped as it is difficult to recycle them - it is the hidden cost of smartphones and tablets.
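The link between average power draw and battery wear can be put into rough numbers: energy used per day divided by pack capacity gives fractional charge cycles per day, and batteries are only rated for so many full cycles. The capacity, cycle rating, and usage hours below are all illustrative assumptions, not specs for any real phone.

```python
# Rough sketch: higher average power draw burns through the rated
# charge/discharge cycles faster, so the battery degrades sooner.
# All numbers are illustrative assumptions, not real device specs.

BATTERY_WH = 10.0    # assumed pack capacity (~2700 mAh at 3.7 V)
RATED_CYCLES = 500   # assumed full cycles before capacity noticeably fades

def years_to_rated_cycles(avg_power_w, hours_per_day=16):
    """Years until the rated cycle count is used up at a given average draw."""
    wh_per_day = avg_power_w * hours_per_day
    cycles_per_day = wh_per_day / BATTERY_WH  # fractional full cycles per day
    return RATED_CYCLES / cycles_per_day / 365

low_draw = years_to_rated_cycles(0.5)   # frugal SoC tuning
high_draw = years_to_rated_cycles(1.0)  # benchmark-chasing tuning

print(f"0.5 W average: {low_draw:.2f} years of rated cycles")
print(f"1.0 W average: {high_draw:.2f} years of rated cycles")
```

Under these assumptions, doubling the average draw halves the time to exhaust the rated cycles (roughly 1.7 years down to about 0.9), which is the built-in-obsolescence mechanism the comment is pointing at.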