Apple CPUs "just margins off" desktop CPUs - Anandtech

Page 6

IntelUser2000

Elite Member
Oct 14, 2003
5,853
104
126
Again talking about cores. USER8000 was talking about dies,
No, but at this point we are arguing about something that gets us nowhere.

So actually the situation would really be the opposite of what you describe. It's the Skylake core that is 4 times as big as the Vortex core.
If we take Intel's 10nm = TSMC's 7nm literally, then a Skylake core may be a little over 2x the size of the A12's core.

Intel's main cores don't seem particularly efficient in terms of die size. Perhaps they don't care that much, though. You can see with ARM dies and Intel's small-core (Atom) dies that the CPU core takes up a tiny portion of the total die. Power measurements, however, show those tiny blocks are also responsible for a large portion of power use.

Contrast that with Intel's main cores: their CPU cores take up a significant part of the die. It could be related to high clock requirements necessitating that the blocks be spread out to reduce thermal hotspots. Managing thermals and power use is the biggest problem for modern MPUs.

Packaging costs start to dominate under 80mm², so they don't have a big reason to make it much smaller anyway. There's likely an optimal balance between the two, as making the die even larger will bring diminishing returns, since the contact with the heatsink does a reasonable job of lifting heat away once a certain core size is reached.
 

Nothingness

Golden Member
Jul 3, 2013
1,875
30
106
@CatMerc I mostly agree with what you wrote except that we should not care what Intel screwed up. What matters is what is available now and this is what should be compared. The reasons why Intel seems to stand still (and I'm the first to say accusing them of standing still is unfair and wrong) are a don't care in such a comparison.

We'll revisit when Intel next gen is here :)
 

beginner99

Diamond Member
Jun 2, 2009
3,975
111
126
For the A12 look at Andrei's article: the big core should be 2.07mm², compared to 8.73mm² for the Skylake core. So actually the situation would really be the opposite of what you describe. It's the Skylake core that is 4 times as big as the Vortex core.
2.07mm² is obviously without caches, so they would need to be removed from the Skylake die size as well. Then we need to decide whether we are talking about server Skylake or client. I went with client, as with the server one you have AVX-512 support, which increases core size for zero benefit in this specific test.

But fair enough. I just believed what the previous poster I responded to wrote, which might have been wrong.
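For what it's worth, the node-normalized version of that comparison is simple arithmetic (a sketch: the ~2x density factor from Intel 14nm to a 7nm-class node is an assumption, and these are per-core figures without L3):

```python
# Rough core-size comparison, normalizing for process node.
# Figures from the thread: Vortex core ~2.07 mm^2 (TSMC 7nm),
# Skylake core ~8.73 mm^2 (Intel 14nm). The 2x density factor for
# Intel 14nm -> 7nm-class is an assumption, not a measured number.
vortex_mm2 = 2.07
skylake_14nm_mm2 = 8.73
density_factor = 2.0  # assumed shrink going to a 7nm-class node

skylake_scaled = skylake_14nm_mm2 / density_factor
ratio = skylake_scaled / vortex_mm2
print(f"Skylake scaled to a 7nm-class node: {skylake_scaled:.2f} mm^2")
print(f"Skylake / Vortex size ratio: {ratio:.1f}x")  # a little over 2x
```

That's how you get from "Skylake is ~4x bigger" on the raw numbers to "a little over 2x" once the node difference is accounted for.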
 

CatMerc

Golden Member
Jul 16, 2016
1,111
36
106
@CatMerc I mostly agree with what you wrote except that we should not care what Intel screwed up. What matters is what is available now and this is what should be compared. The reasons why Intel seems to stand still (and I'm the first to say accusing them of standing still is unfair and wrong) are a don't care in such a comparison.

We'll revisit when Intel next gen is here :)
My point is that making this an x86 vs ARM comparison and calling the differences inherent to the architecture is silly.
 

Etain05

Junior Member
Oct 6, 2018
7
0
16
No, but at this point we are arguing about something that gets us nowhere.

If we take Intel's 10nm = TSMC's 7nm literally, then a Skylake core may be a little over 2x the size of the A12's core.

Intel's main cores don't seem particularly efficient in terms of die size. Perhaps they don't care that much, though. You can see with ARM dies and Intel's small-core (Atom) dies that the CPU core takes up a tiny portion of the total die. Power measurements, however, show those tiny blocks are also responsible for a large portion of power use.

Contrast that with Intel's main cores: their CPU cores take up a significant part of the die. It could be related to high clock requirements necessitating that the blocks be spread out to reduce thermal hotspots. Managing thermals and power use is the biggest problem for modern MPUs.

Packaging costs start to dominate under 80mm², so they don't have a big reason to make it much smaller anyway. There's likely an optimal balance between the two, as making the die even larger will bring diminishing returns, since the contact with the heatsink does a reasonable job of lifting heat away once a certain core size is reached.
I agree with all of this; my point was that it doesn't make sense to say that Apple couldn't scale to a 28-core part because the die of such a chip would be too big, or because it wouldn't be price-competitive with Intel.

As far as clocks go, Apple doesn't need to increase them that much, simply because it has better IPC (performance per clock, however you want to call it). And as for simply throwing in more cores: if Intel, AMD, Cavium, Qualcomm and others can do it, I see no reason to believe that Apple couldn't.
 
Jun 24, 2001
22,050
58
106
Apple had a fraction of the market of MS and I would therefore guess far fewer 3rd party software manufacturers that needed to adapt to that.
What? Did that change? ;)

I'd say that all we have to do is look at the vast amount of software that runs on some sort of x86 architecture and then ask ourselves what the incentives are for anyone to change that, at least in the short term. I have a hard time seeing it.
I don't think you realize just how much ARM development has taken off. I'm willing to say there is more consumer software development for ARM than for x86, and it's been that way for years. Even Microsoft tried to force all Windows Store apps to support ARM.
 

USER8000

Golden Member
Jun 23, 2012
1,489
2
136
Again talking about cores. USER8000 was talking about dies, but he also forgot that the A12 actually also has 4 smaller cores, a NPU, and many other IP blocks which are not guaranteed to be present on the Intel die.
No I didn't, and this is what you don't seem to understand. It needs those lower-end cores to drop idle power lower in the first place, whereas the x86 cores do it via aggressively downclocking parts of the core, etc., so that is additional functionality added there. Then you also ignore that the Intel CPUs have an integrated GPU, security processors, fixed-function hardware for Quick Sync and so on. A huge percentage of the chip is not CPU too.

Maybe you need to read up on why big.LITTLE was implemented - those small cores and big cores work as a whole unit. Also, again, some of you don't seem to understand that increasing clockspeed is not what PC forum people seem to think, i.e. crank the voltage up and hey presto!!

Then, as usual feeding into the hype of another phone launch, you forget Apple is using a cutting-edge TSMC 7nm node, while Intel is still stuck on 14nm.

Wide cores have existed for years - in Russia they used the very wide Elbrus design, which runs at lower clockspeeds. Intel hired many of the people involved with it - they have certainly looked at very wide designs too.

Another example is Jaguar - Jaguar's IPC is not massively worse than Piledriver's, but it obviously cannot scale as high in clockspeed. Trying to push clockspeed up alone takes power and transistors. Look at Vega - AMD themselves said a huge percentage of the transistor budget went to enabling higher clockspeeds:

https://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review/2

Talking to AMD’s engineers, what especially surprised me is where the bulk of those transistors went; the single largest consumer of the additional 3.9B transistors was spent on designing the chip to clock much higher than Fiji. Vega 10 can reach 1.7GHz, whereas Fiji couldn’t do much more than 1.05GHz. Additional transistors are needed to add pipeline stages at various points or build in latency hiding mechanisms, as electrons can only move so far on a single (ever shortening) clock cycle; this is something we’ve seen in NVIDIA’s Pascal, not to mention countless CPU designs. Still, what it means is that those 3.9B transistors are serving a very important performance purpose: allowing AMD to clock the card high enough to see significant performance gains over Fiji.
Why don't some of you actually read the articles on AnandTech about Infinity Fabric and Intel's mesh - look at how much power is devoted to that alone, to enable massive scalability, let alone the amount of transistors.

Look at IBM and its high-performance cores, which are not x86 - they share some of the same problems (and solutions), and they are a totally different instruction set.

Do you think AMD, Intel and IBM have spent so much money on interconnects, and devote so much of the transistor and TDP budget of their CPUs to them, because they are meaningless??

So what happens when Apple starts wanting to jack up core counts??

Then, as you hit higher clockspeeds and more cores, you need memory bandwidth - those very wide cores need feeding, otherwise utilisation will go down. Read up on why Intel has been pushing things like AVX, for example.

Now include the power requirements of faster memory controllers not running low-voltage DDR4 or DDR3. That also means more transistors and more power too. You could use larger caches too, but then again, more transistors.

Then, as others have suggested, the higher in power and TDP you go, the more you have issues with spacing key components to make sure cooling is effective - don't even look at AMD or Intel, look at IBM.

You cannot just expect a low-power, low-clockspeed core made for tablets to suddenly jump in clockspeed, add lots more cores, etc. without chip sizes growing, and suddenly wipe out AMD, Intel and IBM in high-performance computing.

It's a way too linear and simplistic way of looking at things. Some of you have forgotten that CPU design people have worked not only at Apple, but at IBM, AMD, Intel and so on.

They are all aware of different ways of going about things.

This is the problem nowadays - every new tech launch is a hype launch. People get overexcited and predict every company is doomed. All for nought.
 
Last edited:

USER8000

Golden Member
Jun 23, 2012
1,489
2
136
Some of you seem to think that just because Apple is involved, AMD, IBM, Intel, Samsung, etc. have CPU design teams run by morons who just add random stuff because they can, while Apple is "Godlike". It's almost like some tech religion now - hype built on hype, and the tech press is just enabling it to ridiculous levels. Obviously all these diverse companies should just give up, since Apple makes its own CPUs which are "better" in every use case, even for simulating nuclear explosions - just like when Apple fans told me that if I used OS X I would never go back to Linux or Windows, or that you should never use Android since iOS "just works", which Jensen Huang kind of paraphrased (probably for a laugh) during the Turing launch.

I honestly believe that if Apple released a Trabant, people would say Ferrari, Aston Martin, etc. should give up making sportscars, or that Iveco should give up making commercial vehicles.

BTW, time to stop being PC enthusiasts, guys - the interwebs have just decided Intel and AMD don't stand a chance, and all we need is the latest £1000+ iPhone to do everything.

To avoid scrappage costs, how about you donate all your old Coffee Lake and Ryzen CPUs and your Turing, Vega and Pascal graphics cards to me, as I always wanted a "retro" PC - and your e-peen is no longer served by the PCMR, as that is out and the ACMR (Apple Computing Master Race) is in.

PS: If you have any AMD or Nvidia shares, time to convert them into the Apple bond equivalents, aka "Abe's", which will soon replace all forms of currency worldwide.
 
Last edited:

mattiasnyc

Senior member
Mar 30, 2017
302
140
96
I don't think you realize just how much ARM development has taken off. I'm willing to say there is more consumer software development for ARM than for x86, and it's been that way for years.
So what? The issue isn't whether or not there's a bunch of consumer software development for ARM; it's whether there's still enough need for x86. I would maintain that's the case. In order for x86 to be threatened, all the large corporations that currently run x86-based software would have to switch, no?

And what is the profitability proposition here? Fair enough that there's consumer software being sold to consumers, but how much of that software is made by companies that could maintain and support it for corporations spanning not just multiple offices and departments but continents?

Even Microsoft tried to force all Windows Store apps to support ARM.
Are you thinking of Windows UWP? I think that still requires Windows on ARM to run, though. Either way, I think MS's goal with UWP was more along the lines of moving away from Win32 software to more modern software frameworks, regardless of the hardware it runs on.

But I could be wrong.

Either way, I really don't see x86 dying any time soon, nor do I see Apple taking a significant slice of the non-iOS/macOS CPU markets... certainly not servers.
 

Etain05

Junior Member
Oct 6, 2018
7
0
16
No I didn't, and this is what you don't seem to understand. It needs those lower-end cores to drop idle power lower in the first place, whereas the x86 cores do it via aggressively downclocking parts of the core, etc., so that is additional functionality added there. Then you also ignore that the Intel CPUs have an integrated GPU, security processors, fixed-function hardware for Quick Sync and so on. A huge percentage of the chip is not CPU too.

Maybe you need to read up on why big.LITTLE was implemented - those small cores and big cores work as a whole unit. Also, again, some of you don't seem to understand that increasing clockspeed is not what PC forum people seem to think, i.e. crank the voltage up and hey presto!!

Then, as usual feeding into the hype of another phone launch, you forget Apple is using a cutting-edge TSMC 7nm node, while Intel is still stuck on 14nm.

Wide cores have existed for years - in Russia they used the very wide Elbrus design, which runs at lower clockspeeds. Intel hired many of the people involved with it - they have certainly looked at very wide designs too.

Another example is Jaguar - Jaguar's IPC is not massively worse than Piledriver's, but it obviously cannot scale as high in clockspeed. Trying to push clockspeed up alone takes power and transistors. Look at Vega - AMD themselves said a huge percentage of the transistor budget went to enabling higher clockspeeds:

https://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review/2

Why don't some of you actually read the articles on AnandTech about Infinity Fabric and Intel's mesh - look at how much power is devoted to that alone, to enable massive scalability, let alone the amount of transistors.

Look at IBM and its high-performance cores, which are not x86 - they share some of the same problems (and solutions), and they are a totally different instruction set.

Do you think AMD, Intel and IBM have spent so much money on interconnects, and devote so much of the transistor and TDP budget of their CPUs to them, because they are meaningless??

So what happens when Apple starts wanting to jack up core counts??

Then, as you hit higher clockspeeds and more cores, you need memory bandwidth - those very wide cores need feeding, otherwise utilisation will go down. Read up on why Intel has been pushing things like AVX, for example.

Now include the power requirements of faster memory controllers not running low-voltage DDR4 or DDR3. That also means more transistors and more power too. You could use larger caches too, but then again, more transistors.

Then, as others have suggested, the higher in power and TDP you go, the more you have issues with spacing key components to make sure cooling is effective - don't even look at AMD or Intel, look at IBM.

You cannot just expect a low-power, low-clockspeed core made for tablets to suddenly jump in clockspeed, add lots more cores, etc. without chip sizes growing, and suddenly wipe out AMD, Intel and IBM in high-performance computing.

It's a way too linear and simplistic way of looking at things. Some of you have forgotten that CPU design people have worked not only at Apple, but at IBM, AMD, Intel and so on.

They are all aware of different ways of going about things.

This is the problem nowadays - every new tech launch is a hype launch. People get overexcited and predict every company is doomed. All for nought.
There is just one thing I don't understand: why do you keep pushing the narrative that Apple would need to increase clock-speeds for a supposed Mac or server chip?

Performance = clock-speed × IPC

If Apple is able to match Intel's performance at lower clock-speeds thanks to its higher IPC, why would it need to chase Intel into the stratosphere on clock-speed? Just so people can brag about how high their Mac is clocked? It's as if people still believe a higher-clocked chip is inherently better just because of the higher clock-speed.

If Apple manages, with an A13 or A14 clocked at 2.8GHz, to surpass the single-core performance of any Intel chip, it won't need to increase the clock-speed for a supposed Mac or server chip.

That still leaves the obvious problem of interconnects, but let's not pretend it's an unsolvable problem. Even if we assume Apple's interconnects would be worse than Intel's and consume 10% more energy, 10% of, say, a generous 50% of the chip's power budget devoted to interconnects would still mean only 5% more total energy consumption. I don't know about you, but considering the performance per watt of Apple's cores, I think it more than compensates for that possible 5%.
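To make the arithmetic explicit (all numbers below are the illustrative ones from this post, not measurements; the IPC figures in particular are made up purely to show the relationship):

```python
# Performance = clock-speed * IPC, so parity is possible at lower clocks.
intel_perf = 5.0 * 1.0   # e.g. 5.0 GHz at a normalized IPC of 1.0
apple_perf = 2.8 * 1.8   # 2.8 GHz would only need ~1.8x the IPC to match
print(f"Intel: {intel_perf:.2f}  Apple: {apple_perf:.2f}")

# Interconnect penalty: interconnects drawing 50% of chip power that are
# 10% less efficient cost only 0.50 * 0.10 = 5% at the chip level.
penalty = 0.50 * 0.10
print(f"Chip-level power penalty: {penalty:.0%}")  # 5%
```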

You cannot just expect a low power,low clockspeed core made for tablets,to suddenly jump in clockspeed,add lots more cores,etc without chip sizes growing,etc and suddenly wipe out AMD,Intel and IBM in high performance computing.
No one expects that. Literally no one said that. Of course the die would grow immensely for a server part, and of course the power would balloon too. All that matters is whether the die would grow but still be smaller than Intel's, and whether the power would increase but still be lower than Intel's. As for clock-speed, no change is necessary there, as I said.

And I'm not even suggesting that Apple would really become a server vendor. I do not expect them to ever sell a server chip even if they do make one. They'll probably use it internally for their own server farms, and maybe for the top-of-the-line Mac Pro/iMac Pro.

As for die dimensions, why are we still talking about them? We have the data; we can directly compare the sizes of the cores without having to think about all the other blocks that may or may not be present on the two dies. Intel's Skylake core (even taking into consideration the density difference between Intel 14nm and TSMC 7nm) is still way bigger than Apple's Vortex core.

It needs those lower-end cores to drop idle power lower in the first place, whereas the x86 cores do it via aggressively downclocking
Much good it did Intel. Its idle power is still way above ARM's.
big.LITTLE is a success, as shown by the fact that ARM actually has better idle and low-power modes than Intel. I'm not sure I'm right, but I remember Andrei himself discussing the matter and saying that big.LITTLE is the way to go.
 

IntelUser2000

Elite Member
Oct 14, 2003
5,853
104
126
Much good it did Intel. Its idle power is still way above ARM's.
big.LITTLE is a success, as shown by the fact that ARM actually has better idle and low-power modes than Intel. I'm not sure I'm right, but I remember Andrei himself discussing the matter and saying that big.LITTLE is the way to go.
CPU core-wise, Intel is actually pretty decent. What's not good is their platform-level power. That has to do with years of ARM being in smartphones and tablets, with every component and piece of software being made for low idle power.

You'll see that pre-Haswell, Intel CPU cores used little power, but platform power was a lot. In the Haswell generation they focused a lot elsewhere and battery life improved 50-70%. However, they still have a long way to go.

That still leaves the obvious problem of interconnects, but let's not pretend that this is an unsolvable problem.
This is what I mean when I say that single-thread applications disproportionately favor low-power chips. A Core i7-7Y75 at 4.5W can perform similarly to, or even better than, a Xeon Platinum 8180 at 205W, but you won't ever consider the former for server workloads. They are even clocked similarly for single-thread. It's not because their server division is incompetent; it's because the server additions balloon the power requirements that much.
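To put a number on how lopsided that comparison looks (the TDPs are the official figures; the rough single-thread parity is the assumption this post is making):

```python
# If a 4.5W Core i7-7Y75 roughly matches a 205W Xeon Platinum 8180 on a
# single thread, the apparent perf/W gap is just the TDP ratio -- but the
# Xeon's extra power buys 28 cores, big caches, the mesh and server I/O,
# none of which helps a single thread.
tdp_y75 = 4.5     # W, official TDP of the Core i7-7Y75
tdp_8180 = 205.0  # W, official TDP of the Xeon Platinum 8180

ratio = tdp_8180 / tdp_y75
print(f"Apparent single-thread perf/W advantage: {ratio:.0f}x")
```

Which is exactly why single-thread perf/W alone tells you almost nothing about server suitability.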
 
Last edited:

Nothingness

Golden Member
Jul 3, 2013
1,875
30
106
No I didn't, and this is what you don't seem to understand. It needs those lower-end cores to drop idle power lower in the first place, whereas the x86 cores do it via aggressively downclocking parts of the core, etc., so that is additional functionality added there. Then you also ignore that the Intel CPUs have an integrated GPU, security processors, fixed-function hardware for Quick Sync and so on. A huge percentage of the chip is not CPU too.
Take a look at dies from Intel and Apple chips and see what percentage each devotes to things that are not CPU: for Intel it's about 30-50% (for desktop chips); for Apple it's less than 10%.

Why don't some of you actually read the articles on AnandTech about Infinity Fabric and Intel's mesh - look at how much power is devoted to that alone, to enable massive scalability, let alone the amount of transistors.
So with your nice graphs you've shown that the uncore always consumes significantly less than the cores, especially on Intel chips :D

Then, as you hit higher clockspeeds and more cores, you need memory bandwidth - those very wide cores need feeding, otherwise utilisation will go down. Read up on why Intel has been pushing things like AVX, for example.
You might not have noticed that Apple chips already have bandwidth similar to typical desktop chips.
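Peak numbers roughly bear this out; here's a back-of-the-envelope comparison (a sketch: the A12's 64-bit LPDDR4X-4266 interface and the dual-channel DDR4-2666 desktop baseline are my assumptions for illustration):

```python
# Peak theoretical bandwidth = channels * bus width in bytes * transfer rate.
def peak_bw_gbs(channels: int, bus_bits: int, mts: int) -> float:
    """Peak bandwidth in GB/s for a given memory configuration."""
    return channels * (bus_bits / 8) * mts / 1000

desktop = peak_bw_gbs(2, 64, 2666)  # assumed dual-channel DDR4-2666
a12 = peak_bw_gbs(1, 64, 4266)      # assumed 64-bit LPDDR4X-4266

print(f"Desktop DDR4-2666, dual channel: {desktop:.1f} GB/s")
print(f"A12 LPDDR4X-4266 (assumed):      {a12:.1f} GB/s")
```

Same ballpark, which is the point - although server parts with 6-8 memory channels are a different story.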

It's a way too linear and simplistic way of looking at things. Some of you have forgotten that CPU design people have worked not only at Apple, but at IBM, AMD, Intel and so on.
I'd argue that you are underestimating Apple's design teams. What they have achieved is impressive, but I'm also the first to say Intel's design teams are incredibly good (I have little doubt they are being held back by a management more interested in handing out lots of dividends than in improving CPUs).

This is the problem nowadays - every new tech launch is a hype launch. People get overexcited and predict every company is doomed. All for nought.
Yeah, many people think that Intel is doomed. But they are incredibly resilient.

I'm not part of the crowd that thinks making a server chip out of a CPU designed for low power is a good idea (even the Intel dinosaurs understood they needed a design different from Core to attack the mobile market). But I nonetheless think this Apple chip would make a good desktop chip.
 

IntelUser2000

Elite Member
Oct 14, 2003
5,853
104
126
Nothingness:

The difference between standing idle and moving frantically in a panic? Not much. That's why they are not just making refreshes like Kaby Lake and Coffee Lake, but a refresh of a refresh like Coffee Lake Refresh. Don't forget desperate measures like Cooper Lake, or Skylake-SP being repurposed as some sort of HEDT chip.
 

Nothingness

Golden Member
Jul 3, 2013
1,875
30
106
I'm afraid I've long lost track of Intel code names, refreshes and other obscure naming conventions.

I'd say Intel's main problem is that they are too tightly tied to their own process, and that created the situation they currently face. The other issue they have is that their microarchitecture is already very good, and getting big gains will likely require restarting from a blank sheet.
 

IntelUser2000

Elite Member
Oct 14, 2003
5,853
104
126
I'd say Intel's main problem is that they are too tightly tied to their own process, and that created the situation they currently face. The other issue they have is that their microarchitecture is already very good, and getting big gains will likely require restarting from a blank sheet.
Some people said the Athlon 64 was the pinnacle of microprocessor engineering. We've come a long way since then.

You can always improve, even in areas of technology where Moore's Law's rapid pace of improvement doesn't exist. Besides, the whole thread is about Apple doing way better there, isn't it?
 

Nothingness

Golden Member
Jul 3, 2013
1,875
30
106
Some people said the Athlon 64 was the pinnacle of microprocessor engineering. We've come a long way since then.
I'm not saying we've reached the top :)

You can always improve, even in areas of technology where Moore's Law's rapid pace of improvement doesn't exist. Besides, the whole thread is about Apple doing way better there, isn't it?
Yeah, but Intel's microarchitecture has foundations much older than Apple's. Of course every block has been redesigned over the years, but incremental changes limit what you can do. And that's why I'm impressed by Intel's microarchitecture: despite having roots in quite old designs, it is still extremely good.

OTOH I'm also very impressed by what Apple is able to achieve. Did anyone notice that Apple claimed a 15% CPU gain for the A12, while SPECint2006 shows almost 30%?
 

IntelUser2000

Elite Member
Oct 14, 2003
5,853
104
126
Yeah, but Intel's microarchitecture has foundations much older than Apple's. Of course every block has been redesigned over the years, but incremental changes limit what you can do. And that's why I'm impressed by Intel's microarchitecture: despite having roots in quite old designs, it is still extremely good.
You mean the saying that it's based on P6? TechReport and David Kanter from RWT have said Sandy Bridge totally broke free from the vestiges of P6. Nothing is ever totally fresh; it would take too much time. Everything is really based on the "computer" Charles Babbage invented. :rolleyes:

If you look at their uarch advances, despite the Tick/Tock framing, you can definitely see that some Tocks are bigger than others. For example, Nehalem's core changes were minimal; the focus was on the I/O and uncore. Single-thread improvement was in the 0-5% range if you ignore Turbo. Sandy Bridge brought new ideas and features like the physical register file and the uop cache. Haswell and Skylake merely build upon Sandy Bridge, without changing anything big or introducing new ideas.

They say a clean-sheet architecture takes 5-6 years, so it makes sense that Haswell and Skylake weren't as big a change, and a genuinely new uarch still likely takes 5-6 years.

Yes, what Apple does is impressive. They are the darlings of the tech industry, just like Intel was two decades ago. Naturally, top talent gravitates there. It's really not about the tech, but the people working on it. Many top Intel guys went there, including ones in Israel, so probably to Haifa.
 

CatMerc

Golden Member
Jul 16, 2016
1,111
36
106
Naturally, top talent gravitates there. It's really not about the tech, but the people working on it. Many top Intel guys went there, including ones in Israel, so probably to Haifa.
I know for a fact that happened. I have disgruntled friends who weren't happy with their work going nowhere who are now working for Apple. Some of them had been there since Yonah.
 

Nothingness

Golden Member
Jul 3, 2013
1,875
30
106
Yes, what Apple does is impressive. They are the darlings of the tech industry, just like Intel was two decades ago. Naturally, top talent gravitates there. It's really not about the tech, but the people working on it. Many top Intel guys went there, including ones in Israel, so probably to Haifa.
I got a job offer from Apple but declined it: their working conditions are way too particular for my taste. They have a culture of secrecy, even within their engineering teams, that is too paranoid for me. I guess many talented people (I'm not counting myself here ;)) don't go there because of that, or at least stay only a few years until they can monetize the hiring package (which is very good).
 
Oct 9, 2002
26,343
138
136
... Then you also ignore that the Intel CPUs have an integrated GPU,security processors,fixed function hardware for Quicksync and so on. A huge percentage of the chip is not CPU too. ...
A-series chips have a lot of stuff too: GPU cores, the Secure Enclave, motion processors, the neural engine, ...
 

Denly

Senior member
May 14, 2011
855
10
91
I've only read up to page 4, so forgive me if this has been asked. What will happen to the A12 if it needs to support the things an x86 CPU does: 6 SATA ports, 6-10 USB ports, 2-3 screens and a few PCIe lanes?
 

Andrei.

Senior member
Jan 26, 2015
260
6
116
I've only read up to page 4, so forgive me if this has been asked. What will happen to the A12 if it needs to support the things an x86 CPU does: 6 SATA ports, 6-10 USB ports, 2-3 screens and a few PCIe lanes?
Absolutely nothing will happen, because none of that affects the CPU.
 
Oct 9, 2002
26,343
138
136
I've only read up to page 4, so forgive me if this has been asked. What will happen to the A12 if it needs to support the things an x86 CPU does: 6 SATA ports, 6-10 USB ports, 2-3 screens and a few PCIe lanes?
I guess it already does NVMe, so there's something like a PCIe bus.

Everything else can be put on that PCIe bus.
 

CatMerc

Golden Member
Jul 16, 2016
1,111
36
106
I've only read up to page 4, so forgive me if this has been asked. What will happen to the A12 if it needs to support the things an x86 CPU does: 6 SATA ports, 6-10 USB ports, 2-3 screens and a few PCIe lanes?
The only effect it would have is on power consumption. That's significantly more I/O to power than mobile devices usually have. I wouldn't be surprised if the overall power consumption exceeded the A12's consumption in an iPhone.
 

