
iPhone 5 performance

Keep in mind that even though iOS and Android phones have equal or higher amounts of RAM compared to the PS3 or Xbox 360, apps and games only get access to a small chunk. In iOS, an app gets at most about 1/4 of the total RAM on a device, and I'm sure a similar cap applies to Android as well. Use too much and the OS will send a warning, then kill your app. Game consoles, on the other hand, get the overwhelming majority of the RAM you see in spec sheets, with some small amount reserved for background downloading.
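To make the claimed behavior concrete, here's a toy sketch of a per-app memory cap. The 1/4 figure is the poster's claim, not a documented limit, and the warn-at-80% threshold is purely my own illustrative choice:

```python
# Toy model of an OS-enforced per-app memory cap: warn as usage
# approaches the cap, kill the app once the cap is exceeded.
# The 25% cap is the claim from the post above, not a documented limit.

def check_app_memory(app_usage_mb, device_ram_mb, cap_fraction=0.25):
    """Return 'ok', 'warning', or 'killed' for a given usage level."""
    cap = device_ram_mb * cap_fraction
    if app_usage_mb > cap:
        return "killed"
    if app_usage_mb > cap * 0.8:   # warn at 80% of the cap (arbitrary choice)
        return "warning"
    return "ok"

# The iPhone 5 shipped with 1 GB of RAM, so a 25% cap would be ~256 MB.
print(check_app_memory(100, 1024))   # ok
print(check_app_memory(220, 1024))   # warning
print(check_app_memory(300, 1024))   # killed
```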

Are you sure it's capped at 25%? I recall seeing a video where someone was analyzing the memory management in iOS and opened a game that appeared to use more than 25% of the total RAM. I'll have to do some looking for it to double check. I did a quick search and found one site that claimed the cap was 10%, but that was from a while ago, so perhaps it's changed over time, as such a cap would make a lot more sense when the original phone was released and memory space was seriously limited compared to today.

Also, given what we know about the memory management system in iOS, it seems really silly to limit an app to 25% of total memory.
 

Well, isn't it like how on Android people use the V6 Supercharger, which tweaks minfree to keep a good chunk of memory free?

It sounds like a waste where it's not used, but if you let your device get down to below 50MB free, you can start seeing it stutter and slow down. At least on Android. Perhaps iOS has such strict limitations to keep the device snappy.
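For reference, the minfree idea works roughly like this. The thresholds, categories, and per-kill savings below are illustrative numbers I made up, not real V6 Supercharger or Android values:

```python
# Sketch of the lowmemorykiller/minfree idea referenced above: when free
# memory falls below a category's threshold, background processes are
# killed, most expendable first. All values here are illustrative only.

MINFREE_MB = {"empty": 100, "cached": 75, "service": 50}  # illustrative

def processes_to_kill(free_mb, background):
    """background: list of (name, category) pairs, most expendable first."""
    killed = []
    for name, category in background:
        if free_mb >= MINFREE_MB.get(category, 0):
            break                      # enough headroom for this tier
        killed.append(name)
        free_mb += 30                  # pretend each kill frees ~30 MB
    return killed, free_mb

killed, free = processes_to_kill(
    40, [("old_game", "empty"), ("browser_tab", "cached"), ("sync", "service")])
print(killed, free)   # ['old_game', 'browser_tab'] 100
```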
 

Apple's memory management works differently than Android's does, so it doesn't really run into that problem. Basically, any app that isn't running or performing some active background task (outside of some system processes) gets suspended. In this state, it's still in memory, but it can be removed at any time to make room for anything else that needs it.
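That suspend-then-evict behavior can be loosely modeled as an LRU cache. This is an analogy in plain Python, not Apple's actual implementation:

```python
from collections import OrderedDict

# Loose analogy for the iOS model described above: backgrounded apps stay
# suspended in memory, and the least-recently-used one is evicted when a
# foreground app needs the space.

class SuspendedApps:
    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.apps = OrderedDict()          # name -> resident size in MB

    def suspend(self, name, size_mb):
        self.apps[name] = size_mb
        self.apps.move_to_end(name)        # most recently used goes last

    def free(self):
        return self.capacity_mb - sum(self.apps.values())

    def reclaim(self, needed_mb):
        """Evict least-recently-used suspended apps until enough is free."""
        evicted = []
        while self.free() < needed_mb and self.apps:
            name, _ = self.apps.popitem(last=False)   # oldest first
            evicted.append(name)
        return evicted

pool = SuspendedApps(capacity_mb=512)
pool.suspend("Mail", 150)
pool.suspend("Safari", 250)
print(pool.reclaim(200))   # ['Mail'] -- evicts the oldest app first
```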

Here's a much more in-depth explanation of the process.
 
Wasn't Android originally built with the assumption that some devices might not have a GPU, so that they would cost less? It seems that kind of legacy stuck around all the way up until Ice Cream Sandwich, where requirements were raised (or really, put into place) such that not every 2.3 phone was fast enough for ICS. And then Jelly Bean really kicked it up a notch by rearchitecting the lower levels of the OS with the idea that a fast GPU would always be around, something iOS has always assumed and one of the benefits of designing software with specific hardware in mind. The same could be said for Windows Phone, which even with very crappy hardware by today's standards has always managed smooth performance, far outstripping most Android phones.

No. All smartphone devices with a screen "must" (emphasizing that) have a GPU regardless. Because without a GPU you cannot display anything on the screen. Nothing at all.

What early Android builds assumed was that most devices would not have an architecture that is "good enough" to make offloading work to the GPU worthwhile. This involves having gobs of memory bandwidth and giving the GPU access to system RAM, rather than having a super powerful GPU. There's a difference.

Now, here's the crazy thing: even Jelly Bean still uses the CPU to draw interface graphics.

Then how come it's so much smoother? Well, simple. Google has now made it a requirement that all Android apps written for version 4.0 and up offload work to background threads instead of the main thread. The main execution thread is thus freed up to do graphics work.

This coupled with a multi-core processor means Android is "supercharged" essentially...

But old apps that were built for earlier SDK versions? Well, they are not tied to this "requirement", and thus they still run "slow". You just don't see it because Android has brute-force hardware (quad-core CPUs, or dual A15-level CPUs clocked at higher speeds).

Google has also applied the same requirement to core apps, and that's why you see the OS as snappier and smoother overall.

But Android is still not fully GPU-accelerated... or at least not as much as iOS is.

The whole iOS interface is GPU-accelerated... and has been like that by default for a while.


The OS actually sends a warning freely whenever the GPU driver needs more RAM to convert to VRAM on iOS. On some Android devices, like the Xperia Play, for instance, VRAM and RAM are separate, so apps can freely use all of the available RAM (about 384MB if I'm not mistaken). So it doesn't have any fixed amount; it's essentially just "whenever the system cannot free up more memory".

It usually sends that warning only after the app goes into background, though. If the app is in foreground, iOS does try to let it "roam free".

In general, iOS apps actually have less memory to use than Android apps because the GPU on iOS is utilized more often.

But it equalizes somewhat because the SDK for Android is not as optimized as that of iOS, and the end result is that apps on Android end up using more memory than apps on iOS... even though they have more memory to use.
 

So essentially a dual core processor IS necessary. Any of those SGS1 phones will never be able to do things truly smoothly. The requirement for dual core isn't a killer though as that's an industry hardware standard now.

So my question is whether this is a good solution or not? I see this as making the CPU work harder, although maybe it's more efficient because the main thread is used for graphics.

Overall would basic operations on Android drain more battery than on an iPhone because the CPU is so busy?
 

I feel like some of this information is a little off. Android has always used hardware acceleration for some UI operations and not for others, even going back to 2.x. I imagine this is still the same with Jelly Bean - probably a lot of things use the GPU, and others still might not. It doesn't really matter. Devs are encouraged to opt in for their apps to be rendered using a GPU path whenever possible now.

I don't believe there was any new "requirement" with Android 4.0+ to offload work to background threads; this has ALWAYS been the recommended approach, in any software application, on pretty much any platform, including pre-4.0 Android. Any excessive work done on the main thread will cause the app to visibly lag, pre- or post-4.0. Jelly Bean is smoother due to better vsync timing, triple buffering, and CPU speed boosts on touch events; that's really all there is to it. Link
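Whatever the exact 4.0-era rules were, the pattern both posts agree on, keeping heavy work off the thread that drives the UI, looks roughly like this generic sketch (not any platform's actual API; names and timings are illustrative):

```python
import queue
import threading
import time

# Generic sketch of the pattern described above: a "main" loop that must
# stay responsive hands slow work to a background thread and picks up the
# result later, instead of blocking on it.

results = queue.Queue()

def slow_work(task_id):
    time.sleep(0.05)                    # stand-in for network/disk work
    results.put((task_id, task_id * 2))

worker = threading.Thread(target=slow_work, args=(21,))
worker.start()                          # dispatch without blocking

frames = 0
while worker.is_alive():                # the "UI loop" keeps running
    frames += 1
    time.sleep(0.01)

worker.join()
task_id, value = results.get()
print(task_id, value, frames > 0)       # 21 42 True
```

The point, matching the reply above: this helps responsiveness even on a single core, because the main loop never blocks on the slow call.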
 

No, dual core has never been necessary for smooth operation. Threads do not directly correlate to cores, never have. You can use threading intelligently to help an application feel responsive and have that be effective on a single core CPU.
 

No, a dual-core processor is not necessary, but it is beneficial to the overall smoothness of Android.

Aside from the multithreading requirement, Android 4.x does bring some real optimizations under the hood to make sure things wouldn't chug along as before.

Does it make the CPU work harder? Yeah... but the CPU power draw is like 10% of the screen power draw on average. It's never going to be significant enough for it to be trouble.

The only time it's ever going to be significant is under extreme load, when it can jump to roughly 50% of the power draw of the screen, but that's not realistic, as there is close to nothing an Android user can do to stress the phone that much.

The situation is the reverse on iOS where you have a lot of games that do stress the hardware to its limits. And when under such conditions, it's obvious iOS won't last longer than Android on a charge.

But this is more of a platform problem than a problem with the hardware itself.


I think a line needs to be drawn regarding what would constitute "hardware acceleration", as most people tend to think of it as "GPU acceleration" or something of the sort, but if you ask me... I call both of those terms BS.

Even "render using a GPU path" is not the correct way to put it.

The correct way to put it should be "use the GPU to animate screen elements instead of letting the CPU do it". Both Android and iOS still rely on the CPU to draw interface graphics even at this stage. That's nothing new, and I don't think it should surprise anyone. It's not just the incremental GPU upgrade that's making iOS faster; the CPU plays a pretty important role. Otherwise you'd hear people claim the iPad 3 is faster than the iPad 2, but it's not. The reason iOS feels so smooth is that most of its interface animations are delegated to the GPU, which has direct access to system memory. On Android, the CPU was still used to handle certain animations until a while ago. The reason is that, like I said, certain Android devices may not allow the GPU to access system RAM directly, and asking the CPU to send data to the GPU to render is actually much slower, as the CPU has to jump through more hoops to accomplish that.

Back to the "requirement" thing: it's actually a "requirement" that a developer use AsyncTask for certain callbacks. If not, the OS starts throwing a tantrum about illegal access or null pointers. It's not just "slow"; it'll outright crash the app.

On earlier versions of Android and on iOS, running a separate thread is not required for the same operations.

---

Edit: anyway, back to A6...

http://9to5mac.com/2012/09/26/iphone-5-a6-chip-to-dynamically-over-clock-up-to-1-3ghz/

It seems the max clockspeed is even higher than we thought.

That would explain why it performs so fast. And it would also indicate that the A6 is more "plausible" than we've made it out to be. At 1.3GHz (or higher), the CPU scores it pulled off against the quad-core Tegra 3 and Snapdragon S4 become a lot more believable.

And I think it might also mean that Apple clocked the SoC as high as they possibly could at this manufacturing process. Will be interesting to see how it goes against A15.
 
Engadget is reporting that they consistently get a 1.3GHz reading on the new Geekbench, which was updated with better clock speed detection: http://www.engadget.com/2012/09/26/apple-a6-cpu-13ghz-geekbench-confirmed-overclocking/
 
Yeah, I'm getting the 1.3GHz number every time now. The more it closes that speed gap, the less impressive it gets. It seems like it's faster clock for clock, but nowhere near as fast as the initial thoughts suggested.
 
To be fair, though, it's only 100MHz higher than the previously assumed 1.2GHz, and it's still outperforming parts running at 1.6GHz or so, so I'd say it still delivers pretty impressive performance per clock.

Digging deeper, the 1066MHz RAM seems to indicate a bus widening to me. 1066MHz means the bus clock is a fraction of that, usually a multiple of 133.33MHz. In that case, I don't doubt that the CPU clocks up in multiples of 133.33MHz. Perhaps the real max clock uses a multiplier of 10, giving it 1.33GHz. A multiplier of 9 gives roughly 1.2GHz, and a multiplier of 6 gives an 800MHz clock, all of which coincide with reported clocks ranging between 800MHz and 1.2GHz.

What about the GPU? Well, sparing the crazy math involved, I think it's safe to assume a clock speed of 333MHz per core (2.5x multiplier of 133.33MHz, same multiplier as A5). That gives the PowerVR SGX543MP3 the almost exact GFLOPS performance as the A5X... which would explain those super high benchmark scores.

But that's just me. Apple may well have kept the bus at the same 100MHz as before, and things are scaling much more linearly than I thought. That would allow a CPU clock of 1.30GHz and a GPU clock of 300MHz. Slightly lower than "expected", but still good enough.
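The arithmetic above is easy to check. Note the 32 FLOPs/clock/core figure for the SGX543 is my own assumption (a commonly cited figure for this GPU), not something established in the thread:

```python
# Checking the clock-multiplier speculation above.
BASE_MHZ = 400 / 3          # 133.33 MHz base clock (assumed)

cpu_clocks = {m: BASE_MHZ * m for m in (6, 9, 10)}
# multiplier 6 -> 800 MHz, 9 -> 1200 MHz, 10 -> ~1333 MHz
print({m: round(v) for m, v in cpu_clocks.items()})

# GPU side: GFLOPS = cores * clock * FLOPs-per-clock-per-core.
# 32 FLOPs/clock/core for the SGX543 is an assumption on my part.
def sgx543_gflops(cores, clock_mhz, flops_per_clock=32):
    return cores * clock_mhz * 1e6 * flops_per_clock / 1e9

a5x = sgx543_gflops(4, 250)            # A5X: SGX543MP4 @ 250 MHz
a6  = sgx543_gflops(3, BASE_MHZ * 2.5) # speculated MP3 @ ~333 MHz
print(round(a5x, 1), round(a6, 1))     # 32.0 32.0 -- nearly identical
```

Under those assumptions the speculated MP3 at ~333MHz does land on the same raw throughput as the A5X's MP4, which is the claim being made.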

All of that aside, it's still quite a beast of an SoC. Can't wait to see what developers (other than me) can do! Personally, I'm excited to see if I'll be able to do real-time facial recognition on it now, what with the better front-facing camera and faster processor. That'll open up some... pretty interesting new doors.
 

There were improvements in memory performance as well, which I'm sure had quite an effect.

I actually want to start talking about what the A6 means for the next iPad. If the GPU performance of the iPhone 5 only just matches the iPad 3 but with 4.3x more pixels, but only a 64-bit memory bus instead of 128-bit in the A5X, it's obvious that the A6 isn't going to be dropped into the iPad 4 unmodified. How soon do we start talking about PowerVR 6, or is Apple just going to go crazy with an MP6 core and higher clocks because that GPU isn't ready yet?
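The pixel-count figure checks out:

```python
# Pixel counts behind the "4.3x more pixels" figure above.
ipad3   = 2048 * 1536    # Retina iPad resolution
iphone5 = 1136 * 640     # iPhone 5 resolution

ratio = ipad3 / iphone5
print(ipad3, iphone5, round(ratio, 1))   # 3145728 727040 4.3
```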
 

I get the feeling Apple will just go with the same PowerVR SGX543MP4 with higher clocks in the A6X or... whatever it is. And maybe they'll bump up CPU to quad-core.

They'll be able to claim 3-4x performance increase over A5X... and call it a true quad-core chip.

That'll still give them an edge over the competition, because really, there aren't a lot of SoCs that can match the iPad 3 in terms of graphics power. Apple is in no hurry to make it even faster than that... plus their PowerVR SGX543MP code base is now mature, and will likely yield better performance with some optimizations.

The CPU upgrade will be more substantial with the iPad than a better graphics core. They have room to stretch their CPU cores on the iPad 4, and I don't think they'll let that go easily. Plus it'll give them a headstart on A15.

Not to mention the iPad needs that CPU advantage more. With 4x the CPU power of the current iPad 3, the iPad 4 will inch a lot closer to the performance of ultra portable laptops.
 

😵

There is... a lot wrong with the analysis you just gave.

1) The idea of having more physical GPU cores is that you don't have to have the clock speed running as high, saving power (hence why the A6 is a SGX 543MP3 with higher clocks, not an MP2 with double the clock speed). Only down side is die space and cost, which isn't as much an issue in a tablet or when Apple will be making millions of these chips.
2) Why would Apple go with Cortex A15 when they built a custom core in the A6? If anything, they will stick with those two custom cores and run them at even higher clocks, since the iPad has a larger battery. And I don't see quad-core making sense on iOS either. Few apps are ever coded with multiple cores in mind; the extra cores would just sit idle. Every app benefits from a higher clock speed, not a higher core count. iOS isn't designed to have as many tasks running in the background as Android or Windows 8/RT.
 
1) If Apple introduces a quad-core CPU, then die size will be at a premium. I don't think they'll want to fit in twice the GPU cores on top of that. It makes more sense for them to have 4 CPU cores and 4 GPU cores to balance things out. But of course, I could be wrong, and they might well go for 2 CPU cores + 4 GPU cores just like the A5X to save die size and board space.

2) When I said "headstart on A15", I mean "bring to the market before A15", not that Apple would use A15.

Apple's push to custom CPU cores is already enough indication that they will push CPU performance and power consumption aggressively starting with this generation. I don't see the same push with GPU performance.

And you'd be wrong to think a quad-core design would only benefit multitasking. We have come a long way with multithreaded computing, and now having more cores does mean more performance. With a quad-core design, Apple can do things they weren't able to do before in single apps.

It doesn't have to be for multitasking.
 

My initial reactions are from when we were thinking it was a 1GHz clock. Then it was 1.2, now 1.3. Like I said, it's just less impressive now.


It's not going to surprise me if the A6X in the iPad (2013) is a dual-core A6 clocked at like 1.6-1.8GHz with an SGX543MP4 at the same clocks. That's mostly what I'm expecting. Quad-core would be a huge score, but I don't really think it's necessary.

And I'm expecting that if a mini iPad is announced anytime soon it's basically an iPhone 5 with an 8" 1024x768 screen.
 

If it were outperforming the S4 at less than 2/3 the clock speed while A15 isn't even out yet, that wouldn't just be impressive; IMO, it would be "impossible". Even A15 may not have that much of a lead over S4.

Right now, it's looking more "possible", and I think A6 will be roughly on par or slightly behind A15 when A15 hits.

Ah well, we'll see.
 

Yeah, at this point, with the 5th gen iPod Touch at $299, there's little point in going for the "cheap" title; they should focus on performance instead. What's sad is that no one is expecting a Retina display. I can't go back to non-Retina displays; they're all terrible.
 
I don't know how much more they actually need to do for GPU performance. I think even now the 543MP4 is plenty for the iPad, and in comparison with other tablets/phones.

Maybe they'll be able to get the next gen PowerVR series GPU for the next iPad?
 

Generally, GPU performance increases every 2 years. There wasn't much difference between the iPad 2 and iPad 3 in performance when you adjust for screen resolution; in fact, in some areas there's a loss of FPS when the iPad 3 is running at native res. This is why most 3D games, like Infinity Blade II, don't run at the full 2048x1536 resolution.
 

I think that's why so many of us were caught with our mouths open. We were thinking the A6 was roughly 50% faster clock for clock over the S4 when in fact they should be performing fairly close because they should be fairly similar processors. But yes, now that we know the clocks are up in the 1.2 - 1.3 range, we are all accepting of the reality of the situation and it makes a lot more sense now.



iPad mini at 1024x768 will be a pass in my book as well. Who knows, though; Apple may surprise us all like they did with the A6 chip.

I can understand the sentiment, but a 7.85" screen will have a higher PPI than the 9.7" screen in the iPad/2, so it should be a noticeable difference. Not over 200, though, but hopefully it's pretty decent. Probably about on par with the iPhone/3G/3GS, so not terrible (though those screens do look terrible compared to the 4).
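The PPI comparison can be verified with a little geometry. The diagonal sizes and resolutions below are the commonly quoted ones, taken here as assumptions:

```python
import math

# Pixels-per-inch from resolution and diagonal size.
def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

rumored_mini = ppi(1024, 768, 7.85)   # ~163 PPI
iphone_3gs   = ppi(480, 320, 3.5)     # ~165 PPI
kindle_fire  = ppi(1024, 600, 7.0)    # ~170 PPI
ipad2        = ppi(1024, 768, 9.7)    # ~132 PPI

for name, v in [("rumored mini", rumored_mini), ("iPhone 3GS", iphone_3gs),
                ("Kindle Fire", kindle_fire), ("iPad 2", ipad2)]:
    print(f"{name}: {v:.0f} PPI")
```

So a 1024x768 mini would indeed land within a couple of PPI of the 3GS, and slightly below the original Kindle Fire.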
 

Problem is that a 7.85" 1024x768 screen has a PPI that's on par with the original Kindle Fire, which is rather bleh. I'm certain that if they wanted to cheap out, they could use larger versions of those same 3GS screens, but it would look so terrible compared to what Apple now offers with the iPad 3, iPhone 5, and even the $299 iPod Touch that it would be a serious step back. Even with its smaller, carry-around-friendly size, I could never recommend anyone buy such a device at ANY price.

Needs retina. Needs near 100% sRGB. Anything less is crap. If people want a cheap iPad, go find an iPad 2 for $300 or something.
 