smartpatrol
Senior member
- Mar 8, 2006
Seems pretty obvious that Infinity Blade 2 is not running at the full 2048x1536. Not complaining though, it still looks damn good.
The reason the Tegra 3 is quad-core is to make up for the weaker GPU. Games can take advantage of the CPU for some tasks as well. There are some benefits to quad-core CPUs but... not many right now.

I'm actually amazed. Four 1.3GHz Cortex-A9 cores weren't a lot faster than two 1GHz Cortex-A9 cores. In which case, I guess I have to agree that Apple wouldn't want to move to four CPU cores (or even slightly faster ones) at all: the difference shown there is so minimal, and the extra heat would have fried the iPad 3.
I'm hoping Apple goes for a dual-core Cortex A15, and not a quad-core A15.
That's true, but iOS is quite heavily threaded. There are plenty of APIs available for multi-threading, but simply put, there just aren't a whole lot of uses for a quad-core outside of games, and those uses can be handled by a better GPU.

I'm hoping for the same thing for the A6 as well. Since iOS doesn't really allow you to multi-task much, Apple doesn't really need a ton of cores. Multiple cores have become popular in computers because we tend to multi-task a lot. I remember when I first got my Athlon 64 X2, and it was heaven compared to my older Athlon 64! While quad-core wasn't the same heavenly experience, it does offer much cleaner performance in certain applications and under heavier multi-tasking.
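For reference, here's a minimal sketch of what those multi-threading APIs look like in practice, using Grand Central Dispatch. The decodeAssets function is a made-up stand-in for any heavy work a game might do (texture decompression, level loading, AI, etc.), not anyone's actual code:

```swift
import Foundation

// Hedged sketch of GCD-style multi-threading on iOS.
// "decodeAssets" is hypothetical; it stands in for any expensive work.
func decodeAssets() -> [String] {
    return (0..<4).map { "asset-\($0)" }
}

let done = DispatchSemaphore(value: 0)

// Heavy work goes to a background queue; GCD spreads queued work across
// whatever cores the device has, so the main/UI thread stays responsive.
DispatchQueue.global(qos: .userInitiated).async {
    let assets = decodeAssets()
    print("Loaded \(assets.count) assets off the main thread")
    done.signal()
}

done.wait()  // only here so this standalone sketch doesn't exit early
```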
Looks like all that nonsense about games looking bad on the iPad didn't come true.
The reason the Tegra 3 is quad-core is to make up for the weaker GPU. Games can take advantage of the CPU for some tasks as well. There are some benefits to quad-core CPUs but... not many right now.
As in, compared to the PowerVR SGX543MP2, it's somewhat lacking. However, the CPU is able to handle some tasks a GPU might normally do as well, like dynamic lighting, etc.

The Tegra 3's GPU is the best available for Android right now, so what is there to make up for? Android games can't expect better.
Also, the quad-core can help a lot: when I force all the cores on manually, it FLIES! The problem is that it also eats the battery alive in that mode, which is why it doesn't run that way most of the time.
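For the curious, here's a rough, hypothetical sketch of what "using the CPU for tasks a GPU might normally do" could look like: spreading a per-vertex lighting pass across all available cores with concurrentPerform. The vertex and light data are made up, and this isn't any shipping engine's code:

```swift
import Foundation

// Rough sketch of a CPU-side lighting pass spread across all cores.
// concurrentPerform fans the iterations out over however many cores the
// device has (two on the A5, four on Tegra 3).
struct Vertex { var x, y, z: Float }

let vertices = (0..<100_000).map { i in
    Vertex(x: Float(i % 100), y: Float(i % 50), z: 1.0)
}
let lightDir = Vertex(x: 0, y: 0, z: 1)

let brightness = UnsafeMutablePointer<Float>.allocate(capacity: vertices.count)
defer { brightness.deallocate() }

// Each iteration is independent, so the work scales roughly with core count.
DispatchQueue.concurrentPerform(iterations: vertices.count) { i in
    let v = vertices[i]
    // Toy per-vertex "dot product with the light direction"
    brightness[i] = max(0, v.x * lightDir.x + v.y * lightDir.y + v.z * lightDir.z)
}

print("Lit \(vertices.count) vertices across all cores")
```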
The Tegra 3, at its full 1.6GHz, eats any other mobile SoC alive on the CPU side.
http://www.youtube.com/watch?v=TQlu39SIH6M&feature=player_embedded
iPad 3 vs. ASUS Transformer Prime comparison in games.
That's not a comparison; the Android version is essentially a paid ad for nVidia. =/
This is a very valid point. It's much like GLBenchmark, only nVidia pays for stupid things like games while PowerVR pays for a benchmark. Who needs games when you have benchmarks to score high on!
Do you have proof that PowerVR paid for GLBenchmark?
I'm sure that if GLBenchmark was so biased, we wouldn't see Anand use it for his reviews.
With limited choice, and with manufacturers most likely pushing reviewers to use certain benchmarks in their "reviews", it's not much of a surprise that it gets used a lot.
GLBenchmark is widely known to favor the PowerVR architecture, which makes it just as dubious as using Tegra-optimized titles. You have to complain about both if you want to be fair, or about neither.
Have you been skipping the parts of his reviews where he constantly complains that the benchmarks aren't accurate and says he'd like to see better tools for benchmarking GPU performance?
Anand said:
GLBenchmark 2.0 is the best example of an even remotely current 3D game running on this class of hardware, and even then this is a stretch. If you want an idea of how the PowerVR SGX 543MP2 stacks up to the competition however, GLBenchmark 2.0 is probably going to be our best bet (at least until we get Epic to finally release an Unreal Engine benchmark).
Anand said:
It's obvious that GLBenchmark is designed first and foremost to be bound by shader performance rather than memory bandwidth, otherwise all of these performance increases would be capped at 2x since that's the improvement in memory bandwidth from the 4 to the 4S. Note that we're clearly not overly bound by memory bandwidth in these tests if we scale pixel count by 50%, which is hardly realistic. Most games won't be shader bound; instead they should be more limited by memory bandwidth.