Anandtech benches Krait and Adreno 225


dagamer34

Platinum Member
Aug 15, 2005
2,591
0
71
Still, I don't understand why nobody else has used the SGX543MP2. There's going to be an Atom with it soon, so it's not like it's Apple-exclusive. Are all the various companies too proud to use that solution, wanting to push their own instead?

Because chip designs are set in stone well over a year in advance. A properly planned product will not just suddenly switch GPUs "just to beat Apple". With the average SoC taking at least 12-18 months from official announcement to shipping product that a customer can buy, even if Qualcomm and nVidia were to start new projects the day after the iPad 2 came out, you wouldn't see the fruits of their labor until about Q2-Q3 of this year. And then there's the fact that some vendors design their own GPUs entirely, so you're looking at two years of time spent.

In this market of long development cycles, you can't compete with what a competitor has today; you have to compete with what they will have in 18 months. And that's as close to clairvoyance as we might ever get.
 

StrangerGuy

Diamond Member
May 9, 2004
8,443
124
106
Damn, smartphone SoCs are a million times more exciting to look at than all the x86 stuff.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
PCMag.com has more benchmarks and compares them with the Prime and the Galaxy S2: http://www.pcmag.com/article2/0,2817,2400409,00.asp

And it doesn't look so great in other benchmarks. They are using Balanced mode for the Prime, so it's not the fastest setting. And the Antutu benchmark especially shows that Krait only has the clock-for-clock performance of an A9...
 

Brian Stirling

Diamond Member
Feb 7, 2010
3,964
2
0
I think Apple's volume purchasing strength and their early GPU dominance with the MP2 are allowing them to make the first jump into the retina tablet realm. Samsung, known for their powerful GPUs as well, will probably be the next one to release a retina tablet.

Apple is playing fast and loose with their marketing term "retina". If the dpi needed to reach that threshold is 300 or more, then the new iPad 3, even at 2048 x 1536 resolution, will be a good bit below it. Still, you're probably correct that Sammy will be the first to match Apple at the higher resolution point.

As I've said before I'm not a fan of the aspect ratio and size of the iPad, but I will want to see that 2048 x 1536 display. Now, if they were to make a somewhat smaller tab with minimal bezel and an aspect ratio of, say, 8:5 with the dpi of the new iPad 3, I could be seriously tempted. I could get quite excited about an 8.5 inch tab (8.5" x 5.5" physical) with 2048 x 1280 resolution. And, while I'm at it, how about 128GB of storage AND a uSDXC slot so I can add even more.

Perhaps a better match for MY dream tab would be 8.5" (8.5" x 5.5") with 1920 x 1200 resolution -- that would be a perfect tab for watching HD video and would have a slightly higher dpi than the new iPad 3...
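
The dpi math is easy to check yourself; here's a quick back-of-the-envelope sketch in C (assuming the new iPad keeps the current 9.7" diagonal, and using the 300 dpi threshold mentioned above):

```c
#include <math.h>
#include <stdio.h>

/* Pixels per inch from resolution and screen diagonal (inches). */
static double ppi(double w_px, double h_px, double diag_in)
{
    return sqrt(w_px * w_px + h_px * h_px) / diag_in;
}

int main(void)
{
    /* Assumes the new iPad keeps the current 9.7" diagonal. */
    double ipad3 = ppi(2048, 1536, 9.7);
    printf("2048x1536 @ 9.7\": %.0f ppi (vs. a 300 ppi threshold)\n", ipad3);
    /* Prints roughly 264 ppi, i.e. well under 300. */
    return 0;
}
```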


Brian
 

MrX8503

Diamond Member
Oct 23, 2005
4,529
0
0
Apple is playing fast and loose with their marketing term "retina". If the dpi needed to reach that threshold is 300 or more, then the new iPad 3, even at 2048 x 1536 resolution, will be a good bit below it. Still, you're probably correct that Sammy will be the first to match Apple at the higher resolution point.

As I've said before I'm not a fan of the aspect ratio and size of the iPad, but I will want to see that 2048 x 1536 display. Now, if they were to make a somewhat smaller tab with minimal bezel and an aspect ratio of, say, 8:5 with the dpi of the new iPad 3, I could be seriously tempted. I could get quite excited about an 8.5 inch tab (8.5" x 5.5" physical) with 2048 x 1280 resolution. And, while I'm at it, how about 128GB of storage AND a uSDXC slot so I can add even more.

Perhaps a better match for MY dream tab would be 8.5" (8.5" x 5.5") with 1920 x 1200 resolution -- that would be a perfect tab for watching HD video and would have a slightly higher dpi than the new iPad 3...


Brian


Apple never stated what dpi counts as retina, just that 326 dpi was retina for the iPhone 4 at its screen size and viewing distance. Overall the retina term doesn't mean much other than to signify high res.
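
To put a rough number on the viewing-distance part: the commonly cited assumption (not an official Apple formula) is that the eye resolves about one arcminute, which gives a distance-dependent threshold you can compute. A minimal C sketch:

```c
#include <math.h>
#include <stdio.h>

/* Minimum ppi for one pixel to subtend <= 1 arcminute at a given
 * viewing distance (inches). This is the commonly cited reading of
 * "retina", not an official Apple formula. */
static double retina_ppi(double distance_in)
{
    const double PI = 3.14159265358979323846;
    double arcmin = (1.0 / 60.0) * PI / 180.0;   /* 1 arcminute in radians */
    return 1.0 / (distance_in * tan(arcmin));
}

int main(void)
{
    printf("~10\" (phone):  %.0f ppi\n", retina_ppi(10.0));  /* ~344 */
    printf("~15\" (tablet): %.0f ppi\n", retina_ppi(15.0));  /* ~229 */
    return 0;
}
```

So the same panel density can pass or fail "retina" depending on how far away you hold the device.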

Samsung will probably be the first to match Apple with their 2560x1600 display, but it's PenTile. Samsung wants to stick with their SAMOLED, and one at that resolution is not possible yet.

As for the aspect ratio of the iPad and its size, apparently Steve Jobs and Jonathan Ive spent a lot of time with various mockups determining it was the overall best size for a tablet. Who knows if it's the best size/ratio or not; I myself prefer 1920x1200 or 2560x1600.
 

alent1234

Diamond Member
Dec 15, 2002
3,915
0
0
Still, I don't understand why nobody else has used the SGX543MP2. There's going to be an Atom with it soon, so it's not like it's Apple-exclusive. Are all the various companies too proud to use that solution, wanting to push their own instead?


It's not a CPU or a packaged part like you buy from Nvidia; you're buying a license to design their GPU into your SoC. If you had designed your SoC with another GPU in mind and tried to switch, it's not like you can just copy and paste it into your design software.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Still, I don't understand why nobody else has used the SGX543MP2.

Because outside of GLBench it isn't very good. Not saying it's bad, but GLBench makes it look like a rockstar due to the rendering technique they utilize; it is a shockingly terrible graphics benchmark for comparing different architectures. Just look at the Basemark numbers: the 543MP2 is showing its age and looking quite dated, which it is. If you are rendering something with a lot of overdraw, the SGX543MP2 is fantastic; if you aren't, it is at best mediocre.

Overall this product seems kind of meh. Was expecting a lot more out of the move to 28nm. The GPU is looking significantly improved, which is good, as it was always their weakest spot. It seems like they have no glaring weakness; I just hoped to see a lot more out of the first 28nm part.
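
For anyone wondering why overdraw matters so much here: parts like the SGX are tile-based deferred renderers that resolve visibility before shading, so for opaque geometry their shading work scales with visible pixels rather than with every fragment submitted. A toy cost model in C, with purely illustrative numbers:

```c
#include <stdio.h>

/* Toy model: an immediate-mode GPU shades every fragment submitted,
 * while a tile-based deferred renderer (like the SGX543) shades only
 * the visible pixels of opaque geometry. Numbers are made up. */
int main(void)
{
    double screen_px = 1280.0 * 800.0;   /* visible pixels           */
    double overdraw  = 3.0;              /* avg. layers drawn per px */

    double imr_fragments  = screen_px * overdraw;  /* shades everything   */
    double tbdr_fragments = screen_px;             /* shades visible only */

    printf("Immediate-mode fragments shaded: %.0f\n", imr_fragments);
    printf("TBDR fragments shaded:           %.0f\n", tbdr_fragments);
    printf("TBDR advantage at %gx overdraw:  %.1fx\n",
           overdraw, imr_fragments / tbdr_fragments);
    return 0;
}
```

With little or no overdraw the two converge, which is the point above: the advantage only shows up in workloads (and benchmarks) that are heavy on overdraw.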
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Because outside of GLBench it isn't very good. Not saying it's bad, but GLBench makes it look like a rockstar due to the rendering technique they utilize; it is a shockingly terrible graphics benchmark for comparing different architectures. Just look at the Basemark numbers: the 543MP2 is showing its age and looking quite dated, which it is. If you are rendering something with a lot of overdraw, the SGX543MP2 is fantastic; if you aren't, it is at best mediocre.

Overall this product seems kind of meh. Was expecting a lot more out of the move to 28nm. The GPU is looking significantly improved, which is good, as it was always their weakest spot. It seems like they have no glaring weakness; I just hoped to see a lot more out of the first 28nm part.

The 543MP2 also takes up a huge amount of die area; the A5 is likely not a cheap SoC to make. It's a premium part that will only be used in premium i-devices.
 

ChronoReverse

Platinum Member
Mar 4, 2004
2,562
31
91
Because outside of GLBench it isn't very good. Not saying it's bad, but GLBench makes it look like a rockstar due to the rendering technique they utilize; it is a shockingly terrible graphics benchmark for comparing different architectures. Just look at the Basemark numbers: the 543MP2 is showing its age and looking quite dated, which it is. If you are rendering something with a lot of overdraw, the SGX543MP2 is fantastic; if you aren't, it is at best mediocre.
The 543MP2 also takes up a huge amount of die area; the A5 is likely not a cheap SoC to make. It's a premium part that will only be used in premium i-devices.

These are good answers. Thanks for the explanation. It's a lot more logical than three years' time not being sufficient to integrate it.
 

smartpatrol

Senior member
Mar 8, 2006
870
0
0
Because outside of GLBench it isn't very good. Not saying it's bad, but GLBench makes it look like a rockstar due to the rendering technique they utilize; it is a shockingly terrible graphics benchmark for comparing different architectures. Just look at the Basemark numbers: the 543MP2 is showing its age and looking quite dated, which it is. If you are rendering something with a lot of overdraw, the SGX543MP2 is fantastic; if you aren't, it is at best mediocre.

Overall this product seems kind of meh. Was expecting a lot more out of the move to 28nm. The GPU is looking significantly improved, which is good, as it was always their weakest spot. It seems like they have no glaring weakness; I just hoped to see a lot more out of the first 28nm part.

GLBench seems to give an enormous advantage to the SGX543MP2. But Basemark also shows the A5 coming out on top of the competition according to Anandtech's tests. See http://www.anandtech.com/show/5163/asus-eee-pad-transformer-prime-nvidia-tegra-3-review/3

So maybe the A5's performance is exaggerated, but I would hardly call it "quite dated" when it still outperforms all of its competition nearly a year after it was released.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
Care to quantify that statement?

Sure.

First of all, Vellamo is Qualcomm's benchmark:

http://androidcommunity.com/qualcom...mobile-browser-performance-hands-on-20110714/

So no wonder it performs best in that benchmark.

Why does it? Because Qualcomm has put real resources into NEON support.

I can tell this from current-gen Snapdragons like the Skyrocket. If I try to play one of my test files in the stock player on a Skyrocket, it can't play them (note these same files play on a stock Exynos SGS2, so it is not the player, it is the SoC). But when I load a NEON-enabled player on the Skyrocket, the files will play (with some stutter) due to the sheer power of NEON. I remember reading how this gen Snapdragon had "full" NEON support or something like that, and it shows.

The Krait has two NEON processors per core. So on a test from Qualcomm that emphasizes NEON it seems twice as fast.

The problem is that NEON isn't nearly that useful; if it were, every Tegra 2 device would be unusable, as that SoC lacks any NEON support. The truth is that all Qualcomm is good for is NEON, and so they are trying to push that angle.
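
For anyone unfamiliar with what NEON actually buys you: it's ARM's SIMD unit, so one instruction operates on a whole vector of narrow values at once, which is exactly the shape of the inner loops in software video decoding. A minimal C sketch using the standard arm_neon.h intrinsics (needs an ARM toolchain with NEON enabled; the function is just an illustrative example, not code from any real decoder):

```c
#include <arm_neon.h>
#include <stddef.h>
#include <stdint.h>

/* Saturating add of two byte buffers, 16 values per instruction --
 * the kind of inner loop NEON accelerates in software video decoding.
 * Illustrative only; n is assumed to be a multiple of 16. */
void add_sat_u8(const uint8_t *a, const uint8_t *b, uint8_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i += 16) {
        uint8x16_t va = vld1q_u8(a + i);      /* load 16 bytes           */
        uint8x16_t vb = vld1q_u8(b + i);
        vst1q_u8(dst + i, vqaddq_u8(va, vb)); /* saturating add, store   */
    }
}

/* Scalar equivalent that a core without NEON (e.g. Tegra 2) falls back to,
 * one byte at a time. */
void add_sat_u8_scalar(const uint8_t *a, const uint8_t *b,
                       uint8_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        unsigned s = (unsigned)a[i] + b[i];
        dst[i] = (uint8_t)(s > 255 ? 255 : s);
    }
}
```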

The Snapdragons have been saddled with a relatively terrible GPU for a long time so it's not surprising GPU tests (as opposed to CPU tests) have traditionally been poor for them but CPU-wise they've always been comparable.

That is to say, where the older Scorpion core beats out Exynos (e.g., Vellamo), it's not a huge edge, and where Exynos beats out Scorpion in a CPU test (e.g., Linpack), it's not an enormous edge either.

Nope. Not even close. The only reason Snapdragons seem "comparable" is because they are clocked at 1.5GHz while most Samsung and TI chips top out at 1.2GHz. When I overclock my Exynos to 1.4GHz it destroys the current-gen Snapdragons on EVERY CPU benchmark even though it is 100MHz slower.

The ONLY thing that was worth a damn about current Snapdragons (other than NEON) is LTE support. That is the reason they were shoved into so many phones, and it won't save Qualcomm this next generation.

But the new Krait clearly is almost twice as fast as the Scorpion clock for clock. It clearly shows how the next generation chips will perform.

Actually other tests show that Krait is already behind:

335104-qualcomm-s4-mdm-antutu.jpg


Looks like with Krait Qualcomm basically cranked out a competitor for the Exynos in my SGS2 a year later. That is pretty pathetic IMHO.

If Krait survives in the market it is due to the exclusivity deal with MS, the sweetheart deal with HTC, and the fact that Tegra 3's yields suck.

In fact, this news of Krait benches made me hug my Prime. Now that I know the high-res Transformer will have Krait (a WORSE GPU for a higher-res screen), I wouldn't trade the two even if you gave me the high-res one plus $200 for my current Prime. Especially now that the Prime bootloader is unlocked.
 

ChronoReverse

Platinum Member
Mar 4, 2004
2,562
31
91
It's undeniable that the Exynos is a better overall SOC, but we WERE talking about pure CPU performance where I claim the Snapdragons have always been similar in performance to its competitors like OMAP, Hummingbird/Exynos, Tegra.

I can tell this from current-gen Snapdragons like the Skyrocket. If I try to play one of my test files in the stock player on a Skyrocket, it can't play them (note these same files play on a stock Exynos SGS2, so it is not the player, it is the SoC). But when I load a NEON-enabled player on the Skyrocket, the files will play (with some stutter) due to the sheer power of NEON. I remember reading how this gen Snapdragon had "full" NEON support or something like that, and it shows.
The SGS2 has hardware support for more codecs. You are comparing a dedicated decoder to CPU decoding.

So strike this as a useful comparison.



Nope. Not even close. The only reason Snapdragons seem "comparable" is because they are clocked at 1.5GHz while most Samsung and TI chips top out at 1.2GHz. When I overclock my Exynos to 1.4GHz it destroys the current-gen Snapdragons on EVERY CPU benchmark even though it is 100MHz slower.
Sunspider and V8 are both non-NEON tests that use the CPU.

In 2.3.3, the SGS2 (1.2GHz) gets 3029 while the Sensation (1.2GHz) gets a faster score of 2925 in Sunspider. For V8, the SGS2 (1.2GHz) gets 361 while the Sensation (1.2GHz) gets a faster score of 511 in 2.3.3.

On the other hand, Exynos does destroy everything in Linpack (even other A9 cores like Tegra and OMAP) despite it using NEON (which according to you gives the Snapdragons a crazy edge).

Can you give more than 2 examples of "EVERY CPU benchmark"?


Actually other tests show that Krait is already behind:

335104-qualcomm-s4-mdm-antutu.jpg
Antutu is multithreaded. Of course a Quad core will beat out a Dual core. Antutu also tests many other things like SD card speeds and GPU. How can you use that as a CPU comparison?



So I guess you haven't actually quantified this after all.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Antutu is multithreaded. Of course a Quad core will beat out a Dual core. Antutu also tests many other things like SD card speeds and GPU. How can you use that as a CPU comparison?

You can use the detailed table:
335109-qualcomm-s4-mdm-antutu-subscores.jpg

http://www.pcmag.com/article2/0,2817,2400409,00.asp

CPU performance is on par with A9. And that level is nearly two years old...

And CPU performance only:
Tegra 3 @ 1600MHz: 10544 points
Krait @ 1500MHz: ~4500 points

A quad-core A9 @ 1600MHz is more than 2x faster. And Anand wrote that Krait needs 1.5 watts for the two CPU cores alone...
 

ChronoReverse

Platinum Member
Mar 4, 2004
2,562
31
91
You're still looking at a dual core vs a quad core. The article there even mentions that:
Qualcomm says that's because Antutu's tests scale linearly with the number of cores - in other words, it's using multiple cores too well, from Qualcomm's perspective. That means a quad-core processor has a huge advantage over a dual-core processor, even a very fast one, on that specific test.
With that said, CPU FP performance looks _really_ bad.
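
If you take the linear-scaling claim at face value, you can roughly normalize the CPU-only numbers sontin quoted by core count. A back-of-the-envelope sketch in C (rough, since the full subscore breakdown is only in the linked image):

```c
#include <stdio.h>

/* Rough per-core normalization of the Antutu CPU-only scores quoted
 * above, assuming the test really does scale linearly with cores. */
int main(void)
{
    double tegra3 = 10544.0 / 4;  /* quad-core A9 @ 1.6GHz            */
    double krait  =  4500.0 / 2;  /* dual-core Krait @ 1.5GHz (approx.) */

    printf("Tegra 3 per core: ~%.0f\n", tegra3);               /* ~2636 */
    printf("Krait per core:   ~%.0f\n", krait);                /* ~2250 */
    printf("Krait / Tegra 3:  ~%.0f%% per core\n", 100 * krait / tegra3);
    return 0;
}
```

On the combined CPU score Krait's per-core number gets dragged down by that FP result; the INT subscore on its own looks better, as noted below.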


In the end, the S4 is supposed to be slower than the A15 designs coming out soon. So it'll be interesting to see how much slower it is.
 

ChronoReverse

Platinum Member
Mar 4, 2004
2,562
31
91
This is a problem only for Qualcomm. Krait was expected to be much faster than A9 on a per-clock basis.
There will be quad-core Kraits too, you know. The FP scores are really bad, but the INT score is over 60% of the quad-core's, so per core the Krait is doing somewhat better.

Still, Krait doubled the Linpack score of Scorpion, and Scorpion wasn't that much slower than the A9s (it was between the A8 and A9 in performance). I wonder why Krait does so poorly in the Antutu FP test.
 

joshhedge

Senior member
Nov 19, 2011
601
0
0
You must also be aware that Apple owns a percentage of Imagination and thus could prevent other SoC manufacturers from utilising the 543MP for a certain time period, likely until Apple's next release.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
It's undeniable that the Exynos is a better overall SOC, but we WERE talking about pure CPU performance where I claim the Snapdragons have always been similar in performance to its competitors like OMAP, Hummingbird/Exynos, Tegra.

Everyone else went with A9, Qualcomm didn't.

The SGS2 has hardware support for more codecs. You are comparing a dedicated decoder to CPU decoding.

So strike this as a useful comparison.

Actually that was just to say how I discovered the Snapdragon's best-in-class NEON performance.

Sunspider and V8 are both non-NEON tests that use the CPU.

In 2.3.3, the SGS2 (1.2GHz) gets 3029 while the Sensation (1.2GHz) gets a faster score of 2925 in Sunspider. For V8, the SGS2 (1.2GHz) gets 361 while the Sensation (1.2GHz) gets a faster score of 511 in 2.3.3.

I am not huge on browser benchmarks. I just did Sunspider on my stock speed SGS2 and I got exactly 2925. But again, browser benchmarks suck. I greatly increased my Sunspider and Browsermark on my Prime just by installing Chrome.

Can you give more than 2 examples of "EVERY CPU benchmark"?

Quadrant's CPU scores. Antutu.

Antutu is multithreaded. Of course a Quad core will beat out a Dual core. Antutu also tests many other things like SD card speeds and GPU. How can you use that as a CPU comparison?

Krait is next-gen for Qualcomm, designed to compete with the Tegra 3. I was comparing Krait to a competing chip on the market that I already have.

Plus the extra cores aren't why the Tegra 3 GPU comes out better.
 

ChronoReverse

Platinum Member
Mar 4, 2004
2,562
31
91
Everyone else went with A9, Qualcomm didn't.
Qualcomm was already using their A8.5 back when Hummingbird was out. Why not talk about their advantage then? It just so happens their cycle falls smack in the middle of A8 and A9, which is also where their performance lies.


I am not huge on browser benchmarks. I just did Sunspider on my stock speed SGS2 and I got exactly 2925. But again, browser benchmarks suck. I greatly increased my Sunspider and Browsermark on my Prime just by installing Chrome.
This is true and I 100% agree. With that said, Anandtech got those numbers and they kept variables close by using the same Android versions and stuff. It does show that they're close in performance in terms of real life applications.



Quadrant's CPU scores. Antutu.
I've looked at those scores and they're really not that far apart with one exception: Antutu FP.

This is extremely strange because Snapdragon's high NEON scores would indicate it has strong FP performance. The Antutu INT scores are again fairly close, so it's not a monstrous difference.

The FP performance is worth more investigation, but anything that's FP-heavy ought to be using VFP or NEON (regular apps are more dependent on INT performance).



Krait is next-gen for Qualcomm, designed to compete with the Tegra 3. I was comparing Krait to a competing chip on the market that I already have.

Plus the extra cores aren't why the Tegra 3 GPU comes out better.

True enough but one is in a Tablet and the other is in a Phone. The extra cores are also exactly why the Tegra3 is better in Antutu since Antutu is multi-threaded. Extrapolating from that, Krait is still better core against core. And again, Krait is expected to have a quad core version.


As an aside, clock for clock performance really doesn't matter as much as core vs core within the same power envelope. Krait is more deeply pipelined for higher clocks after all. The real test is when real hardware comes out and we can compare power.

In the end, the real question is how well A15 will perform when it comes out (and whether it comes out in a timely manner). I'm actually more interested in that as it's expected to outperform Krait.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
Qualcomm was already using their A8.5 back when Hummingbird was out. Why not talk about their advantage then?

I wasn't around here really then.

It just so happens their cycle falls smack in the middle of A8 and A9, which is also where their performance lies.

I disagree. Tegra 2 (an A9 design) beat Snapdragon to the market as a dual-core solution. Also, the Exynos has competed head-to-head with current-gen Snapdragons within the same product cycle.

This is true and I 100% agree. With that said, Anandtech got those numbers and they kept variables close by using the same Android versions and stuff. It does show that they're close in performance in terms of real life applications.

On the CPU side they are closer to the Exynos than they are on the GPU side.

True enough but one is in a Tablet and the other is in a Phone. The extra cores are also exactly why the Tegra3 is better in Antutu since Antutu is multi-threaded. Extrapolating from that, Krait is still better core against core. And again, Krait is expected to have a quad core version.

From what I understand Krait will be used in tablets. Maybe the quad-core version though.

You are right that Krait seems faster clock-for-clock, but Nvidia built Tegra 3 with a real ceiling of 1.6GHz, so later Tegra 3 products can do what Qualcomm did this year and compete with a higher clock. Krait doesn't seem like such a huge jump in CPU that the extra clock speed won't make up for it; my Prime at 1.6GHz gets 148824 in Browsermark.

Also the Prime's GPU certainly seems better, which matters more than CPU in the high-res devices that are coming.

I too am looking forward to A15 designs for real comparisons. Especially Samsung's; TI's GPUs always turned me off. It is like they prefer to be a year behind.
 

ChronoReverse

Platinum Member
Mar 4, 2004
2,562
31
91
I disagree. Tegra 2 (an A9 design) beat Snapdragon to the market as a dual-core solution. Also, the Exynos has competed head-to-head with current-gen Snapdragons within the same product cycle.
That's true, Tegra 2 did beat the dual-core Scorpion to market by half a year (although the single-core Snapdragon was already an A8.5 design when up against all the A8 single cores; this is what I meant by being in that timeframe).

With that said, Tegra2 was a really poor SoC honestly. It had even poorer codec support than Snapdragon, didn't have NEON, and its GPU was only better than the Adreno 205 (which isn't saying much) and wasn't faster than the older Hummingbird's SGX540.

Furthermore, Tegra 2 designs were all meant to run at 1GHz while the dual-core Scorpions were meant to run at 1.2-1.5GHz. In practice the differences weren't huge.


On the CPU side they are closer to the Exynos than they are on the GPU side.
That's all I'm saying =). It's plain that on the Android side, Exynos is tops for graphics crunching.


You are right that Krait seems faster clock-for-clock, but Nvidia built Tegra 3 with a real ceiling of 1.6GHz, so later Tegra 3 products can do what Qualcomm did this year and compete with a higher clock. Krait doesn't seem like such a huge jump in CPU that the extra clock speed won't make up for it; my Prime at 1.6GHz gets 148824 in Browsermark.
Keep in mind that Snapdragons are designed to be clocked higher in the first place as well. Even the S3 Snapdragons are designed to run at 1.5GHz, for instance, and the one in the TouchPad hits 1.7GHz for everyone (while some people, including me, can hit 1.9GHz). The PCMag link the Antutu benchmarks came from says the S4 is meant to go to 2.5GHz.

Krait has a deeper pipeline and is meant to go in the same direction, so clocking up to increase speed isn't an edge.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
With that said, Tegra2 was a really poor SoC honestly.

I 100% agree, Tegra 2 sucks. I can forgive no NEON support, but no High Profile H.264 support is still kinda shocking. I mean, do it poorly rather than not at all.

I would MUCH rather have a dual-core Snapdragon over any Tegra 2. Heck, you are correct, I would probably even prefer a single-core Hummingbird.

That is why I waited until the Prime to upgrade from my Nook Color. I almost can't stand Android tablets of the Tegra 2/Honeycomb generation (and I have messed with a bunch of them). A combination of rush jobs from two different companies.

It's plain that on the Android side, Exynos is tops for graphics crunching.

Second only to Tegra 3 among buyable SoCs.

With that said, Tegra 3 isn't perfect. It is obvious from the problems people have with the Prime that yields are kinda crazy. In fact, in a recent update Asus cut 100MHz off the top speed to deal with those who were having crashes and reboots due to bad silicon, until the community had a shit fit and they put it back the way it was. I guess those with problems are supposed to RMA.

In fact, it seems Asus expected much more out of Tegra than they got, as the Primes have a hidden, built-in 1.6GHz mode you can find after rooting. Mine can do it all day (on all four cores) and just gets a little hot, but for some people it is lockup city.

Even beyond that, the lower-end 10-inch Tegra 3 tablet Asus is going to release to replace the original is supposedly only going to be 1.2GHz at peak instead of my Prime's 1.4GHz. I guess Nvidia's worse silicon is going into those tablets to dump it.

Finally, it seems Asus has switched to Krait for its high-res model. If that doesn't tell the story of dissatisfaction with the Tegra 3 (since that is a GPU downgrade with a higher-res screen) nothing does.
 

ChronoReverse

Platinum Member
Mar 4, 2004
2,562
31
91
Finally, it seems Asus has switched to Krait for its high-res model. If that doesn't tell the story of dissatisfaction with the Tegra 3 (since that is a GPU downgrade with a higher-res screen) nothing does.
Wow seriously? Not even going to wait for the Adreno 3xx version?

Well, timeline is everything. In the end, whether or not Krait should be considered a good design depends on how soon the A15 designs come afterward.

It's interesting how both OMAP and Tegra3 seem to have some overheating issues at high clocks. The GNex's OMAP can often hit 1.65GHz but it overheats so quickly that some benchmarkers actually deep sleep their phones before running the benchmark to get higher scores (something I find ridiculous and hilarious).
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
Wow seriously? Not even going to wait for the Adreno 3xx version?

Nope:

Transformer-specs-2012.jpg


It's interesting how both OMAP and Tegra3 seem to have some overheating issues at high clocks.

My old Nook Color could really do some overclocking (800MHz to 1300MHz). I guess the dual-core TI designs got worse. I am overall uninterested in TI's products as the GPU is always a generation behind. Now my top Android SoC vendors are Nvidia (if they can get their shit together) and Samsung.