Smartphone GPUs to match the "classic" AMD/NV/Intel GPUs?

stateofmind

Senior member
Aug 24, 2012
245
2
76
www.glj.io
Hi

I was wondering: with the Adreno 430 and the upcoming 530 (claimed to be 40% faster), smartphone GPUs are really getting close to the low end of current PC GPUs. According to some benchmarks, they could even match parts like the HD 4000/HD 4600 and basic AMD/NV GPUs

Do we have any information about it?
For how long will this trend continue? Could they reach mainstream performance like the GT 840M/940M or AMD top integrated graphics performance?

Thanks
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Hi

I was wondering: with the Adreno 430 and the upcoming 530 (claimed to be 40% faster), smartphone GPUs are really getting close to the low end of current PC GPUs. According to some benchmarks, they could even match parts like the HD 4000/HD 4600 and basic AMD/NV GPUs

What makes your question difficult is that as far as I know we do not have cross-platform games that we can use to test the graphics chips across such platforms. But we can try to estimate it somewhat in terms of overall picture:

The HTC One M9 has the Adreno 430. It gets killed by the iPhone 6S, but the iPhone 6S in turn gets dropped by the Surface Pro 3.

[benchmark chart comparing the HTC One M9, iPhone 6S, and Surface Pro 3]


I am not sure what Surface Pro 3 version AnandTech used as they have 3 different GPUs:
http://www.microsoft.com/surface/en-us/devices/surface-pro-3

If we use the HD5000 as the best case for the Surface Pro 3, that GPU is ranked just #302 on the laptop GPU list! Yet the Surface Pro 3 is almost 2X faster than the Adreno 430, which means even the 530 won't match the Surface Pro 3.

On the same chart, Intel HD Graphics 530 is #198.

The R7 250 GDDR5 has no problem beating the HD 530 in GTA V.

[benchmark chart: GTA V, R7 250 GDDR5 vs. Intel HD 530]


In mobile GPU terms, the desktop R7 250 sits at roughly the 570M/660M level.

The 570M/660M are about #110-120 on the same list.

NOW, we finally arrive at your level, which is the 840M/940M. Those cards sit at spots #135 and #139.

Therefore: the 840M/940M sit at #135/#139, while the Surface Pro 3's HD5000 (best case) is almost 2X faster than the Adreno 430 yet only ranks #302 on that chart. So how far are smartphone GPUs like the Adreno 530 from the 840M/940M? Ridiculously far.

And BTW, in late 2015 the 840M/940M are not mainstream GPUs but extremely low-end ones.

By the time smartphone GPUs even catch up to the 840M/940M, it'll probably take at least 3-4 generations (3-4 years). By that point Intel will have increased its graphics performance even more, while NV/AMD will have adopted HBM2 and possibly HBM3 and will have leaped massively beyond today.
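That 3-4 generation estimate is easy to sanity-check with compound growth. A minimal sketch, assuming (illustratively, not from any measurement) a ~4x performance gap and 40-50% gains per generation:

```python
import math

gap = 4.0  # assumed perf gap to an 840M/940M-class GPU (illustrative)
for gain in (0.40, 0.50):
    # Generations needed so that (1 + gain)^gens >= gap
    gens = math.log(gap) / math.log(1 + gain)
    print(f"{gain:.0%} per generation -> {gens:.1f} generations to close a {gap:.0f}x gap")
```

At 40% per generation that works out to roughly 4 generations; at 50%, closer to 3.5. And that assumes the target stands still, which it won't.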

Right now AMD's integrated performance is stuck because they are waiting to shift to a 16nm node. Future APUs could possibly incorporate HBM2 as well.

I'll even go as far as to say that by the time smartphone GPUs catch up to PS4, the current console generation will be almost over and it'll probably be the year 2019. When companies like Apple say they have reached console level of graphics performance, the only way their statement would be accurate is if they are discussing Xbox 360/PS3 or Wii/Wii U. Otherwise, it's just marketing BS.

From a practical point of view, even if you could harness the power of a GTX980Ti inside a tablet/smartphone, it's mostly pure marketing. Think about it, would you want to play GTA 6, BF5, Cyberpunk 2077 on a smartphone over a 24-40" 4K PC monitor? I guess some people would but I don't see how I could enjoy playing games with my fingers on a 5.5" smartphone over using proper inputs, mouse, keyboard, joysticks on a large PC monitor with awesome speakers/headphones.

IMO, no matter how much more powerful the graphics in smartphones get over the next 5 years, gaming on smartphones is only good for casual pick-up-and-play style games, nothing hardcore. In fact, we already see this now. Despite the power of smartphones today easily eclipsing the NES, SNES, N64, GameCube, and PS1/PS2 at least, gaming on a smartphone is an absolute joke compared to the experience on those consoles. I have a couple of Rayman games on my iPhone and the graphics look better than any game on those consoles I described, yet after 15 minutes I am bored out of my mind. Infinity Blade III - good graphics, boring as hell. I'd rather play Soul Calibur on the Dreamcast or Xbox 360.
 
Last edited:

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
I believe the Adreno 330 possesses performance comparable to a Radeon HD 5450, which in turn is similar in compute to the Intel HD 3000. There are about two years between the Intel HD 3000 and the SD800, though.
 
Mar 10, 2006
11,715
2,012
126
IMO, no matter how much more powerful the graphics in smartphones get over the next 5 years, gaming on smartphones is only good for casual pick-up-and-play style games, nothing hardcore. In fact, we already see this now. Despite the power of smartphones today easily eclipsing the NES, SNES, N64, GameCube, and PS1/PS2 at least, gaming on a smartphone is an absolute joke compared to the experience on those consoles. I have a couple of Rayman games on my iPhone and the graphics look better than any game on those consoles I described, yet after 15 minutes I am bored out of my mind. Infinity Blade III - good graphics, boring as hell. I'd rather play Soul Calibur on the Dreamcast or Xbox 360.

Dead Trigger 2 is fun :) But yes, gaming is much better done on consoles and PCs than on smartphones...
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Dead Trigger 2 is fun :) But yes, gaming is much better done on consoles and PCs than on smartphones...

I think there is potential if smartphones get powerful enough to be able to directly stream games at 1080P @ 30-60Hz onto your smart TVs. The latency would have to be very low though. However, hardware alone doesn't produce solid software and we are still quite a while away before smartphones even get as powerful as the Xbox One.
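On the latency point, the frame-time budget alone is tight; a quick bit of arithmetic (just the refresh-rate math, no measured streaming latencies):

```python
# Time budget per frame at a given refresh rate; capture, encode, transmit,
# decode, and display would all have to fit inside (or pipeline across) this window.
for hz in (30, 60):
    frame_ms = 1000 / hz
    print(f"{hz} Hz -> {frame_ms:.1f} ms per frame")
```

At 60 Hz there is only about 16.7 ms per frame, which is why wireless game streaming is so sensitive to every extra millisecond in the chain.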
 
Mar 10, 2006
11,715
2,012
126
I think there is potential if smartphones get powerful enough to be able to directly stream games at 1080P @ 30-60Hz onto your smart TVs. The latency would have to be very low though. However, hardware alone doesn't produce solid software and we are still quite a while away before smartphones even get as powerful as the Xbox One.

My iPhone 6s delivers better single-threaded performance than the XB1 CPU does! :)
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
My iPhone 6s delivers better single-threaded performance than the XB1 CPU does! :)

I am sure. Those Jaguar cores are weak sauce in terms of IPC. I haven't followed too closely, but looking at Geekbench, the IPC/single-threaded performance of the A9 is probably up there with SB/IVB, if not better. I could be wrong, but that's just from what I've read briefly. The point is, even if an iPhone 9S had a CPU more powerful than an i5 2500K, it would be more beneficial for snappiness and a longer useful life of the smartphone (i.e., my iPhone 5 is still fast even compared to the iPhone 6/6S -- meaning I can't say I would be dissatisfied using the iPhone 5 if it had a 4.7-5.5" screen with all the same features, battery life, and 2GB of RAM of the 6S), rather than for a straight-up comparison to a PC desktop CPU/GPU. You know what I mean? I think where smartphones are making huge breakthroughs is their cameras and SSD-controller storage.

I am way more impressed by this than by, say, a smartphone chip that might have the power of my 2011 PC in it. Why? Because taking photos and videos of family, friends, and travel is something you can cherish for a long time, while having desktop-level graphics on a 4.7" screen is nice e-peen, but then it's outdated/gimmicky. :biggrin:

Also, you cannot play Forza 6, Rise of the Tomb Raider, or future Halo games on your 6S. Ha!
 
Last edited:

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
I think there is potential if smartphones get powerful enough to be able to directly stream games at 1080P @ 30-60Hz onto your smart TVs. The latency would have to be very low though. However, hardware alone doesn't produce solid software and we are still quite a while away before smartphones even get as powerful as the Xbox One.

If Nintendo were to leverage mobile, their games output to a tv would fit the hardware quite well.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
If Nintendo were to leverage mobile, their games output to a tv would fit the hardware quite well.

Ya, I agree. I am hoping the rumors of a unified NX platform come true and we would be able to buy Nintendo NX portable games and play them on the Nintendo NX home console. Essentially the Nintendo eco-system would be like Steam where you would be able to play games on various devices as long as they supported a uniform OS.

I wonder if the smartphone manufacturers may be able to tap into HSA benefits of CPU+GPU Compute to improve the overall smartphone performance for things other than gaming?
 

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
Ya, I agree. I am hoping the rumors of a unified NX platform come true and we would be able to buy Nintendo NX portable games and play them on the Nintendo NX home console. Essentially the Nintendo eco-system would be like Steam where you would be able to play games on various devices as long as they supported a uniform OS.

I wonder if the smartphone manufacturers may be able to tap into HSA benefits of CPU+GPU Compute to improve the overall smartphone performance for things other than gaming?

Well, with perfect utilization, GPU compute can achieve raw floating-point performance close to desktop quad-core CPUs (the Adreno 330 supposedly hits 130 GFLOPS). Though it is extremely unlikely most non-gaming apps will ever be programmed to use the GPU, due to the massively parallel (read: PC equivalent of a sweatshop) architectures being difficult to program for.
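For scale on that 130 GFLOPS figure, here is how a desktop quad core's theoretical peak is typically derived (the SIMD width and clock below are illustrative assumptions for an AVX-era chip issuing one 8-wide multiply and one 8-wide add per cycle, not a specific measured part):

```python
cores = 4
simd_width = 8                         # AVX: 8 FP32 lanes per vector unit
flops_per_core_cycle = simd_width * 2  # one multiply + one add retired per cycle
clock_ghz = 3.4                        # illustrative clock speed

peak_gflops = cores * flops_per_core_cycle * clock_ghz
print(f"{peak_gflops:.1f} GFLOPS peak")  # vs ~130 GFLOPS quoted for the Adreno 330
```

So "close to a desktop quad core" is plausible on paper, with the usual caveat that neither chip sustains its theoretical peak in real code.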
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Well, with perfect utilization, GPU compute can achieve raw floating-point performance close to desktop quad-core CPUs (the Adreno 330 supposedly hits 130 GFLOPS). Though it is extremely unlikely most non-gaming apps will ever be programmed to use the GPU, due to the massively parallel (read: PC equivalent of a sweatshop) architectures being difficult to program for.

We have to be careful, as some of these smartphone SoCs quote FP16 peaks, while their FP32 (games) performance is way lower than the quoted peak. I am not saying the Adreno GPUs are like that, as I haven't done enough research, but every time I see smartphone SoC marketing I am hesitant to believe the hyped numbers until I see them validated by actual benchmarks.
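The FP16-vs-FP32 caveat is easy to illustrate with a hypothetical mobile GPU (the ALU count and clock are made up for the example): if FP16 executes at twice the FP32 rate, the marketing "peak" doubles while game-relevant FP32 throughput does not move.

```python
def peak_gflops(alus, clock_ghz, flops_per_alu_cycle=2):
    """Idealized peak: each ALU retires one FMA (2 FLOPs) per cycle."""
    return alus * flops_per_alu_cycle * clock_ghz

fp32_peak = peak_gflops(128, 0.6)  # hypothetical: 128 FP32 ALUs @ 600 MHz
fp16_peak = fp32_peak * 2          # double-rate FP16: same silicon, 2x the headline
print(f"FP32: {fp32_peak:.1f} GFLOPS, FP16: {fp16_peak:.1f} GFLOPS")
```

Same chip, two very different headline numbers; since games mostly run FP32, the smaller figure is the honest one for this kind of comparison.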

Also, it might not be reasonable to directly compare GFLOPS of different GPU architectures. Just as an example, look at GTX580 vs. GTX680 or GTX980Ti vs. 780Ti vs. Fury X.

I don't think we can directly compare the GFLOPS of Adreno to say GCN.

BTW, found this article that speculates Samsung is working on its own mobile smartphone GPU, slated for 2017-2018, with the intention of taking advantage of Heterogeneous System Architecture (CPU+GPU).
 
Last edited:

stateofmind

Senior member
Aug 24, 2012
245
2
76
www.glj.io
Thanks, Russian. That is my estimate, more or less - 4-5 generations, if a 40-50% performance bump can be achieved each generation.

BUT, my questions really were:

1. For how long will this trend continue? What are the limitations?
2. I do wonder if it works differently than I think. Maybe smartphone GPUs are different in a way that will allow more performance at lower power consumption? And maybe the trend will be higher than 40-50% each generation?

I really have no clue.
 

NTMBK

Lifer
Nov 14, 2011
10,438
5,787
136
If you want a direct comparison, look at the Tegra X1 and compare it to the GTX 950.

CUDA cores- X1 has 256, 950 has 768
Memory bandwidth- X1 has 25.6GB/s (shared with CPU), 950 has 106GB/s

So a 950 is 3-4 times faster than a Tegra X1... and the X1 is the fastest mobile GPU out there, as far as I know.
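The "3-4 times" figure falls straight out of those two spec lines (clock differences are ignored here, which is part of why it's a range rather than a point estimate):

```python
x1 = {"cuda_cores": 256, "bandwidth_gb_s": 25.6}       # Tegra X1 (bandwidth shared with CPU)
gtx950 = {"cuda_cores": 768, "bandwidth_gb_s": 106.0}  # GTX 950

alu_ratio = gtx950["cuda_cores"] / x1["cuda_cores"]          # raw shader count
bw_ratio = gtx950["bandwidth_gb_s"] / x1["bandwidth_gb_s"]   # memory bandwidth
print(f"ALUs: {alu_ratio:.1f}x, bandwidth: {bw_ratio:.1f}x")
```

That's 3.0x on shader count and about 4.1x on bandwidth, which brackets the 3-4x claim nicely.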
 

SPBHM

Diamond Member
Sep 12, 2012
5,066
418
126
A lot of mobile GPUs are probably better than the pre-8800GTX stuff, but after that it gets complicated, I think.
 

Zodiark1593

Platinum Member
Oct 21, 2012
2,230
4
81
a lot of mobile GPUs are probably better than the pre 8800GTX stuff, but after that it gets complicated I think.

Well, if Russian is correct and most, if not all, mobile GPUs use FP16 in games and possibly benchmarks, then comparing them with desktop hardware running FP32 will make mobile devices look far weaker than we realize.
 

Jimzz

Diamond Member
Oct 23, 2012
4,399
190
106
A better thing to also look at is performance per watt. Remember, mobile wants performance, but not if it eats the battery up. Wasn't that Nvidia's problem with their ARM SoCs: they performed well but used too much power?
 

stateofmind

Senior member
Aug 24, 2012
245
2
76
www.glj.io
If you want a direct comparison, look at the Tegra X1 and compare it to the GTX 950.

CUDA cores- X1 has 256, 950 has 768
Memory bandwidth- X1 has 25.6GB/s (shared with CPU), 950 has 106GB/s

So a 950 is 3-4 times faster than a Tegra X1... and the X1 is the fastest mobile GPU out there, as far as I know.

1. 256 CUDA cores @ 1000MHz with LPDDR4 @ 1.6GHz (64-bit) positions it strongly in the low end of current laptop GPUs, quite close to the 940M (but still far away).

2. I wonder about the thermal design and its limitations - how much will it throttle? It has a TDP of 10W.
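The 25.6 GB/s figure quoted earlier for the X1 follows directly from the bus width and transfer rate in point 1 (LPDDR4 is double data rate, so 1.6 GHz means 3200 MT/s):

```python
bus_bits = 64
clock_mhz = 1600
transfers_per_sec_m = clock_mhz * 2  # DDR: two transfers per clock -> 3200 MT/s
bytes_per_transfer = bus_bits / 8    # 64-bit bus -> 8 bytes per transfer

bandwidth_gb_s = bytes_per_transfer * transfers_per_sec_m / 1000
print(f"{bandwidth_gb_s:.1f} GB/s")  # matches the 25.6 GB/s quoted for the Tegra X1
```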

We have to be careful, as some of these smartphone SoCs quote FP16 peaks, while their FP32 (games) performance is way lower than the quoted peak. I am not saying the Adreno GPUs are like that, as I haven't done enough research, but every time I see smartphone SoC marketing I am hesitant to believe the hyped numbers until I see them validated by actual benchmarks.

Also, it might not be reasonable to directly compare GFLOPS of different GPU architectures. Just as an example, look at GTX580 vs. GTX680 or GTX980Ti vs. 780Ti vs. Fury X.

I don't think we can directly compare the GFLOPS of Adreno to say GCN.

BTW, found this article that speculates Samsung is working on its own mobile smartphone GPU, slated for 2017-2018, with the intention of taking advantage of Heterogeneous System Architecture (CPU+GPU).

What about the GFXBench comparison?
For example:
http://www.anandtech.com/bench/PhoneTablet14/964

How does that work, really? What is actually being benchmarked there? Is it the same workload? Is the current generation of mobile GPUs really much faster in the classic DX11 benchmarks than the 15W iGPUs from AMD and Intel?

So what is missing, just FP32 vs. FP16? Is that related directly to the GPU performance?

I would be glad to understand... it is not like I thought before. It seems the Tegra X1 and other top mobile GPUs can get very close to current low-end laptop GPUs
 

stateofmind

Senior member
Aug 24, 2012
245
2
76
www.glj.io
Anyone?

I still can't figure it out. The new Tegra X1 has specifications to match about 2/3 of the GT 940M's performance. That is low, but for 9-11W including the CPU, it's a lot. Even for 15W. And it outdoes the integrated stuff from Intel by far.

What am I missing? Is it the CPU that lacks something? Is it just a much better implementation? Design? Better marrying of software and hardware?
 

MisterLilBig

Senior member
Apr 15, 2014
291
0
76
I still can't figure it out.

FP32 vs. FP16 does have a direct impact on performance, but it is highly unlikely that the two are being compared against each other in the same benchmark.

Also, keep in mind that the benchmark you linked to is an "Offscreen" result, meaning it is useless as a comparison for choosing a device.

What am I missing?

Hmm, well the X1 is in an actively cooled box that plugs into a wall, while the Intel comparison and the rest are... phones? Tablets? Ultrabooks?

I will say: compare the hardware (the SoC) in the context where it is actually being used. And check the TDP/wattage range.
 

NTMBK

Lifer
Nov 14, 2011
10,438
5,787
136
Anyone?

I still can't figure it out. The new Tegra X1 has specifications to match about 2/3 of the GT 940M's performance. That is low, but for 9-11W including the CPU, it's a lot. Even for 15W. And it outdoes the integrated stuff from Intel by far.

What am I missing? Is it the CPU that lacks something? Is it just a much better implementation? Design? Better marrying of software and hardware?

It's a 20nm chip, while the 940m is 28nm. Smaller transistors, lower power consumption.
 

stateofmind

Senior member
Aug 24, 2012
245
2
76
www.glj.io
FP32 vs. FP16 does have a direct impact on performance, but it is highly unlikely that the two are being compared against each other in the same benchmark.

Also, keep in mind that the benchmark you linked to is an "Offscreen" result, meaning, it is useless as a comparison to make a choice for a device.



Hmm, well the X1 is in an actively cooled box that plugs to a wall and the Intel comparison and the rest are...Phones? Tablets? Ultrabooks?

I will say, compare the hardware(the SoC) to where it is also being used. And, check the TDP/W range.

Thanks, but I'm talking about the chip itself, with no relation to its intended original market. I'm trying to figure out the technology's future and benefits, not buying a smartphone.

Offscreen is OK - they are compared at 1080p.

It's a 20nm chip, while the 940m is 28nm. Smaller transistors, lower power consumption.

Still, 10W for the whole package? CPU and GPU altogether?
 

stateofmind

Senior member
Aug 24, 2012
245
2
76
www.glj.io
Nvidia may be understating the TDP - there is a reason why the Shield TV has a fan ;) We shall have to see if it throttles in the Pixel C.

True, true. Still, I can't figure it out. What exactly do you get in the Intel i5-5300U (just an example) that you won't get in comparable or faster ARM stuff that does the same for 5-10W?