The future of Mainstream Enthusiast Desktop: Removal of the iGPU?


DrMrLordX

Lifer
Apr 27, 2000
21,991
11,542
136
The fact still remains that eight threads are available to the CPU through the modern PC game engine.

Is that really true? How many PC games peg more than 4 cores at 100% utilization at high resolutions today?

Now, throw Mantle into the mix, which moves compute functions off the general purpose cores and onto GCN cores wherever possible. DX12 promises to do the same, probably on a wider range of GPU hardware.

I'm not going to put my neck out there and say that Kaveri + 290x under Mantle is necessarily going to beat 8350 + 290x also under Mantle in Battlefield 4 because Kaveri "isn't there yet" in so many ways (though, truth be told, I'd like to see a direct bench of that, which I don't think I've seen yet). I think the dam will finally break and gamers will start looking at iGPUs with newfound respect when they see Skylake + fat dGPU vs Broadwell-K/Haswell-E + fat dGPU under DX12.

Carrizo may make an impressive showing on the AMD side when it comes out, but we'll see how that goes.

I do think it is interesting that, in the real world, for around the same price as an A10-7850K a person could have either an Athlon X4 750K or an Athlon X4 760K plus an R7 260X.

This condition is superficial, brought on by whatever factors led AMD to need to (or want to) charge $180+ for the 7850K. If the 7850K were priced in the $120-$150 segment (or rather, if AMD *could* price it there), or if a majority of titles out there already supported DX12, or both, then Kaveri would look much better. The troubles associated with the Kaveri launch should not be an indictment of the present or future value of a robust iGPU.
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
The fact still remains that eight threads are available to the CPU through the modern PC game engine.

With a large video card or cards (and they are only going to get larger) having the full eight x86 cores available will always be an advantage.

Therefore I just don't see any way (at this time) that quad core (+ HSA through iGPU) coupled to a discrete video card could replace an eight core cpu coupled to a discrete video card for a PC game.

I really don't know how this works, but even if the game generates 8 threads that need processing, would a much faster 4-core i5, for example, have trouble dealing with them?

Is it that, since each core needs to process 2 threads, there would be too much slowdown due to the cores switching threads, even if each core were 3 times faster?
 

BD2003

Lifer
Oct 9, 1999
16,815
1
81
The fact still remains that eight threads are available to the CPU through the modern PC game engine.

With a large video card or cards (and they are only going to get larger) having the full eight x86 cores available will always be an advantage.

Therefore I just don't see any way (at this time) that quad core (+ HSA through iGPU) coupled to a discrete video card could replace an eight core cpu coupled to a discrete video card for a PC game.

If we're using the consoles as the baseline for the future of PC games, then we're talking about 6 threads, since they reserve 2 for the OS. And those threads will be running on 1.6GHz Jaguars, which have a lower IPC than just about everything Intel puts out aside from the Atom. A dual core i3 at 3GHz+ could run circles around that even without hyperthreading. A quad i5 leaves it in the dust. An i7 with HT laughs in its face.

So you've got the consoles, machines designed to last 5+ years while devs squeeze every ounce of optimization out of them, deliberately putting in the weakest x86 cores they could get their hands on, focusing almost entirely on the GPU, and building in HSA features more advanced than we have on PC right now (especially the PS4). On top of that, you've got DX12 and Mantle, with practically the sole stated purpose of lowering CPU overhead even further on PC.

Take a look at Resogun, a launch game for the PS4. Look at the ridiculous amount of physics on display... it's not doing that with CPUs. It's all GPU, and a relatively small one. Nothing on the PC even comes close. We hardly need more than 4 CPU cores now, and we certainly won't need them in the future once a critical mass of compute-capable iGPUs is out there for devs to rely on. Whether or not you have an iGPU, games are not going to continue pushing up the number of CPU cores they utilize for much longer.

With respect, I think you're falling prey to that old saying from Henry Ford..."if I asked people what they wanted, they'd say faster horses." You only want more cores because that's what you know and understand.
 

BD2003

Lifer
Oct 9, 1999
16,815
1
81
I really don't know how this works, but even if the game generates 8 threads that need processing, would a much faster 4-core i5, for example, have trouble dealing with them?

Is it that, since each core needs to process 2 threads, there would be too much slowdown due to the cores switching threads, even if each core were 3 times faster?

A single CPU core can process multiple threads. It's been that way since the dawn of multithreaded OSes... on the PC that dates back to at least Windows 95, maybe even 3.1. It was a long, long time ago.

The number of threads doesn't matter... it's how much work those threads need to do that matters. A single i-series core can handle as much as several of those piddly Jaguar cores the consoles are pushing, without breaking a sweat.
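To make that concrete, here's a minimal C++ sketch (my own illustration, nothing from any actual engine; the worker function and iteration count are made up). It launches eight threads no matter how many cores the machine has, and the OS simply time-slices them across whatever cores exist:

[CODE]
// Minimal sketch: more software threads than hardware cores is fine;
// the OS scheduler time-slices them across the available cores.
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical stand-in for a chunk of per-frame game work.
void worker(int id) {
    volatile double acc = 0.0;
    for (int i = 0; i < 10000000; ++i)
        acc += i * 0.5;
    std::cout << "worker " << id << " finished\n";
}

int main() {
    std::cout << "logical cores reported: "
              << std::thread::hardware_concurrency() << "\n";

    std::vector<std::thread> threads;
    for (int i = 0; i < 8; ++i)      // 8 threads, regardless of core count
        threads.emplace_back(worker, i);
    for (auto& t : threads)
        t.join();
}
[/CODE]

A fast quad core just gets through those eight threads sooner than eight slow cores would; nothing breaks because the thread count exceeds the core count.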
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Is that really true? How many PC games peg more than 4 cores at 100% utilization at high resolutions today?

I'm not sure how many PC games will be able to peg a Haswell-E octocore. Probably not too many (if any), since that kind of processing power will still be extremely rare in the consumer space due to price.

However, if we see x86 hexcores (like the Haswell-E i7-5820K) drop into a lower price category, I'd expect game developers would be able to offer greater settings options for gamers in the future. For example, maybe Battlefield 5 could have a 128- or 256-player multiplayer mode.

http://wccftech.com/intel-haswelle-...ations-unveiled-flagship-8-core-boost-33-ghz/

Intel Core i7-5820K Processor

The Core i7-5820K would be the entry level processor of the lineup featuring 6 cores and 12 threads. The CPU comes with a 15 MB L3 Cache, 3.3 GHz base clock, 3.6-3.8 GHz boost clock, 140W TDP and support for DDR4-2133 MHz memory. Being similar to the Core i7-5930K in terms of technical specifications aside from the clock speeds, the Core i7-5820K might just become an attractive processor for the X99 chipset platform since it will be priced around a lower range of $300 – $350 US.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Then after the Intel x86 hexcores drop in price, the Intel x86 octocores will follow at some later point down the road.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Now, throw Mantle into the mix, which moves compute functions off the general purpose cores and onto GCN cores wherever possible. DX12 promises to do the same, probably on a wider range of GPU hardware.

I'm not going to put my neck out there and say that Kaveri + 290x under Mantle is necessarily going to beat 8350 + 290x also under Mantle in Battlefield 4 because Kaveri "isn't there yet" in so many ways (though, truth be told, I'd like to see a direct bench of that, which I don't think I've seen yet). I think the dam will finally break and gamers will start looking at iGPUs with newfound respect when they see Skylake + fat dGPU vs Broadwell-K/Haswell-E + fat dGPU under DX12.

Mantle and DX12 are going to help, but then the resolution also increases at the high end (4K multi-monitor, etc.)
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The consoles also have HSA-capable iGPUs, and weak CPU cores that a quad i5 would have no trouble running circles around.

You've got it backwards... the next gen consoles aren't representative of the multi-CPU days; they're the primary force that's going to drive HSA. They were intentionally designed with barely adequate CPUs because the console makers know GPU compute is the future.

If I recall right, only the PS4 supports HSA. The Xbox One is out due to the eSRAM for the GPU.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
AMD massively increased the price with Kaveri, not to mention other SKUs are still MIA. So it's bad to compare with that.
 

DrMrLordX

Lifer
Apr 27, 2000
21,991
11,542
136
I'm not sure how many PC games will be able to peg a Haswell-E octocore. Probably not too many (if any), since that kind of processing power will still be extremely rare in the consumer space due to price.

Last time I checked, not very many at all. Most games still top out at 4 cores, and HT still seems to hurt Intel quads in most PC gaming as well.

However, if we see x86 hexcores (like the Haswell-E i7-5820K) drop into a lower price category, I'd expect game developers would be able to offer greater settings options for gamers in the future. For example, maybe Battlefield 5 could have a 128- or 256-player multiplayer mode.

http://wccftech.com/intel-haswelle-...ations-unveiled-flagship-8-core-boost-33-ghz/

Then after the Intel x86 hexcores drop in price, the Intel x86 octocores will follow at some later point down the road.

I doubt most developers are going to target Intel hexcores as their probable end-user. Maybe I'm wrong here, but . . .

Mantle and DX12 are going to help, but then the resolution also increases at the high end (4K multi-monitor, etc.)

Exactly. Higher resolution gaming pushes an even greater burden onto the GPU(s) and makes the game less bound by CPU performance. That trend only confirms my suspicion that, in a DX12/Mantle-dominated environment, general purpose CPU cores will mean less and less beyond the minimum required to run the game. I do not see hexcores or octocores becoming the de facto standard gaming CPU in such an environment.

AMD massively increased the price with Kaveri, not to mention other SKUs are still MIA. So it's bad to compare with that.

Correct. Kaveri also tends to bench poorly due to CPU throttling behavior. It makes itself look bad in the media and comes across at a price that is too high for many buyers. I still like the chip, but holding it up as "the future of iGPUs" seems a poor idea. At least the A8-7600 is finally showing up in OEM machines in quantity.
 

DrMrLordX

Lifer
Apr 27, 2000
21,991
11,542
136
Got some links to that? Last time you said so, it was still completely MIA.

It is being offered as a CPU option in some HP desktops. I wasn't the one who discovered that here, either... but I can't remember who did. All I added to the discussion is that you can't yet get OEM chips from HP's parts store (shucks).

Anyway, it seemed that OEM availability of this chip coincided with the announcement of the A10-7800.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
It is being offered as a CPU option in some HP desktops. I wasn't the one who discovered that here, either... but I can't remember who did. All I added to the discussion is that you can't yet get OEM chips from HP's parts store (shucks).

Anyway, it seemed that OEM availability of this chip coincided with the announcement of the A10-7800.

Nice, so it only took half a year after the paper launch to actually emerge. And then as OEM only. I wonder if the yields are that terrible at GloFo. It would also explain the large price increase.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Exactly. Higher resolution gaming pushes an even greater burden onto the GPU(s) and makes the game less bound by CPU performance. That trend only confirms my suspicion that, in a DX12/Mantle-dominated environment, general purpose CPU cores will mean less and less beyond the minimum required to run the game.

As I understand things, Mantle reduces CPU overhead but doesn't completely eliminate it.

So as the detail settings and GPU power increase, the stress on the CPU can be reduced by Mantle.

However, with even larger GPUs and detail settings beyond a certain point, we are back to square one, right?

I do not see hexcores or octocores becoming the de facto standard gaming CPU in such an environment.

According to bullet point #4, "Perfect Parallel rendering - utilize all eight cpu cores"

[Image: FBMantle_575px.jpg]
 

BD2003

Lifer
Oct 9, 1999
16,815
1
81
As I understand things, Mantle reduces CPU overhead but doesn't completely eliminate it.

So as the detail settings and GPU power increase, the stress on the CPU can be reduced by Mantle.

However, with even larger GPUs and detail settings beyond a certain point, we are back to square one, right?

According to bullet point #4, Frostbite 3 is coded for eight cpu cores:

[Image: FBMantle_575px.jpg]


Frostbite doesn't know or care whether a thread is sharing a core with another via HT or just time slicing; that's the OS's job. It's just their way of saying the engine is highly multithreaded. It's fairly well established that a modern i7 with HT can still overpower any 8-core CPU from AMD in highly threaded workloads.

This isn't any indication that we *need* more cores from intel.
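To put the "highly multithreaded" point in code terms, here's a rough sketch of an engine-style job loop (my own illustration, not Frostbite's actual job system; the job count and the busywork inside each job are made up). The pool is sized to however many logical processors the OS reports, so HT siblings and physical cores look exactly the same at this level:

[CODE]
// Rough sketch of an engine-style job loop: size the worker pool to the number
// of logical processors (physical cores and HT siblings are indistinguishable
// here), then let the workers pull jobs until the work runs out.
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    const int total_jobs = 1000;            // hypothetical frame's worth of tasks
    std::atomic<int> next_job{0};

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&] {
            for (int job; (job = next_job.fetch_add(1)) < total_jobs; ) {
                // Stand-in for real work: physics step, animation, draw-call prep...
                volatile double x = 0.0;
                for (int i = 0; i < 100000; ++i) x += job * 0.001 + i;
            }
        });
    }
    for (auto& t : pool) t.join();
    std::printf("ran %d jobs on %u worker threads\n", total_jobs, workers);
}
[/CODE]

Whether those worker threads land on four fast cores, eight slow ones, or four cores with HT is entirely up to the OS scheduler; the total amount of work is what determines the frame time.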
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
It's fairly well established that a modern i7 with HT can still overpower any 8-core CPU from AMD in highly threaded workloads.

This isn't any indication that we *need* more cores from intel.

Yes, but this is based on today's state of video cards, resolutions, monitor refresh rates, and multiplayer counts.

As video cards get larger, resolutions increase, and 144 Hz refresh rates become more commonplace, the path to a greater number of powerful cores (beyond four) has been established.

And that's not to mention increasing the player count beyond 64 for multiplayer.
 

BD2003

Lifer
Oct 9, 1999
16,815
1
81
Yes, but this is based on today's state of video cards, resolutions, monitor refresh rates, and multiplayer counts.

As video cards get larger, resolutions increase, and 144 Hz refresh rates become more commonplace, the path to a greater number of powerful cores (beyond four) has been established.

And that's not to mention increasing the player count beyond 64 for multiplayer.

The only thing that's established is that computational requirements will go up over time, but that's not news. It says nothing about how we get there.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
The only thing that's established is that computational requirements will go up over time, but that's not news. It says nothing about how we get there.

For HSA, I can vaguely understand how the iGPU could be used to boost floating point. But as far as boosting integer goes, that will have to come via adding more CPU cores and/or making the integer side stronger in each core.
 

BD2003

Lifer
Oct 9, 1999
16,815
1
81
For HSA, I can vaguely understand how the iGPU could be used to boost floating point. But as far as boosting integer goes, that will have to come via adding more CPU cores and/or making the integer side stronger in each core.


Integer performance isn't what's holding games back.

Even if it was, offloading floating point to the iGPU frees up the CPU for integer operations.
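As a sketch of the general pattern (not any particular engine's code; the kernel, buffer size, and the stand-in "integer work" are all made up, and error checking is omitted), the CPU can enqueue a floating-point-heavy kernel on the GPU and then go do integer work while the GPU crunches:

[CODE]
// Sketch: hand FP-heavy work to the GPU, do integer work on the CPU meanwhile.
// Error checks omitted for brevity; everything here is illustrative only.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSrc =
    "__kernel void scale(__global float* v) {"
    "    size_t i = get_global_id(0);"
    "    v[i] = v[i] * 2.0f + 1.0f;"
    "}";

int main() {
    const size_t n = 1 << 20;
    std::vector<float> data(n, 1.0f);

    cl_platform_id platform; cl_device_id device; cl_int err;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, &err);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kern = clCreateKernel(prog, "scale", &err);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                n * sizeof(float), data.data(), &err);
    clSetKernelArg(kern, 0, sizeof(cl_mem), &buf);

    // Enqueue the FP-heavy kernel; this call does not block the CPU.
    clEnqueueNDRangeKernel(q, kern, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clFlush(q);  // kick the GPU off

    // Meanwhile the CPU is free for integer work (game logic, AI, pathfinding...).
    long long checksum = 0;
    for (long long i = 0; i < 50000000; ++i) checksum += i & 0xFF;

    // Pull the results back once they're actually needed.
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, n * sizeof(float), data.data(),
                        0, nullptr, nullptr);
    std::printf("cpu checksum %lld, gpu sample %f\n", checksum, data[0]);

    clReleaseMemObject(buf); clReleaseKernel(kern); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
}
[/CODE]

On a discrete card that final read-back still crosses PCIe; on an HSA APU even that copy can go away.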
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Integer performance isn't what's holding games back.

On the current AMD quad cores (Richland, Kaveri, etc.), each module has two integer cores and a shared FPU. I've seen a lot of mention that the shared FPU is rather weak for a quad core processor.

Why did AMD do this? I have no idea.

Maybe for non-gaming reasons? Maybe to save some die area? Not sure, but with AMD being a company with a lot of focus on gaming, I find it interesting they would cut back on the FPU if it were so important.

Maybe some games use more FPU than others, and other games use less?
 

BD2003

Lifer
Oct 9, 1999
16,815
1
81
On the current AMD quad cores (Richland, Kaveri, etc.), each module has two integer cores and a shared FPU. I've seen a lot of mention that the shared FPU is rather weak for a quad core processor.

Why did AMD do this? I have no idea.

Maybe for non-gaming reasons? Maybe to save some die area? Not sure, but with AMD being a company with a lot of focus on gaming, I find it interesting they would cut back on the FPU if it were so important.

Maybe some games use more FPU than others, and other games use less?

They're trying to make the most out of a limited amount of space. They probably figured they didn't need it if they could eventually offload the floating point heavy lifting to the GPU.

I tend to think of AMD's quad cores as being more like a dual core with HT. IIRC, the FPUs actually have SMT/HT, and they all share L2/L3. They just have two full integer cores where Intel would be sharing one with HT. But if a game/app is more concerned with FPU performance, a single integer core with HT is probably enough to balance things out.
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
Integer performance isn't what's holding games back.

Even if it was, offloading floating point to the iGPU frees up the CPU for integer operations.

That's the dream (AMD's). I think what it really boils down to is whether or not GPGPU takes off on consoles, specifically the PS4, since Sony had their APU configured to handle 64 compute commands at once.

Considering that all non-graphics GPGPU could be handled on APUs, it's quite possible that Kaveri or Carrizo + dedicated graphics computers could have greater longevity as gaming systems in the future if games become GPGPU dependent.

Anyone have any real knowledge of how much latency is involved with GPGPU on an APU versus on a CPU + discrete GPU?
 

BD2003

Lifer
Oct 9, 1999
16,815
1
81
That's the dream (AMD's). I think what it really boils down to is whether or not GPGPU takes off on consoles, specifically the PS4, since Sony had their APU configured to handle 64 compute commands at once.

Considering that all non-graphics GPGPU could be handled on APUs, it's quite possible that Kaveri or Carrizo + dedicated graphics computers could have greater longevity as gaming systems in the future if games become GPGPU dependent.

Anyone have any real knowledge of how much latency is involved with GPGPU on an APU versus on a CPU + discrete GPU?

If we're talking about CPU<->GPU memory latency, on a fully HSA APU it is *literally* eliminated, because there's no copy whatsoever. They're sharing the same memory space. Obviously that doesn't translate into infinite performance, but that particular latency ceases to be any sort of bottleneck, because it simply doesn't exist. Most current implementations don't function this seamlessly, but AMD is further along that path.
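At the API level, the difference being described looks roughly like this (a sketch under my own assumptions, not AMD's actual HSA runtime; kernels and error checks are omitted, and whether the shared case is truly zero-copy depends on the driver and alignment):

[CODE]
// Sketch of the two memory models being contrasted. With a discrete GPU the
// data gets copied into the card's own memory and every round trip pays the
// PCIe cost; on a shared-memory APU the runtime can let the GPU read the very
// same system RAM, so the copy step can disappear entirely.
#include <CL/cl.h>
#include <vector>

int main() {
    const size_t n = 1 << 20;
    std::vector<float> host_data(n, 1.0f);

    cl_platform_id platform; cl_device_id device; cl_int err;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);

    // Discrete-GPU style: the runtime takes a copy of host_data for the device.
    cl_mem copied = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                   n * sizeof(float), host_data.data(), &err);

    // APU / shared-memory style: ask the runtime to use the host allocation
    // directly. On unified-memory hardware this can be genuine zero-copy:
    // CPU and GPU touch the same bytes, so there is no transfer to wait on.
    cl_mem shared = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                                   n * sizeof(float), host_data.data(), &err);

    clReleaseMemObject(shared);
    clReleaseMemObject(copied);
    clReleaseContext(ctx);
}
[/CODE]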

NVIDIA isn't cut out of this either - there's nothing stopping NVIDIA or AMD from plopping CPU cores onto their dGPUs, as they've long been rumored to do eventually. But then we've got the reverse situation, where we're sacrificing parts of the GPU die for CPU cores that might not be used properly for years.

In the short term, they're introducing a new bus called NVLink to link multiple GPUs together much faster than PCI-E, and have plans to integrate it with CPUs eventually. In the course of this, they're going to have to move so much functionality onto the PCB that they're just shy of a socketed GPU.

http://www.anandtech.com/show/7900/nvidia-updates-gpu-roadmap-unveils-pascal-architecture-for-2016

Basically the three big players are all coming at the same problem in different ways, and in the end they might not look all that different from each other. One can look at all this and proclaim the dGPU is dying... but look at it from another perspective. When Windows eventually becomes fully ARM compatible, and NVIDIA integrates high performance ARM chips onto their GPUs, and you can string a bunch of GPUs together with a super high bandwidth bus... there's no need for Intel inside that box. Hell, there isn't even a need for a motherboard; everything is fully integrated onto the individual "dGPU" dies and PCBs. They'd solve the latency problem while still retaining the ability to string multiple "dGPUs" together for extreme performance.

By that time, Intel's iGPUs might be competitive with AMD's, and AMD's CPUs competitive with Intel's... and they're all basically producing a single computing unit that does everything. Dunno what you call it, but for all intents and purposes everyone is moving towards the APU - just remember that as long as it's linkable, this doesn't necessarily mean we'll be shackled to the performance of what a single chip is capable of.
 