GPUs to replace CPUs [PC Gamer]

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
According to NVidia CEO Jensen Huang that is.

So this is the latest prognostication regarding the death of CPUs, but should it be regarded with skepticism or with trust? I was under the impression that while GPUs were becoming more like CPUs in terms of their ability to run general purpose code, CPUs were also becoming more GPU-like at handling extremely parallel code by increasing their number of cores/threads and widening their SIMD units.

At least that's where Intel seems to be going. So to me it seems the truth lies somewhere in the middle. I can't really see a GPU running a Windows OS anytime soon, and I definitely can't see a CPU rendering a 3D game by itself anytime soon either.

In fact, a few years ago when details on Haswell's AVX2 became available, there was a lot of talk and speculation on various tech forums and websites about how AVX2 was a direct threat to GPUs. The best example is this article from ExtremeTech:

Intel's Haswell is an unprecedented threat to Nvidia and AMD.

Of course in the end, it never turned out that way. But is it even possible to use wide vectors to render a game in software mode?
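
Just to sketch what "wide vectors" would mean here (a toy example of my own, purely illustrative, nothing from the article), a software renderer could shade 8 pixels of a scanline per loop iteration with AVX2 intrinsics. Real SIMD software rasterizers like SwiftShader or LLVMpipe do far more per pixel, but the pattern looks roughly like this:

```
// Toy sketch: shade one scanline with a horizontal gray gradient, 8 pixels
// per iteration using AVX2. Compile with -mavx2. Illustrative only.
#include <immintrin.h>
#include <cstdint>
#include <cstdio>
#include <vector>

void shade_scanline_avx2(uint32_t* row, int width) {
    const __m256i lane  = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
    const __m256  scale = _mm256_set1_ps(255.0f / float(width - 1));
    int x = 0;
    for (; x + 8 <= width; x += 8) {
        // x coordinates of the 8 pixels covered this iteration
        __m256i xs = _mm256_add_epi32(_mm256_set1_epi32(x), lane);
        // intensity = round(x * 255 / (width-1)), computed 8-wide in float
        __m256  fx   = _mm256_cvtepi32_ps(xs);
        __m256i gray = _mm256_cvtps_epi32(_mm256_mul_ps(fx, scale));
        // pack grayscale into 0xAARRGGBB with alpha = 0xFF
        __m256i argb = _mm256_or_si256(
            _mm256_set1_epi32(int(0xFF000000)),
            _mm256_or_si256(_mm256_slli_epi32(gray, 16),
                            _mm256_or_si256(_mm256_slli_epi32(gray, 8), gray)));
        _mm256_storeu_si256(reinterpret_cast<__m256i*>(row + x), argb);
    }
    // scalar tail for widths that aren't a multiple of 8
    for (; x < width; ++x) {
        uint32_t g = uint32_t(x) * 255u / uint32_t(width - 1);
        row[x] = 0xFF000000u | (g << 16) | (g << 8) | g;
    }
}

int main() {
    std::vector<uint32_t> row(640);
    shade_scanline_avx2(row.data(), int(row.size()));
    std::printf("first=%08X last=%08X\n", row.front(), row.back());
}
```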
 

Crono

Lifer
Aug 8, 2001
23,720
1,501
136
Unsurprisingly clickbait-y headline, or at least one that doesn't give the full context.

This all amounts to chest thumping on both sides, though there is more at stake than bragging rights. While Huang isn't necessarily asserting that GPUs will power consumer desktops while kicking CPUs to the curb, he does anticipate GPUs playing a larger role in supercomputers and specialized categories, such as AI, machine learning, cloud computing, and so forth.

Not surprising from a company that mainly produces GPUs, either. And it doesn't factor in the potential role that TPUs will play going forward.
 
  • Like
Reactions: Ajay

moinmoin

Diamond Member
Jun 1, 2017
4,952
7,666
136
The extensions to AVX are more a threat to the feasibility of CPUs (by making them more inefficient at their generic tasks) than to GPUs (which will always be more powerful and efficient at their own tasks). Neither is going to replace the other, but the importance of GPUs is certainly going to grow further.
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
CPUs should be able to render 3D; 3D software rendering was actually good, and it's the most accurate option for use in emulators.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
The extensions to AVX are more a threat to the feasibility of CPUs (by making them more inefficient at their generic tasks) than to GPUs (which will always be more powerful and efficient at their own tasks). Neither is going to replace the other, but the importance of GPUs is certainly going to grow further.

There's no reason why they can't coexist. AVX-512 is a much more flexible and useful SIMD extension than previous iterations, and it can work on general purpose code (at least according to this guy) when it uses the VL (vector length) extension.
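
For illustration (a minimal sketch of my own, assuming an AVX-512F/VL capable chip, and not taken from the linked post), the VL extension makes the new masked instruction forms available on plain 256-bit registers, so even the loop tail in ordinary integer code needs no scalar fallback:

```
// Sketch: clamp negative values in an array to zero using AVX-512VL masked
// ops at 256-bit width. Requires AVX-512F + AVX-512VL; compile with
// -mavx512f -mavx512vl. Illustrative only.
#include <immintrin.h>
#include <cstdint>
#include <cstdio>

void clamp_negatives_to_zero(int32_t* data, int n) {
    const __m256i zero = _mm256_setzero_si256();
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256i v = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(data + i));
        // k-mask of lanes that are negative, then write zero only to those lanes
        __mmask8 neg = _mm256_cmpgt_epi32_mask(zero, v);
        _mm256_mask_storeu_epi32(data + i, neg, zero);
    }
    // tail: a partial k-mask instead of a scalar remainder loop
    if (i < n) {
        __mmask8 tail = __mmask8((1u << (n - i)) - 1u);
        __m256i  v    = _mm256_maskz_loadu_epi32(tail, data + i);
        __mmask8 neg  = _mm256_mask_cmpgt_epi32_mask(tail, zero, v);
        _mm256_mask_storeu_epi32(data + i, neg, zero);
    }
}

int main() {
    int32_t a[11] = {3, -1, 7, -9, 0, 5, -2, 8, -4, 6, -7};
    clamp_negatives_to_zero(a, 11);
    for (int v : a) std::printf("%d ", v);   // 3 0 7 0 0 5 0 8 0 6 0
    std::printf("\n");
}
```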
 

moinmoin

Diamond Member
Jun 1, 2017
4,952
7,666
136
There's no reason why they can't coexist. AVX-512 is a much more flexible and useful SIMD extension than previous iterations, and it can work on general purpose code (at least according to this guy) when it uses the VL (vector length) extension.
The problem is that Intel actively turned this into a needless chicken-and-egg problem. Full AVX-512 support is essentially limited to the priciest CPU models, but good use of it depends on software adoption, which won't happen as long as it's a feature limited to a high-priced niche. Meanwhile it already negatively affects overall CPU designs (die sizes increase, the cache hierarchy is being optimized for server-grade use cases) while IPC has otherwise been stagnating for years. For years Intel has been neglecting traditional CPU design improvements in favor of R&D on iGPUs and SIMD extensions.
 
  • Like
Reactions: VirtualLarry

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
The problem is that Intel actively turned this into a needless chicken-and-egg problem. Full AVX-512 support is essentially limited to the priciest CPU models, but good use of it depends on software adoption, which won't happen as long as it's a feature limited to a high-priced niche. Meanwhile it already negatively affects overall CPU designs (die sizes increase, the cache hierarchy is being optimized for server-grade use cases) while IPC has otherwise been stagnating for years. For years Intel has been neglecting traditional CPU design improvements in favor of R&D on iGPUs and SIMD extensions.

I get your point, but as far as AVX-512 support is concerned, the reason it's currently restricted to HEDT and server-grade hardware is likely the die space penalty. Eventually AVX-512 will make it into the consumer space though, probably at 10nm if I had to guess.

And IPC hasn't been stagnant at all. It's been going up slowly for sure, but definitely not stagnant. I know Intel got rid of the tick-tock cadence, but it seems with every new architecture they've gotten around 10-15% extra IPC. And I'm not talking about the respins.
 

moinmoin

Diamond Member
Jun 1, 2017
4,952
7,666
136
And IPC hasn't been stagnant at all. It's been going up slowly for sure, but definitely not stagnant. I know Intel got rid of the tick-tock cadence, but it seems with every new architecture they've gotten around 10-15% extra IPC. And I'm not talking about the respins.
Which do you consider respins, and which new architectures? After Haswell the majority of improvements come down to higher stock frequencies that actually increase heat and decrease efficiency. And with Skylake-X we actually saw a decrease in IPC.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Which do you consider respins, and which new architectures? After Haswell the majority of improvements come down to higher stock frequencies that actually increase heat and decrease efficiency. And with Skylake-X we actually saw a decrease in IPC.

Respins are Ivy Bridge, Broadwell and Kaby Lake. These are enhanced versions of Sandy Bridge, Haswell and Skylake respectively. Also, Broadwell had about a 5-10% increase in IPC over Haswell in general computing, and about double that or more for FP/SIMD-heavy code due to the enhancements it had. And what makes you think Skylake-X had a decrease in IPC? Skylake-X's core has higher IPC than previous HEDT CPUs, but in certain applications (i.e. games) the smaller and slower L3 cache can hurt its performance. The smaller and slower L3 cache was a trade-off Intel made for increasing the size of the L2 cache, which helps a lot in server, database and productivity applications.
 

moonbogg

Lifer
Jan 8, 2011
10,635
3,095
136
I remember the Nvidia CEO straight up telling everyone, "You don't need a fast one anymore" regarding CPUs. This was a very long time ago, like 8 or 10 years or even longer maybe. I forget it's been so long. Funny how everyone is still freaking out about CPU speed. It's one of the biggest things going on in the tech industry right now, and people still love their CPUs and always want more CPU power. I know I do. Don't need a fast one, lol. Right.

Also, Moore's law might be slowing down big time, but that's with the current way CPUs are made. They are running into problems with the materials they use, right? Didn't that happen with vacuum tubes? Look what happened there. We DITCHED them and found a better way. There will always be a better way.