Question: Could Strix Halo evolve into a unified CPU/GPU?

igor_kavinski
Imagine a die containing a sea of CPU cores (around 512 to 1024) that are capable of executing both x86 and GPU-specific instructions. Could this be how Strix Halo evolves to become the ultimate GPGPU?
 

poke01

Diamond Member
The what? Please explain, I'm interested to see what you mean.
 
igor_kavinski
I'm not a silicon engineer, but I'm guessing there are core structures shared between a CPU and a GPU? What if the CPU core were tweaked to allow GPU-specific code to also run without much penalty? Then a given application could switch between CPU and GPU instruction streams on the fly, using the cores dynamically as needed (16 cores for general-purpose instructions and the rest for GPU-specific instructions, or half of the cores running CPU instructions while the other half run GPU instructions). The benefit would be dynamic allocation of resources as needed.
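Something like this toy sketch is what I'm imagining. All of it is made up for illustration (the pool size, the "kernel", the scheduling policy); no shipping hardware works this way:

```cpp
// Toy model: one pool of identical cores, repartitioned on the fly
// between CPU-style task work and GPU-style data-parallel work.
// Build: g++ -O2 -std=c++17 -pthread pool.cpp
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

constexpr int kCores = 16;                     // stand-in for the sea of cores
std::atomic<long> task_units{0}, lane_units{0};

void run_as_cpu_core()    { task_units += 1; } // latency-sensitive task
void run_as_gpu_lane(int) { lane_units += 1; } // one slice of a wide kernel

// Repartition the pool: the first cpu_cores threads run task code,
// the rest each run one slice of a data-parallel kernel.
void dispatch(int cpu_cores) {
    std::vector<std::thread> pool;
    for (int c = 0; c < kCores; ++c) {
        if (c < cpu_cores) pool.emplace_back(run_as_cpu_core);
        else               pool.emplace_back(run_as_gpu_lane, c);
    }
    for (auto& t : pool) t.join();
}

int main() {
    dispatch(16);  // all cores on general-purpose work
    dispatch(8);   // half and half, as described above
    dispatch(2);   // mostly "GPU" work
    std::printf("task units: %ld, lane units: %ld\n",
                task_units.load(), lane_units.load());
}
```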
 
igor_kavinski
Intel Larrabee v2 ??? ;)
We already know AMD executes on Intel's ideas wayyyy better :p

AMD64: trounced Intel's 64-bit idea

Hyperthreading: Showed Intel how it's done with their SMT

AVX-512: Again, trumped the original inventor's implementation

Hybrid cores: Intel developed two different cores. AMD simply scaled its already lean P-core down into an even leaner one while retaining AVX-512 execution capability

So yeah, what if AMD's version of Larrabee is next?
 

Cheesecake16

Junior Member
igor_kavinski said:
Imagine a die containing a sea of CPU cores (around 512 to 1024) that are capable of executing both x86 and GPU-specific instructions. Could this be how Strix Halo evolves to become the ultimate GPGPU?
No... they are fundamentally different things...
You aren't going to see a merging of CPU and GPU cores... the closest thing would be something like Intel Larrabee, Xeon Phi, or InspireSemi's Thunderbird, that is, a bunch of CPU cores with large SIMD units, possibly with raster engines stuck on the end...
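The programming model on that kind of machine looks roughly like this: ordinary scalar x86 control flow driving very wide vector units. A generic AVX-512 sketch, not actual Larrabee code:

```cpp
// y = a*x + y over n floats, 16 lanes per iteration on 512-bit SIMD.
// Build (needs an AVX-512 capable CPU): g++ -O2 -mavx512f saxpy.cpp
#include <immintrin.h>
#include <cstdio>

void saxpy_avx512(float a, const float* x, float* y, int n) {
    __m512 va = _mm512_set1_ps(a);            // broadcast a to all 16 lanes
    int i = 0;
    for (; i + 16 <= n; i += 16) {            // vector body
        __m512 vx = _mm512_loadu_ps(x + i);
        __m512 vy = _mm512_loadu_ps(y + i);
        vy = _mm512_fmadd_ps(va, vx, vy);     // fused multiply-add: a*x + y
        _mm512_storeu_ps(y + i, vy);
    }
    for (; i < n; ++i) y[i] = a * x[i] + y[i]; // scalar tail
}

int main() {
    float x[32], y[32];
    for (int i = 0; i < 32; ++i) { x[i] = 1.0f; y[i] = float(i); }
    saxpy_avx512(2.0f, x, y, 32);
    std::printf("%.1f %.1f\n", y[0], y[31]);  // prints 2.0 33.0
}
```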
The successors to Strix Halo will be the same CPU cores with an iGPU... there is no reason to change the formula...
 
igor_kavinski
Dang. I wish someone here was qualified to argue with Cheese :p

Why isn't Dr. Ian here? Why does he hate the forums for intellectual discourse?
 

QuickyDuck

Member
Larrabee?


There are sea-of-cores chips on the market; they just don't run x86, only simple scalar and MATMUL instructions for AI, you know.
 

soresu

Diamond Member
igor_kavinski said:
Dang. I wish someone here was qualified to argue with Cheese :p

Why isn't Dr. Ian here? Why does he hate the forums for intellectual discourse?
The closest thing you might get is this Bolt GPU for path tracing, but:

#1. It's using (AFAICT) proprietary extensions to RISC-V.

#2. It almost certainly sucks at CPU tasks.

Having a CPU ISA and being able to run CPU tasks are not the same thing.

As Cheese commented, CPUs and GPUs fulfill fundamentally different compute roles.

CPUs are largely task oriented, with parallel data processing as an afterthought provided by wide SIMD ALUs in the modern day.

GPUs are wide/parallel from the ground up and data throughput oriented.
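You can put rough numbers on that difference via Little's law: the independent work you need in flight to hide memory latency is issue rate × latency. Illustrative figures only, not any specific chip:

```cpp
// Back-of-the-envelope for latency-oriented vs throughput-oriented design.
#include <cstdio>

int main() {
    double mem_latency = 400.0;  // assumed DRAM latency in cycles
    double cpu_issue   = 4.0;    // assumed ops/cycle for one big core
    double gpu_issue   = 128.0;  // assumed ops/cycle across a wide GPU

    std::printf("CPU needs ~%.0f independent ops in flight\n",
                cpu_issue * mem_latency);   // ~1600
    std::printf("GPU needs ~%.0f independent ops in flight\n",
                gpu_issue * mem_latency);   // ~51200
    // A CPU digs its ~1600 out of a few threads with deep OoO windows
    // and big caches; a GPU just parks tens of thousands of threads.
}
```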

There are reasons that processor design is switching back to accelerators with separate design and silicon manufacturing.

#1. Separating dies allows for process node tuning to each function (chiplets supreme).

#2. A jack-of-all-trades design is never going to fly in a world where we all want perf/watt to win simply so that the planet doesn't melt.

Maybe if Vaire Computing's so-called "reversible logic" works out such concepts might get revisited - but at the end of the day it's still going to be suboptimal.
 

DavidC1

Golden Member
AMD basically claimed the same thing with Llano by calling it an "APU". Renaming it doesn't change the fact that it's still nothing more than an iGPU on the same die as the CPU.

Oh sorry, let's not forget that they changed ONE thing since then: sharing the CPU's LLC.

The problem with what @igor_kavinski wants is that if you make a CPU core big enough to have reasonable ST performance, then you can't have enough of them to perform decently for graphics. And vice versa.
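Rough area math makes the point. Numbers below are made up for illustration, not real die measurements:

```cpp
// Fixed compute area buys either a few big latency cores or many small
// throughput units, never both at once.
#include <cstdio>

int main() {
    double budget_mm2   = 200.0;  // assumed compute area budget on the die
    double big_core_mm2 = 5.0;    // assumed big core + private cache
    double small_cu_mm2 = 1.5;    // assumed GPU-CU-sized unit

    std::printf("all big cores: %.0f cores\n", budget_mm2 / big_core_mm2); // 40
    std::printf("all small CUs: %.0f CUs\n",   budget_mm2 / small_cu_mm2); // 133
    // 40 big cores is nowhere near enough parallelism for graphics, and
    // 133 tiny units would have dismal single-thread performance.
}
```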
 

soresu

Diamond Member
DavidC1 said:
The problem with what @igor_kavinski wants is that if you make a CPU core big enough to have reasonable ST performance, then you can't have enough of them to perform decently for graphics. And vice versa.

This basically.

The performance optimisation for each task type is simply too different.

You could have a single ISA for both with different µArchs hypothetically, but that would likely mean both have baggage that impedes optimum operation for their specific use cases.
 

DavidC1

Golden Member
soresu said:
This basically.

The performance optimisation for each task type is simply too different.

You could have a single ISA for both with different µArchs hypothetically, but that would likely mean both have baggage that impedes optimum operation for their specific use cases.
I think the Larrabee future was possible if process scaling had continued like in the golden days. But it's at a fraction of that now. With Northwood, a new node allowed Intel to increase clocks by more than 50% at the same power consumption. There were probably some circuit optimizations in there as well, but thirty percent perf gains or 50% power reductions was just a normal generation.
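To put numbers on it: dynamic power goes roughly as P ≈ C·V²·f, and it was the V² term doing the heavy lifting back then. A quick sketch with illustrative scaling factors:

```cpp
// Why a 1.5x clock bump was free under Dennard scaling and isn't now.
#include <cstdio>

int main() {
    double C = 1.0, V = 1.0, f = 1.0;  // normalized baseline
    double P0 = C * V * V * f;

    // Dennard-era shrink: less capacitance and lower voltage paid for
    // a 1.5x clock bump (factors are illustrative):
    double P1 = (0.7 * C) * (0.85 * V) * (0.85 * V) * (1.5 * f);
    std::printf("old-school node: %.2fx power at 1.5x clock\n", P1 / P0); // ~0.76x

    // Today voltage barely moves, so the same clock bump costs power:
    double P2 = (0.8 * C) * (1.0 * V) * (1.0 * V) * (1.5 * f);
    std::printf("modern node:     %.2fx power at 1.5x clock\n", P2 / P0); // ~1.20x
}
```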

This means task-specific processors are here to stay, such as with hybrid architectures and accelerators. Forget about dark silicon issues; power has been the limiter for a decade now.
 

soresu

Diamond Member
DavidC1 said:
Forget about dark silicon issues; power has been the limiter for a decade now.
No, dark silicon is very much a power-related issue.

Either you lack the power to have the whole chip active without draining your battery dry, or lighting the whole thing up will turn your lap into an easy bake oven as the power overflows at the molecular level 🔥🤣