> 1T is what everyone will benefit from.
There's no way for AMD to increase MT perf enough by only improving ST perf and not adding more cores.
> There's no way for AMD to increase MT perf enough by only improving ST perf and not adding more cores.
I mean, that's how the 96c Turin works.
> There's no way for AMD to increase MT perf enough by only improving ST perf and not adding more cores.
16C is more than enough. If it isn't, then buy Epyc or Threadripper, because the workload is lovely. 50-100% MT isn't happening without increasing the core count. But what do they need that for? They already increased the core count well beyond 32.
If they go from 16C to 32C instead, then MT perf may increase 50-100% for MT-heavy workloads. To achieve something similar by only improving ST perf, the ST perf would have to increase 50-100%, which AMD has no chance of even coming close to.
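As a rough back-of-the-envelope sketch of that trade-off (a toy Amdahl's-law model in Python; the serial fractions are made-up assumptions, and real workloads scale less cleanly than this):

```python
# Toy Amdahl's-law comparison of 16C -> 32C vs. an ST-only uplift.
# The serial fractions are illustrative assumptions, not measurements.

def speedup(cores: int, st_gain: float = 1.0, serial: float = 0.0) -> float:
    """Throughput relative to a single baseline core."""
    return st_gain / (serial + (1.0 - serial) / cores)

for serial in (0.00, 0.02, 0.05):
    base = speedup(16, serial=serial)
    more_cores = speedup(32, serial=serial) / base  # 2.00x, 1.60x, 1.37x
    # In this model an ST-only uplift scales throughput linearly, so matching
    # the extra cores would need an ST gain of the same factor.
    print(f"serial={serial:.2f}: 32C gives {more_cores:.2f}x MT, "
          f"matching it with ST alone needs +{(more_cores - 1) * 100:.0f}%")
```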
> 16C is more than enough. If it isn't, then buy Epyc or Threadripper, because the workload is lovely. 50-100% MT isn't happening without increasing the core count. But what do they need that for? They already increased the core count well beyond 32.
Exactly. I keep getting crap about "Why do you need sound on your Genoas? It's a server." I don't use them as servers; I need more cores.
This isn't the 4-core or 8-core era. We have enough MT now for 20% improvements to be sufficient, and further increases are number wanking with marginal utility on the desktop. If you're running a bunch of VMs or an embarrassingly parallel workload, then buy the part designed for that.
> 16C is more than enough. If it isn't, then buy Epyc or Threadripper, because the workload is lovely.
Same mantra Intel was pushing before we got Zen. 4C is more than enough for all desktop PCs, they said. If you want more, you'll have to buy our expensive HEDT or server CPUs. Then Zen entered the market and bumped it first to 8C and then to 16C on desktop PCs.
> Same mantra Intel was pushing before we got Zen.
Intel's issue was puny 1T uplifts.
> Now it's AMD that has stagnated.
Their IPC CAGR is nice.
> It seems AMD will have a lead on Intel for a few good months on desktop then.
ARL-S is not a thing worth mentioning at all.
> ARL-S is not a thing worth mentioning at all.
I don't have high hopes for ARL-S because of MTL. It was a buggy release, and Intel seems to be the only CPU designer that goes backwards in features or IPC. If it wasn't for AMD, x86 would have been a "dead man walking" type scene.
> It was a buggy release, and Intel seems to be the only CPU designer that goes backwards in features or IPC.
The funny bit is MTL being delayed 3 quarters, from Q1'23 to Q4'23.
> If it wasn't for AMD, x86 would have been a "dead man walking" type scene.
Eh, still the incumbent, but yes, not competitive on raw technicalities.
> Before somebody says GPU: I think a specialized AI coprocessor will be more energy efficient than a graphics-oriented GPU.
GPUs haven't been graphics-oriented since early 2010 and early 2012 for NV/AMD respectively.
> GPUs haven't been graphics-oriented since early 2010 and early 2012 for NV/AMD respectively.
A Graphics Processing Unit is optimized for processing graphics workloads. That doesn't mean a GPU can't process AI workloads, but a GPU won't be as efficient as a purpose-built AI coprocessor. If AMD can introduce a dedicated AI core instead of an additional 16-core CCD, that would be a huge game changer.
> A Graphics Processing Unit is optimized for processing graphics workloads.
No, it's optimized for running math.
> If AMD can introduce a dedicated AI core instead of an additional 16-core CCD, that would be a huge game changer.
It doesn't do anything.
> It doesn't do anything.
Don't generalize your expectations to others. For me it will be huge and will have great benefit.
Why waste silicon?
> No, it's optimized for running math.
You know there is much more to CPU design than just the execution units, right? Cache, registers, numerical precision, data flow paths, instruction set, etc. Each of them can be tuned for a particular workload, and AMD/NVidia/Intel spend a huge amount of time optimizing them. So a design optimized for graphics workloads will not be optimal for AI workloads.
In a SIMD (SIMT really) fashion.
> For me it will be huge and will have great benefit.
You're not the market.
> You know there is much more to CPU design than just the execution units, right?
We're talking GPUs, which are explicitly dumb, brittle SIMD machines.
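To make the "dumb, brittle SIMD" point concrete, here is a rough NumPy sketch (illustrative only; the branch is made up, and real GPUs handle this with hardware masking/predication rather than NumPy) of why divergent per-lane control flow is costly on a lockstep machine: both sides of the branch get computed and a mask throws half the work away.

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 8)  # one "warp" worth of lanes

# Scalar-style control flow: each element takes only one branch.
scalar = np.array([np.sin(v) if v > 0 else v * v for v in x])

# SIMD/SIMT-style execution: every lane computes BOTH branches in lockstep,
# then a per-lane mask selects one result. Divergent lanes don't skip work;
# they just discard half of it.
taken = np.sin(x)      # evaluated for all lanes
not_taken = x * x      # also evaluated for all lanes
simd = np.where(x > 0, taken, not_taken)

assert np.allclose(scalar, simd)
```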
> Besides, why are you so against an AI coprocessor?
Waste of die area.
> Same mantra Intel was pushing before we got Zen. 4C is more than enough for all desktop PCs, they said. If you want more, you'll have to buy our expensive HEDT or server CPUs. Then Zen entered the market and bumped it first to 8C and then to 16C on desktop PCs.
> Now it's AMD that has stagnated, and says 16C is all you need on desktop PCs. The irony…
They didn't expect the 3950X to be as successful as it was; in 2019 with 16C, this latter evolution wasn't needed that early.
> They didn't expect the 3950X to be as successful as it was.
So apparently the market wanted 16C already in 2019 after all.
> and we're long overdue for another core count increase.
You're long overdue for bigger 1T bumps.
> Same mantra Intel was pushing before we got Zen. 4C is more than enough for all desktop PCs, they said. If you want more, you'll have to buy our expensive HEDT or server CPUs. Then Zen entered the market and bumped it first to 8C and then to 16C on desktop PCs.
> Now it's AMD that has stagnated, and says 16C is all you need on desktop PCs. The irony…
> So apparently the market wanted 16C already in 2019 after all.
> Now that was more than 4 years ago, and we're long overdue for another core count increase.
Intel until late 2017 had only quad cores outside of HEDT, while smartphones had already had 6-8 cores for years. Show me a smartphone with more cores than a 3950X. It still doesn't exist, even 4.5 years later. The A12X launched 18 months after the 7700K and had better MT performance and twice the core count. Show me an iPad that has higher MT performance than the 3950X. There is none, even 4.5 years later.
> Show me an iPad that has higher MT performance than the 3950X.
They are different types of devices, with different types of workloads. Nobody is using their mobile phone or iPad to execute the kind of high-throughput MT workloads that some people run on desktop PCs. Hence the greater need for more MT perf on desktop PCs.
> They are different types of devices, with different types of workloads. Nobody is using their mobile phone or iPad to execute the kind of high-throughput MT workloads that some people run on desktop PCs. Hence the greater need for more MT perf on desktop PCs.
That's not the point, and I think you know it. You say there is stagnation comparable to Intel's. In 2018 a tablet launched that was faster in MT than the best non-HEDT part Intel had offered only 18 months earlier. That is stagnation of MT performance. If AMD is stagnating in MT, then the entire industry is stagnating with them.
> Most of their current AM5 sales concentrate on the 7800X3D; the 7950X3D sells at a 0.15-0.2x ratio comparatively, and the 7950X is even below the latter.
That's still a substantial portion, especially if you add sales of both 16C CPUs (7950X and 7950X3D) together. You said even AMD was surprised at how well the 3950X (and, I assume, the later 16C versions) sell.
> GPUs and dedicated Neural or AI units can perform the same tasks. What makes the dedicated parts more optimal is being more power efficient, which is a thing in notebooks.
> But it seems you are talking about desktop, where using the GPU for the same task is not a problem. Some memory inefficiency is not a big deal, considering how much power (to perform AI tasks) you get from a desktop GPU.
> If you are thinking about the dinky NPU units Intel and AMD have added to their notebooks, they are a fraction of the capability of a desktop GPU. There is not really a strong reason to have an NPU + GPU in the same desktop system. It's just duplication and waste.
> Workstation GPUs are another alternative, but probably more expensive per unit of performance than gaming GPUs.
While I don't dispute your claim, not everybody has a good GPU. Most developers still use just a basic GPU or a built-in one. A dedicated AI coprocessor would have greater reach than a discrete GPU. I was talking about an AI coprocessor the size of a 16-core CCD.