> I was under the impression that SH5 was physically different (and bigger) than SP5.
No.
> So is Venice then SP5, SH5, or something else?
Not gonna tell you, bro.
> No.
Haha, I am going to guess it will be one of the following four: SP5, SP7, SH5, SH6.
Not gonna tell you, bro.
> SH5, SH6
H is heterogeneous, aka APUs only.
> H is heterogeneous, aka APUs only.
Oh, so that's what the H stands for. Good to know.
> MLID floated an idea that AMD may be adding some AI chiplets to Turin at some point.
That's just the tip of the iceberg.
> That's just the tip of the iceberg.
I'd imagine they're busy trying to maximize AI inference chips in any way they can. The Xilinx stuff is all inference, and between self-driving cars, the sudden explosion in generative AI inference (especially in the cloud), and just how much it costs to serve each query, there's a huge market for high-efficiency inference.
I suspect this means instructions for bf16 and matrix handling.
> There was a rumor, once the Xilinx purchase was approved by all the major countries, that Zen 6 might get some AI tech; it was hinted at by AMD a couple of years ago, and even last year by Lisa during the Zen 4 announcement. It's all speculation on our part and anything we discuss is worth nothing, as all of that would have been designed into place long before any of our farting on the topic got summed up in a digital town square speech like in this thread, or the Zen 6 thread if there is one. I have never seen one myself, but if it doesn't exist, one of you should make it.
Not a problem, that tech was licensed by Xilinx to AMD long before the deal.
> I suspect this means instructions for bf16 and matrix handling.
The last time Z5 AI inferencing and optimisations came up, this was what was said. I don't expect any dedicated tile for AI work on consumer Ryzen until Zen 6 or 7. Bloody bizarre to even type that out, as I'd have laughed at the idea of an AMD comeback even on the eve of Zen's launch.
AMD has typically lagged far behind ARM for CPU µArch design in this arena.
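For a concrete idea of what "instructions for bf16 and matrix handling" means in practice, here is a minimal sketch using the AVX-512 BF16 intrinsics that Zen 4 already exposes; whether Zen 5 widens or extends this further is pure speculation, and the function below is just an illustration of the instruction class, not anything AMD has announced.

```c
/*
 * Minimal sketch of a bf16 dot product using the existing AVX-512 BF16
 * intrinsics (VCVTNE2PS2BF16 + VDPBF16PS). Illustration only; assumes a
 * CPU with AVX512_BF16 and n being a multiple of 32 (no tail handling).
 * Build with e.g.:  gcc -O2 -mavx512f -mavx512bf16 -c dot_bf16.c
 */
#include <immintrin.h>
#include <stddef.h>

float dot_bf16(const float *a, const float *b, size_t n)
{
    __m512 acc = _mm512_setzero_ps();          /* 16 fp32 accumulators */
    for (size_t i = 0; i < n; i += 32) {
        /* Convert 32 fp32 inputs from each array into 32 bf16 lanes. */
        __m512bh va = _mm512_cvtne2ps_pbh(_mm512_loadu_ps(a + i + 16),
                                          _mm512_loadu_ps(a + i));
        __m512bh vb = _mm512_cvtne2ps_pbh(_mm512_loadu_ps(b + i + 16),
                                          _mm512_loadu_ps(b + i));
        /* Multiply bf16 pairs and accumulate the products in fp32. */
        acc = _mm512_dpbf16_ps(acc, va, vb);
    }
    return _mm512_reduce_add_ps(acc);          /* horizontal sum */
}
```

That VDPBF16PS-style pattern is the appeal of bf16 for inference: half the memory traffic of fp32 with fp32 accumulation, which is usually accurate enough.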
> Not a problem, that tech was licensed by Xilinx to AMD long before the deal.
Of course, AMD was very open about them working together long before the deal was announced. Xilinx holds onto a lot more IP than AMD has ever used. Interestingly enough, the disclosure reports from mid-acquisition show Xilinx was approached by another party seeking to buy them out. It was never said who it was, and we can only wildly guess.
> I wonder how much that will improve the performance for typical machine learning tasks. Will it be a viable option compared to using GPUs?
A GPU will always be more efficient for the power used than a CPU, unless x86 can do the work of a GPU under an Apple M processor's 100% load power draw.
> I wonder how much that will improve the performance for typical machine learning tasks. Will it be a viable option compared to using GPUs?
As with all things, I suppose it depends on the ML frameworks used and their optimisations.
> So I guess the question then is why add this to the CPU at all, if everyone doing serious machine learning tasks will use a GPU anyway?
The assumption you have is that everyone will have a top-of-the-line consumer GPU or Quadro. That isn't the case. The "engines" on the processor can be flexed to speed up or improve general software. It's a chicken-and-egg problem if you ask me.
> A GPU will always be more efficient for the power used than a CPU.
Not always.
> Not always.
You're mixing up software encoding with hardware encoding here. If you use the iGPU on Intel or AMD to encode video, it's hardware encoding, and the quality will not be there. We're at the point where brute-forcing software encoding is very fast, at least compared to a mere 5 years ago.
GPUs are great for certain tasks, and terrible at others.
A many-core CPU will always be better than GPU general compute at video encoding, especially as complexity and interframe dependencies increase with each new codec generation.
> You're mixing up software encoding with hardware encoding here.
No I'm not.
> Well, I'm thinking there's not that many desktop PC users doing machine learning training tasks as a "hobby project". And those that do it professionally will use GPUs anyway.
Quadro isn't cheap.
But regardless, it will be interesting to see how much an AMD 8950X Zen 5 CPU with these AI extensions improves ML task performance compared to the 7950X. Will it be, e.g., 2x or 5x faster for such operations?
And how will the 8950X perform against, e.g., an RTX 4080/4090 GPU? Will the GPU be 2x, 5x, or 10x faster?
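Nobody can answer that without silicon, but the shape of the measurement is simple enough. Below is a hypothetical micro-benchmark sketch (my own toy code, not an AMD or reviewer number) that times a naive scalar fp32 dot product against the AVX-512 BF16 dot_bf16() sketched earlier in the thread. Most of the gap it shows is SIMD width rather than bf16 itself, and a GPU comparison would need a separate CUDA/ROCm kernel on top of this.

```c
/*
 * Hypothetical micro-benchmark sketch for the "how much faster" question.
 * Sizes and repeat counts are arbitrary; a real answer depends on the
 * kernel and on whether you are compute- or memory-bound.
 * Build:  gcc -O2 -mavx512f -mavx512bf16 bench.c dot_bf16.c -o bench
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    4096       /* small enough to stay in cache: compute-bound */
#define REPS 100000

float dot_bf16(const float *a, const float *b, size_t n);  /* earlier sketch */

static float dot_f32(const float *a, const float *b, size_t n)
{
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    static float a[N], b[N];
    for (size_t i = 0; i < N; i++) { a[i] = 1.0f; b[i] = 0.5f; }

    volatile float sink = 0.0f;   /* stop the compiler deleting the loops */

    double t0 = now_sec();
    for (int r = 0; r < REPS; r++) sink += dot_f32(a, b, N);
    double t1 = now_sec();
    for (int r = 0; r < REPS; r++) sink += dot_bf16(a, b, N);
    double t2 = now_sec();

    printf("scalar fp32 : %.3f s\n", t1 - t0);
    printf("avx512 bf16 : %.3f s\n", t2 - t1);
    printf("(checksum %f)\n", sink);
    return 0;
}
```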
> No I'm not.
If you want the most power-efficient video encoder, you buy the AMD/Xilinx hardware. 1500 bones gets you something more efficient than the two. I'll send you my Xmas gift list and you lot can draw matchsticks to see who ends up buying that for me.
I specifically said general compute - as in using the shaders instead of an ASIC.
An ASIC can be more efficient for power consumption, but its utility is minimal as a fixed-function hardware block.
ASIC encoding quality is also generally not on par with software encoding in terms of quality tuning for at least the first 5+ years of a codec's availability, and often beyond.
> We're at the point where brute-forcing software encoding is very fast, at least compared to a mere 5 years ago.
Sure, if you can afford a TR 3990X or 5995WX system.