
Question: What will happen when AVX-1024 is launched by Intel?

Tugrul_8192bit

Junior Member
I guess it will have even lower boost frequencies but much better performance per core, especially with 1024-bit registers (64 of them = 64 kbit = 8 kB of the fastest memory, maybe with 500-1000 GB/s of bandwidth per core).
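The register-file figure in that guess is easy to sanity-check. A minimal sketch, assuming the hypothetical AVX-1024 keeps the AVX-512 count of 64 architectural registers but doubles their width to 1024 bits (both numbers are the poster's speculation, not anything Intel has announced):

```python
# Back-of-the-envelope check of the speculated register-file size.
# Assumptions: 64 architectural registers, each 1024 bits wide.
num_regs = 64
bits_per_reg = 1024

total_bits = num_regs * bits_per_reg   # 65536 bits = 64 kbit
total_bytes = total_bits // 8          # 8192 bytes = 8 kB

print(total_bits, total_bytes)         # 65536 8192
```

So the "64 kbit / 8 kB" in the post is internally consistent; the bandwidth range is a separate, much more speculative guess.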
 
You mean Intel AMX (Advanced Matrix Extensions)? Regular users most likely aren't going to get them at all, especially given how Intel recently put other existing extensions like AVX-512, TSX and SGX in limbo.
If AMX actually gets implemented, you will most likely see the same thing as we already saw with AVX-512: even lower clocks than AVX2, but higher performance for optimized code. Or were you expecting anything revolutionary?
 
If Intel has any common sense, they'll form a SIMD consortium with ARM and/or Fujitsu to try and open up SVE2 or a future version (SVE2.x, SVE3, whatever).
 
AMX is targeted very specifically at AI/ML type operations on the CPU.

As they (until recently) lacked any significant GPU power of their own for those kinds of operations, it was an obvious next step to create an AI/ML-focused extension, just as ARM did to spread AI/ML ops across its entire IP family.

As you say, though, it will likely see little use, precisely because GPUs and ML-focused accelerator cards can be drastically more efficient for this use case.
 
It won't have any impact unless memory subsystems are improved substantially. 128 B per cycle means 2 L1D cache lines, i.e. 2 cycles of backend-to-L1D store bandwidth.
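The arithmetic behind that objection can be spelled out. A minimal sketch, assuming a 64 B L1D cache line and a backend that can commit 64 B of stores to L1D per cycle (illustrative round numbers in line with the post, not figures for any specific microarchitecture):

```python
# Rough cycle cost of storing a single hypothetical 1024-bit (128 B) vector,
# assuming a 64 B L1D cache line and 64 B/cycle of store bandwidth to L1D.
vector_bytes = 128            # one 1024-bit register
line_bytes = 64               # assumed L1D cache line size
store_bw_bytes = 64           # assumed bytes committed to L1D per cycle

lines_touched = vector_bytes // line_bytes       # spans 2 cache lines
cycles_per_store = vector_bytes // store_bw_bytes  # 2 cycles per store

print(lines_touched, cycles_per_store)           # 2 2
```

Under those assumptions, every 1024-bit store straddles two cache lines and ties up the store path for two cycles, which is the point of the post: without a wider L1D interface, the extra register width doesn't translate into throughput.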
 
I've been around, but I just don't find much to say at the moment.

I'm also actually all in on AMD too, as I have an AMD laptop that I am very happy with and have an AM4 system that I will be keeping for some time.
I'm in the same boat. I plan on buying a 5800X3D when the price drops some more in the coming months. For a laptop, I plan on buying a 6800U-based model early next year.
 
Same here.

And I hold out hope that AMD might even release a 5900X3D :screamcat:

Knowing my luck, they will release it *after* I purchase the 8-core version 😀.

I hope that MTL is a great product so they get into a nice price war in 2023 - we will win big time!
 