adroc_thurston
Diamond Member
> If they ever want a Zen classic + dense part like the cursed Medusa Point SKU.

Dawg, PHX2 onwards is a mix of Classic and Dense.
> No? What if you throw SIMD-heavy code at the problem? Large 512-bit vectors would be stupidly slow. They are good for client code, like I have said, not for SIMD code.

I didn't say otherwise. Now what widely used client code benefits from 512-bit vectors that isn't already accelerated by some external block? There are obviously niche workloads that benefit from such wide vectors and aren't HW-accelerated, but they are niche. I'm pretty sure AMD added AVX-512 only for some HPC workloads, and having it in client was just a side effect.
> Dawg, PHX2 onwards is a mix of Classic and Dense.

Medusa Point's 4 different PnP cores will surely be easy to schedule.
> I didn't say otherwise. Now what widely used client code benefits from 512-bit vectors that isn't already accelerated by some external block? There are obviously niche workloads that benefit from such wide vectors and aren't HW-accelerated, but they are niche. I'm pretty sure AMD added AVX-512 only for some HPC workloads, and having it in client was just a side effect.

VAES? Crypto that everyone uses more or less. JSON parsing? The issue with AVX-512 is that the availability rate is low, so no one targets it on client.
> Crypto that everyone uses more or less.

For client use cases you currently don't need the massive throughput advantage that AVX-512 provides.
> JSON parsing?

This isn't a client workload lol.
> This isn't a client workload lol.

It is.
> This isn't a client workload lol.

Don't check network requests in your browser right now for this very website.
> Don't check network requests in your browser right now for this very website.

Do you need AVX-512 for that?
> Do you need AVX-512 for that?

It'll be faster for larger objects, however.
> Edit: let me rephrase, do people who buy client CPUs ever need the performance that AVX-512 provides for JSON parsing, e.g. simdjson?

I'll add that it's one of those elements - like TLS acceleration, gzip vectorization, etc. - that are very helpful for servers, so it'll be pursued there first. Yet it can add small wins in client browsers (in a decade or so, when AVX-512 is actually ubiquitous).
> Do you need AVX-512 for that?

Clearly you have never used GBs of JSON 🤣🤣
Edit: let me rephrase, do people who buy client CPUs ever need the performance that AVX-512 provides for JSON parsing, e.g. simdjson?
> Clearly you have never used GBs of JSON 🤣🤣

Perhaps because no client workload needs to parse GBs of JSON files? And even then, parsing is only part of what you do with your JSON. Again, I'm not trying to dismiss how useful wide vectors can be, but I think they're relevant for cases that don't matter that much for most people, and I'm not in that case. I blame Intel for having slowed down AVX2 and even more AVX-512 adoption with their braindead market segmentation, as much as I blame Arm and Apple for not supporting SVE more, but at the same time I understand their reasoning.
> Perhaps because no client workload needs to parse GBs of JSON files?

Why would anyone bother using AVX-512 when many CPUs don't support it at all?
> I blame Intel for having slowed down AVX2 and even more AVX-512 adoption with their braindead market segmentation.

This, alongside Intel's initial suboptimal implementations, is the reason why AVX-512 has been so slow to catch on.
> Why would anyone bother using AVX-512 when many CPUs don't support it at all?

And many CPUs now support it very nicely, so software runs more efficiently, which can save battery life too, so why not?
> Perhaps because no client workload needs to parse GBs of JSON files? And even then, parsing is only part of what you do with your JSON. Again, I'm not trying to dismiss how useful wide vectors can be, but I think they're relevant for cases that don't matter that much for most people, and I'm not in that case. I blame Intel for having slowed down AVX2 and even more AVX-512 adoption with their braindead market segmentation, as much as I blame Arm and Apple for not supporting SVE more, but at the same time I understand their reasoning.

I do as well. There shouldn't be any non-AVX2 processors still in manufacturing; AVX2 should be 100% supported on all newly launched CPUs.
> Pardon my ignorance, but in the future, won't software developers be targeting something like AVX10 or whatever anyway?

AVX10 is just AVX-512, pretty much.
> AVX10 is just AVX-512, pretty much.

That makes the dev's life easier.
> AVX10 is just AVX-512, pretty much.

Also APX as well, but it would take at least 5+ years.
> Also APX as well, but it would take at least 5+ years.

APX is a meme.
> I do as well. There shouldn't be any non-AVX2 processors still in manufacturing; AVX2 should be 100% supported on all newly launched CPUs.

You can check yourself: it took Intel almost 10 years from the first AVX2 CPU (Haswell, 2013) until all of their CPUs had support for it (Gracemont, 2021). As I wrote, this is what slowed down adoption, and for AVX-512 it's even worse.
> You can check yourself: it took Intel almost 10 years from the first AVX2 CPU (Haswell, 2013) until all of their CPUs had support for it (Gracemont, 2021). As I wrote, this is what slowed down adoption, and for AVX-512 it's even worse.

Well, this is primarily down to the Atom guys being really, really annoying about it.
> APX is a meme.

It's not; if it was a meme, AMD would have pushed back.
> You can check yourself: it took Intel almost 10 years from the first AVX2 CPU (Haswell, 2013) until all of their CPUs had support for it (Gracemont, 2021). As I wrote, this is what slowed down adoption, and for AVX-512 it's even worse.

AVX-512 was supposed to be mainstreamed, but alas, 10nm happened.
> It's not; if it was a meme, AMD would have pushed back.

They did!