> B650 PG Lightning, based on my experience with ASRock Z790. Generally problem-free mobos.

The board you proposed costs half as much as the ASUS Strix I proposed, and the ALC887 audio is old school. I was thinking in the ballpark of $300-350.
> I know it's probably similar for a tuned 9700X as well, but my brain still struggles to comprehend that it can beat a 5950X in CB R23 multi.

Two generations of IPC gains add up, and going from a stock 5950X, don't forget you get north of +25% just from the increased clock.
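As a back-of-envelope check on that reasoning: the +25% clock figure comes from the post above, but the per-generation IPC gain below is my assumption, not a measured number.

```python
# Rough per-core scaling estimate for Zen 3 -> Zen 5.
# clock_gain is the "+25% just from the increased clock" claim above;
# ipc_gain_per_gen is an ASSUMED per-generation IPC uplift.
clock_gain = 0.25
ipc_gain_per_gen = 0.12

# Two generations of IPC gains compound with the clock uplift.
est_per_core_speedup = (1 + clock_gain) * (1 + ipc_gain_per_gen) ** 2
print(f"~{est_per_core_speedup:.2f}x per core")  # ~1.57x
```

At roughly 1.57x per core, 8 Zen 5 cores are worth about 12.5 Zen 3 core-equivalents; tuning, memory, and the 5950X's lower all-core clocks would have to cover the rest of the 16-core gap.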
> Can't do 2000 FCLK anymore, WHEA errors and worse performance on my 5800X3D. Thank god the 9800X3D is coming xD

I'm a fan of the MSI Tomahawks, B650 and X670E.
Time to sell and buy day 1. It will be out of stock in Poland, 100%.
B650-A Strix, or does anyone have any recommendations?
> The X3D is a miracle of modern silicon engineering, so they could be a bit limited by the laws of physics. Big deal. It's still going to be better than what came before it. Seriously, give the melodrama a rest.

I am also wondering whether AMD will make X3D part of its main product line. Surely more than just games would benefit from more L3?
> If I am not mistaken, the first HT-related vulnerability was reported in 2005, and it's 2024 now that they removed it; I wouldn't call that fast. And if they really did it for security, they would remove it from the newest Xeons too. The simplest explanation, that they did it to make scheduling easier with two types of cores and to save on validation time, is the most fitting imo.
> When it comes to AMD, they have their own implementation, which seems to be doing better on the security front. Of course it is also "younger", so we might see new vulnerabilities popping up. But as with speculative execution in general, the idea itself is a clever way to boost CPU utilization, so I guess companies will try to salvage it as much as possible.

Agree. Intel didn't remove SMT because of a vulnerability. You have it right.
> Yeah, I don't know, man. The result from this morning looks very bad: 328 W for 42,286 in R23?
> [Attachment: HWBOT screenshot]
> Source: HWBOT https://t.co/Rkxoev9JKf https://t.co/4KapR8CWtp — HXL (@9550pro) on X

I would like to believe that we still have the most competitive environment.
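For scale, the figures quoted in that result work out to the following efficiency (simple arithmetic on the numbers above):

```python
# Points-per-watt for the quoted Cinebench R23 result:
# 42,286 points at 328 W package power.
score = 42286
power_w = 328
pts_per_watt = score / power_w
print(f"{pts_per_watt:.1f} pts/W")  # ~128.9 pts/W
```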
> Some are saying that the 62K score was achieved with chilled water. Either way, it's probably higher than what an average person with an AIO can achieve. Still a really great score, but given that, and it being just one game, I'm trying to temper expectations a little.

I suspect it wasn't that obscure.
> I wonder how the economics would work out if AMD were to completely forego L3 on the main CCD die (saving die area), make every processor a V-Cache part, and stack the CCDs on top of the IO die.
> The IO die would then contain a section of dedicated V-Cache, private to the CCD sitting above it (maintaining the low latency), plus a link to the rest of the IO die for other communication. The IO die could continue to be N6-based.
> The alternative, a Strix Halo-like fanout link between CCD and IO die, is cheap but not free.
> Advantages (+) and disadvantages (-) of the proposed CCD-on-IO-die stacking:
> - cost of 3D stacking
> + die saving: SRAM, which does not scale, moves from the expensive CCD node to the cheaper N6 node
> + every CPU starts with V-Cache and its performance advantage
> + effectively unlimited bandwidth and low latency to the IO die
> Strix Halo / Navi 31/32 fanout link:
> + cheaper than 3D stacking
> - the fanout link still has its own cost
> - SRAM stays on the expensive node, where it does not scale
> - adding V-Cache still carries the same additional cost

I agree. I can see AMD making lots of different variations using this process.
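A toy cost sketch of the stacking option: every number here is a placeholder I made up to illustrate the mechanism (real per-mm² wafer costs and L3 macro sizes are not public), not actual AMD figures.

```python
# Hypothetical die-cost saving from moving L3 SRAM off the CCD node.
# SRAM barely shrinks on leading-edge nodes, so its area is similar on
# both nodes; only the cost per mm^2 changes. ALL NUMBERS ARE ASSUMED.
l3_area_mm2 = 36.0        # assumed area of a 32 MB L3 slice
cost_mm2_leading = 0.17   # assumed $/mm^2 on the expensive CCD node
cost_mm2_n6 = 0.08        # assumed $/mm^2 on N6

saving_per_ccd = l3_area_mm2 * (cost_mm2_leading - cost_mm2_n6)
print(f"${saving_per_ccd:.2f} saved per CCD, before 3D-stacking cost")
```

Whether that per-CCD saving beats the added bonding cost is exactly the trade-off the list above is weighing.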
> The problem with hybrid bonding is not cost but capacity. The process used for it is slow, meaning the throughput of a line doing it is not very high, meaning you have to build a lot of capacity, which takes time.
> I cannot see them doing a product stack that strictly depends on hybrid bonding for all SKUs, either next gen or the one after that. Not because of cost, but because you cannot magic up capacity for it.
> There are ways it would help latency, because the most distant piece of cache would be closer.

Agree; however, I suspect they will find a way to make it much faster as they use it more.
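The capacity argument is easy to see with a toy model; the throughput and demand numbers below are invented purely to show the shape of the problem.

```python
import math

# Toy model: a slow per-wafer bonding step forces many parallel lines.
# Both inputs are HYPOTHETICAL, not real fab data.
wafers_needed_per_month = 20_000   # assumed demand
hours_per_wafer = 2.0              # assumed hybrid-bonding time per wafer

hours_per_month = 24 * 30
wafers_per_line = hours_per_month / hours_per_wafer   # 360 wafers/line
lines_needed = math.ceil(wafers_needed_per_month / wafers_per_line)
print(lines_needed, "bonding lines")  # 56 bonding lines
```

Note that halving the per-wafer time halves the number of lines needed, which is why speeding the process up matters as much as building more of it.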
> I am also wondering whether AMD will make X3D part of its main product line. Surely more than just games would benefit from more L3?

You have not even mentioned the (code-name)-X server CPUs. They use the same technology and have become very saleable CPUs.
> AMD has been very quiet about these for Zen 5. They may be a mid-generation upgrade for Zen 5 in the server space.

It's early for Zen 5. Supermicro has had my Turin motherboard backordered since Zen 5 was released; coming this week, they say (per a Supermicro salesman, directly to me).
During the CPU-heavy parts, how many threads get loaded? Does it keep to one CCD or go full 32 threads?
> This thread is so sad on so many levels. I wish you all the best in health and wealth.

With an attitude like this, you are not going to earn the "poster of the month" badge.
> Kepler just posted a patent that is one level above what I was thinking: there is a separate "pair node" which sits on top of the IOD / AID. This "pair node", I am guessing, would be SRAM for L3.
> My thinking was that there could just be a section of the bottom die dedicated to L3.

Seems pretty generalized to me... this diagram appears to combine a lot of the various packaging techniques in one, such as TSVs + hybrid bonding, silicon bridges, and silicon interposers.
> Kepler just posted a patent that is one level above what I was thinking: there is a separate "pair node" which sits on top of the IOD / AID. This "pair node", I am guessing, would be SRAM for L3.
> My thinking was that there could just be a section of the bottom die dedicated to L3.

Isn't that essentially what MI300C is doing?
> Isn't it essentially what MI300C is doing?

Notice the "DDR" part of the AID, implying this is a server or desktop CPU.
> Being able to use silicon bridges rather than CoWoS...

That's also CoWoS, just the -L variant.
> ...AMD bypass the CoWoS capacity bottleneck in the supply chain for the datacenter GPUs.

You're hammering into an even harder bottleneck then (hybrid bonding is hella slow).
> During the CPU-heavy parts, how many threads get loaded? Does it keep to one CCD or go full 32 threads?

If it goes to the other CCD, the score tanks, so it's better to keep it within the X3D CCD, either manually or via Process Lasso / an affinity mask. (Actually, this time the devs of the game knew about this and specifically try to keep the game within CCD0; a few months ago I tried to keep it within CCD1 and it wasn't easy, lol.) Anyway, here are some results for CCD1 only (the non-cache chiplet) with Game Bar / pinning off:
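For anyone who wants to script the pinning instead of using Process Lasso or Game Bar, here is a minimal Linux-only sketch. The assumption that logical CPUs 0-7 belong to the X3D CCD is mine and should be verified with `lscpu -e` or the sysfs topology files first.

```python
import os

# ASSUMPTION: logical CPUs 0-7 are the X3D CCD (CCD0). Verify with
# `lscpu -e` before relying on this; the min() guard just keeps the
# sketch runnable on machines with fewer cores.
CCD0 = set(range(min(8, os.cpu_count())))

def pin_to_ccd0(pid: int = 0) -> set:
    """Restrict a process (0 = the calling process) to CCD0's cores."""
    os.sched_setaffinity(pid, CCD0)      # Linux-only syscall wrapper
    return os.sched_getaffinity(pid)

print(sorted(pin_to_ccd0()))
```

To pin a running game, pass its PID instead of 0 (requires the same user or root).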
> Overall, it seems a lot of the Intellers have gone quiet and are contemplating switching, because they know Intel won't be able to pull a leprechaun out of its hat in less than two years to counter this level of gaming performance increase.

Just a flesh wound; they'll find a game benchmark in which ARL reigns supreme and build their impregnable fortress off that 🏹🏹
> Notice the "DDR" part of the AID, implying this is a server or desktop CPU.

I have to say, it's a super interesting concept. I can't really imagine it being used in server or desktop CPUs, though; would they really route hundreds of watts of power (total) through DDR chips?