> Wait, doesn't Windows rotate single thread workloads among cores?
> On Linux I don't think I ever encountered an issue like this, every 10 seconds or so a thread is passed to a different core, at least according to my system monitor.

Windows rotating threads among cores is a remnant of the single-core era, when they found doing that improved performance by 1-2%.
> Wait, doesn't Windows rotate single thread workloads among cores?
> On Linux I don't think I ever encountered an issue like this, every 10 seconds or so a thread is passed to a different core, at least according to my system monitor.

The core jumping is done by the CPU, at least on Ryzen. I think it started with Zen 2. It's an ingenious way to keep max boost clocks. It'll usually use its best 2 or 3 cores and constantly jump between them on something like a CB 1T run. Windows may rotate which cores it starts apps on, but I think when a process is in the middle of running, it's the CPU algorithm that does it, based on current, temperature, and voltage.
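If anyone wants to watch this behaviour directly rather than through a system monitor, here is a minimal sketch, assuming a Linux /proc filesystem (so it shows where the scheduler currently has the thread, not the Ryzen-internal boost shuffling on Windows); the sampling interval and duration are arbitrary illustrative values.

```python
# Minimal sketch (Linux only, pure stdlib): print which CPU core this process
# last ran on, while keeping one thread busy so the scheduler has something to
# migrate. Field 39 of /proc/self/stat is the "processor" column; we parse from
# the last ')' so a comm field containing spaces can't shift the indices.
import time

def current_cpu() -> int:
    with open("/proc/self/stat") as f:
        stat = f.read()
    fields = stat[stat.rfind(")") + 2:].split()
    return int(fields[36])  # fields[0] is field 3 ("state"), so field 39 is index 36

def watch(duration_s: float = 10.0, interval_s: float = 0.5) -> None:
    deadline = time.monotonic() + duration_s
    next_report = 0.0
    while time.monotonic() < deadline:      # busy-spin to stay scheduled
        now = time.monotonic()
        if now >= next_report:
            print(f"running on CPU {current_cpu()}")
            next_report = now + interval_s

if __name__ == "__main__":
    watch()
```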
Perhaps the Asus laptop is doing something like this but not all cores can reach 5.1GHz?
Edit: I've been on vacation out of the country since Sunday and got a serious case of food poisoning after two days, but I must say that the last 15 pages of this thread did a great job of keeping me distracted and entertained.
One thing that keeps being repeated, though, is that the high cross-CCX latency between the Zen 5 and Zen 5c cores is hurting performance. May I ask if there is any proof of that in the actual (non-synthetic) benchmarks run so far? From the Phoronix review nothing like that has emerged, afaik.
> There is a point when a reasonable person would stop, before trying to convince the very last person with the opposing view in endless tit-for-tat.
> And that point does not have to mean deactivating one's account, just simply letting go.

I'm that way almost exclusively; I decide beforehand how much energy I am willing to expend.
Oh man, that sucks if true. Why disable cores on a 9950X to downgrade it to 9900X? Release it as 9950LE for Lottery Edition and price it a bit lower than 9950X, like maybe $50 less. Then if it doesn't completely meet expectations, at least it can still be more useful than a lame 9900X!
> Oh man, that sucks if true. Why disable cores on a 9950X to downgrade it to 9900X? Release it as 9950LE for Lottery Edition and price it a bit lower than 9950X, like maybe $50 less. Then if it doesn't completely meet expectations, at least it can still be more useful than a lame 9900X!

If it was not hitting clock targets reliably, they could have just saved it for a 9950 non-X, but if the issue was that the core itself simply wasn't stable, then you can only fuse those cores off.
Not surprising. I hope someone at AMD sees this and redoes their abysmal naming again:
Using the term ‘artificial intelligence’ in product descriptions reduces purchase intentions
Companies may unintentionally hurt their sales by including the words “artificial intelligence” when describing their offerings that use the technology, according to a study led by WSU researchers. (news.wsu.edu)
Apparently adding "AI" to your product description makes it *less* desirable to people.
They should just remove that part from the naming scheme, along with "HX". Both add nothing of value anyway.
> Perhaps the AI in the name wasn't for consumers? Maybe OEMs like the sound of 'AI 9'. Or investors.

It was for OEMs and investors. Take a look at ASUS’s Strix Point landing page; AI is referenced a lot.
> I caught these posts on Xitter regarding Strix Halo. You can see where the CPU and GPU are located based on hotspots.

You can also see the 128GB RAM config. This is basically an x86 M3 Max, but with a 256-bit bus. Those 128GB SKUs will be sought after by LLM users. We will have two vendors providing 128GB of unified memory by the end of 2024.
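For a sense of why the 128GB unified-memory configs matter to LLM users, here is a rough back-of-the-envelope sketch; the example model sizes and the ~10% KV-cache/runtime overhead factor are illustrative assumptions, not figures from the posts above.

```python
# Rough sketch: approximate memory needed to hold an LLM's weights at a given
# quantization, plus an assumed ~10% overhead for KV cache and runtime buffers.
# The model sizes and the overhead factor are illustrative assumptions.

def model_footprint_gb(params_billion: float, bits_per_weight: float,
                       overhead: float = 1.10) -> float:
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for params in (8, 70, 123):
    for bits in (16, 8, 4):
        gb = model_footprint_gb(params, bits)
        verdict = "fits" if gb < 128 else "does not fit"
        print(f"{params:>4}B @ {bits:>2}-bit ≈ {gb:6.1f} GB -> {verdict} in 128 GB")
```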
> Woah, 450mm² of N3 and N4 silicon... this thing is yooge

Yeah, roughly 66mm² per CCD and a little over 300mm² for the N3E IOD, which contains the GPU.
> Woah, 450mm² of N3 and N4 silicon... this thing is yooge

What are N3 and N4 used for? (Should be N4 for Zen 5, I guess.)
> Would be interesting to know if they are making any fabric changes or just route SDF/SCF over the fan out links.

I hope they won't cheap out, but knowing AMD, it'd have two single IF links with the bandwidth of a dual-chiplet desktop CPU 💲 💲 💲. I wonder what prevents them from plopping the IMC into the core ringbus within monolithic APUs or these '(almost) as good as monolithic' substrates.
> I hope they won't cheap out, but knowing AMD, it'd have two single IF links with the bandwidth of a dual-chiplet desktop CPU 💲 💲 💲. I wonder what prevents them from plopping the IMC into the core ringbus within monolithic APUs or these '(almost) as good as monolithic' substrates.

Z4 already has dual SDP per CCD (EPYC GMI-wide), so at the very least, with a dense fanout interconnect they can enable both SDPs and still consume less than half the energy compared to a DT CCD.
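To put rough numbers on "the bandwidth of a dual-chiplet desktop CPU", here is a back-of-the-envelope sketch; the 32B-read/16B-write per FCLK link widths and the 2000MHz FCLK are the figures commonly cited for desktop Zen parts, but treat them, and the simple "both SDPs = double width" model, as assumptions rather than anything confirmed above.

```python
# Back-of-the-envelope sketch of per-CCD Infinity Fabric link bandwidth.
# ASSUMPTIONS: 32 bytes/FCLK read and 16 bytes/FCLK write per narrow desktop
# GMI link, an FCLK of 2000 MHz, and a "wide" (both SDPs enabled) link modeled
# as simply twice the narrow widths.

FCLK_MHZ = 2000
READ_BYTES_PER_CLK = 32
WRITE_BYTES_PER_CLK = 16

def link_bandwidth_gb_s(read_b: int, write_b: int, fclk_mhz: float) -> tuple[float, float]:
    clocks_per_s = fclk_mhz * 1e6
    return read_b * clocks_per_s / 1e9, write_b * clocks_per_s / 1e9

narrow = link_bandwidth_gb_s(READ_BYTES_PER_CLK, WRITE_BYTES_PER_CLK, FCLK_MHZ)
wide = link_bandwidth_gb_s(2 * READ_BYTES_PER_CLK, 2 * WRITE_BYTES_PER_CLK, FCLK_MHZ)
print(f"narrow link:     ~{narrow[0]:.0f} GB/s read, ~{narrow[1]:.0f} GB/s write")
print(f"wide (dual SDP): ~{wide[0]:.0f} GB/s read, ~{wide[1]:.0f} GB/s write")
```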
> Perhaps the AI in the name wasn't for consumers? Maybe OEMs like the sound of 'AI 9'. Or investors.

Probably true, but even investors are seeing that most AI stuff fails to make any real money, and they are starting to send mixed signals:
Here’s a review for the Asus ProArt 13 from NBC:
(They also have a video up on YouTube).
Battery life isn't what I thought it'd be, tbh. The best thing I see is a solid MT uplift, but that only goes so far. The max load was surprising too (185 W); average load was 74 W.
> What are N3 and N4 used for? (Should be N4 for Zen 5, I guess.)

An N3 SoC die is the current rumor.
Would be interesting to know if they are making any fabric changes or just route SDF/SCF over the fan out links.
The "Text Processing" subtest scores ~3070 in ST and ~3800 in MT, which basically means it uses no more than 1.25 cores. The number of available threads is meaningless in this. Even a 96-core Threadripper only gets 3076 points in MT. On the other end of the spectrum is "Ray Tracer" with ~2600 in ST and then in MT ~15500 for Lunar Lake against ~31900 for Strix Point. The "Text processing" subtest basically shouldn't be considered for MT in the first place, or the benchmark should spawn multiple instances of it for MT.That's just 8 cores vs. 24 threads!