AMD Ryzen 5000 Builders Thread

Page 46 (AnandTech Forums)

B-Riz

Golden Member
Feb 15, 2011
1,482
612
136

kognak

Junior Member
May 2, 2021
21
44
61
When tuning my 5950X I've found it runs 4.4 GHz static at 1.1375 V core, DDR4-3600 C15, ~1.1 V SOC, on the 100% high-performance plan with C-states enabled:
at idle it uses 30 W of package power, while showing the CPU spends >98% of its time in the C6 power state. WTH, really.
It's a downside of the chiplet design. An external SoC die and high-speed/low-latency links on the substrate aren't the most power-efficient way to build a CPU. Threadrippers have more chiplets and a bigger SoC die, and they use even more power when idling. A monolithic design is much superior in this regard, which is why all mobile chips are monolithic. The Zen cores themselves, however, are very power efficient; at idle they can switch to sleep mode and consume practically zero watts. Tweaking the cores has no real effect on idle consumption, nor do clock speeds, but tuning memory and fabric clocks does. If anyone wants very low idle consumption and AMD, it needs to be an APU. Those stay under 5 W package power.
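For anyone who wants to double-check the kind of C-state residency numbers monitoring tools report, a Linux install exposes the same counters through sysfs. A minimal Python sketch (standard cpuidle paths, but the state names depend on your kernel's cpuidle driver, so treat it as illustrative):

```python
# Sketch: check how much idle time each core spends in deep C-states (Linux).
# The sysfs layout below is the standard cpuidle interface; state names
# (POLL, C1, C2/C6, ...) vary by driver, so verify on your own kernel.
from pathlib import Path

def read_cstate_times(cpu: int = 0) -> dict[str, int]:
    """Read {state_name: microseconds of residency} for one logical CPU."""
    base = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpuidle")
    times = {}
    for state in sorted(base.glob("state*")):
        name = (state / "name").read_text().strip()
        times[name] = int((state / "time").read_text())
    return times

def residency_pct(times: dict[str, int]) -> dict[str, float]:
    """Convert raw residency counters to percentages of the recorded total."""
    total = sum(times.values())
    if total == 0:
        return {name: 0.0 for name in times}
    return {name: 100.0 * t / total for name, t in times.items()}

if __name__ == "__main__":
    for name, pct in residency_pct(read_cstate_times(0)).items():
        print(f"{name:>8}: {pct:5.1f}%")
```

High C6 residency with 30 W package power is exactly the pattern described above: the cores are asleep, and the power goes to the IO die and fabric.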
 

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
Definitely, and things will improve big time when AMD moves to Zen 4 and a new IOD design on a new process.

What I find disappointing is the lack of information about this in actual reviews. I understand that reviewers don't want to compare apples to oranges and so on, but they do a bad job of highlighting this very substantial difference in idle power use.

And on the topic of reviewers doing a bad job: the fact that not one of them noticed the USB problems is a disgrace and puts a stain on their reputation going forward. Even in my limited time with the 5950X on pre-1.2.0.2 AGESA, USB was already acting up (mouse, keyboard, and HyperX USB audio crackling); there is no chance in hell that all of them got lucky and ran into no problems.
 

B-Riz

Golden Member
Feb 15, 2011
1,482
612
136
Not sure about being concerned. I share experiences and compare to what I have available.

I understand your point and frustration, but you are comparing a non-monolithic 12c/24t part to a monolithic 8c/16t part made on a super-refined process that Intel tuned for very low idle power usage.

For many of us who have been reading reviews and playing with Zen since the Zen 1 launch, your points about idle power usage and low-thread-load power usage have been known, and are highlighted in the AnandTech reviews:

"Moving down to a single chiplet but with the full power budget, and there is some power savings by not having the communications of a second chiplet. However, at 8-core load, the 5800X is showing 4450 MHz: the Ryzen 9 processors are showing 4475 MHz and 4500 MHz, indicating that there is still some product differentiation going on with this sort of performance. With this chip we still saw 140 W peak power consumption, however it wasn't on this benchmark (our peak numbers can come from a number of benchmarks we monitor, not just our power-loading benchmark set)."

Also, see below. That is why I asked if you read reviews before buying: the loading and power usage is shown there.

[attachment: 1619958807186.png]

If the 5900X performance in WoW is such a horrible thing, sell it and get a 5600X.

[attachment: 1619958885694.png]
 

B-Riz

Golden Member
Feb 15, 2011
1,482
612
136
Definitely, and things will improve big time when AMD moves to Zen 4 and a new IOD design on a new process.

What I find disappointing is the lack of information about this in actual reviews. I understand that reviewers don't want to compare apples to oranges and so on, but they do a bad job of highlighting this very substantial difference in idle power use.

And on the topic of reviewers doing a bad job: the fact that not one of them noticed the USB problems is a disgrace and puts a stain on their reputation going forward. Even in my limited time with the 5950X on pre-1.2.0.2 AGESA, USB was already acting up (mouse, keyboard, and HyperX USB audio crackling); there is no chance in hell that all of them got lucky and ran into no problems.

The USB issues are very specific, and I do not think a reviewer would ever think to load up the USB bus and have at it.

Like any complicated technical issue, AMD gathered information and is working towards a fix; it's not something that can be fixed in a week or two. The root cause has to be identified, a fix developed, tested, and propagated to how many board vendors?

And to be on a cutting-edge halo product and complain that there are issues is kinda funny on an enthusiast msg board. :tearsofjoy:
 
  • Like
Reactions: Tlh97 and scannall

Makaveli

Diamond Member
Feb 8, 2002
4,717
1,051
136
And on the topic of reviewers doing a bad job: the fact that not one of them noticed the USB problems is a disgrace and puts a stain on their reputation going forward. Even in my limited time with the 5950X on pre-1.2.0.2 AGESA, USB was already acting up (mouse, keyboard, and HyperX USB audio crackling); there is no chance in hell that all of them got lucky and ran into no problems.

I'm on 1.2.0.1 Patch A and have no USB issues.
 

Timur Born

Senior member
Feb 14, 2016
277
139
116
I understand your point and frustration, but you are comparing a non-monolithic 12c/24t part to a monolithic 8c/16t part made on a super-refined process that Intel tuned for very low idle power usage.
No frustration here, and no need for you to defend the CPU and keep pointing to reviews; I would have bought the CPU anyway for testing drivers (especially USB and possible PCIe issues). I am reporting experiences and comparing them to my Intel system. And I was an early adopter of Ryzen 1 and did some crazy tests with it (like proving that AIO hoses can blow off defective pumps because default BIOS settings disallowed thermal shutdowns).

The main disappointment so far is the lack of Lua script performance despite the claimed IPC improvements (which Lua does not seem to benefit from at all), and that a heavily threaded application like Topaz Gigapixel AI still is not parallelized enough to make good use of the 5900X. Throwing cores and cache at a problem still is not an easy solution for everything.

More power draw for the same load also means more heat, which in turn means more noise. I will check that more thoroughly once I've applied TIM to the CPU/cooler.
 

Timur Born

Senior member
Feb 14, 2016
277
139
116
On a side note: I specifically switched all PCIe 4.0 links down to 3.0 to try to lower power consumption. Unfortunately it did not make a difference. Not exactly a surprise, since none of my devices use PCIe 4.0 anyway, but it was worth a try knowing that the IO die and chipset need active cooling.

That being said, is there a temperature sensor for the IO die of the CPU? My Arctic Liquid Freezer II offers an optional offset placement to better cool the CCDs, but I suspect that leaves the IO die cooled worse.
 

Det0x

Golden Member
Sep 11, 2014
1,028
2,953
136
The main disappointment so far is the lack of Lua script performance despite the claimed IPC improvements (which Lua does not seem to benefit from at all)
Some programs have reached a limit and barely scale with "IPC" anymore, only frequency. CPUmark99 is one example of this:
[attachment: 1619993717442.png]
 

B-Riz

Golden Member
Feb 15, 2011
1,482
612
136
No frustration here, and no need for you to defend the CPU and keep pointing to reviews; I would have bought the CPU anyway for testing drivers (especially USB and possible PCIe issues). I am reporting experiences and comparing them to my Intel system. And I was an early adopter of Ryzen 1 and did some crazy tests with it (like proving that AIO hoses can blow off defective pumps because default BIOS settings disallowed thermal shutdowns).

The main disappointment so far is the lack of Lua script performance despite the claimed IPC improvements (which Lua does not seem to benefit from at all), and that a heavily threaded application like Topaz Gigapixel AI still is not parallelized enough to make good use of the 5900X. Throwing cores and cache at a problem still is not an easy solution for everything.

More power draw for the same load also means more heat, which in turn means more noise. I will check that more thoroughly once I've applied TIM to the CPU/cooler.

Not defending; I'm trying to point out that your performance observations vs. the 9900K and the 5900X's higher idle power usage are known items, while providing references to where I recall seeing the relevant data.
 
  • Like
Reactions: Tlh97

Timur Born

Senior member
Feb 14, 2016
277
139
116
Has Lua performance ever really been a big problem?
Load times for WoW and Fantasy Grounds are well over half a minute due to Lua scripts being loaded and initialized. CPU bottlenecks in Lua are also a regular cause of dramatic frame-rate drops (GPU load decreases when they happen). Lua is single-threaded, and many engines still use old Lua versions (like 5.1), so they don't benefit from the performance improvements of 5.4.
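Since addon loading is single-threaded interpreter work, the relevant metric is single-core script throughput, not core count. A rough Python stand-in for that kind of workload (purely illustrative; the loop and the counts are made up, not a real addon loader):

```python
# Sketch: a single-threaded, interpreter-bound microbenchmark, roughly the
# shape of workload a Lua addon loader presents (no SIMD, no threads, lots
# of table/dict traffic). Compare wall time across CPUs or clock settings;
# extra cores won't help this at all.
import time

def addon_load_sim(n: int = 200_000) -> int:
    """Simulate script init: build and probe a table in a tight loop."""
    registry: dict[int, int] = {}
    checksum = 0
    for i in range(n):
        registry[i % 1024] = i
        checksum = (checksum + registry[i % 1024]) % 1_000_003
    return checksum

if __name__ == "__main__":
    t0 = time.perf_counter()
    result = addon_load_sim()
    dt = time.perf_counter() - t0
    print(f"checksum={result}, {dt * 1000:.1f} ms on one core")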
 

Timur Born

Senior member
Feb 14, 2016
277
139
116
Is it normal for CCX quality to differ so much on a single CPU (5900X)? The best core of CCX 2 is worse than the worst core of CCX 1. Is this lottery or done on purpose by AMD?

[attachment: 1620406156871.png]
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
Is it normal for CCX quality to differ so much on a single CPU (5900X)? The best core of CCX 2 is worse than the worst core of CCX 1. Is this lottery or done on purpose by AMD?

[attachment 44081]

It's the norm to have one strong and one weaker CCX.

Maybe the app just makes the numbers up? My 5900X has the exact same readings as yours, but on different cores.

[attachment: 5900x_2.PNG]

I wouldn't give it much thought in the end.
 

Timur Born

Senior member
Feb 14, 2016
277
139
116
I would rather suspect that the numbers are fixed by AMD, but the quality differences may well still be real. You can toggle the availability of the core classification via UEFI, and the Windows thread scheduler bases its decisions on that exact classification (mostly preferring good cores for high load and bad cores for low-priority background load).
 

B-Riz

Golden Member
Feb 15, 2011
1,482
612
136
  • Like
Reactions: Tlh97

Timur Born

Senior member
Feb 14, 2016
277
139
116
Thanks for the comparison; that puts the numbers into perspective and demonstrates that different numbers exist within the 5900X series. Maybe the numbers are set by the BIOS, and thus mainboard manufacturers have a hand in this, rather than AMD setting them for each CPU in the factory. Or maybe AMD sets them for each run of dies coming from the fab, or by position on the wafer?! 1usmus may know more as the coder of CTR?!

I like how CPPC allows Windows to properly schedule threads to the most fitting cores. In one test run I did see a single-thread Prime95 load being shifted from the two best cores to the very worst core for some time, though. That seemed strange; on the other hand, P95 uses the lowest idle thread priority, so it may confuse the scheduler.
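For what it's worth, the per-core ranking Windows uses comes from the CPPC `highest_perf` values the firmware reports, and on Linux those are readable straight from sysfs. A sketch (the `acpi_cppc` path only exists on CPPC-capable platforms, so this is an assumption about the setup):

```python
# Sketch: rank logical CPUs by their advertised CPPC "highest_perf" value,
# the same per-core quality hint the OS scheduler uses for preferred cores.
# Linux-only; the acpi_cppc directory is absent on non-CPPC systems.
from pathlib import Path

def read_highest_perf() -> dict[int, int]:
    """Read {cpu_index: highest_perf} from sysfs; empty dict if unsupported."""
    perf = {}
    for node in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
        f = node / "acpi_cppc" / "highest_perf"
        if f.exists():
            perf[int(node.name[3:])] = int(f.read_text())
    return perf

def rank_cores(perf: dict[int, int]) -> list[int]:
    """Return CPU indices sorted best-first by highest_perf."""
    return sorted(perf, key=lambda cpu: perf[cpu], reverse=True)

if __name__ == "__main__":
    perf = read_highest_perf()
    if not perf:
        print("CPPC info not exposed on this system")
    else:
        for cpu in rank_cores(perf):
            print(f"cpu{cpu}: highest_perf={perf[cpu]}")
```

If the firmware reports the same few canned values on every chip, that would explain two 5900Xs showing identical readings on different cores.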
 

Yosar

Member
Mar 28, 2019
28
136
76
So it is a deliberate decision by AMD to get rid of worse dies instead of pure lottery?

I would not call them worse. They bin them as worse, but that's not the same thing. The worse die on my 5900X would probably be a good die on a 5600X.
Every core on my worse die boosts well over 4.9 GHz (on my better die every core boosts over 5 GHz). Some of the cores on the worse die would probably boost higher with no problem, but AMD limited them, I guess. Almost every core on my worse die can do -30 in Curve Optimizer, and probably better; Curve Optimizer is just capped at -30. That could let them boost higher, but they can't with the curve set on them by binning. Just by setting -30 on most cores I raised their boost by about 200 MHz.

On the other hand, the best core on my better die cannot do anything in Curve Optimizer; it already has a maxed-out voltage curve. Thanks to that, my best core is not actually my best core. My best core now is core 0, where setting over -20 in Curve Optimizer gives me a better boost than on my 'best' core 3.
I am definitely not complaining. But it looks like they mark cores quite arbitrarily. They must have to meet some specification to be marked as the best core, or the second-best core, rather than actually being the best core on the die.

When I speak about setting voltages in Curve Optimizer, I don't mean going into the BIOS, putting in some numbers, booting into Windows, and calling it a day if it doesn't crash for a few hours. I mean I really tested them for hours with CoreCycler and other programs, on one core at a time.
That's why I know my best core 3 cannot do anything in Curve Optimizer, while core 0 at -20 is just plainly better (and it's not even my second-best core).
Of course there is always a chance that even my heavy testing didn't show that -20 on core 0 is too aggressive. But so far so good; the testing was as thorough as I could make it.

Personally I suspect they mostly have good or very good dies. The chiplets are small, and this is their second generation on the 7 nm process. In the meantime TSMC has probably also learned quite a bit about high-clocking chips on 7 nm (look at the clocks of RDNA2).
Basically I was very surprised when I put my CPU into the mainboard. After all the hoopla about boosting on Zen 2 processors, my 5900X (and probably most others too) was boosting to 4.95 GHz right out of the gate with default BIOS, so well over specification.
Basically AMD reached the 5 GHz barrier silently, without making much fuss about it.
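The one-core-at-a-time testing approach can be sketched in a few lines: pin a compute loop to each logical CPU in turn, so the core under test is the only one loaded and free to boost. A minimal Python illustration using the Linux affinity API (CoreCycler itself drives Prime95 on Windows via affinity masks; `spin_on_cpu` and its LCG loop here are my own stand-ins, not CoreCycler's method):

```python
# Sketch: CoreCycler-style per-core loading -- pin this process to one
# logical CPU at a time and run a branchy integer loop there, so each core
# is stressed in isolation while the others stay idle and it can boost.
# Uses the Linux os.sched_setaffinity API; not available on Windows.
import os
import time

def spin_on_cpu(cpu: int, seconds: float = 0.1) -> int:
    """Pin this process to `cpu` and spin an integer loop; returns a
    checksum so the work can't be optimized away."""
    os.sched_setaffinity(0, {cpu})
    deadline = time.perf_counter() + seconds
    x, checksum = 12345, 0
    while time.perf_counter() < deadline:
        x = (x * 1103515245 + 12345) & 0x7FFFFFFF  # LCG step
        checksum ^= x
    return checksum

if __name__ == "__main__":
    for cpu in sorted(os.sched_getaffinity(0)):
        c = spin_on_cpu(cpu, 0.05)
        print(f"cpu{cpu}: completed (checksum {c & 0xFFFF:04x})")
```

A real stability run would use much longer intervals per core and an error-detecting workload like Prime95; this only shows the pinning mechanics.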
 
  • Like
Reactions: Tlh97

Timur Born

Senior member
Feb 14, 2016
277
139
116
Makes sense, yes. Does anyone know for sure why Windows schedules lower-load threads to the "worst" cores first, only elevating them to the "best" cores once they max out a core? My speculation is that it tries to keep the better cores cool and unused to allow for higher frequency spikes once they are needed.