
AMD Ryzen 5000 Builders Thread


Yosar

Junior Member
Mar 28, 2019
3
15
51
So it is a deliberate decision by AMD to get rid of worse dies instead of pure lottery?
I would not call them worse. They bin them as worse, but that's not the same thing. The worse die on my 5900X would probably be a good die on a 5600X.
Every core on my worse die boosts well over 4.9 GHz (on my better die, every core boosts over 5 GHz). Some of the cores on the worse die could probably boost higher with no problem, but I guess AMD limited them. Almost every core on my worse die can do -30 in Curve Optimizer, and probably better still.
It's just that Curve Optimizer is limited to a maximum of -30. A larger offset could let them boost higher, but they can't with the curve set on them by binning. Just by setting -30 on most cores I raised their boost by about 200 MHz.

On the other hand, the best core on my better die cannot take any offset in Curve Optimizer; its voltage curve is already maxed out. Because of that, my best core is actually no longer my best core: core 0, with an offset beyond -20 in Curve Optimizer, gives me better boost than my "best" core 3.
I am definitely not complaining, but it looks like they mark the cores somewhat arbitrarily. They probably only have to meet some specification to be flagged as the best or second-best core, rather than actually being the best core on the die.

When I talk about setting voltages in Curve Optimizer, I don't mean going into the BIOS, punching in some numbers, booting into Windows, and calling it a day if it doesn't crash for a few hours. I really tested each core for hours with CoreCycler and other single-core tools.
That's how I know my best core 3 cannot take any Curve Optimizer offset, while core 0 at -20 is simply better (and it isn't even my second-best core).
Of course there is always a chance that even my heavy testing failed to show that -20 on core 0 is too aggressive. But so far so good; I tested as thoroughly as I could.

Personally I suspect they mostly have good or very good dies. The chiplets are small, and this is their second generation on the 7 nm process. In the meantime TSMC has probably also learned quite a bit about high-clocking chips on its 7 nm node (look at RDNA2's clocks).
Basically I was very surprised when I put my CPU into the motherboard. After all the hoopla over boosting on Zen 2 processors, my 5900X (and probably most others too) was boosting to 4.95 GHz right out of the gate with default BIOS settings, so well over specification.
Basically AMD reached the 5 GHz barrier quietly, without making much fuss about it.
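Per-core Curve Optimizer offsets are only trustworthy after the kind of per-core testing described above. Below is a minimal sketch of that idea in Python (Linux-only, since it relies on `os.sched_setaffinity`; the floating-point loop and short durations are placeholders, not a real stress test like CoreCycler):

```python
# Minimal per-core load cycler (sketch, Linux-only): pin a busy loop to each
# CPU in turn and record throughput. A core with an over-aggressive Curve
# Optimizer offset typically shows up as errors, crashes, or odd throughput.
import os
import time

def stress_core(cpu: int, seconds: float = 0.2) -> float:
    """Pin this process to one CPU and run a floating-point busy loop."""
    os.sched_setaffinity(0, {cpu})           # Linux-specific affinity call
    end = time.perf_counter() + seconds
    iters = 0
    x = 1.0001
    while time.perf_counter() < end:
        x = (x * x) % 1.7                    # arbitrary bounded FP work
        iters += 1
    return iters / seconds                   # iterations per second

def cycle_cores(seconds_per_core: float = 0.2) -> dict[int, float]:
    """Run the busy loop on every CPU available to this process."""
    original = os.sched_getaffinity(0)
    try:
        return {cpu: stress_core(cpu, seconds_per_core)
                for cpu in sorted(original)}
    finally:
        os.sched_setaffinity(0, original)    # restore original affinity
```

A real run would use hours of heavy load per core; unusually low throughput or a crash on one core suggests its offset is too aggressive.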
 
  • Like
Reactions: Tlh97

Timur Born

Member
Feb 14, 2016
122
71
101
Makes sense, yes. Does anyone know for sure why Windows schedules low-load threads to the "worst" cores first, only moving them to the "best" cores once a core maxes out? My speculation is that it tries to keep the "better" cores cool/unused to allow for higher frequency spikes once they are needed.
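For what it's worth, the per-core ranking the scheduler uses comes from the CPPC "preferred core" data the firmware reports; on Linux the same values are exposed under /sys/devices/system/cpu/cpuN/acpi_cppc/highest_perf. A small sketch of how a ranking falls out of those values (the sample numbers below are invented):

```python
# Rank cores "best first" from their CPPC highest_perf values, mirroring the
# ranking the OS scheduler consumes. The sample values below are invented;
# real ones live in /sys/devices/system/cpu/cpuN/acpi_cppc/highest_perf.
def rank_cores(highest_perf: dict[int, int]) -> list[int]:
    """Return core ids ordered from 'best' to 'worst'."""
    return sorted(highest_perf, key=highest_perf.get, reverse=True)

sample = {0: 231, 1: 226, 2: 236, 3: 241, 4: 216, 5: 221}  # invented
print(rank_cores(sample))  # → [3, 2, 0, 1, 5, 4]
```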
 

Timur Born

Member
Feb 14, 2016
122
71
101
Indeed, albeit I saw a single thread of Prime95 AVX load being scheduled to the lowest core for quite some time in one test.

After lots of fiddling, testing, analyzing, measuring and wrapping my head around the whole thing, I think I now understand the mechanics and interplay of PBO, Curve Optimizer, Vcore (offset) and the (maximum) frequency limit vs. single-core and multi-core performance.

The only things that still confuse me are:

- The EDC limit (215 A) permanently sitting at 100% during sustained load that is not even close to the TDC limit (210 A) or the measured current (130 A).

- Some power read-outs in HWiNFO that look like either HWiNFO errors or sensors reporting wrong values, specifically "Package Power" vs. "Core+SoC Power" (20 watts lower than Package Power) vs. PIn/POut (lower than Package Power).

Overall I am not convinced that messing with all these settings is worth it over just enabling PBO in the BIOS with either the AMD or the mainboard preset. Even with CO offsets of -30 and a slight voltage bump (AMD overclock preset), the sustained CB20 multi-core score increases by only 3.5 - 5% (average 4.65 - 4.7 GHz vs. 4.5 GHz). And then you still have to do stability tests.

Sustained single-core improvements (of maybe 2-3%) are even harder to achieve, because you have to use negative CO offsets on the "best" cores, which likely already run closer to the edge to begin with. This is where the silicon lottery comes in again, which is what overclocking is mostly about anyway.
 
Last edited:

moinmoin

Platinum Member
Jun 1, 2017
2,507
3,177
136
The EDC (215 A) limit permanently hitting 100% during sustained load that is not even close to the TDC limit (210 A) or measured current (130 A).
EDC is Electrical Design Current, a value the motherboard reports to the CPU that should represent the peak current its VRMs can handle short-term.

Combined with the sensors reporting wrong values, you either have a not-ideal motherboard or a not-ideal BIOS version. Do you know the temperature of your VRMs?
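To restate the distinction in code: TDC is the sustained (thermally limited) current cap, EDC the short-term peak cap, and the boost algorithm runs up to whichever of PPT (watts), TDC or EDC saturates first. A toy sketch, with the limit values from this thread and invented telemetry:

```python
# Toy limiter check for AMD's boost limits: PPT (package power, W),
# TDC (sustained current, A) and EDC (short-term peak current, A).
# Limits match the thread (EDC 215 A, TDC 210 A); telemetry is invented.
def binding_limiters(telemetry: dict[str, float],
                     limits: dict[str, float]) -> list[str]:
    """Return the limiters currently at (or above) 100% utilisation."""
    return [name for name, cap in limits.items() if telemetry[name] >= cap]

limits = {"PPT_W": 142.0, "TDC_A": 210.0, "EDC_A": 215.0}
telemetry = {"PPT_W": 142.0, "TDC_A": 130.0, "EDC_A": 215.0}  # invented
print(binding_limiters(telemetry, limits))  # → ['PPT_W', 'EDC_A']
```

The puzzle in the posts above is exactly this situation: EDC reported as pinned at its cap while the measured sustained current sits far below TDC.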
 
Last edited:

Timur Born

Member
Feb 14, 2016
122
71
101
VR MOS temps peak at around 80°C under 24 threads of Prime95 AVX load with PBO "Enabled" (preset). The VRM section on this MSI X570 Creation gets hotter than the one on my Gigabyte Z390 Aorus Master, even with better airflow.

I am puzzled why EDC is reported as constantly sitting at the limit, without any dips, when sustained load is applied. The measured sustained current is nowhere near 200 - 210 A, really not even close. So I obviously don't understand the EDC measurement yet.
 
Last edited:

Det0x

Senior member
Sep 11, 2014
513
645
136
The deed is done, I have finally gotten this fully stable :)
4 memory sticks + flat CL14 + 1T with GDM disabled is very rare with Ryzen. I could only run this after binning all my RAM sticks for the different memory channels on the motherboard.
  • BIOS 3501 with AGESA V2 PI 1.2.0.2
  • dual CCD 5950x
  • 4x8GB gskill 3600 CL16
  • 1900:3800 @ flat CL14 + T1 GDM-OFF
Screenshot of TM5 (1usmus config) 25 cycles + Memtest 20000% stable (do notice this is my old bloaty Windows install with lots of stuff running in the background)
Newest OCCT 8.1.3, 1 hour large dataset extreme + 4 iterations of y-cruncher with all tests (same boot as above)
Some performance numbers:
SiSoftware Sandra v2021.31.12 (from Mar 5th, 2021). Not sure if this is a good match for CTR (?)
Intel Memory Latency Checker
Next we have DRAM Calculator easy + normal bench together with Cinebench R23
And lastly we have SotTR @ 1080p lowest as a game bench running on my new 24/7 settings = 288 fps CPU average
Very happy with these results and my new 24/7 settings :)
Maybe I will try to push for a higher FCLK now 😎

Feel free to leave any comments or questions.
 

aigomorla

Cases and Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
18,750
1,288
126
I can confirm X570 Taichi + Samsung 980 PRO + 6900XT, meaning I am on 100% Gen4 PCIe... I have not had USB issues after the latest AGESA updates.

My brother is loving his system.
He says it's wicked fast, and he is trying to tax his GPU on the games he plays, which is sadly at 1440p.
I told him he needs a 4K monitor to tax that beast.

Make sure you guys update that AGESA.
 

Timur Born

Member
Feb 14, 2016
122
71
101
I will soon start extensive USB and PCIe audio tests on both old and new AGESA, including looking for workarounds for those not getting the new version for their mainboard.
 

aigomorla

Cases and Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
18,750
1,288
126
I will soon start extensive USB and PCIe audio tests on both old and new AGESA, including looking for workarounds for those not getting the new version for their mainboard.
They said most of the issues were on PCIe 4.0.
Not many people were running everything at Gen 4, because video cards are next to impossible to get, and the people who did get them did not run them in a Gen 4 slot, as they were used to mine cryptocurrencies.
 

Timur Born

Member
Feb 14, 2016
122
71
101
USB on AMD is ASMedia based, and ASMedia has a mixed history of USB 3 (audio) compatibility. Its USB 3.1 chipsets/hubs usually worked; the USB 3.0 ones were unusable for professional audio.

If I can reproduce the problems, I will also test my Texas Instruments based USB 3.0 hub in between as a workaround, because that helped in the past. Those are hard to come by, though; mine is part of a Dell U3014 display.
 

B-Riz

Golden Member
Feb 15, 2011
1,463
568
136
I can confirm X570 Taichi + Sammy 980 PRO + 6900XT meaning i am on 100% Gen4 pci-e.... have not had usb issues after latest agesa updates.

My brother is loving his system.
He says its wicked fast, and he is trying to tax his GPU on games he plays which is sadly @ 1440p.
I told him he needs a 4k, to tax that beast.

Make sure you guys update that AGESA.
What setup does your brother have?

With Zen 3 and the 240 Hz monitor my 1080p fps is now a consistent 200; I'm about to jump to 2560x1440 today if the monitor is good, with no dead pixels or anything.

I have the 5900X on an X570 Master with what should be the latest BIOS, an SN850 2TB, and a 5700XT on PCIe 4. I haven't found any USB issues, but I don't have the bus loaded up, just keyboard and mouse, and I might put a Creative Labs X-Fi USB on it.
 

Dave3000

Golden Member
Jan 10, 2011
1,050
38
91
My Asus B550-F Strix (Wi-Fi) motherboard's BIOS sets tRDWR to 8 for Channel A and 7 for Channel B when using the DOCP profile. I'm using 32 GB of Ballistix Gaming DDR4-3200 memory. Should both memory channels use the same tRDWR timing? I manually set it to 8 so both channels match, but I'm not sure whether both should be 7 or 8. I'm on BIOS 1804. I sometimes get crashes to the desktop when playing Sunset Overdrive, with the audio starting to chop up just before the crash, and I'm wondering whether it's caused by the different tRDWR timings the auto setting applied to the two channels.
 

Shmee

Memory and Storage, Graphics Cards
Super Moderator
Sep 13, 2008
5,070
860
126
@Det0x your memory setup is very similar to mine, though your latency is much tighter. Also, you are putting a lot more voltage across that memory :p

I am running 2 different 3200 G.Skill B-die kits, one Trident Z, the other Flare X. Mine run at CL16, but I only put a little over 1.41 V across them. My motherboard tends to apply more memory voltage than is actually set.
 
  • Like
Reactions: bigboxes

Det0x

Senior member
Sep 11, 2014
513
645
136
@Det0x your memory setup is very similar to mine, though your latency is much tighter. Also you are putting a lot more voltage across that memory :p

I am running 2 different 3200 Gskill Bdie kits, one Trident Z the other Flare X. Mine run at CL16, but I only put a little over 1.41V across them. My motherboard tends to put more memory voltage than what is actually set.
Yes, above ~1.5 V on Samsung B-die you need to start considering a fan on the sticks to keep them from overheating (try to keep the temperature below 40-45°C).
I can maybe lower the voltage a little, to around ~1.55 V, while retaining stability, but these tight timings require a lot of power.
I don't think I'm in the danger zone yet, as long as I keep the temperature and the other settings in check, and there are many B-die kits out there that require 1.5 V at stock settings.

How much current the memory draws also depends on the RTT resistances. Before, I was running 6-3-3 RTT timings, but with the latest AGESA they seemingly doubled the resistance, so now I am running 7-3-1 instead. (tCKE also matters.)
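As a side note on why flat CL14 at 3600 beats CL16 at 3200: first-word latency in nanoseconds is CL x 2000 / (MT/s), since one DDR data clock lasts 2000 / (MT/s) ns. A quick check:

```python
# First-word (CAS) latency in nanoseconds: one DDR data clock lasts
# 2000 / (MT/s) ns, so latency_ns = CL * 2000 / MT/s.
def cas_latency_ns(cl: int, mts: int) -> float:
    return cl * 2000 / mts

print(cas_latency_ns(16, 3200))            # → 10.0  (DDR4-3200 CL16)
print(round(cas_latency_ns(14, 3600), 2))  # → 7.78  (DDR4-3600 CL14)
```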
 
  • Like
Reactions: Elfear

aigomorla

Cases and Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
18,750
1,288
126
What setup does your brother have?
CPU: 5800X
RAM: Crucial Ballistix (32 GB x 2) DDR4-3600 - 64 GB total
Motherboard: ASRock X570 Taichi
GPU: Sapphire Nitro+ 6900XT
PSU: EVGA SuperNOVA Gold 850W
Primary storage: 1 TB Samsung 980 Pro - NVMe
Games storage: 4 x 4 TB Samsung 860 QVO in RAID 0 - SATA

Yeah, its a beast... :)

Creative Labs X-Fi USB
If I'm going DAC, I'd get a real DAC.
 
Last edited:
  • Love
Reactions: lightmanek

Hitman928

Diamond Member
Apr 15, 2012
3,520
3,786
136
Got my 5900x installed. Looks like 2 cores will boost up to 4.9 GHz at stock, so 100 MHz over rated boost. Everything at stock I get 20813 multi and 1574 single in CBr23 so seems to be right where it should be. This is with just changing the CPU (2700 previous) and not reinstalling Windows or anything. I have some additional personal testing I can go through over the weekend when I have more time. At least at stock, the 5900x uses less power, runs cooler, and obviously performs much better than my overclocked 2700 (4 GHz). Happy so far.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,458
2,188
136
Got my 5900x installed. Looks like 2 cores will boost up to 4.9 GHz at stock, so 100 MHz over rated boost. Everything at stock I get 20813 multi and 1574 single in CBr23 so seems to be right where it should be. This is with just changing the CPU (2700 previous) and not reinstalling Windows or anything. I have some additional personal testing I can go through over the weekend when I have more time. At least at stock, the 5900x uses less power, runs cooler, and obviously performs much better than my overclocked 2700 (4 GHz). Happy so far.
Nice!

You should play around with ClockTuner for Ryzen (CTR) and compare results. The latest release seems to work pretty well with my 5900X.

ClockTuner v2.1 for Ryzen (CTR) Guide - Introduction (guru3d.com)

I'm still playing around with it, but it shaved around 19-20°C off my temps using a suggested profile. 4475/4400 MHz at 1.175 V got me 22,649 in CB R23 multi-core with the app running and while surfing the web. I haven't tried single-core with the hybrid OC yet.
 
  • Like
Reactions: Tlh97 and bigboxes

B-Riz

Golden Member
Feb 15, 2011
1,463
568
136
CPU:: 5800X
Ram: Crucial Ballastix (32GB x 2) DDR4 3600mhz - 64GB Total
Motherboard: Asrock X570 TaiChi
GPU: Sapphire Nitro+ 6900XT
PSU: EVGA Supernova Gold 850W
Storage Primary: 1TB Samsung 980 Pro. - nVME
Storage Games: 4 x 4TB Samsung 860 QVO In Raid - 0 - SATA

Yeah, its a beast... :)



if im going DAC id get a real DAC.
LOL, nice, I will get a 6800 XT someday, for MSRP, someday, lol.

Did he have any issues with the 980 Pro?

https://www.reddit.com/r/buildapc/comments/n8868i
That is why I went with an SN850.

It is actually the Sound BlasterX G5; I have been eyeballing some DACs on Drop, though.

 

Hitman928

Diamond Member
Apr 15, 2012
3,520
3,786
136
Nice!

You should play around with Ryzen Clock Tuner and compare results. The latest offering looks to work pretty good with my 5900x.

ClockTuner v2.1 for Ryzen (CTR) Guide - Introduction (guru3d.com)

I'm still playing around with it, but it shaved around 19-20c off my temps using a suggested profile. 4475/4400 @1.175v's got me 22.649 in CBr23 for multi-score with app running and surfing the web. I haven't tried the single core with hybrid-OC yet.
Thanks, I'll probably be starting a fresh Windows install and tweaking memory for a while as I have time, but I'll look more into it after that.

From one of my personal tests, using the exact same memory and timings as with the 2700 (for now): one of the CAD tools I use for work runs 2.72x faster on the stock 5900X than on the overclocked 2700. Even if you equalize for core count that is still a 1.81x speedup, and with fewer cores loaded the Zen 3 cores would also be able to clock higher, so that speedup would increase further.
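The core-count normalization above is just: per-core speedup = total speedup x old cores / new cores (8 cores on the 2700 vs. 12 on the 5900X):

```python
# Normalize a whole-chip speedup by core count to estimate per-core speedup.
def per_core_speedup(total_speedup: float, old_cores: int, new_cores: int) -> float:
    return total_speedup * old_cores / new_cores

# 2.72x overall, Ryzen 7 2700 (8 cores) -> Ryzen 9 5900X (12 cores):
print(round(per_core_speedup(2.72, 8, 12), 2))  # → 1.81
```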
 

aigomorla

Cases and Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
18,750
1,288
126
Did he have any issues with the 980 Pro?
He has not reported any slowdowns to me.
That's probably also because he hasn't filled even a quarter of that drive.
The 980 Pro is purely a Windows and apps drive; no games will be stored on it unless one requires NVMe.
 

Timur Born

Member
Feb 14, 2016
122
71
101
My curiosity is served for the time being, and I have decided that overclocking the 5900X is not worth the cost and the stability-testing effort.

Compared to stock settings with PBO disabled (LLC mode 8):

- Less than 7% CB23 score improvement increases power consumption by over 37% (PBO Advanced + CO). We are talking less than 135 watts CPU Package power vs. over 185 watts.

- Less than 5% improvement increases power draw by 25% (negative Vcore offset -0.0625 V, which decreased performance compared to the above settings accordingly).
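Put in terms of efficiency, those numbers mean the score-per-watt actually drops. A quick sketch using the figures above (+7% performance for +37% power):

```python
# Relative change in score-per-watt when performance and power both rise.
def efficiency_change_pct(perf_gain_pct: float, power_gain_pct: float) -> float:
    ratio = (1 + perf_gain_pct / 100) / (1 + power_gain_pct / 100)
    return (ratio - 1) * 100

# +7% CB23 score for +37% package power:
print(round(efficiency_change_pct(7, 37), 1))  # → -21.9
```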

Question: Does anyone know why CPU Package Power (and PPT) is always measured 20 watts higher than CPU Core + SoC, both idle and at full load? What other parts of the CPU use 20 watts (other than heat)?
 

B-Riz

Golden Member
Feb 15, 2011
1,463
568
136
Question: Does anyone know why CPU Package Power (and PPT) is always measured 20 watts higher than CPU Core + SoC, both idle and at full load? What other parts of the CPU use 20 watts (other than heat)?
What software are you using to measure?

I set my PPT to 120, and Ryzen Master shows 120 W limit.

Oooo, I got 142 fps installing it, lol.


Prime95 Small FFT all core all threads shows using 120 W right now.

 

Timur Born

Member
Feb 14, 2016
122
71
101
Yes, and both your PPT and "CPU Package Power" are 20 watts higher than your "CPU Core + SoC Power". You will also notice that these extra 20 watts are always present, both at peak load and at full idle.

So there is no sensor telling us where those 20 watts go, which would be nice to know more about.
 

Timur Born

Member
Feb 14, 2016
122
71
101
HWiNFO's author Martin replied:

CPU Package Power and PPT account for additional rails that can't be measured, but are estimated.
I will discuss with him whether that "estimation" may be too high, given that it regularly exceeds the VRM's own POUT/PIN measurement (which cannot be fully trusted either).
 
