
AMD Ryzen 5000 Builders Thread


Det0x

Senior member
Sep 11, 2014
Switching from 2 sticks of b-die to 4 sticks was/is an adventure. I never knew how easy I had it with 2 sticks until I tried 4. I won't go back, but the 2 stick easy tweaking will be missed.

4 sticks defaults to gear down mode. Trying to run with CR1 and GDM disabled is unstable. So far it's looking like CR2 is the lesser of two evils when it comes to affecting latency. I'm still playing around with the memory, but I'll settle for 3600 CL14 if I have to.

Welcome to my world :D

I'm just happy I've found my CL14 memtest 1000% stable settings with 4x8 GB @ 1900 MHz IF 1:1.

If some of you other guys want to start playing around with the curve optimizer, I can recommend this as a starting point if you are using an Asus motherboard:

Make sure you have Global C-state Control disabled in the BIOS before you start.

In Extreme Tweaker:

  • PBO Fmax Enhancer: disabled (doesn't work with the 5000 series)
  • PBO: manual: 280/235/245
  • Overdrive: +50 MHz
  • CPU SoC voltage: Manual 1.08-1.10v
  • CPU Core Voltage: Offset +0.01 to +0.06v (You want as little offset as possible. Most will be @ +0.05v)
In Digi+
  • Max all the current limits
  • Disable all the spread spectrums
  • SOC LLC: Level 3
  • CPU LLC: Level 1 or Level 2
Note: You want the all core voltage to be as low as possible to get the highest all core boost that you can. My logic here is that I am using core voltage offset to maintain enough voltage for a stable single core boost, but the lowest LLC so that on all core loads the core voltage will droop and allow a higher all core boost clock.

In Advanced> AMD Overclocking > PBO
  • Set to advanced
  • Set limits to "Motherboard"
  • Curve optimizer: All core, Negative, 15-25 (can start at 15 and work your way down to 25 if system is stable)*

If you have a 56/57/58/5900X, use the same settings but increase the overdrive setting from 50 MHz to as high as your single-core test will allow. Most CPUs will max out at a sustained single-core boost between 5050 MHz and 5125 MHz.

To test single core boost clock:

  1. Open HWinfo64, make sure you have the "Effective Core clock" registers exposed.
  2. Make a note of the top 4 ranked cores.
  3. Open Task Manager and go to the "Details" tab.
  4. Open CB R20, start a single core run
  5. In Task manager's details, right click Cinebench.exe, click "Select Affinity", uncheck the top box to de-select all cores, put a check box next to core 0, then click ok. (You will have to re-select the affinity every run unless you have an application such as Process lasso that will automatically apply core affinity.)
  6. In HWinfo, Look at the clock speed for core zero. It should be at your max boost + overdrive (for a 5950X using the settings above, that would be 5050mhz max boost + 50 mhz overdrive = 5100mhz).
  7. In HWinfo, monitor the "Effective core speed." It should be very near the reported core clock. For example, if the core clock is at 5100mhz, the effective core clock should be 5085mhz or higher. Generally effective clock will be reported a bit lower than core clock because of how it is calculated (which is why the effective core clock really needs a sustained load to be close to accurate).
  8. Monitor core voltage and effective core VID. Your core voltage should be 1.5v-1.525v, and effective VID 1.485v-1.506v. You really don't want core voltage to exceed 1.525v for a sustained period (spikes are fine and normal). This is why your core voltage offset shouldn't exceed +0.05 to +0.07v.
  9. You should be able to complete at least 5-6 back to back runs of CB R20 without crashing.
Once that is done, you move on to the next three of the top-rated cores and make sure they are stable. Though it is likely that not all of the top four cores will boost as high as core 0.

Once that is complete move on to all core load stress testing. If an all core workload does not pass stress testing, increase core LLC to achieve stability.
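The effective-clock check in steps 6-7 above is just duty-cycle arithmetic: HWinfo averages the clock over its polling interval, counting time in sleep states as zero. A minimal sketch of that reasoning (the 0.997 active fraction is a made-up illustrative value, not a measured one):

```python
def boost_target_mhz(max_boost: int, overdrive: int) -> int:
    """Expected single-core clock: rated Fmax plus the boost override."""
    return max_boost + overdrive

def effective_clock_mhz(core_mhz: float, active_fraction: float) -> float:
    """Average clock over the polling window; sleep-state time counts as 0 MHz."""
    return core_mhz * active_fraction

target = boost_target_mhz(5050, 50)   # 5950X Fmax + 50 MHz overdrive = 5100
near = effective_clock_mhz(target, 0.997)  # ~5085 MHz, just under core clock
print(target, near)
```

So an effective clock a few MHz under the reported core clock just means the core briefly slept during the polling window, not that boost is failing.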

*edit*

* = You can start with -15 on the curve optimizer with CPU voltage on auto, then work the negative curve offset down until you hit all-core instability, or until the all-core boost clock stops increasing. There is no point in running the negative curve value beyond whatever gives you max all-core boost.
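For a sense of scale on those counts: each curve-optimizer step is commonly reported as roughly 3-5 mV of adjustment to the voltage/frequency curve. AMD doesn't publish the exact scaling, so treat this purely as a rough estimate:

```python
def co_offset_mv(counts: int, mv_per_count: float = 5.0) -> float:
    """Rough voltage shift from a curve-optimizer count; negative lowers the curve."""
    return counts * mv_per_count

print(co_offset_mv(-15))  # roughly -75.0 mV at the assumed 5 mV/count
print(co_offset_mv(-25))  # roughly -125.0 mV
```

That's why a few extra counts can tip a marginal core from stable to crashing: tens of millivolts matter at boost clocks.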

Once you have your all-core set to the highest stable negative offset / highest all-core boost clock, move on to single core.

Open cinebench and open task manager. Start a single core run, then on task manager's details tab, find cinebench, right click and set cpu affinity. Leave only core zero checked.

In hwinfo, click the clock to reset the counters after the single core run starts, and after you set cpu affinity. Then monitor the core 0 effective clock. It should be very close or the same as core frequency.
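If you'd rather not re-click affinity in Task Manager every run, Windows' built-in `start /affinity <hexmask>` launches a process already pinned; the mask is a bitmask with one bit per logical CPU. A sketch for computing the mask (note that with SMT enabled, physical core 0 usually maps to logical CPUs 0 and 1, so you may want both bits set):

```python
def affinity_mask(logical_cpus) -> str:
    """Hex bitmask for `start /affinity`: bit n set = logical CPU n allowed."""
    return format(sum(1 << c for c in logical_cpus), "x")

print(affinity_mask([0]))     # "1"  -> start /affinity 1 cinebench.exe
print(affinity_mask([0, 1]))  # "3"  -> both SMT threads of physical core 0
```

Tools like Process Lasso do the same thing automatically, as mentioned in step 5 above.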

If the run fails, start adding positive CPU core voltage offset to get your single core stable.

Once done, stability test with IBT Linpack on Very High/High for 10 passes, Blender Benchmark (all scenes), and 1 hour of P95 Blend with AVX disabled.
 


Makaveli

Diamond Member
Feb 8, 2002
lol end of story...

Even AnandTech's own review sample was doing 4825 MHz:

"The Ryzen 7 5800X has a base frequency of 3800 MHz and a rated turbo frequency of 4700 MHz (we detected 4825 MHz), and uses a single eight-core chiplet with a total 32 MB of L3 cache. The single core chiplet has some small benefits over a dual chiplet design where some cross-CPU communication is needed, and that comes across in some of our very CPU-limited gaming benchmarks. This processor also has 105 W TDP (~142 W peak)."


This can be confirmed by looking at other reviews and talking to others that own the chip. You have had the chip one day and know better? Cool story. lol, anyways, enjoy.
 

Udgnim

Diamond Member
Apr 16, 2008
4.7 GHz is AMD's guaranteed boost.

4.85 GHz is the potential cap a stock 5800X is allowed to boost up to on individual cores with a non-PBO-enabled motherboard.

motherboard cheating is an Intel thing

motherboard cheating with a Zen 3 CPU is more likely to cause a number of instability / CPU temp issues
 

Udgnim

Diamond Member
Apr 16, 2008
if you're building PCs for someone, then by all means use AMD's advertised boost for stating build specs

we're just saying the actual max boost is potentially higher than AMD's advertised boost.

AMD having a lower advertised boost than the potential actual max boost does 2 things for AMD. The lower advertised boost ensures that target is either always hit or exceeded, and the higher potential max boost allows for better benchmark #s from review sites.
 

Justinus

Platinum Member
Oct 10, 2005
Not sure if you've ever built computers for more than yourself, but if you have, you'd know that none of what you said is an acceptable answer to tell customers without losing reputation for looking like you are guessing when the MANUFACTURER themselves don't list those clocks ANYWHERE on their site...

It's a bad look, professionally speaking. When asked WHY something runs outside of spec, responding with "Oh it's okay, it's supposed to do that. AMD sneaks you extra clocks under the table, but it's well known..." BS, that answer loses customers and makes a tech look dumb or worse, like they're making crap up.

Professionally, pointing to the manufacturer and saying "They list this, so it runs like that exactly" is a major part of tech supporting builds. CLEARLY many of you don't have this burden on your plate or you'd be more concerned about actual FACTUAL published numbers. Saying "Oh AMD suffered from this, so they sneak you that, now..." BULLCRAP. That is not an explanation. That is a cop out.

So yeah, I'm sticking to "motherboard cheating" for this explanation because until I can show something from AMD saying that is normal behavior, that is professional hell of just throwing guesses with no sources out there. NOT how I do business...

FURTHERMORE, the excuse somebody else gave of "Who's gonna complain about it running faster?" was CLEARLY made out of professional ignorance because there ARE customers who will complain when ANYTHING is out of spec. I'm very GLAD most of you have simple customer demands from you, but SOME of us work in the real world where people DO make note of these things and complain about them... and outright return hardware when it's not EXACTLY what they expected.

It's real nice when all you have to worry about is yourself or a few friends around you. SOME of us do tech work outside of our own home, and deal with the general public. Conspiracy theories kill jobs, and saying AMD is sneaking clocks under the table to us sounds like a conspiracy theory. I know it's true, YOU know it's true... but most others are not gonna buy it. And I will lose jobs by sticking to that explanation. Screw that.

Edit: In the PROFESSIONAL world, there are customers who will lose their minds over 10Mhz off, plus or minus, they don't care, it's out of spec and MUST be killing their hardware, in their minds. They know it all. And I have to do business with them, I don't get to pick and choose my customers. SOOO nice that many of you can, some of us live in the REAL world.

Edit 2: This will inevitably lead to me having to listen to customers telling me why I should have built them an Intel instead.... and they're right! At least they get the advertised clock speeds exactly. And they WILL say that, regardless of anything else, I am the one who gets to face them and lose that job because I tried to tell them AMD sneaks extra clocks to them under the table...

I would think there would be more than me here with 30 years professional experience dealing with the general computing public and the insanity they bring to your desk daily. WHY add to it? Why make your job messier? It's already a hodgepodge soup of standards as it is, and now we have to tell customers there ARE no set clock standards on AMD? Yikes. Sounds fishy to me. Telling somebody the standards listed on the site are more like guidelines than rules is NOT how one sells computers...
These finicky customers had better never use a modern GPU, they would have a convulsive fit over modern GPU boost technologies going far above documented and advertised clockspeeds and how variable it is from sample to sample.
 

turtile

Senior member
Aug 19, 2014
Here's a scenario that DOES happen... a lot, for professional builders.

Customer A says "I want a pure stable PC, there can be nothing overclocking related involved because -I- know that overclocking kills hardware and nothing you say will convince me otherwise... now advise me in those parameters."

Now, say out loud "AMD doesn't overclock their processors, they sneak you extra clocks because they got in trouble last generation and now sneak you free Mhz!"

Say that out loud and think about how the customer who said the first part is gonna react. Honestly.

Edit: Yanno what, nevermind. This is a waste of breath, there are no fellow pro builders here, or if so, they don't care what business they lose for "guessing" specs. I'm clearly out of place.
Why don't you give your customers what they want? Do they want a stable clock? Lock them at a 4.7 GHz max. Do you offer them ECC memory?
 

kognak

Junior Member
May 2, 2021
When tuning my 5950X I've found it runs 4.4 GHz static at 1.1375 V core, DDR4-3600 C15, ~1.1 V SOC, 100% high-performance plan with C-states enabled:
at idle it uses 30 W of package power, while showing the CPU spends >98% in the C6 power state. WTH really.
It's a downside of the chiplet design. An external SoC die and high-speed/low-latency links on the substrate aren't the most power-efficient way to build a CPU. Threadrippers have more chiplets and a bigger SoC die, so they use even more power when idling. A monolithic design is much superior in this regard, which is why all mobile chips are monolithic. The Zen cores themselves, however, are very power efficient; at idle they can switch to sleep mode and consume practically zero watts. Tweaking the cores has no real effect on idle consumption either, nor do clock speeds. But tuning memory and fabric clock speeds does. If anyone wants very low idle consumption and AMD, it needs to be an APU. They do under 5 W package power.
 

Elfear

Diamond Member
May 30, 2004
Has anyone said what will be the effective max power draw for any of the announced Vermeer SKUs? Matisse topped out at 142W.
The Anandtech article had a blurb on that. "All four processors have the same official memory support at DDR4-3200, and the 105 W TDPs will offer a turbo power of 142 W, which is the same as the current generation Ryzen processors."
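That 105 W → 142 W relationship is the commonly reported stock scaling of AMD's package power tracking limit, roughly PPT ≈ 1.35 × TDP. Treat the factor as observed convention rather than an official formula:

```python
def stock_ppt_w(tdp_w: float) -> int:
    """Approximate stock package power limit from TDP (PPT ~= 1.35 x TDP)."""
    return round(tdp_w * 1.35)

print(stock_ppt_w(105))  # 142 -- matches the 142 W figure for 105 W parts
print(stock_ppt_w(65))   # 88  -- for 65 W parts such as the 5600X
```

This is also why eco mode (dropping to a 65 W TDP profile) cuts sustained package power so sharply.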
 

bfun_x1

Senior member
May 29, 2015
I currently have a 2600X and an MSI B450 Tomahawk motherboard. I'm planning on upgrading to the 5600X and I have two options. Option 1: wait 5 to 6 months and upgrade the BIOS on my B450. This is the cheapest option at $300, but I don't like the idea of the one-direction beta BIOS update. Option 2: buy a B550 or X570 motherboard. This option would cost me around $500, but I have no idea which type of board to get. I'm a little surprised at how many B550s cost more than an X570. I'm looking at boards in the $200 range, something like the ASUS ROG Strix B550-F Gaming (WiFi 6) or the GIGABYTE X570 AORUS ELITE WIFI.
 

Noid

Platinum Member
Sep 20, 2000
It shouldn't be that bad, but who knows. I tried it first on my X99 rig, and that went smoothly enough. I am pretty sure it took less than an hour from 1909 to 20H2.
People running the May 2020 Update will have a faster overall update experience because the update will install like a monthly update, just as it was for devices moving to Windows 10, version 1909 from version 1903.

Issues

I'm still waiting for it to "appear" as a monthly update.
Some have already had this upgrade
(with issues), and they pulled it out of public access.
 

DisEnchantment

Senior member
Mar 3, 2017
I'm somewhat surprised so many people are surprised that the 5800X is hot. It's a 105W TDP part (the max possible at stock on AM4) with a single chiplet. So unlike the 5900X and 5950X, the TDP is not split between two chiplets, and unlike the 5600X, the one chiplet is not limited to a 65W TDP. With that added headroom the 5800X is bound to run at the border of the thermal envelope. If people expect/want it to run like the other three chips, I'd suggest lowering PPT or using eco mode.
I want to add that the heat is an inherent problem of the 7nm process. For example, Zen 3 in eco mode (65W TDP) will still boost to 5+ GHz and cause temperature spikes. Eco mode targets PPT, whereas per-core temperature can still spike due to a single core boosting to oblivion.
On the bright side, you don't lose ST performance in eco mode, and if the fan noise is bothering you, you can alter the fan speed profile, because according to AMD, Zen 3 should be perfectly fine working at around 90 degrees Celsius daily.
The 5800X is particularly boosty in ST, but since it is not as highly binned as the 5950X it uses more voltage to reach the higher speeds and therefore causes higher temperature spikes.
During all-core loads, surprise, the temperature actually does not spike. This is my observation so far.
But Zen 3 has a much better temperature curve than Zen 2, for sure.
 

Kenmitch

Diamond Member
Oct 10, 1999
Damn! Your RAM is clocking well! I guess you are running 2 sticks?
I made my set of Samsung B-Die stable at 3840MHz CL18-17-16 and the rest of subtimings tweaked, but my best latency was 56.8ns and 56GB/s Read.
Yep! and yes I'm only running 2 sticks. I'm thinking my kit was produced during the crazy high DDR4 pricing and wasn't really binned well, if at all....I wouldn't min

Can your kit do 3600 CL14? With tight timings you could shave off a couple of ns and not lose any bandwidth....See my 3600 CL14 snip a couple posts above for reference.
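For a sense of why 3600 CL14 helps: first-word CAS latency in nanoseconds is CL divided by the memory clock (half the MT/s rate). A quick sketch (this is only one component of the AIDA-style round-trip latency figures quoted above):

```python
def first_word_ns(mt_per_s: int, cl: int) -> float:
    """CAS latency in ns: CL cycles at the real DDR clock (MT/s / 2)."""
    return cl * 2000 / mt_per_s

print(round(first_word_ns(3600, 14), 2))  # 7.78 ns for 3600 CL14
print(first_word_ns(3840, 18))            # 9.375 ns for 3840 CL18
```

So 3600 CL14 reaches the first word about 1.6 ns sooner than 3840 CL18, even at a lower transfer rate.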

What kit do you have? The dual rank 32GB one?
 

DrMrLordX

Lifer
Apr 27, 2000
@amrnuke

I had once feared that the 3900X would offer up serious VRM problems for cheaper boards, but that threat never really materialized. You had to hit pretty low on the AM4 motherboard list to find VRMs that would really overheat with a chip like that in the socket. It was possible, but rare.

The 5900X isn't going to be any worse.

edit: Looks like the B450M Pro4 has a weaker VRM than the B450 Pro4. As long as it's cooled you should be fine.
 

Kenmitch

Diamond Member
Oct 10, 1999
@amrnuke

I had once feared that the 3900X would offer up serious VRM problems for cheaper boards, but that threat never really materialized. You had to hit pretty low on the AM4 motherboard list to find VRMs that would really overheat with a chip like that in the socket. It was possible, but rare.

The 5900X isn't going to be any worse.
It really depends on one's workflow, case airflow, and the ambient temp of the room.

I guess a person could reference some of the Hardware Unboxed videos where he does somewhat push the higher-core-count offerings. I'd imagine the 5xxx series would have similar results.

A couple of snips of B550's and X570's for quick reference. The higher end boards usually have the best temps.

B550 VRM temps.JPG

X570 VRM temps.JPG

I view the above examples as most likely not even the worst-case scenarios.
 

Tup3x

Senior member
Dec 31, 2016
By the way, it turns out that the other memory kit is likely defective. I don't see any other reason why this kit boots just fine but the other one doesn't. I did the previous single and dual stick test with that kit... Then I swapped the positions and tried first with single stick from another kit. Then I added another stick from that kit.
 

Det0x

Senior member
Sep 11, 2014
Done tweaking my final 24/7 PBO + curve optimizer settings.
Had to hand-tune the curve optimizer setting for each core on my CPU to make it 100% stable and boost like it should, and it took pretty much the whole day, but it was worth it in the end :)

The screenshot below shows the curve optimizer offset for each of the corresponding cores in Ryzen Master.
HWinfo's T0 effective clock shows what each core can sustain in Cinebench R23 single-thread with these settings. (forced the thread around by affinity)

finale 24-7 PBO settings.png

Also did a 1000% memtester run to make sure my memory settings are rock solid.
memtest 1000 24-7 settings.png

Pretty much done tweaking this platform now.. Just waiting for my RTX 3090 SUPRIM X to arrive next week so I can start playing around in 3DMark :)

*edit*

Some benchmarks with my 24/7 settings:
24-7 settings.png
Note: Passmark runs really hot; I had an 82°C max temp after Cinebench R20 and R23 multicore, with Passmark upping it to 90°C (!)
 

Dave3000

Golden Member
Jan 10, 2011
People on other forums complain about WHEA errors.. Windows Event Viewer -> Windows Logs -> System

I get them as well when I overclock my memory, but the system is stable.. tested 5 hours+ in RAM Test.. so I continue to run it overclocked..

I think it's a BIOS problem.. but some people are already talking about RMA..

What do you guys think? Did we get bad CPUs? Or is it just a BIOS thing?
One thing to keep in mind is that Zen 2 and Zen 3 only officially support up to DDR4-3200 memory, despite motherboard manufacturers listing support for faster RAM, for example DDR4-3600 (OC), DDR4-3800 (OC), DDR4-4000 (OC), and so on. "(OC)" appears in those motherboard specs because those speeds are not officially supported by the CPUs. If you run RAM faster than DDR4-3200 on these CPUs, don't expect it to be stable, as that's technically an overclock. I'm not surprised that some people here are getting WHEA errors when running faster memory than what their CPU's memory controller officially supports. I would not call a CPU defective if it can't reliably run memory faster than what it officially supports, and I would recommend running the memory within the CPU's official specs and seeing if those WHEA errors still happen before blaming a bad CPU.
 

Kenmitch

Diamond Member
Oct 10, 1999
I performed an exorcism on my PC the other day and removed the evil Nvidia card. I had to wait for my memory sticks to arrive before I would post a photo. I'm still working on a color match for the memory and the RX on the GPU.

20201215_125230.jpg

Parts list:
AMD Ryzen 9 5900x
MSI MPG B550 GAMING CARBON WIFI
G.Skill Trident Z Neo 32GB (4x8)
XFX-Speedster MERC319 AMD Radeon RX6800
Samsung 970 EVO Plus SSD 500GB NVMe boot drive
Samsung 860 EVO 1TB storage drive
Lian Li PC-011 Dynamic
Corsair iCUE H150i Elite Capellix AIO
Corsair QL Series, Ql120 RGB,120m(3-pack)
SEASONIC FOCUS+ 850w

Switching from 2 sticks of b-die to 4 sticks was/is an adventure. I never knew how easy I had it with 2 sticks until I tried 4. I won't go back, but the 2 stick easy tweaking will be missed.

4 sticks defaults to gear down mode. Trying to run with CR1 and GDM disabled is unstable. So far it's looking like CR2 is the lesser of two evils when it comes to affecting latency. I'm still playing around with the memory, but I'll settle for 3600 CL14 if I have to.

3600_CL14_32GBa.png
 

bigboxes

Lifer
Apr 6, 2002
No troubles with USB after flashing the new BIOS. No disconnects and super fast transfer speeds. No luck on the sharing on the network yet. I was able to browse my file server through Asus AiCloud, but not through the new rig. I then did a total reset on my router and re-entered my settings. At some point, I saw the server name in the network locations. I was then able to map the server drives to my main rig.


 
