
Discussion Ryzen 3000 series benchmark thread ** Open **

Overclocking overrides those limits.
The limits apply to stock operation, i.e. XFR2 and PB2.
The 3600X will certainly outperform the 3600 at stock, just like the 3800X will outperform the 3700X at stock.
If you are manually overclocking, or using PBO, then there should functionally be no difference between two CPUs with identical core counts*.

*Caveat: 3600X has XFR2 whereas 3600 does not, so perhaps there's a tangible benefit to going 3600X, but this will require more testing...and would not be relevant with a manual overclock anyway.
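To make the stock-limit idea concrete, here's a rough Python sketch of how boost headroom is bounded by whichever of the three package limits is hit first. This is not AMD's actual boost algorithm, and all numbers are illustrative, not real silicon telemetry:

```python
def boost_headroom(ppt_w, tdc_a, edc_a, limits):
    """Return (tightest limit, fraction of headroom left under it)."""
    usage = {
        "PPT": ppt_w / limits["PPT"],   # package power tracking (watts)
        "TDC": tdc_a / limits["TDC"],   # sustained current (amps)
        "EDC": edc_a / limits["EDC"],   # peak/transient current (amps)
    }
    tightest = max(usage, key=usage.get)
    return tightest, 1.0 - usage[tightest]

# Approximate 65 W-class limits often quoted for the 3600 (illustrative):
stock = {"PPT": 88, "TDC": 60, "EDC": 90}
print(boost_headroom(80, 45, 85, stock))  # EDC is the binding limit here
```

Raising PPT/TDC/EDC via PBO effectively swaps `stock` for higher motherboard limits, which is part of why the X/non-X distinction matters less once PBO is on.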
Thanks for that info!

As I will not be overclocking in the traditional sense, I have purchased the 3600X.

I will be overclocking by using the PPT/TDC/EDC values and CPU offset underclocking.

Of course, memory overclocking will be key, and I need an answer to a burning question that I've had since I purchased my MSI X370 Titanium.

The burning question is whether the Titanium is a turkey of a flagship motherboard with regard to RAM overclocking!

Hopefully by the time the CPU arrives (few days) MSI will have released some form of BIOS for the Titanium.

Fingers crossed

😛
 
Latest I heard from MSI was that their BIOSes would be ready "by the end of the month" for what it is worth.
My understanding is that RAM overclocking has more to do with the CPU's IMC than the mobo itself. The IMC on Ryzen 3000's I/O die is much improved over earlier generations.
 
I think Hardware Unboxed said in their 3600 review, which they had to buy with their Patreon money, that AMD might sample them a 3600 later on. AMD's review kit apparently included a 3700X and a 3900X, and a couple of specific X570 mobos. (Loaned?)

Oh okay. That makes sense.

The 3200G and 3400G are being discriminated against for being Zen+, but they are also considerable upgrades...

Huh. If I had a ton of free time (sadly, I do not), one of those chips on a cheap board would make me want to revisit AMD's HSA software stack under Linux. And Renoir . . .
 
Is the IF able to do 3733MHz C17? I read someone say that for stability it's best to keep it around 3600MHz, because stability becomes iffy above that speed. I'm actually moving on to choosing my RAM now, and I'm a little curious.

Thanks
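For anyone else weighing RAM choices against that question, the commonly described Ryzen 3000 behaviour is that the Infinity Fabric clock (FCLK) runs 1:1 with the memory clock up to roughly DDR4-3733, and decouples to a 2:1 mode above that, which adds latency. A small sketch of that relationship (the 1867 MHz cap is the oft-quoted sweet-spot figure, not a hard spec, and golden-sample IMCs can do better):

```python
def fabric_mode(ram_mts, fclk_cap=1867):
    """Return (fclk_mhz, mode) for a given DDR4 transfer rate in MT/s."""
    mclk = ram_mts // 2            # DDR: memory clock is half the transfer rate
    if mclk <= fclk_cap:
        return mclk, "1:1 (coupled)"
    return mclk // 2, "2:1 (decoupled, latency penalty)"

for speed in (3600, 3733, 4000):
    print(speed, fabric_mode(speed))
```

This is why DDR4-3600 is so often recommended: it stays comfortably in the coupled mode, while DDR4-4000 typically forces the latency-penalized 2:1 ratio.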
 
That's correct, though I have been through three CPUs (2x 1600X, 1x 2600X) on this mobo and the memory ceiling hasn't budged!
 
A weird finding.

For some reason, the Ryzen 3800X is slower than the 3700X according to Geekbench 4, in both ST and MT modes:

https://browser.geekbench.com/v4/cpu/search?dir=desc&q=3800x&sort=score
https://browser.geekbench.com/v4/cpu/search?dir=desc&q=3700x&sort=score

though in theory it should be a better/faster CPU.

Wild guess, but it's likely due to some software or BIOS patch that needs to be applied. Since the 3700X was the one sent out to reviewers, I'd guess it got the most attention regarding BIOS and software.

Just a guess...
 
P.S. I don't get why people say Blender or Cinebench R20 are artificial benchmarks. They are real applications used in real-world workflows and they are very representative of real-world performance in those workflows.

It's basically Intel that says this *now*

Their marketing strategy changed about half a year ago because they saw the clobbering coming from a mile away.
 
There are Windows scores below and 3800X is slower.

3800X should score higher, there's no doubt about that. It just shows that the platform is in a quite shaky/raw state at the moment and by buying a better/more expensive CPU you won't necessarily get better results.
 
I don’t think there is going to be a significant difference between the two given the overclocking results so far (seems like a hard limit of 4400 MHz all-core except for golden samples). There's just not a lot of room to boost, the 7nm process is, sadly, limiting atm.
 
No, it's because of Zen 2's humongous cache. If a benchmark scene fits completely inside the cache, the result is obviously going to be very different from a scene that does not fit. It's somewhat like how they suddenly got so fast in CS:GO: when the workload fits in the cache it gets a huge boost, which is still real and will impact performance, but it's far from going to happen all the time.
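The cache-residency effect described here is easy to demonstrate in miniature. Here's a toy Python sketch that pointer-chases through a small buffer (cache-resident) versus a large one (spills toward RAM). Sizes and step counts are arbitrary, and in a compiled language the gap would be far larger, since Python's interpreter overhead mutes it here:

```python
import random
import time

def make_chain(n):
    """Build a buffer whose values form one big random cycle of indices."""
    order = list(range(n))
    random.shuffle(order)
    buf = [0] * n
    for a, b in zip(order, order[1:] + order[:1]):
        buf[a] = b
    return buf

def walk(buf, steps=300_000):
    """Pointer-chase through the buffer; random hops defeat prefetching."""
    idx, acc = 0, 0
    for _ in range(steps):
        acc += buf[idx]
        idx = buf[idx]
    return acc

for n in (1 << 10, 1 << 20):    # small vs large working set
    buf = make_chain(n)
    t0 = time.perf_counter()
    walk(buf)
    print(f"{n:>8} elements: {time.perf_counter() - t0:.3f}s")
```

The same number of steps is executed in both cases; any timing difference comes purely from where the working set lives in the memory hierarchy, which is exactly the effect a cache-resident benchmark scene benefits from.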
 
You just can't admit that AMD has a good product, excuses, excuses....You need to wake up and smell the roses, or in this case, the Ryzens.
 

I don't remember the year when it was decided that cache size wasn't part of a CPU's designed architecture and therefore somehow counted as "fake performance".

Of course, there's that whole intentional design of allowing race conditions that exploit privilege checking, or however that works, to create actual fake performance, for generations, at the expense of security.

But apparently we shouldn't mention that.
 
https://www.computerbase.de/2019-07/asus-mainboard-x470-b450-pcie-4.0/
 
I think we should all agree Ryzen 3000 series need to be tested with their L3 cache disabled in order to reflect real world workloads.

Yo momma so fat ain't got cache for that!
lol
theELF, please, I've got a 3900X running here, it's pretty good!

but about that cache, it's simply BIG; I wonder how much it cost
 

It's definitely a mixed bag. Due to the chiplet design and subsequent increase in latency, it falls behind in certain areas, but of course overall it's still excellent.

I'd say the 3900X is 10/10 productivity, 8.5/10 gaming, while say a 9900K is 7/10 productivity and 10/10 gaming. For mixed use or productivity, Ryzen is the clear winner. For gaming ONLY, and only with a high refresh rate and a high-end GPU, a 4.7-5.2GHz Coffee Lake part is best. Of course, it's all a matter of luxury anyway; it's not like a 3900X or 9900K would suck at all, they're better than probably 99.9% of PCs on Earth right now lol. But people will argue tooth and nail because of their pet companies. It's baffling.
 
I don't remember the year when it was decided that cache size wasn't part of a CPU's designed architecture and therefore somehow counted as "fake performance".
2006. You wouldn't believe the number of people who claimed that Conroe's huge-for-the-time L2 cache was somehow cheating. 😛
 
Finally, I was waiting for that. To me the X570 chipset is pretty superfluous; all I want is PCIe 4 lanes and (especially) a PCIe 4 M.2 slot directly from the CPU, and X570 shouldn't be necessary for that. Now that we keep seeing plenty of X470, B450, X370 and B350 boards working mostly fine with Ryzen 3000 CPUs, it shouldn't come as a surprise that some manufacturers are starting to unlock PCIe 4 where possible.
 
Guess they needed some time to test the boards that could do it, hence why they backtracked on the compatibility.
 