CHADBOGA
It would be useful for someone to do a review on Zen 2 with different memory speeds.
https://www.techpowerup.com/review/amd-zen-2-memory-performance-scaling-benchmark/
You are actually missing Moonbogg's point.
The point of testing at low resolutions and settings such as 720p, is to isolate how the CPU performs in games.
Yes, for actual gaming it seems irrelevant, but hear me out: it matters. A single number like average FPS does not show all the data. Benchmarks are limited. What you may feel or experience in the game simply cannot be represented by one set of data.
What you are trying to do is eliminate the GPU as a variable in the situations where the CPU might be the bottleneck.
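The "average hides the data" point can be shown in a few lines of Python. The frame times below are made-up illustrative numbers, not measurements from any review:

```python
# Why average FPS can hide stutter: two runs with identical averages
# but a very different feel.
def fps_stats(frame_times_ms):
    """Return (average FPS, 1% low FPS) from a list of frame times in ms."""
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    worst = sorted(frame_times_ms, reverse=True)[:max(1, len(frame_times_ms) // 100)]
    low_fps = 1000 * len(worst) / sum(worst)
    return avg_fps, low_fps

smooth = [10.0] * 100             # a steady 100 FPS
stutter = [9.0] * 99 + [109.0]    # mostly faster, but with one big hitch
# Both runs average exactly 100 FPS, yet the 1% low of the second is ~9 FPS.
```

Both lists sum to the same total time, so the averages are identical; only the 1% low exposes the hitch you would actually feel.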
The King in a world of Republics.

By the time it's a bottleneck I can feel, I think the $400 or so saved by going with a 3600 platform vs a 9900K platform will probably pay for an entire new platform that eliminates that bottleneck.
Yes, there is merit in exploring how far from the limit we are... and when we do those tests, we notice that we are FAR from any of these CPUs being a realistic bottleneck to perceptible gaming performance. With that knowledge under our belt, we can evaluate the differences to mean diddly squat.
It was always a questionable decision to buy a 9900K specifically for gaming-only workloads, but some could justify it with some productivity work on the side... but now? The tipping point has been reached. The 9900K is king, but from now on it'll be ruling over that one guy with the cornerest of corner cases where it actually matters.
Are there any memory-latency-sensitive benchmarks to showcase Zen 2's worst-case scenario, or does anyone remember any equivalent Nehalem-era comparisons (Bloomfield vs Arrandale)? I'm rather amazed at the results, since I wasn't expecting Zen 2 to improve gaming performance as much as it did, considering that the memory controller is now off-die. The massive L3 cache must be doing rather well at hiding the extra memory latency, because Zen 2 seems to perform consistently faster than Zen+ in absolutely everything.
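For a DIY latency-sensitive worst case, the classic technique is a pointer chase: dependent loads through a shuffled cycle defeat the hardware prefetchers, so each hop pays close to full memory latency once the chain outgrows the L3. A minimal Python sketch of the idea (in pure Python the interpreter overhead swamps the actual memory latency, so treat this as an illustration of the access pattern, not a real benchmark; `ns_per_hop` is a name made up here):

```python
import random
import time

def ns_per_hop(n, seed=1):
    """Time a pointer chase over a random cycle of n elements.
    Each load depends on the previous one, so prefetchers can't help;
    once the cycle is much larger than the L3, every hop pays the
    full memory latency (in a compiled language, at least)."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    nxt = [0] * n
    for i in range(n - 1):
        nxt[order[i]] = order[i + 1]
    nxt[order[-1]] = order[0]          # close the cycle
    pos, start = 0, time.perf_counter_ns()
    for _ in range(n):
        pos = nxt[pos]
    return (time.perf_counter_ns() - start) / n
```

The same pattern written in C is roughly what tools like AIDA64's latency test and lmbench's `lat_mem_rd` do.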
Minimal difference. Even single channel hangs in there in most games... which is a win for those badly configured systems OEMs make sometimes.
Certainly not enough to make me worry about selling my 3200 ram and buying something faster.
https://www.techpowerup.com/review/amd-zen-2-memory-performance-scaling-benchmark/
Edit: beaten to it.......
Yep, Linus' video has some BFV streaming results.
Hopefully in Zen 3, rather than add 20% more cores or whatever the process will allow, they double the cache again.

Latency does seem to be Intel's last stronghold. Whilst the doubled L3 is clearly having an impact in some games, others clearly have working sets too large to fit within it... and lots of them. If AMD can crack this nut then I suspect they will surge past Intel in all use cases.
I think it highlights Intel's process issues; with a smaller node they could also increase their own L3 cache.

Hopefully in Zen 3, rather than add 20% more cores or whatever the process will allow, they double the cache again.
I don't think Zen4 on 5nm is confirmed.
Not read the full thread, but over here in Europe it's the same as always. Looks like a paper launch (1-2 weeks) and prices are almost $100 higher; the 3900X is listed at $579, for example, making it more expensive than a 9900K. The whole lineup is $30-80 above US MSRP. AMD, WTF? Can you explain? It should be cheaper here, since there are no tariffs. I feel like they are just using tariffs as an excuse to raise prices everywhere else as well. (And the same goes for the Navi GPUs...)
XFR+PBO should allow for good effective overclocks, but current BIOS versions cannot completely remove the 142W PPT limit (Package Power Tracking; think of it as the equivalent of Intel's PL2 power limit).
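The 142W figure isn't arbitrary: on AM4 the commonly cited stock ratio is PPT ≈ 1.35 × TDP, which matches both known pairs (105W TDP → 142W PPT, 65W TDP → 88W PPT). A tiny sketch, assuming that ratio holds (`stock_ppt` is a made-up name, not an AMD API):

```python
def stock_ppt(tdp_watts):
    """AM4 stock package power limit. Assumption: the commonly cited
    PPT = 1.35 x TDP ratio, which reproduces the known
    105 W -> 142 W and 65 W -> 88 W pairs."""
    return round(tdp_watts * 1.35)
```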
Hopefully in Zen 3, rather than add 20% more cores or whatever the process will allow, they double the cache again.
Yeah, it makes no sense to continue with the CCX approach now that they've committed to the chiplet design. CCXs were always going to be a compromise in terms of cache hierarchy, as their original purpose was a modular design that could allow for an iGPU, à la the Ryzen APUs. Now that chiplets are here, AMD should instead focus on rearranging the CCD so that the L3 is shared among all cores. The current split-cache arrangement requires accessing the other half through the IF, which burns additional power and isn't efficient in terms of bandwidth utilization.

I expect that they'll rework the cache hierarchy in general. It's a bit surprising that they didn't do very much there with Zen 2, and this seems to be the obvious remaining low-hanging fruit for Zen 3. It wouldn't surprise me to see on-package HBM or something similar. As it stands, AMD is spending an embarrassing amount of transistors on cache to compensate for a questionable hierarchy. Rome has 256 MB of L3, but in most situations it will behave like it only has 16 MB. Yes, there are some advantages, including latency and bandwidth, but not enough to justify that degree of wastage.
If I had to guess, I'd peg Zen 3 at <= the L3 per core as Zen 2, but effectively shared between all cores on a die rather than constrained to a 4-core CCX, plus a large L4 that is either on the IO die or (more likely) on its own chiplet.
Lastly, given expected improvements to 7nm yields I can see AMD moving to a larger, 16-core chiplet.
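For reference, the Rome numbers quoted above work out as follows. The "16 MB visible" figure rests on the post's assumption that a thread only hits the L3 slice of its own 4-core CCX, not on official AMD documentation:

```python
# Rome package: 8 compute dies (CCDs), each with two 4-core CCXs
# carrying 16 MB of L3 apiece.
ccds, ccx_per_ccd, l3_per_ccx_mb = 8, 2, 16
total_l3_mb = ccds * ccx_per_ccd * l3_per_ccx_mb   # 256 MB on the package
thread_visible_mb = l3_per_ccx_mb                  # 16 MB, if a thread only hits its own CCX
```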
According to The Stilt, all power numbers from Asus boards are wonky, as the Asus BIOS makes power management believe the CPU is drawing less power.

Some of those reviews do feature average power draw going past 142W for the 3900X. Power numbers have not been consistent.
Geekbench Multi Threaded scores:
View attachment 8141
TR 2950X is on par with 9900K, useless benchmark really
TR2950X wipes the floor with 9900K in everything seriously multithreaded
It hasn't been the high point of their recent architectures, but the original Athlon 64's DDR1 memory controller is still, all these years later, probably the most efficient memory controller ever to appear on the x86 platform. Their subsequent IMCs have generally been okay from a bandwidth standpoint, but they've struggled latency-wise against Intel.

Well, the AMD IMC has not exactly been the high point of past AMD architectures. They (significantly) improved the controller at the same time they moved it off-die. Pretty sure there was a good deal of headroom to be had, more than enough to make up for the latency penalty of moving it.
Well, that is nice, but it doesn't tell me why I should buy more than an 8-core CPU.

The Ryzen 7 2700X multi-threaded score increased 25% overnight because of the new Windows 1903 version(?)
http://browser.geekbench.com/v4/cpu/search?utf8=✓&q=2700x+windows
Geekbench 4.0 runs a series of very short bursts with pauses between them. The interaction with Windows to ramp the clocks up and down fast enough seems to be crucial.
(besides the usual issue of threads being moved from core-to-core)
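The core-to-core movement mentioned above can at least be ruled out when benchmarking by pinning the process to one core. A minimal sketch using the Linux-only `os.sched_setaffinity` call (`pin_to_core` is a name made up here, not part of any benchmark tool):

```python
import os

def pin_to_core(core_id):
    """Restrict the current process (pid 0 = self) to one logical core,
    so the scheduler can't bounce the benchmark thread between cores.
    Linux-only: os.sched_setaffinity does not exist on Windows or macOS;
    on Windows the rough equivalent is setting the process affinity mask."""
    os.sched_setaffinity(0, {core_id})
```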
Linux is still far superior to Windows:
LINUX: http://browser.geekbench.com/v4/cpu/search?utf8=✓&q=ryzen+9+3900x+linux
WINDOWS: http://browser.geekbench.com/v4/cpu/search?utf8=✓&q=ryzen+9+3900x+windows
Single-threaded score for the Ryzen 9 3900X goes up from 5630 to 6300 (12%)
Multi-threaded score for the Ryzen 9 3900X goes up from 44900 to 55000 (23%)
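Those gains check out arithmetically; the multi-threaded figure works out to about 22.5%, which rounds up to the quoted 23%:

```python
def pct_gain(old, new):
    """Percentage improvement going from old to new."""
    return 100 * (new - old) / old

single = pct_gain(5630, 6300)     # ~11.9%
multi = pct_gain(44900, 55000)    # ~22.5%
```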
That's odd. Greg at Science Studio tried a 3900X in an ASRock B350 motherboard. He didn't do a whole lot of testing, but apparently it worked. He's gonna try an A320 board next.

I have to wonder, especially with the most recent updates, where ASRock suggests NOT to update your system:
https://www.asrock.com/mb/Intel/Z370 Taichi/index.asp#BIOS
They never used this language before to describe previous updates. Maybe I should test performance before and after...