Official AMD Ryzen Benchmarks, Reviews, Prices, and Discussion


lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
Cinebench needs to constantly access the scene data from every thread. That data doesn't change, so there's no lock contention for the scene data, but that's all that is saved over something like a game. Games usually divide up their tasks such that they have little to no lock contention as well and everything happens in a cadence.
That cadence still needs at least some synchronization every frame to keep from breaking down, unlike Cinebench; that, to my mind, is their key difference.
Not as much as in the past. Fast syscall, benaphores, user-mode thread management, etc... have greatly reduced the need for the syscall instruction (CPU level instruction / kernel mode / ring 0 / whatever).
They did, but at least GPU drivers still need that one instruction in droves, don't they, and I do not trust game developers to be mindful of the alternatives. Besides, syscall overhead ties in neatly with the idea of a high-latency IMC that the present testing implies. Either way, I'll wait for your testing; that is one scenario where a pure synthetic would be useful.
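(For readers unfamiliar with the term above: a benaphore guards a kernel semaphore with an atomic counter so the uncontended path never leaves user mode. A minimal C++20 sketch of the idea, not code from anyone in this thread:)

// Minimal benaphore sketch (assumes C++20 for std::counting_semaphore).
// The atomic counter keeps the uncontended lock/unlock path entirely in
// user mode; the kernel-backed semaphore is only touched when two
// threads actually collide.
#include <atomic>
#include <semaphore>

class Benaphore {
    std::atomic<int> count{0};
    std::counting_semaphore<> sem{0};   // starts with no permits
public:
    void lock() {
        // Uncontended: count goes 0 -> 1 and we never enter the kernel.
        if (count.fetch_add(1) > 0)
            sem.acquire();              // contended: block on the semaphore
    }
    void unlock() {
        // Uncontended: count goes 1 -> 0, again purely in user mode.
        if (count.fetch_sub(1) > 1)
            sem.release();              // wake one waiter
    }
};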
 

HutchinsonJC

Senior member
Apr 15, 2007
465
202
126
Someone buying a $300+ CPU will NOT be running less than 1080p - simple as that.

You know, I keep seeing this said, and while I'm not the exact case you describe, I've been seeing the same basic argument over and over in this thread (nobody games at 1080p on a GTX 1080). And yet here I am, about as close to that as you can get without actually being at 1080p.

GTX 1080 STRIX, 3960X (OC'd to 4.4-4.6GHz depending on what I want at the time), Rampage IV, 16GB of RAM

And a Samsung 1920x1200 monitor with almost zero inclination to upgrade the monitor at this time.

All of it, except the GPU, was picked out and bought by me in 2012. So the GPU got upgraded and the monitor didn't, and likely won't for a while. It happens.
 

cytg111

Lifer
Mar 17, 2008
23,174
12,837
136
There was an indication that minimum framerates were better at some point... was that ever proven or retested?
 

Head1985

Golden Member
Jul 8, 2014
1,864
686
136
Yeah, and that's where the stupid engineers at Intel made a mistake and gave BW-E 4 memory channels. What a blunder!

J/k
With Skylake, try Cinebench with 2133MHz RAM and then with 4000MHz RAM.
Then try Witcher 3 with 2133MHz RAM and then with 4000MHz RAM.
Cinebench will be about the same.
Witcher will run 30-40% faster.
 

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
There was an indication that minimum framerates were better at some point... was that ever proven or retested?
Absolute minimums would make perfect sense to be better in GPU-limited scenarios, since they are basically random.
I have seen no evidence of 1% lows being better on Ryzen over, say, Kaby Lake at higher resolutions or otherwise, for the majority of games.
Yeah, and that's where the stupid engineers at Intel made a mistake and gave BW-E 4 memory channels. What a blunder!
They did not do it for workstations, I'll tell you that much.
 

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
With Skylake, try Cinebench with 2133MHz RAM and then with 4000MHz RAM.
Then try Witcher 3 with 2133MHz RAM and then with 4000MHz RAM.
Cinebench will be about the same.
Witcher will run 30-40% faster.
OK. Then buy BW-E for gaming, I guess, or am I confused?
It's built for it. Unlike KBL, which needs HBM and 128MB of L4 cache to "really shine".

J/k
 

Head1985

Golden Member
Jul 8, 2014
1,864
686
136
OK. Then buy BW-E for gaming, I guess, or am I confused?
It's built for it. Unlike KBL, which needs HBM and 128MB of L4 cache to "really shine".

J/k
Well, you need around 50MB of L3 cache; then the CPU will not "miss" and won't need to go out to system RAM. I have seen some insane results from Xeon CPUs at 3.3GHz with 50MB of L3 cache that are faster in games than a 6700K at 4.5GHz with 3000MHz RAM.
Broadwell with its L4 is also faster in games than Skylake/Kaby Lake at the same clock, but it clocks very badly, so Skylake with 3000+MHz RAM has comparable performance in games.
 

unseenmorbidity

Golden Member
Nov 27, 2016
1,395
967
96
There was an indication that minimum framerates were better at some point... was that ever proven or retested?
All we really have are anecdotes right now. The actual benchmarks are far too varied to prove anything atm.

The 8-core should be more fluid than the 4-core parts, as physical cores are far superior to logical (SMT) cores.

I suspect that Zen is usually more fluid, but there are dips when data is transferred from one CCX to the other, which likely tank the % lows.
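(One way to put a number on that CCX-to-CCX cost is a core-to-core "ping-pong" test: pin two threads to chosen logical cores and bounce an atomic flag between them. A rough Windows/C++ sketch, with placeholder core numbers, since the core-to-CCX mapping depends on the chip and its SMT layout:)

// Rough core-to-core "ping-pong" latency probe (sketch, Windows API,
// assumes <= 64 logical processors). Pin two threads to chosen cores and
// bounce an atomic flag; the per-round-trip time is far higher when the
// cores sit on different CCXs than when they share one.
#include <windows.h>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<int> turn{0};
constexpr int kRounds = 1'000'000;

void pin_to_core(int core) {
    SetThreadAffinityMask(GetCurrentThread(), DWORD_PTR(1) << core);
}

int main() {
    const int coreA = 0, coreB = 4;   // placeholders: pick same-CCX vs cross-CCX pairs to compare

    std::thread peer([&] {
        pin_to_core(coreB);
        for (int i = 0; i < kRounds; ++i) {
            while (turn.load(std::memory_order_acquire) != 1) {}  // wait for the ping
            turn.store(0, std::memory_order_release);             // send the pong
        }
    });

    pin_to_core(coreA);
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kRounds; ++i) {
        turn.store(1, std::memory_order_release);                 // ping
        while (turn.load(std::memory_order_acquire) != 0) {}      // wait for the pong
    }
    auto t1 = std::chrono::steady_clock::now();
    peer.join();

    double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / kRounds;
    std::printf("avg round trip: %.1f ns\n", ns);
}

(Run it once with both cores on the same CCX and once across CCXs; the difference in round-trip time is the penalty being discussed.)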

 

Trender

Junior Member
Mar 4, 2017
23
1
16
It isn't actually possible to properly set thread affinity to get the behavior you want. You would need to be able to snoop into a process and manually set individual thread affinities.

Windows needs to recognize AMD's SMT implementation and to resist moving threads to the other CCX on context switches - keep them on the same CCX.

That's an easy 10% in games that are impacted by this. A few edge cases will be more, naturally.

Applications that are fully threaded don't have a problem because Windows won't shuffle threads around...

THAT GIVES ME AN IDEA!!

Someone needs to run a heavy, very low-priority process that loads up every core and see if gaming performance improves - that should trick the Windows kernel into not load-leveling, as it won't find a less-used core.
What about this? https://bitsum.com/
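(For anyone who wants to try the "idle-priority load on every core" experiment from the quote above, a rough and untested Windows/C++ sketch is below; whether it actually helps is exactly what is being proposed to test. The Bitsum tool linked above approaches the same problem through affinity and priority rules instead.)

// Sketch of the "idle-priority load on every core" experiment
// (Windows API, untested illustration only, assumes a single processor
// group, i.e. <= 64 logical CPUs).
#include <windows.h>
#include <thread>
#include <vector>

int main() {
    // Lowest priority class: these spinners should only get cycles the
    // game leaves unused.
    SetPriorityClass(GetCurrentProcess(), IDLE_PRIORITY_CLASS);

    unsigned n = std::thread::hardware_concurrency();
    std::vector<std::thread> spinners;
    for (unsigned i = 0; i < n; ++i) {
        spinners.emplace_back([i] {
            // One spinner per logical core, so the scheduler never sees an
            // idle core to migrate a game thread onto.
            SetThreadAffinityMask(GetCurrentThread(), DWORD_PTR(1) << i);
            SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_IDLE);
            volatile unsigned long long x = 0;
            for (;;) x = x + 1;          // burn cycles until the process is killed
        });
    }
    for (auto& t : spinners) t.join();   // never returns
}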
 

zinfamous

No Lifer
Jul 12, 2006
110,568
29,179
146
Probably something to do with the fact that Intel literally dictated the optimizations that are currently in place, because for 12 years now there has been nothing but Intel platforms to optimize for.

...you guys realize I was being sarcastic, right? :D (I think looncraz got it with his response...)
 
  • Like
Reactions: Agent-47

zinfamous

No Lifer
Jul 12, 2006
110,568
29,179
146
You literally just listed every socket and instruction set AMD has created in the past 12 years, just to try to rebut someone's point that Intel has required the industry to change multiple times, when the original poster incorrectly stated "Intel never required "the whole world to change" in order to use their products properly. Good thing we have Intel always looking out for our interests." Please educate yourself on the argument before you waste your time.

Again: I was pretty clearly being sarcastic about that comment: look at who I was responding to. :D
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
That cadence still needs at least some synchronization every frame to keep from breaking down, unlike Cinebench; that, to my mind, is their key difference.

They did, but at least GPU drivers still need that one instruction in droves, don't they, and I do not trust game developers to be mindful of the alternatives. Besides, syscall overhead ties in neatly with the idea of a high-latency IMC that the present testing implies. Either way, I'll wait for your testing; that is one scenario where a pure synthetic would be useful.

Yes, but synchronization has pretty much always occurred with volatile data - forced into main system memory. That's the only way to guarantee atomicity on any MT system. Synchronization also only occurs at intervals tied to the framerate and is driven by the graphics stream... so we're talking about using a microsecond or so for synchronization every 8~20 milliseconds. It's just a drop in the bucket.

By comparison, if your code is thrashing the cache, which is what some games are undoubtedly doing on Ryzen, then you will have repeated 20~100ns data accesses for many operations. That cost will run you a millisecond easily if prefetch fails at a ~50% rate and you're stuck with the random access times Ryzen has.

Cache-aware algorithms need to know that Ryzen only has 8MB available to a core - and that only 4MB or so of that is full-speed due to policy and sharing. Otherwise they will overrun the cache, dirty the lines, and then their data will be repeatedly flushed and fetched.
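(As a generic illustration of that last point, not code from this thread: a cache-blocked loop that makes all of its passes over one L3-sized chunk at a time instead of sweeping the whole working set repeatedly. The 4MB figure is just the estimate given above.)

// Cache-blocking sketch: instead of making several full passes over a
// working set bigger than L3, do every pass over one L3-sized block
// before moving on, so each block is pulled from DRAM only once.
#include <algorithm>
#include <cstddef>
#include <vector>

constexpr std::size_t kBlockBytes = 4u * 1024 * 1024;   // the ~4MB "full speed" share estimated above

void process_blocked(std::vector<float>& data) {
    const std::size_t block = kBlockBytes / sizeof(float);
    for (std::size_t start = 0; start < data.size(); start += block) {
        const std::size_t end = std::min(start + block, data.size());
        // Both passes touch data that is still resident in the cache.
        for (std::size_t i = start; i < end; ++i) data[i] *= 0.5f;
        for (std::size_t i = start; i < end; ++i) data[i] += 1.0f;
    }
}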
 
  • Like
Reactions: Drazick and krumme

looncraz

Senior member
Sep 12, 2011
722
1,651
136
You know, I keep seeing this said, and while I'm not the exact case you describe, I've been seeing the same basic argument over and over in this thread (nobody games at 1080p on a GTX 1080). And yet here I am, about as close to that as you can get without actually being at 1080p.

GTX 1080 STRIX, 3960X (OC'd to 4.4-4.6GHz depending on what I want at the time), Rampage IV, 16GB of RAM

And a Samsung 1920x1200 monitor with almost zero inclination to upgrade the monitor at this time.

All of it, except the GPU, was picked out and bought by me in 2012. So the GPU got upgraded and the monitor didn't, and likely won't for a while. It happens.

Are you in the market for a new CPU?

I only run 1080p myself. I now have a Fury (not that I needed an upgrade from my RX 480 - I bought it for testing Ryzen... probably will end up keeping it).

Also... have you played with DSR? AMD's alternative (VSR) is really nice. Turn off all AA and run 4K for an uptick in image quality with little loss in performance... Sadly, though, that limits my FreeSync range to just 60Hz - and I'd rather have FreeSync than the higher image quality.
 
  • Like
Reactions: Drazick

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
so we're talking about using a microsecond or so for synchronization every 8~20 milliseconds.
Makes sense. The syscall guess is by far the best one, along with the scheduling quirks, imho, but I'll wait for your tests. After all, there has to be a good reason it gets worse the closer a gaming test gets to pure draw-call testing.
 
  • Like
Reactions: looncraz

SunburstLP

Member
Jun 15, 2014
86
20
81
All we really have are anecdotes right now. The actual benchmarks are far too varied to prove anything atm.

And therein lies the gulf between clean-install benchmarks and the user experience. My instinct is that the R7 will be faster the way people actually use their PCs: Discord/VoIP, Chrome/FF, Spotify all churning along in the background while they gleefully dispatch their on-screen foes. I remember how much legwork went into trying to benchmark my first Athlon XP rig. I had a separate HD with that Win install on it. Trimmed to the max, services shut off, no background programs. I beat the crap out of that poor Barton and R300. Fun times. All in the name of 3DMark scores.

I've gained enough HP over the years that I simply don't bother tweaking much of anything other than my personal preferences with Win 10. Multi-threading, much as it's hard to implement at times, has really improved user experiences. It takes challenges to inspire innovation and that's what we're lucky enough to witness happen now. From how CPU/GPUs are designed, to how software is programmed and evaluated, it's all changing. Lucky us.
 
  • Like
Reactions: CatMerc

moonbogg

Lifer
Jan 8, 2011
10,635
3,095
136
I think we know the drill, and I agree with you to a large extent.
Do you game at 120+ Hz, btw?
There are a few new games of relevance that need some work for Zen even at 90fps, but imo it's what is coming in a year or two that is of importance here. Those few games will get a patch.
The dual CCX is here to stay for the next gens of Zen, and with it the limitations. But RAM latency will be improved, and RAM speed too.
The new Windows scheduler is soon here in its basic form, the most fringe new games will be patched, and then we are 1 year into Zen and multithreaded games march on.
So Zen+ will look far nicer on the graphs: 150 vs 100 fps. But frankly I don't think it matters except for the minor 144Hz segment.
In 4 years we are two BF games newer from DICE, and how does a 4c/8t processor perform then? It will be unplayable. Heck, even the next-gen BF is being a stretch for 4c/8t if it's tailored for the next-gen consoles. You can even bring an SKL i7 to its knees today in BF1.

I don't know people's budget for a CPU. But if you play at 144Hz, there is a good argument for buying a 6800 and OCing it if you have the dough.
For a slightly lower cost, a non-X 1700 OC'd seems to me like a 5-year-safe buy if 60Hz/75Hz is used. I think it will outlast even an SB 2500, because of next-gen consoles using the same tech.

I game at 3440x1440 @ 100Hz. With my dual 980 Tis, the monitor gets maxed out pretty often. I've been playing Rise of the Tomb Raider lately, and the FPS is often pegged at 90-100. I saw the Ryzen CPUs dipping into the 80s in that game and I said forget it. No way. So I went with a 6800K, which slams the GPU at 1080p into a 150fps wall. My 3930K gets 125fps in that game's benchmark, and the Ryzen, even OC'd, only gets 104fps, while again the 6800K gets 150 because it maxed out the GPU in that test along with the other Intel CPUs.
I figured, why downgrade to a Ryzen from a 3930K? 6/12 is enough threads for now, and even for a few years I'm sure. It won't choke like a quad, but I do expect 8/16 to pull ahead at some point, and even now in a few games, at certain times, an Intel 8/16 chip will pull ahead of a 6/12. So that time is coming, but not yet. When that point finally arrives, some day, then I'll just upgrade again.