Is there a reason Ryzen struggles in certain games?

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
I was just browsing for the latest Assassin's Creed: Origins benchmarks with the 1.03 patch and came across this:

http://www.pcgamer.com/assassins-creed-origins-performance-guide/
[benchmark chart from the linked article]


Of particular concern is how, on a very scalable game engine (thread-wise), Ryzen still seems to put in disproportionately poor scores - when I say disproportionate, I mean in terms of relative IPC, it shouldn't be that far behind the Intel chips. Is there some part of the Ryzen architecture that holds it back in certain games? Are some games just poorly optimised for Ryzen, or is there a technical flaw / bottleneck in Ryzen's design that makes it struggle in games like AC:O?

Please keep this thread civil, I'm not trying to bash Ryzen here, but rather I'm looking for some technical discussion as to why the gaming performance of Ryzen can be inconsistent, depending on the game engine.

On a side note - Threadripper surprisingly does much better than Ryzen in this game - could the extra threads really make that much difference, or is there another explanation? Higher platform memory bandwidth, perhaps?
 
Last edited:
  • Like
Reactions: JimKiler

tamz_msc

Diamond Member
Jan 5, 2017
3,763
3,586
136
What RAM speeds are PCGamer running? Knowing them, it'll probably be 2666MHz auto timings at best. Games respond best to memory latency, and it is especially important on a NUMA-like topology such as Ryzen's. The difference between 2666MHz auto timings and 3466MHz tuned low-latency timings is up to 30% on an 8-core Ryzen with a 1080 Ti.
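
If anyone wants to see what "memory latency" actually measures, the classic trick is a pointer chase: a chain of dependent loads through a buffer too big for the caches, shuffled so the prefetcher can't guess the next address. Here's a minimal sketch in C (Linux/glibc assumed; the 64 MiB buffer and the LCG shuffle are just my choices, and real tools like AIDA64 are far more careful):

```c
/* Pointer-chasing latency sketch (build: gcc -O2 chase.c).
 * Each load depends on the previous one, so the time per step approximates
 * the DRAM round trip once the buffer dwarfs the caches. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64UL * 1024 * 1024 / sizeof(void *)) /* 64 MiB buffer, far beyond any L3 */

/* Small 64-bit LCG so the shuffle isn't limited by RAND_MAX. */
static size_t lcg(size_t *s) {
    *s = *s * 6364136223846793005ULL + 1442695040888963407ULL;
    return *s >> 16;
}

int main(void) {
    void **buf = malloc(N * sizeof(void *));
    size_t *idx = malloc(N * sizeof(size_t));
    size_t i, seed = 42, steps = 20 * 1000 * 1000;

    /* Fisher-Yates shuffle -> one random cycle the prefetcher can't follow. */
    for (i = 0; i < N; i++) idx[i] = i;
    for (i = N - 1; i > 0; i--) {
        size_t j = lcg(&seed) % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (i = 0; i < N - 1; i++) buf[idx[i]] = &buf[idx[i + 1]];
    buf[idx[N - 1]] = &buf[idx[0]];

    /* Chase the chain and time it; printing p keeps the loop from being optimized away. */
    void **p = &buf[idx[0]];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < steps; i++) p = (void **)*p;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("(%p) ~%.1f ns per dependent load\n", (void *)p, ns / steps);
    return 0;
}
```

The number it prints is exactly the thing that tightening subtimings improves, and it's the number games with lots of cache misses end up waiting on.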
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136
This game has made a name for itself with dual-layer DRM tanking CPU performance. So it's kind of a worst case scenario for useless CPU cycles. Plus Ryzen is also known for its inter-CCX penalty due to its modular nature and reliance on RAM clocks. I can imagine both of these combined create a perfect storm that really shows off Ryzen's weaknesses.

Edit: Notice the 8400's better score than the 8700K with Vega; this makes no sense except for the engine penalizing multithreading. What I imagine is a similar "NUMA-like" architecture penalty that is even worse with Ryzen.
 
Last edited:
  • Like
Reactions: JimKiler

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
What RAM speeds are PCGamer running? Knowing them, it'll probably be 2666MHz auto timings at best. Games respond best to memory latency, and it is especially important on a NUMA-like topology such as Ryzen's. The difference between 2666MHz auto timings and 3466MHz tuned low-latency timings is up to 30% on an 8-core Ryzen with a 1080 Ti.

They mentioned G.Skill DDR4-3200 CL14 in the article, but that was for the Intel Z370 platform. You would assume they used the same RAM for the AMD platform, though they probably didn't fine-tune any timings.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,763
3,586
136
They mentioned G.Skill DDR4-3200 CL14 in the article, but that was for the Intel Z370 platform. You would assume they used the same RAM for the AMD platform, though they probably didn't fine-tune any timings.
Well that sucks, because it doesn't tell me whether XMP is actually working on their Ryzen setups. Why do publications forget to mention these things?
 
  • Like
Reactions: Christopher Bohling

TimCh

Member
Apr 7, 2012
54
47
91
Edit: Notice the 8400's better score than the 8700K with Vega; this makes no sense except for the engine penalizing multithreading. What I imagine is a similar "NUMA-like" architecture penalty that is even worse with Ryzen.

Yes, it must be poor/negative SMT/HT scaling; it would be interesting to see some results with SMT/HT off.
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Well that sucks, because it doesn't tell me whether XMP is actually working on their Ryzen setups. Why do publications forget to mention these things?

Indeed, could that also explain why TR does so much better than Ryzen? Would the TR scores be more representative of a 'tuned' Ryzen setup?

On one hand, it's good to know that fine-tuning RAM latency can gain so much additional performance for Ryzen. On the other hand, I feel really sorry for the average Joe who goes to Best Buy or whatever and finds a cheap Ryzen gaming box, only to have it paired with DDR4-2400 RAM!
 

tamz_msc

Diamond Member
Jan 5, 2017
3,763
3,586
136
Lack of optimization/driver issue for AMD GPUs, it seems. It does scale with HT on NVIDIA GPUs.
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
This game has made a name for itself with dual-layer DRM tanking CPU performance. So it's kind of a worst case scenario for useless CPU cycles. Plus Ryzen is also known for its inter-CCX penalty due to its modular nature and reliance on RAM clocks. I can imagine both of these combined create a perfect storm that really shows off Ryzen's weaknesses.

Edit: Notice the 8400's better score than the 8700K with Vega; this makes no sense except for the engine penalizing multithreading. What I imagine is a similar "NUMA-like" architecture penalty that is even worse with Ryzen.

Does this not affect TR performance in this particular instance? And yes, the Vega performance numbers are very odd indeed! I seriously wonder if they mixed up the 8700K and 8400 numbers; it makes no sense.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,763
3,586
136
Indeed, could that also explain why TR does so much better than Ryzen? Would the TR scores be more representative of a 'tuned' Ryzen setup?

On one hand, it's good to know that fine-tuning RAM latency can gain so much additional performance for Ryzen. On the other hand, I feel really sorry for the average Joe who goes to Best Buy or whatever and finds a cheap Ryzen gaming box, only to have it paired with DDR4-2400 RAM!
No, I guess it is simply due to additional core scaling. I wish they'd tested 6- and 8-core Skylake-X CPUs too. The Assassin's Creed engine is extremely well-threaded; there are videos of AC: Unity running on a 1950X with 40% CPU utilization across all 16 cores.

Yes, the need to get fast RAM and fine-tune it is an annoyance, though budget-conscious customers will probably be choosing a 1060/580 at most and are going to be GPU limited most of the time, so they won't notice it.
 

Yakk

Golden Member
May 28, 2016
1,574
275
81
Assassin's Creed Origins is also the worst performing of the Xbox One X launch titles.


Looks like just badly optimized code from ubi$oft, because the Xbox has some of the best dev tools & profilers available. This in turn carries over to AMD & PC. Something tells me nvidia must do a lot of code jacking to get it working.
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Assassin's Creed Origins is also the worst performing of the Xbox One X launch titles.


Looks like just badly optimized code from ubi$oft, because the Xbox has some of the best dev tools & profilers available. This in turn carries over to AMD & PC. Something tells me nvidia must do a lot of code jacking to get it working.
Edit - nvm got the video working!

Hmm, it doesn't run so badly on Intel though. Still not great, mind you, but relatively speaking, at least the 6C CFL chips are getting minimums of about 60fps. Is nVidia not putting resources towards tuning performance for AMD CPUs?
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136
Is nVidia not putting resources towards tuning performance for AMD CPUs?
Hehehehe. I seriously got a good old belly chuckle out of this. Small things...
Does this not affect TR performance in this particular instance? And yes, the Vega performance numbers are very odd indeed! I seriously wonder if they mixed up the 8700K and 8400 numbers; it makes no sense.
Yeah, good question, I was wondering this myself. Maybe TR has an extra IF channel (or two) for chip-to-chip communication which helps with the thrashing? I'm not sure.
 

el etro

Golden Member
Jul 21, 2013
1,581
14
81
Surprisingly, Ryzen gaming IPC gets a lot better with DDR4 set at 3200MHz.
 

DominionSeraph

Diamond Member
Jul 22, 2009
8,391
31
91
Edit: Notice the 8400's better score than the 8700K with Vega; this makes no sense except for the engine penalizing multithreading.

You mean AMD's driver penalizing multithreading, since the last time I checked the 8700K is at the top of the chart there.
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136
You mean AMD's driver penalizing multithreading, since the last time I checked the 8700K is at the top of the chart there.
Nope, I didn't mean that. But thanks for putting words in my mouth?

I can see why you think that though, since that's an easy conclusion if you don't bother to think about it too hard. But if you consider results from every other game with the same drivers, I think you'll come to a different conclusion.
 

Yakk

Golden Member
May 28, 2016
1,574
275
81
Edit - nvm got the video working!

Hmm, it doesn't run so badly on Intel though. Still not great, mind you, but relatively speaking, at least the 6C CFL chips are getting minimums of about 60fps. Is nVidia not putting resources towards tuning performance for AMD CPUs?

I'd be very curious to see that engine code... Let's not forget ubi$oft seems to have developed at least part of their graphics engine with GameWorks, which has caused them big-time headaches in their previous releases, and is probably still causing ubi$oft issues if they still can't fix it with the Xbox profiler.
 
Jul 24, 2017
93
25
61
This has been mentioned in passing, but the #1 reason for Ryzen being poor in certain titles is inter-CCX latency. Certain games really like low-latency communication between cores and suffer from the higher latency introduced by the Infinity Fabric design.

Remember these benchmarks?

https://www.techspot.com/review/1450-core-i7-vs-ryzen-5-hexa-core/

Take a look especially at Grand Theft Auto V, Far Cry Primal, and Hitman. Those are three of the worst performing games on Ryzen.

Notice that in those same games, the i7-7800X is also much worse than the 7700K. In fact, the 7800X isn't much better than the Ryzen 5 1600 even when the 7800X is clocked to 4.7GHz and the R5 1600 is at 4GHz. So the R5 1600 is nearly as good as the 7800X despite Skylake's IPC advantage AND the 7800X's clock advantage.

So what gives? If it were really just a single-core performance issue, the 7800X should be better.

Well, the 7800X uses the higher-latency mesh interconnect, like all Skylake-X processors, while the 7700K uses the lower-latency ring bus design. Core-to-core latency is the obvious connection between the 7800X and R5 1600. So that has to be the reason.
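
If you want to actually put a number on "core-to-core latency", the usual approach is a ping-pong test: two threads pinned to specific cores bounce a flag through a shared cache line and you time the round trip. A rough sketch in C (Linux-only, uses the GNU pthread affinity extension; the CPU numbering in the comment is an assumption you'd verify with lscpu):

```c
/* Core-to-core "ping-pong" sketch (build: gcc -O2 -pthread pingpong.c).
 * Two threads pinned to chosen logical CPUs bounce a flag through one
 * cache line; half the round trip approximates core-to-core latency. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ROUNDS 1000000

static atomic_int flag; /* the cache line that bounces between cores */

static void pin(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *ponger(void *arg) {
    pin(*(int *)arg);
    for (int i = 0; i < ROUNDS; i++) {
        while (atomic_load(&flag) != 1) ; /* spin until pinged */
        atomic_store(&flag, 0);           /* pong back */
    }
    return NULL;
}

int main(int argc, char **argv) {
    /* CPU numbers are system-specific: on an 8-core Ryzen under Linux,
     * logical CPUs 0 and 15 usually sit on different CCXes and 0 and 2 on
     * the same one -- check your topology (lscpu, hwloc) before trusting
     * any pair. */
    int a = 0, b = (argc > 1) ? atoi(argv[1]) : 15;
    pthread_t t;
    pthread_create(&t, NULL, ponger, &b);
    pin(a);
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {
        atomic_store(&flag, 1);           /* ping */
        while (atomic_load(&flag) != 0) ; /* wait for pong */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    pthread_join(t, NULL);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("CPU %d <-> CPU %d: ~%.0f ns one-way\n", a, b, ns / ROUNDS / 2.0);
    return 0;
}
```

Run it with different CPU pairs and the same-CCX vs cross-CCX gap on Ryzen should show up immediately, which is exactly the penalty being discussed here.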
 

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
This has been mentioned in passing, but the #1 reason for Ryzen being poor in certain titles is inter-CCX latency. Certain games really like low-latency communication between cores and suffer from the higher latency introduced by the Infinity Fabric design.

Remember these benchmarks?

https://www.techspot.com/review/1450-core-i7-vs-ryzen-5-hexa-core/

Take a look especially at Grand Theft Auto V, Far Cry Primal, and Hitman. Those are three of the worst performing games on Ryzen.

Notice that in those same games, the i7-7800X is also much worse than the 7700K. In fact, the 7800X isn't much better than the Ryzen 5 1600 even when the 7800X is clocked to 4.7GHz and the R5 1600 is at 4GHz. So the R5 1600 is nearly as good as the 7800X despite Skylake's IPC advantage AND the 7800X's clock advantage.

So what gives? If it were really just a single-core performance issue, the 7800X should be better.

Well, the 7800X uses the higher-latency mesh interconnect, like all Skylake-X processors, while the 7700K uses the lower-latency ring bus design. Core-to-core latency is the obvious connection between the 7800X and R5 1600. So that has to be the reason.

Exactly why I went with an 8700K instead of Ryzen or Skylake-X despite wanting more cores. I'm not even pushing my uncore or memory hard (4.7GHz uncore, 3600MHz 16-16-16-36) and my memory latency is ~41ns. Contrast that with what we see on Ryzen even when tuned to the hilt and you're looking at a 75% increase in latency (roughly 72ns), which can really hurt in certain titles.
 

TheELF

Diamond Member
Dec 22, 2012
3,973
730
126
If it were really just an inter-CCX latency issue, then we would know it.
People tried it right away, running games on a single CCX by setting affinity; it hardly makes a difference in FPS.
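
For anyone wanting to repeat that experiment, the zero-effort way on Windows is `start /affinity FF game.exe` from a command prompt, or Task Manager's "Set affinity". Programmatically it's a single Win32 call; a minimal sketch, with the 0xFF mask assuming logical CPUs 0-7 are the first CCX on an 8-core part with SMT (check your own system's numbering):

```c
/* Sketch (Windows, MSVC or MinGW): pin a running game to one CCX by PID.
 * Mask 0xFF = logical CPUs 0-7, assumed here to be the first CCX on an
 * 8-core Ryzen with SMT -- verify the core numbering on your own system. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        printf("usage: pinccx <pid>\n");
        return 1;
    }
    DWORD pid = (DWORD)strtoul(argv[1], NULL, 10);
    /* PROCESS_SET_INFORMATION is the access right SetProcessAffinityMask needs. */
    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (!h) {
        printf("OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }
    DWORD_PTR mask = 0xFF; /* first CCX (assumption -- see above) */
    if (!SetProcessAffinityMask(h, mask))
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
    else
        printf("PID %lu restricted to mask 0x%02llx\n", pid, (unsigned long long)mask);
    CloseHandle(h);
    return 0;
}
```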
 
Jul 24, 2017
93
25
61
If it were really just an inter-CCX latency issue, then we would know it.
People tried it right away, running games on a single CCX by setting affinity; it hardly makes a difference in FPS.

Then what is the other commonality between the 7800X and R5 1600 that makes them both perform poorly in the same games? I'm using Occam's razor here.
 
  • Like
Reactions: psolord

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Changes to the L3. I mean, the ring bus wasn't exactly the fastest either as core count went up, but the 6900K and 6950X had performance well within the levels you would expect for the arch at the clock speeds they ran at. The major change from Skylake to Skylake-X is the change to the cache: mainly shrinking the L3 and making it a victim cache. This is the same configuration as Zen. On top of that, much like Zen, instead of the L3 being one deep pool for all the cores, the cache is assigned per core.

So realistically, in the past a core could access the L3, pull out what it wanted, and go about its business. Now it has to call up the core it needs, have that core look inside its L3 and L2, and pull up what it needs, or re-request it from memory.
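
The cost of that "ask the other core" round trip is easy to make visible with a false-sharing test: two threads hammering counters that share one cache line force the line to migrate between the cores' private caches on every handover, which is the same kind of traffic that gets expensive across a CCX or mesh hop. A quick C11 sketch (Linux/pthreads; the 64-byte line size is an assumption that happens to hold on both Zen and Skylake):

```c
/* False-sharing sketch (build: gcc -O2 -pthread fs.c). Two threads
 * increment counters that either share a 64-byte cache line or sit on
 * separate lines; the shared case forces the line to bounce around. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ITERS 50000000UL

static struct { atomic_long a, b; } same;            /* very likely the same line */
static struct { _Alignas(64) atomic_long a;
                _Alignas(64) atomic_long b; } apart; /* forced onto separate lines */

static void *inc(void *p) {
    atomic_long *c = p;
    for (unsigned long i = 0; i < ITERS; i++)
        atomic_fetch_add_explicit(c, 1, memory_order_relaxed);
    return NULL;
}

static double run(atomic_long *x, atomic_long *y) {
    pthread_t t1, t2;
    struct timespec t0, t3;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    pthread_create(&t1, NULL, inc, x);
    pthread_create(&t2, NULL, inc, y);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t3);
    return (t3.tv_sec - t0.tv_sec) + (t3.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    printf("same cache line: %.2f s\n", run(&same.a, &same.b));
    printf("separate lines:  %.2f s\n", run(&apart.a, &apart.b));
    return 0;
}
```

The "same line" case typically runs several times slower, and I'd expect the gap to grow further if the two threads land on different CCXes.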
 
Last edited:

IRobot23

Senior member
Jul 3, 2017
601
183
76
Can I point something out?

https://www.youtube.com/watch?v=olI9Mmtw39Y

Please check this, particularly the BF1 benchmark.
Then show me one other tech site which shows a stock R5 1600 with 2933MHz DDR4 beating an i5 8400 with the same 2933MHz DDR4. The main thing here is that the i5 will be slaughtered by the R5 1600 in MP.

There is ONE thing that nobody is showing: AMD does better in CS:GO.

You also find that Ryzen actually does pretty well, and there is very little difference when you draw the line. But everyone portrays Ryzen as a non-gaming CPU.

Some people talked about whether nvidia will put out drivers to improve Ryzen performance on GTX cards.

Anyway, we can finally buy a decent CPU under $200.
 
Last edited:

JimKiler

Diamond Member
Oct 10, 2002
3,558
205
106
Awesome, I am glad this thread has not devolved into a spitting match, but there is still time.

This is an interesting topic and I did not know this game exposes some deficiencies. Really odd for the Intel + Vega mashup.
 

TheELF

Diamond Member
Dec 22, 2012
3,973
730
126
Can I point something out?

https://www.youtube.com/watch?v=olI9Mmtw39Y

Please check this, particularly the BF1 benchmark.
Then show me one other tech site which shows a stock R5 1600 with 2933MHz DDR4 beating an i5 8400 with the same 2933MHz DDR4. The main thing here is that the i5 will be slaughtered by the R5 1600 in MP.
Look at the GPUs' GHz...
He messed up and had power saving features enabled in BF1 on the Intel machine only. That's why it's the only site to show Ryzen winning; it's the only site that messed up.
[attached screenshot of the GPU clock readings]
 
  • Like
Reactions: pcp7 and epsilon84