Ryzen: Strictly technical


Timur Born

Senior member
Feb 14, 2016
Can anyone offer an explanation why my 1800X quickly throttles down to x35.3 - x36 during *non* AVX ITB/Linpack load? It's not temps and likely not power draw, especially since the AVX version of ITB/Linpack, Prime95 and Realbench run at full x37 for hours.
 

Kromaatikse

Member
Mar 4, 2017
In non-AVX code there are more instructions to decode and schedule for the same amount of work done. I wonder if that has an effect on the front-end power draw - which XFR *is* sensitive to.
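To put some numbers on that, here's a toy sketch (my own illustration, nothing to do with ITB/Linpack's actual kernels): summing an array in plain scalar code versus with AVX intrinsics. The scalar loop sends roughly 8x as many add instructions through fetch/decode/schedule for the same arithmetic.

```cpp
#include <immintrin.h>
#include <cstddef>

// Scalar sum: one add per element, so the front end has to fetch,
// decode and schedule N add instructions for N elements.
float sum_scalar(const float* data, std::size_t n) {
    float acc = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        acc += data[i];
    return acc;
}

// AVX sum: one 256-bit add covers 8 elements, so only ~N/8 adds go
// through the decoders (n assumed to be a multiple of 8 for brevity).
float sum_avx(const float* data, std::size_t n) {
    __m256 acc = _mm256_setzero_ps();
    for (std::size_t i = 0; i < n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(data + i));

    float lanes[8];
    _mm256_storeu_ps(lanes, acc);   // horizontal reduction of the 8 partial sums
    float total = 0.0f;
    for (float v : lanes) total += v;
    return total;
}
```

Whether that extra front-end activity is actually enough to move the power/XFR behaviour is of course exactly the open question here.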
 

Timur Born

Senior member
Feb 14, 2016
Even if XFR is sensitive, only the x37 multiplier is XFR, but I see throttling below x36. Several cores regularly hit x35.3, others x35.8.
 

bjt2

Senior member
Sep 11, 2016
It would seem you're right. I tested it without SMT enabled and IPC maxed at 3.19 on a single core.

That got me thinking, though, about finding a benchmark that could do better. GeekBench 3 actually manages it. Without SMT, it hits a per-thread IPC of 4.05 (average of the peak IPC seen across all threads). With SMT, it only manages 3.1.

I measured IPC with StatusCore.

This actually makes a LOT of sense given the SMT scaling.

4.05 IPC * 8 Cores = 32.4 IPC (MT)
3.1 IPC * 16 Threads = 49.6 IPC (SMT)

SMT peak instruction throughput improvement: 53%

I will need to hunt for something that can really push more IPC... any ideas?

A heavy INT thread, mostly arithmetic (only add and sub) and logic (AND etc.), and a heavy SIMD thread (int or FP doesn't matter, unless you also want to push power draw, in which case FMAC would be advisable).

Only a mixed load can exploit all the ports... If you don't fix the affinity of the threads/processes, you can test whether the Windows scheduler is clever by comparing performance/IPC against a run with the INT thread pinned to one logical core and the FP thread pinned to the other...
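Something like the sketch below is what I mean — just an illustrative skeleton, where the loop bodies, iteration counts and core numbers are placeholders. It assumes an MSVC build where std::thread::native_handle() is the Win32 HANDLE, that logical CPUs 0 and 1 are the two SMT siblings of the same physical core, and that the compiler is given FMA/AVX2 support (/arch:AVX2 on MSVC, -mavx2 -mfma on GCC/Clang).

```cpp
#include <windows.h>
#include <immintrin.h>
#include <thread>
#include <cstdint>

// Integer-heavy worker: only add/sub and logic ops, to load the ALU ports.
void int_worker(volatile std::uint64_t* out) {
    std::uint64_t a = 1, b = 2;
    for (std::uint64_t i = 0; i < 2000000000ULL; ++i) {
        a += b;
        b -= a;
        a &= 0x5555555555555555ULL;
        b ^= a;
    }
    *out = a ^ b;   // keep the result live so the loop isn't optimised away
}

// FP/SIMD-heavy worker: FMAs to load the FP ports (and the power budget).
void fma_worker(volatile float* out) {
    __m256 x = _mm256_set1_ps(1.0001f);
    __m256 y = _mm256_set1_ps(0.9999f);
    __m256 acc = _mm256_setzero_ps();
    for (std::uint64_t i = 0; i < 500000000ULL; ++i)
        acc = _mm256_fmadd_ps(x, y, acc);
    float lanes[8];
    _mm256_storeu_ps(lanes, acc);
    *out = lanes[0];
}

int main() {
    volatile std::uint64_t r1 = 0;
    volatile float r2 = 0.0f;

    std::thread t_int(int_worker, &r1);
    std::thread t_fma(fma_worker, &r2);

    // Pin the INT thread to logical CPU 0 and the FMA thread to logical
    // CPU 1 (assumed SMT siblings of one physical core). Comment these
    // two lines out to let the Windows scheduler place them, then compare
    // performance/IPC between the pinned and unpinned runs.
    SetThreadAffinityMask(t_int.native_handle(), DWORD_PTR(1) << 0);
    SetThreadAffinityMask(t_fma.native_handle(), DWORD_PTR(1) << 1);

    t_int.join();
    t_fma.join();
    return 0;
}
```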
 

keymaster151

Junior Member
Mar 15, 2017
Very interesting video by AdoredTV here. https://www.youtube.com/watch?v=0tfTZjugDeg

It seems that at least in Tomb Raider, Ryzen's bad DX12 performance isn't necessarily caused by a CPU bottleneck as much as an API bottleneck on Nvidia cards. This definitely warrants more testing in other games, if anyone happens to have both AMD and Nvidia cards handy.

Edit: Also the real gameplay benchmark on AotS was updated with 6900k results.
http://www.pcgameshardware.de/Ryzen...ecials/AMD-AotS-Patch-Test-Benchmark-1224503/

It runs even worse on the 6900K than on Ryzen. o_O Could be caused by a bug, since that performance is atrocious.
 

lolfail9001

Golden Member
Sep 9, 2016
It seems that at least in Tomb Raider, Ryzen's bad DX12 performance isn't necessarily caused by a CPU bottleneck as much as an API bottleneck on Nvidia cards. This definitely warrants more testing in other games, if anyone happens to have both AMD and Nvidia cards handy.
Is there actually bad DX12 performance in Tomb Raider on Ryzen, even on nV cards?

https://www.computerbase.de/2017-03...-frametimes-ryzen-7-1800x-gegen-core-i7-7700k

Looks much, much better in DX12 if you ask me. Silky smooth, in fact.
 

Malogeek

Golden Member
Mar 5, 2017
My take from that video and the Tomb Raider benchmarks is that the game is heavily bound by the majority of the work being done on the main thread, which is going to be much faster on the 7700K.

Edit: Finished watching the video. Holy moly, that AMD 480x2 performance is crazy. The difference between Intel and Ryzen on the Nvidia 1070 is insane compared to on the 480s.
 

keymaster151

Junior Member
Mar 15, 2017
The computerbase.de 1080p benchmark looks GPU bound to me, since all the top CPUs get pretty much the same fps. If you look at the 720p benchmarks, there are some pretty big differences; in fact, the 1800X doesn't gain any fps from going to 720p. In AdoredTV's video the 1800X seems to be bottlenecking his OC'd 1070 pretty badly, yet once he switches to RX 480 CrossFire he gets huge fps gains, which shouldn't really happen if your CPU is the bottleneck.
 

lolfail9001

Golden Member
Sep 9, 2016
Edit: Finished watching the video. Holy moly, that AMD 480x2 performance is crazy. The difference between Intel and Ryzen on the Nvidia 1070 is insane compared to on the 480s.
Bear in mind that, intentionally or not, he disabled CrossFire in DX11, so his DX11 runs on the 480s are utterly irrelevant.
With that said, I did find it curious that for him the 1800X ran like trash with the 1070.

EDIT: Disregard what I had removed; cb.de does suggest that in DX12 the 1800X looks to be kind of GPU limited, but not really at the same time. Weird stuff.
 

Hitman928

Diamond Member
Apr 15, 2012
Bear in mind that, intentionally or not, he disabled CrossFire in DX11, so his DX11 runs on the 480s are utterly irrelevant.
With that said, I did find it curious that for him the 1800X ran like trash with the 1070.

EDIT: Disregard what I had removed; cb.de does suggest that in DX12 the 1800X looks to be kind of GPU limited, but not really at the same time. Weird stuff.

What leads you to believe this?
 

keymaster151

Junior Member
Mar 15, 2017
The plot thickens. http://www.pcgameshardware.de/Ryzen...ecials/AMD-AotS-Patch-Test-Benchmark-1224503/

Disabling 4 cores on the 1800X gives a dramatic performance boost in this scenario. You might think that would indicate CCX communication issues, but that can't be the case, since the 6900K also performs very badly. According to the article, the reason the CPUs with more than 4 cores perform so terribly is that the game automatically reduces image quality on processors with fewer than 6 cores, making it easier for them to run the game, but it doesn't do so in the in-game benchmarks. I don't know if it affects the game in general or only this specific scenario. Pretty weird, to say the least. So, mystery solved, I guess?
 

Hitman928

Diamond Member
Apr 15, 2012
Could it be something with the saved game being from a 4-core, 8-thread machine and then it's getting screwed up somehow when the CPU configuration is different?
 

keymaster151

Junior Member
Mar 15, 2017
Could it be something with the saved game being from a 4-core, 8-thread machine and then it's getting screwed up somehow when the CPU configuration is different?
The save game is apparently from a 6-core i7-5820K. It would be pretty odd if the game saved your CPU setup in the save file, but who knows. Software developers can be an interesting bunch.
 

thigobr

Senior member
Sep 4, 2016
Quick question: is it possible yet to overclock while keeping CnQ active? Or does increasing the multipliers still disable the power-saving features on recent UEFI releases?
 

mtcn77

Member
Feb 25, 2017
AotS isn't Linpack. The developer, Tim Kipp, stated that they used SSE2 in order to avoid compatibility issues with older CPU generations.
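For context (my own toy illustration, nothing from the game's code): SSE2 is part of the x86-64 baseline, so a code path written purely against it runs on every 64-bit x86 CPU without any runtime feature detection, which is presumably the compatibility they were after.

```cpp
#include <emmintrin.h>   // SSE2 intrinsics -- guaranteed on every x86-64 CPU
#include <cstddef>

// Adds two double arrays using nothing newer than SSE2, so the binary
// runs on any 64-bit x86 CPU without a CPUID check or an AVX fallback.
void add_arrays_sse2(const double* a, const double* b, double* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 2 <= n; i += 2) {
        __m128d va = _mm_loadu_pd(a + i);
        __m128d vb = _mm_loadu_pd(b + i);
        _mm_storeu_pd(out + i, _mm_add_pd(va, vb));
    }
    for (; i < n; ++i)   // scalar tail for odd lengths
        out[i] = a[i] + b[i];
}
```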
 

Dygaza

Member
Oct 16, 2015
I wouldn't be surprised to see Ryzen-optimised drivers from Nvidia.

Edit: And why not from AMD as well.
 