Official AMD Ryzen Benchmarks, Reviews, Prices, and Discussion

Page 178

Shivansps

Diamond Member
Sep 11, 2013
3,851
1,518
136
Some of those games, like Witcher 3, are known to run much faster with faster RAM. Fallout 4 is another one.
It's just bad programming that leaves them hitting memory a lot.

No VIDEO GAME should be RAM speed limited like that. It takes poor programming to make it happen.

Then learn to program and start doing it yourself if it's that easy for you. Call me back if you ever figure out a way to get rid of the main thread that causes the bottlenecks in current games.

There's a better chance of games going heavy on AVX than of getting rid of the main thread, btw.
 

Tapoer

Member
May 10, 2015
64
3
36
The most pointless comparison I've seen since AMD's own streaming comparison... the i7, i5, i3, Pentium, etc. all have a built-in H.264 encoder (Quick Sync) that OBS supports; using CPU encoding on the i7 is just stupid.

If you want to compare quality, well, that's another issue entirely.

twitch.tv has a soft limit of 4500 kbps; many streamers use 720p@60fps and many games still look very pixelated. Of course they want the best quality possible, so hardware encoding is out of the question.

Many streamers who use a single PC (4-core CPU) to play and encode have problems in demanding games, and many have moved to a dual-PC setup.
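
For reference, offloading to the built-in encoder is a one-liner with ffmpeg; a minimal sketch, assuming an ffmpeg build with Quick Sync (h264_qsv) support and a placeholder input file:

```python
# Minimal sketch: offload encoding to the CPU's built-in Quick Sync
# encoder via ffmpeg, capped at Twitch's ~4500 kbps soft limit.
# Assumes an ffmpeg build with h264_qsv; input.mp4 is a placeholder.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "h264_qsv",                    # Intel's hardware H.264 encoder
    "-b:v", "4500k",                       # target bitrate (Twitch soft limit)
    "-maxrate", "4500k", "-bufsize", "9000k",
    "-vf", "scale=1280:720", "-r", "60",   # 720p @ 60 fps
    "-c:a", "aac", "-b:a", "160k",
    "output.mp4",
], check=True)
```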
 
  • Like
Reactions: ZGR and Tup3x

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,328
4,913
136
The most pointless comparison I've seen since AMD's own streaming comparison... the i7, i5, i3, Pentium, etc. all have a built-in H.264 encoder (Quick Sync) that OBS supports; using CPU encoding on the i7 is just stupid.

If you want to compare quality, well, that's another issue entirely.

GPUs have hardware encoding as well... one of the reasons people who rely on streaming for revenue or mindshare continue to encode in software (e.g., x264 in OBS) is the much better quality you can achieve at a given bitrate.
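
To put numbers on that, one rough approach: encode the same clip with software x264 and with a hardware encoder at the same bitrate, then score both against the source using ffmpeg's SSIM filter. A sketch, assuming an ffmpeg build with libx264 and Quick Sync (h264_qsv); filenames are placeholders:

```python
# Rough sketch: encode the same source at the same bitrate with
# software x264 and with a hardware encoder, then compare SSIM
# against the original. Assumes ffmpeg with libx264 and h264_qsv;
# source.mp4 is a placeholder.
import subprocess

SOURCE = "source.mp4"
BITRATE = "4500k"

encoders = {
    "x264_sw.mp4": ["-c:v", "libx264", "-preset", "medium"],
    "qsv_hw.mp4": ["-c:v", "h264_qsv"],
}

for outfile, codec_args in encoders.items():
    # Same target bitrate for both, so quality is the only variable.
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, *codec_args,
         "-b:v", BITRATE, "-an", outfile],
        check=True,
    )
    # ffmpeg's ssim filter prints an average score to stderr;
    # closer to 1.0 means closer to the source.
    subprocess.run(
        ["ffmpeg", "-i", outfile, "-i", SOURCE,
         "-lavfi", "ssim", "-f", "null", "-"],
        check=True,
    )
```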
 

Shivansps

Diamond Member
Sep 11, 2013
3,851
1,518
136
GPUs have hardware encoding as well... one of the reasons people who rely on streaming for revenue or mindshare continue to encode in software (e.g., x264 in OBS) is the much better quality you can achieve at a given bitrate.

I stream myself, so I know that perfectly well. BUT they are not comparing quality; that's the whole point here. You are NOT going to do software encoding on a quad core and try to game at the same time, so it's pointless to compare it like that.

Use the hardware encoder on the quad core, and the best quality Ryzen can manage in software while gaming without dropping frames; that is a good and useful comparison.
 

DuronBurgerMan

Junior Member
Mar 13, 2017
21
19
41
I stream myself, so I know that perfectly well. BUT they are not comparing quality; that's the whole point here. You are NOT going to do software encoding on a quad core and try to game at the same time, so it's pointless to compare it like that.

Use the hardware encoder on the quad core, and the best quality Ryzen can manage in software while gaming without dropping frames; that is a good and useful comparison.

Actually, it is a good thing to be aware of in one way, because it shows that Ryzen can handle that scenario and you won't lose quality. With a quad, you have to choose: speed at the expense of quality, or quality at the expense of FPS. With an 8-core that doesn't suck... you don't have to choose. You can have both. That is likely to be important these days, with most of the hardcore gamers I know doing vids and livestreams and sh*t. I don't do that sh*t. I suck at gaming these days. I don't want anybody to watch.

Buuuuuuut it would be nice to have AE rendering some video and be able to play a game while I wait without tanking. Sad part is, I'm still kind of irritated at AMD for all the beta-ish sh*t they pulled with this launch. If Intel were charging something like $500-$600 for their 6900k, I'd go that route all day long. But no, they had to go north of $1000. Nope. Sorry, Intel. AMD's got my money this time around.
 

MajinCry

Platinum Member
Jul 28, 2015
2,495
571
136
Some of those games, like Witcher 3, are known to run much faster with faster RAM. Fallout 4 is another one.
It's just bad programming that leaves them hitting memory a lot.

No VIDEO GAME should be RAM speed limited like that. It takes poor programming to make it happen.

You're stuck in the days of the Xbox 360, where games were designed to use 512MB combined for game data + meshes + textures + preloaded meshes 'n' textures.

Now that we're in the 64-bit era, games with lots of memory use are gonna want them fast DDR sticks.
 

majord

Senior member
Jul 26, 2015
433
523
136
I'll give it a shot: Timer resolution.

For some new results: HT4U did some benchmark runs at different memory speeds and also with 2 cores disabled (simulating the 1600X).
https://www.ht4u.net/reviews/2017/amd_ryzen_7_1800x_im_test/

1800X DDR4-2133 vs. DDR4-3200
Many games run ~15% faster! Now imagine switching off SMT and fixing any thread/CCX ping-ponging in games.



I think it's important not to forget that increasing RAM clocks also increases Data Fabric (DF) bandwidth, which would ease contention and cross-CCX latency. It would be interesting to specifically compare 4+0 vs. 2+2 RAM clock scaling, particularly in games that show a big performance delta when run in 4+0.
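
One way to run that comparison without BIOS core-disabling is with affinity masks. A minimal sketch below, assuming an 8C/16T Ryzen where logical CPUs 0-7 map to CCX0 and 8-15 to CCX1 with SMT siblings adjacent; the actual mapping varies by OS, so verify it with a topology tool first:

```python
# Sketch: emulate 4+0 vs 2+2 core configs with affinity masks instead
# of BIOS core-disabling. ASSUMPTION: an 8C/16T Ryzen where logical
# CPUs 0-7 are CCX0 and 8-15 are CCX1, with SMT siblings adjacent
# (even = physical, odd = sibling); check your OS's actual mapping.
import psutil

FOUR_PLUS_ZERO = [0, 2, 4, 6]    # 4 physical cores, all on CCX0
TWO_PLUS_TWO = [0, 2, 8, 10]     # 2 physical cores per CCX

def pin(pid: int, cpus: list[int]) -> None:
    """Restrict a process (e.g., the game or benchmark) to the given
    logical CPUs, so all its threads stay on those cores."""
    psutil.Process(pid).cpu_affinity(cpus)

# Example: pin the current process for a 2+2 run, forcing every
# cross-core thread handoff to traverse the Data Fabric between CCXs.
pin(psutil.Process().pid, TWO_PLUS_TWO)
```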
 

unseenmorbidity

Golden Member
Nov 27, 2016
1,395
967
96
Not necessarily. They may simply mean that Windows is properly differentiating physical cores from logical cores, which is true. The Windows 10 scheduler IS doing that right. There's nothing broken with it. That being said, it isn't optimized for Ryzen, because of the two CCXs acting, in some ways, almost like separate chips. So AMD comes out and says, basically, "calm down, people, Microsoft didn't do anything wrong." Which is true, and builds some goodwill for AMD... so that hopefully they can convince MS to optimize more for Ryzen. In other words there's a difference between being broken and being unoptimized.

After all, you don't build goodwill by blaming Microsoft for everything -- it was AMD that decided this was the way they wanted to do it knowing full well that Windows wasn't currently optimized for this scenario. They had to know this would be an issue. Give 'em some time.
They have had at least a year... AMD trying to downplay this is very bad news.
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
Are we supposed to be upset about lower FPS in some games? I'm sure not; this thing is awesome. Who were reviewers talking to when they said this is bad for gaming!!?? They don't know what the h... they are talking about. Talk about terrible advice; I'm glad I didn't listen to them. :)
 

cytg111

Lifer
Mar 17, 2008
23,174
12,835
136
They have had at least a year... AMD trying to downplay this is very bad news.

- Damn man, I wouldn't want to know your scenario for "very, very bad news" then :). I think this is a minor stumble. Best case, updates will arrive and you'll get a speed bump; worst case, you won't and it'll still be a hell of a product.

Btw, is the 6900K a true octa-core or two quads slammed together?
 

cytg111

Lifer
Mar 17, 2008
23,174
12,835
136
Why? Should every single asset of every single game fit entirely inside L2 cache?

That's the point, IMO: if so, then the L2 would be the bottleneck and subject to "poor programming" (we could all go back to playing Quake 1, I guess). A bottleneck by definition is neither good nor bad; it's just a characteristic of a system in a given scenario. It may well be performing in an optimal configuration given that the primary bottleneck is the RAM.
 
Last edited:

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
Btw, is the 6900K a true octa-core or two quads slammed together?
[Image: intel-xeon-e5-v4-block-diagram-mcc-lcc.jpg]

Take your guess (the 6900K is a cut-down LCC die).
 
  • Like
Reactions: CatMerc and cytg111

i-know-not

Junior Member
Mar 2, 2017
13
14
41
  • Like
Reactions: cytg111

Dresdenboy

Golden Member
Jul 28, 2003
1,730
554
136
citavia.blog.de
I think it's important not to forget that increasing RAM clocks also increases Data Fabric (DF) bandwidth, which would ease contention and cross-CCX latency. It would be interesting to specifically compare 4+0 vs. 2+2 RAM clock scaling, particularly in games that show a big performance delta when run in 4+0.
I had the same thought. For one thing, we should see lower latency in PCPer's tool, even at constant memory latency. For another, in games/apps the effects of inter-CCX communication and overall memory bandwidth + latency can't be isolated.

Well, maybe isolated tests of memory bandwidth and latency effects on 4+0 and 2+2 could be used to factor them out for a clearer look at the CCX communication.
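
As a crude first pass at the bandwidth half, one could time a large memory copy at each BIOS memory-clock setting; a sketch below (numpy does the copy in C, so interpreter overhead is negligible; the latency half would still need a dedicated tool like the one PCPer used):

```python
# Crude sketch: verify that raw DRAM bandwidth actually scales with
# the memory clock before attributing game gains to it. A large numpy
# copy is memcpy-bound, so Python overhead is negligible. Run once
# per BIOS memory-clock setting and compare.
import time
import numpy as np

def copy_bandwidth_gbs(size_mb: int = 512, runs: int = 5) -> float:
    src = np.ones(size_mb * 1024 * 1024, dtype=np.uint8)
    dst = np.empty_like(src)
    best = float("inf")
    for _ in range(runs):                # best-of-N to cut timing noise
        start = time.perf_counter()
        np.copyto(dst, src)              # one big read + write pass
        best = min(best, time.perf_counter() - start)
    return 2 * size_mb / 1024 / best     # GB/s (read + write)

print(f"{copy_bandwidth_gbs():.1f} GB/s")
```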
 
  • Like
Reactions: CatMerc

NTMBK

Lifer
Nov 14, 2011
10,232
5,013
136
Some of those games, like Witcher 3, are known to run much faster with faster RAM. Fallout 4 is another one.
It's just bad programming that leaves them hitting memory a lot.

No VIDEO GAME should be RAM speed limited like that. It takes poor programming to make it happen.

Those are games with big open worlds, meaning that they have what is technically known as a buttload of data. Lots of data, lots to load from memory.
 

imported_jjj

Senior member
Feb 14, 2009
660
430
136
According to PCGamesHardware, the tested 4+0, 2+2, and 3+3 configs were still slower in BF1.

http://www.pcgameshardware.de/scree...-Test-CPU-Core-Scaling-Battlefield-1-pcgh.png


With Nvidia drivers, BF1 seems to scale well to more cores under DX11 and less under DX12.
DX11 seems to produce higher FPS too.
I wonder if AMD GPUs show the same DX12 behavior.
BF1 scales OK with memory clocks too.
That PCGH test is with low DRAM clocks and DX11; maybe it's somewhat different with decent RAM and DX12.
lol, I haven't paid this much attention to benchmarks since Gulftown...
 
  • Like
Reactions: Dresdenboy