
Official AMD Ryzen Benchmarks, Reviews, Prices, and Discussion

Some of those games like Witcher 3 are known to run much faster with faster RAM. Fallout 4 is another one.
It's just bad programming that leaves them hitting memory a lot.

No VIDEO GAME should be RAM speed limited like that. It takes poor programming to make it happen.

Then learn to program and start doing it yourself if it's that easy for you. Call me back if you ever figure out a way to get rid of the main thread that is causing the bottlenecks in current games.

There is a higher chance of games going heavy on AVX than of getting rid of the main thread, btw.
 
The most ridiculous comparison I've seen since AMD's own streaming comparison... the i7, i5, i3, Pentium, etc. all have a built-in H.264 encoder (Quick Sync) that OBS supports; using CPU encoding on the i7 is just stupid.

If you want to compare quality, well, that's another issue entirely.

twitch.tv has a soft limit of 4500 kbps, many streamers are using 720p@60fps, and many games still look very pixelated. Of course they want the best quality possible, and hardware encoding is out of the question.

Many streamers who use only one PC (with a 4-core CPU) to play and encode have problems in demanding games, and many are using a dual-PC setup.
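The "pixelated at 4500 kbps" complaint can be put in rough numbers with a bits-per-pixel calculation. A minimal sketch; the ~0.1 bpp figure used as a reference point is a loose H.264 rule of thumb, not a hard threshold:

```python
# Rough bits-per-pixel estimate at Twitch's ~4500 kbps soft cap.
# Assumption (rule of thumb, not exact): H.264 tends to look blocky
# in fast motion below roughly 0.1 bits per pixel per frame.

def bits_per_pixel(bitrate_kbps: int, width: int, height: int, fps: int) -> float:
    """Bits available per pixel per frame at a given bitrate."""
    return bitrate_kbps * 1000 / (width * height * fps)

for label, (w, h, fps) in {
    "720p60":  (1280, 720, 60),
    "1080p60": (1920, 1080, 60),
}.items():
    bpp = bits_per_pixel(4500, w, h, fps)
    print(f"{label}: {bpp:.3f} bits/pixel")
```

Even 720p60 lands around 0.08 bpp, below the rough comfort zone, which is why streamers chase every bit of encoder efficiency they can get.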
 
The most ridiculous comparison I've seen since AMD's own streaming comparison... the i7, i5, i3, Pentium, etc. all have a built-in H.264 encoder (Quick Sync) that OBS supports; using CPU encoding on the i7 is just stupid.

If you want to compare quality, well, that's another issue entirely.

GPUs have hardware encoding as well... one of the reasons people who rely on streaming for revenue or mindshare continue to encode in software (e.g. x264 via OBS) is the much better quality you can achieve for a given bitrate.
 
GPUs have hardware encoding as well... one of the reasons people who rely on streaming for revenue or mindshare continue to encode in software (e.g. x264 via OBS) is the much better quality you can achieve for a given bitrate.

I do streaming myself, I know that perfectly. BUT they are not comparing quality, that's the whole point here: you are NOT going to do software encoding on a quad core and try to game at the same time, it's just pointless to compare like that.

Use the hardware encoder on the quad core, and use the best quality Ryzen can do in software while gaming without dropping frames; that is a good and useful comparison.
 
I do streaming myself, I know that perfectly. BUT they are not comparing quality, that's the whole point here: you are NOT going to do software encoding on a quad core and try to game at the same time, it's just pointless to compare like that.

Use the hardware encoder on the quad core, and use the best quality Ryzen can do in software while gaming without dropping frames; that is a good and useful comparison.

Actually it is a good thing to be aware of in one way. Because it shows that Ryzen can handle that scenario, and you won't lose quality. With a quad, you have to choose. You can have speed at the expense of quality, or quality at the expense of FPS. With an 8-core that doesn't suck... you don't have to choose. You can have both. That is likely to be important these days, with most of the hardcore gamers I know doing vids and livestreams and sh*t. I don't do that sh*t. I suck at gaming these days. I don't want anybody to watch.

Buuuuuuut it would be nice to have AE rendering some video and be able to play a game while I wait without tanking. Sad part is, I'm still kind of irritated at AMD for all the beta-ish sh*t they pulled with this launch. If Intel were charging something like $500-$600 for their 6900k, I'd go that route all day long. But no, they had to go north of $1000. Nope. Sorry, Intel. AMD's got my money this time around.
 
Some of those games like Witcher 3 are known to run much faster with faster RAM. Fallout 4 is another one.
It's just bad programming that leaves them hitting memory a lot.

No VIDEO GAME should be RAM speed limited like that. It takes poor programming to make it happen.

You're stuck in the days of the Xbox 360, where games were designed to use 512MB combined for game data + meshes + textures + preloaded meshes 'n' textures.

Now that we're in the 64-bit era, games with lots of memory use are gonna want them fast DDR sticks.
 
I'll give it a shot: Timer resolution.

For some new results: HT4U did some benchmark runs at different mem speeds and also with 2 cores disabled (simulating the 1600X).
https://www.ht4u.net/reviews/2017/amd_ryzen_7_1800x_im_test/

1800X DDR4-2133 vs. DDR4-3200
Many games run ~15% faster! Now imagine switching off SMT and fixing the thread/CCX ping-pong in affected games.



I think it's important not to forget that increasing RAM clocks increases DF bandwidth, which would also ease contention and cross-CCX latency. It would be interesting to specifically compare 4+0 vs 2+2 RAM clock scaling, particularly in those games that show a big performance delta when run in 4+0.
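The RAM-clock/DF-bandwidth link above can be made concrete with some back-of-envelope math. A sketch under commonly reported assumptions about first-gen Zen (fabric clock equal to MEMCLK, 32 bytes per fabric cycle on the inter-CCX link), not vendor-confirmed figures:

```python
# Back-of-envelope: why faster DDR4 helps Ryzen beyond raw DRAM bandwidth.
# Assumptions (first-gen Zen, as commonly reported):
#   - The Data Fabric clock equals MEMCLK, i.e. half the DDR4 transfer rate.
#   - The inter-CCX link moves 32 bytes per fabric cycle in each direction.
#   - Dual-channel DDR4, 8 bytes per channel per transfer.

def ddr4_bandwidth_gbs(mt_s: int, channels: int = 2) -> float:
    """Peak DRAM bandwidth in GB/s for a given transfer rate (MT/s)."""
    return mt_s * 8 * channels / 1000

def fabric_bandwidth_gbs(mt_s: int) -> float:
    """Inter-CCX link bandwidth per direction in GB/s."""
    memclk_mhz = mt_s / 2          # DDR: two transfers per memory clock
    return memclk_mhz * 32 / 1000

for speed in (2133, 2666, 3200):
    print(f"DDR4-{speed}: DRAM {ddr4_bandwidth_gbs(speed):.1f} GB/s, "
          f"cross-CCX {fabric_bandwidth_gbs(speed):.1f} GB/s")
```

Under these assumptions, going from DDR4-2133 to DDR4-3200 lifts the cross-CCX link from ~34 GB/s to ~51 GB/s, a ~50% jump, which is consistent with fabric-sensitive games gaining well beyond what DRAM bandwidth alone would suggest.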
 
Not necessarily. They may simply mean that Windows is properly differentiating physical cores from logical cores, which is true. The Windows 10 scheduler IS doing that right. There's nothing broken with it. That being said, it isn't optimized for Ryzen, because of the two CCXs acting, in some ways, almost like separate chips. So AMD comes out and says, basically, "calm down, people, Microsoft didn't do anything wrong." Which is true, and builds some goodwill for AMD... so that hopefully they can convince MS to optimize more for Ryzen. In other words there's a difference between being broken and being unoptimized.

After all, you don't build goodwill by blaming Microsoft for everything -- it was AMD that decided this was the way they wanted to do it knowing full well that Windows wasn't currently optimized for this scenario. They had to know this would be an issue. Give 'em some time.
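The "Windows properly differentiates physical cores from logical cores" point can be sketched in a few lines. This is an illustrative model only: it assumes the common Windows-style enumeration where logical CPUs 2k and 2k+1 are the two SMT threads of physical core k (other OSes and tools may enumerate differently), and `spread_first` is a hypothetical helper, not a real scheduler API:

```python
# Sketch: how a scheduler can prefer idle physical cores over SMT siblings.
# Assumption: logical CPUs 2k and 2k+1 are the two hardware threads of
# physical core k (a common Windows-style enumeration; not universal).

def core_of(logical_cpu: int, smt_ways: int = 2) -> int:
    """Physical core that owns a given logical CPU."""
    return logical_cpu // smt_ways

def spread_first(n_threads: int, n_logical: int, smt_ways: int = 2):
    """Pick one logical CPU per physical core before doubling up,
    mirroring a scheduler that avoids SMT sibling contention."""
    per_core = n_logical // smt_ways
    picks = []
    for t in range(n_threads):
        core, sibling = t % per_core, t // per_core
        picks.append(core * smt_ways + sibling)
    return picks

# 8 threads on a 16-thread Ryzen: one per physical core, no sibling sharing.
print(spread_first(8, 16))   # [0, 2, 4, 6, 8, 10, 12, 14]
print(spread_first(10, 16))  # only threads 9 and 10 land on siblings
```

What this model can't capture is the CCX split: all 16 logical CPUs look uniform to it, which is exactly the part that needs Ryzen-specific tuning.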
They have had at least a year... AMD trying to downplay this is very bad news.
 
Are we supposed to be upset about lower FPS in some games? I'm sure not, this thing is awesome. Who were reviewers talking to when they said this is bad for gaming!? They don't know what the h... they are talking about. Talk about terrible advice; I'm glad I didn't listen to them. 🙂
 
They have had at least a year... AMD trying to downplay this is very bad news.

- Damn man, I wouldn't want to know your scenario for "very-very bad news" then 🙂. I think this is a minor stumble. Best case, updates will arrive and you'll get a speed bump; worst case, you won't and it'll still be a hell of a product.

Btw, is the 6900k a true octa core or two quads slammed together?
 
Why? Should every single asset of every single game fit entirely inside L2 cache?

That's the point IMO: if so, then the L2 would be the bottleneck and subject to "poor programming" (we could all go back to playing Quake 1, I guess). Bottlenecks by definition are neither good nor bad; a bottleneck is just a characteristic of a system in a given scenario. It may well be performing in an optimal configuration given that the primary bottleneck is the RAM.
 
Btw, is the 6900k a true octa core or two quads slammed together?
[Image: Intel Xeon E5 v4 block diagram, MCC and LCC dies]

Take your guess (the 6900K is a cut-down LCC die).
 
I think it's important not to forget that increasing RAM clocks increases DF bandwidth, which would also ease contention and cross-CCX latency. It would be interesting to specifically compare 4+0 vs 2+2 RAM clock scaling, particularly in those games that show a big performance delta when run in 4+0.
I had the same thought. For one thing, we should see lower latency in PCPer's tool, and then we could test with constant memory latency. For another, in games/apps the effects of inter-CCX communication and overall memory bandwidth + latency can't be isolated.

Well, maybe isolated tests of memory bandwidth and latency effects on 4+0 and 2+2 could be used to subtract them out for a clearer look at the CCX communication cost.
 
Some of those games like Witcher 3 are known to run much faster with faster RAM. Fallout 4 is another one.
It's just bad programming that leaves them hitting memory a lot.

No VIDEO GAME should be RAM speed limited like that. It takes poor programming to make it happen.

Those are games with big open worlds, meaning that they have what is technically known as a buttload of data. Lots of data, lots to load from memory.
 
According to PCGameshardware, the tested 4+0, 2+2, and 3+3 configs were still slower in BF1.

http://www.pcgameshardware.de/scree...-Test-CPU-Core-Scaling-Battlefield-1-pcgh.png


With Nvidia drivers, BF1 seems to scale well enough to more cores with DX11 and scale less with DX12.
DX11 seems to produce higher FPS too.
Wonder if AMD GPUs show the same DX12 behavior.
BF1 scales OK with memory clocks too.
That PCGH test is with low DRAM clocks and DX11; maybe it's somewhat different with decent RAM and DX12.
lol, I haven't paid this much attention to benchmarks since Gulftown...
 