Official AMD Ryzen Benchmarks, Reviews, Prices, and Discussion

Page 160 - AnandTech Forums

unseenmorbidity

Golden Member
Nov 27, 2016
1,395
967
96
I bought the 1800x as a gift to AMD for launching this chip. My hope was that it would wind up being the best bin available, but . . . 1700 might be better, at least for speeds up to 3.9 GHz anyway.

Finally got my chip in, just no board, so . . . hey, nice box.



They ain't doin me no favors.
That is why I went with the 1700 over 1600x.

lol, I literally just talked to the FedEx man and got my CPU.

The Taichi is the last piece of the puzzle... Come on Newegg, daddy needs a new mobo.
 
Last edited:
  • Like
Reactions: richierich1212

Agent-47

Senior member
Jan 17, 2017
290
249
76
Intel never required "the whole world to change" in order to use their products properly. Good thing we have Intel always looking out for our interests.

Probably something to do with the fact that Intel has literally dictated the optimizations that are currently in place, because for 12 years now there has been nothing but Intel platforms to optimize for.
 
  • Like
Reactions: looncraz

HutchinsonJC

Senior member
Apr 15, 2007
465
202
126
No. People do tests on realistic setups to determine current/TODAY performance. Low res is to see which CPU will bottleneck first in the future (which is an invalid assumption).

I've never seen low resolution benchmarks as a standard of gaming for today or for the future. That's not me saying I've never seen review sites present it that way. Low resolution gaming benchmarks are a CPU benchmark. That's it. It removes other hardware components as bottlenecks and allows us to see how THAT CPU, with zero hold backs, performs on THAT game's code.

It is NOT a gaming benchmark, it is NOT supposed to be indicative of actual gaming performance.

It MIGHT be indicative of what is to come *IF* the reviewer does a proper job and presents all the facts about where hardware is going in the future with multi-threading and where developers are going with multi-threading and can actually include screens showing core usage, etc.

As for the example of one particular game, with the same couple of CPUs compared against each other using a new GPU from a few years later: is it possible a change in DirectX has done something to better share resources across many threads? He himself didn't really do a proper analysis in that video.
 
  • Like
Reactions: CHADBOGA

cytg111

Lifer
Mar 17, 2008
23,170
12,824
136
I don't care; I want an explanation, something repeatable and testable, that explains the poor low-res gaming results. The argument that "nobody games at 640x480" is null and void IMO, because these numbers may be indicative of something else; it might not be "low-res games" that it falls short on.
Is there anything official on a Windows 10 update to the scheduler that fixes SMT allocation?
I am sure that performance will rise with updates and patches. Where can we follow this? Who will keep reviewing the platform as it matures?
 

Shivansps

Diamond Member
Sep 11, 2013
3,851
1,518
136
Hyper-threading, MCM dual core, MCM quad core, MMX, IA64 (failed), RDRAM (failed), socket after socket, 0.5mm mounting hole changes every generation, Broadwell L4, ... the list is endless.

Yes, all those things could be applied to AMD, and more.
Phenom X6: "ergh, the problem is that you need to make your software more MT". FX-8150: "ergh, the problem is that you need to make your software even more MT". CMT, 754, 939, AM2, AM2+, AM3, AM3+, FM1, FM2, FM2+, AM1, XOP, FMA4, 3DNow!, SSE4A. Polaris: "errgh, the problem is you need to make your game DX12/Vulkan". Ryzen: "arrrgh, there is no way you're gonna believe me if I say the problem is MT in software for the 3rd time, right?".

Seriously, this is getting old... it's like Intel screaming at developers about why they don't use AVX in every new generation since SB.
 

The Alias

Senior member
Aug 22, 2012
647
58
91
Yes, all those things could be applied to AMD, and more.
Phenom X6: "ergh, the problem is that you need to make your software more MT". FX-8150: "ergh, the problem is that you need to make your software even more MT". CMT, 754, 939, AM2, AM2+, AM3, AM3+, FM1, FM2, FM2+, AM1, XOP, FMA4, 3DNow!, SSE4A. Polaris: "errgh, the problem is you need to make your game DX12/Vulkan". Ryzen: "arrrgh, there is no way you're gonna believe me if I say the problem is MT in software for the 3rd time, right?".

Seriously, this is getting old... it's like Intel screaming at developers about why they don't use AVX in every new generation since SB.
You literally just listed every socket and instruction set AMD has created in the past 12 years, just to try to rebut someone's point that Intel has required the industry to change multiple times, after the original poster incorrectly stated "Intel never required "the whole world to change" in order to use their products properly. Good thing we have Intel always looking out for our interests." Please educate yourself on the argument before you waste your time.
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
Yes, all those things could be applied to AMD, and more.
Phenom X6: "ergh, the problem is that you need to make your software more MT". FX-8150: "ergh, the problem is that you need to make your software even more MT". CMT, 754, 939, AM2, AM2+, AM3, AM3+, FM1, FM2, FM2+, AM1, XOP, FMA4, 3DNow!, SSE4A. Polaris: "errgh, the problem is you need to make your game DX12/Vulkan". Ryzen: "arrrgh, there is no way you're gonna believe me if I say the problem is MT in software for the 3rd time, right?".

Seriously, this is getting old... it's like Intel screaming at developers about why they don't use AVX in every new generation since SB.

New instruction sets are optional - as are new APIs. Your inclusion of them is silly. I'll grant you the socket jumble - but at least there was some compatibility with existing components.

Multi-core/CPU wasn't an AMD invention - I did it with Intel for years (multi-CPU) before AMD created their first multi-core CPU... something which was a really fantastic move and required no real software changes to enjoy for me because I wasn't running Windows.

I like how you pick on the Phenom II X6... it was an absolutely fantastic CPU that is useful to this very day and still fetches something of a premium. CMT was a great idea - the implementation was just bogus. Windows only needed to handle it differently... big whoop - Windows had to have a nearly complete scheduler rewrite because of Intel's Hyper-threading... and there are still problems with it 15 years later!

Do you remember IA64? That was Intel's attempt to destroy x86 and 32-bit compatibility entirely. The reason? To eliminate all competition.

It took AMD to create a proper 64-bit ISA - and they ensured that it was backwards compatible with 32-bit, 16-bit, and so-on. Intel had no choice but to adopt AMD's 64-bit ISA. Your Intel CPU is using AMD technology. Pretty cool, eh?
 
Last edited:

Shivansps

Diamond Member
Sep 11, 2013
3,851
1,518
136
New instruction sets are optional - as are new APIs. Your inclusion of them is silly. I'll grant you the socket jumble - but at least there was some compatibility with existing components.

Multi-core/CPU wasn't an AMD invention - I did it with Intel for years before AMD created their first multi-core CPU... something which was a really fantastic move and required no real software changes to enjoy for me because I wasn't running Windows.

I like how you pick on the Phenom II X6... it was an absolutely fantastic CPU that is useful to this very day and still fetches something of a premium. CMT was a great idea - the implementation was just bogus. Windows only needed to handle it differently... big whoop - Windows had to have a nearly complete scheduler rewrite because of Intel's Hyper-threading... and there are still problems with it 15 years later!

Do you remember IA64? That was Intel's attempt to destroy x86 and 32-bit compatibility entirely. The reason? To eliminate all competition.

It took AMD to create a proper 64-bit ISA - and they ensured that it was backwards compatible with 32-bit, 16-bit, and so-on. Intel had no choice but to adopt AMD's 64-bit ISA. Your Intel CPU is using AMD technology. Pretty cool, eh?

Yes, most of Intel's attempts at going against the world FAILED; that's the whole point here. If I take your argument, only HT was the big against-the-world attempt that passed. AMD is doing this kind of stuff on EVERY RELEASE. I'm not justifying what Intel did; you are trying to justify what AMD did, big difference. I did not even mention Intel until you did.

The bottom line is, I'm tired of this... every new AMD product comes with "it will work better after all those changes are done". This has to stop, seriously.

BTW, good trolling attempt there.
 
Last edited:

CatMerc

Golden Member
Jul 16, 2016
1,114
1,149
136
Yes, most of Intel's attempts at going against the world FAILED; that's the whole point here. If I take your argument, only HT was the big against-the-world attempt that passed. AMD is doing this kind of stuff on EVERY RELEASE. I'm not justifying what Intel did; you are trying to justify what AMD did, big difference. I did not even mention Intel until you did.
I don't see how AMD is going against the world here.
One change in Windows' scheduler, a change that already exists thanks to the first multi-core processors being similarly designed, will fix the only thing that makes Zen majorly different to work with on a software level. Most developers won't have to deal with it, and where they will, AMD is already sending engineers to help smooth it out.

Unless you mean multithreading, in which case I would like to point at the entire industry ALREADY moving in that direction. It's not all the way there yet, but more and more modern titles are heavily multithreaded. It's a simple logical progression, since processors appear to have hit an IPC wall and clockspeed gains are painfully slow. More cores is the only reasonable way to increase performance anymore.
All AMD is doing here is giving consumers a processor ready for that, without the massive premium Intel charges for it.

When I bought my 6600K, the absolute majority of games performed pretty much the same as with the 6700K when on similar clockspeeds. Now hyperthreading is seeing massive gains in some games, and my 6600K is being left behind. An 8c/16t CPU will not hit any such limit any time soon.
 

unseenmorbidity

Golden Member
Nov 27, 2016
1,395
967
96
I had to be among the first to backorder the Taichi on the 2nd, and it says 1-15 days, but I have a sinking feeling that I won't get it until the end of the month.
 

SunburstLP

Member
Jun 15, 2014
86
20
81
AMD is doing this kind of stuff on EVERY RELEASE. I'm not justifying what Intel did; you are trying to justify what AMD did, big difference.

Citation needed. I'm not going to try and assign a motivation to your last few posts, that's not fair. However, I don't get what point you're trying to make. The way I'm reading it is that you're displeased with AMD for launching one of the most clever, clean-sheet CPUs ever and stating how programming needs to be done to best utilize it. If I'm mis-reading you, please clarify.
 

Shivansps

Diamond Member
Sep 11, 2013
3,851
1,518
136
I don't see how AMD is going against the world here.
One change in Windows' scheduler, a change that already exists thanks to the first multi-core processors being similarly designed, will fix the only thing that makes Zen majorly different to work with on a software level. Most developers won't have to deal with it, and where they will, AMD is already sending engineers to help smooth it out.

Unless you mean multithreading, in which case I would like to point at the entire industry ALREADY moving in that direction. It's not all the way there yet, but more and more modern titles are heavily multithreaded. It's a simple logical progression, since processors appear to have hit an IPC wall and clockspeed gains are painfully slow. More cores is the only reasonable way to increase performance anymore.
All AMD is doing here is giving consumers a processor ready for that, without the massive premium Intel charges for it.

When I bought my 6600K, the absolute majority of games performed pretty much the same as with the 6700K when on similar clockspeeds. Now hyperthreading is seeing massive gains in some games, and my 6600K is being left behind. An 8c/16t CPU will not hit any such limit any time soon.

What I mean is: this is a server design running basic desktop tasks. From the scheduler's point of view, while it's not the same, this is very close to having a dual-socket motherboard in your everyday PC.
Most production software is not going to have a problem with it; almost all of it does linear tasks. But games aren't production software, and the two CCXes ARE a problem for them; that's the very reason we need a scheduler update in the first place.
No one can tell if Ryzen is gonna work as well as a true 8-core once the main bottleneck in games (the main thread) is removed; a scheduler update means trying to keep all threads on the same CCX and avoiding CCX jumping as much as possible.

There is a reason why not many people buy dual-socket motherboards and try to game on them, right?
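The dual-socket analogy can actually be tested: pinning a game's process to the cores of a single CCX stops the scheduler from ever migrating its threads across the inter-CCX fabric. A minimal sketch on Linux (Windows would use SetProcessAffinityMask instead); the `CCX0` core set is a hypothetical mapping, and the real logical-CPU layout depends on the chip and SMT enumeration:

```python
import os

# Hypothetical mapping: assume logical CPUs 0-7 form the first CCX
# (4 cores x 2 SMT threads). The real layout depends on the system.
CCX0 = set(range(8))

def pin_to_ccx(pid=0, ccx_cpus=CCX0):
    """Restrict a process to one CCX so its threads can never be
    migrated across the inter-CCX fabric (Linux sched_setaffinity)."""
    available = os.sched_getaffinity(pid)   # CPUs the process may use now
    target = ccx_cpus & available           # drop CPUs that don't exist here
    if target:
        os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)
```

Of course, pinning like this also gives up the other CCX's cores, so it only pays off when cross-CCX migration costs more than the lost parallelism.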
 

Shivansps

Diamond Member
Sep 11, 2013
3,851
1,518
136
Citation needed. I'm not going to try and assign a motivation to your last few posts, that's not fair. However, I don't get what point you're trying to make. The way I'm reading it is that you're displeased with AMD for launching one of the most clever, clean-sheet CPUs ever and stating how programming needs to be done to best utilize it. If I'm mis-reading you, please clarify.

AMD wants developers to go back to an old way of programming; devs don't babysit threads anymore, that's the scheduler's job. They want devs to account for every current and future CPU, write proper MT code, and manually set affinity to best utilize every one of them? Are they crazy or what? Devs just don't do that anymore; they just start threads and let the scheduler figure out what to do with them, which is also cross-platform friendly.
Once the main bottleneck today, the main thread, is removed, we are all gonna win from it.
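The "start threads and let the scheduler sort it out" style described above can be sketched generically; `work` and `run_frame` are hypothetical stand-ins, not any real engine's API:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    # Placeholder for one slice of a frame's work (physics, AI, audio...).
    return sum(x * x for x in chunk)

def run_frame(data, workers=os.cpu_count() or 4):
    # Split the work, hand the pieces to a pool, and let the OS scheduler
    # decide which core (or CCX) each thread actually lands on.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(work, chunks))
```

Nothing here knows or cares about the CPU topology, which is exactly the portability the post is defending; topology awareness is left to the scheduler.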
 
Last edited:

KompuKare

Golden Member
Jul 28, 2009
1,013
924
136
Seriously, this is getting old... it's like Intel screaming at developers about why they don't use AVX in every new generation since SB.
Well, unless that was sarcasm, it's quite easy to see why developers do not use AVX much: Intel almost at random decides to fuse off AVX on such a large proportion of their range that I can see why developers might be reluctant to invest much in it. Only if their compiler gives it to them semi-automatically do most bother.

Can't say that I or many others will feel anything but Schadenfreude if Intel's endless segmentation comes back to bite them now that AMD has a reasonably competitive design. I mean, in a totally different market, they just more-or-less gave up on Atom after dumping billions on it, and one of the main reasons is that for the first couple of years of Atom they not only gave it no attention but seemed so worried about it eating into Core sales that they left it crippled.
 

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
Well, I guess I learned my lesson not to act the goat against a slew of others.

But since I have not been keeping up with the thread the last few days: did anyone actually test what happens when you try to work around the supposed scheduling issue by forcing affinities and comparing results?

Also, I am surprised to learn that Ryzens are actually binned. That is actually a scary discovery.
 

SunburstLP

Member
Jun 15, 2014
86
20
81
AMD wants developers to go back to an old way of programming; devs don't babysit threads anymore, that's the scheduler's job.

Hasn't considering the hardware that will execute code sort of always been pretty important to code-monkeys? I'm obviously not a programmer, so there is most likely a disconnect between what I think and reality.

It seems to me that if you're going to invest yourself heavily in a project for months or years, you would try to ensure that your efforts perform as well as possible for as wide a market as is reasonable. What I'm getting at is that I think it's pretty much a core responsibility to make accommodations for your market within reason.
 

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
What I mean is: this is a server design running basic desktop tasks. From the scheduler's point of view, while it's not the same, this is very close to having a dual-socket motherboard in your everyday PC.

Like the first Intel quads? (Clovertown et al)

Communication between the two different dies occurred across the FSB meaning high latency and woeful bandwidth - if you didn't recognise the problem and deal with it appropriately.

You could get up to a 15% improvement in performance just by setting core affinity for threads on those quads.
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
Well, I guess I learned my lesson not to act the goat against a slew of others.

But since I have not been keeping up with the thread the last few days: did anyone actually test what happens when you try to work around the supposed scheduling issue by forcing affinities and comparing results?

Also, I am surprised to learn that Ryzens are actually binned. That is actually a scary discovery.

It isn't actually possible to properly set thread affinity to get the behavior you want. You would need to be able to snoop into a process and manually set individual thread affinities.

Windows needs to recognize AMD's SMT implementation and to resist moving threads to the other CCX on context switches - keep them on the CCX.

That's an easy 10% in games that are impacted by this. A few edge cases will gain more, naturally.

Applications that are fully threaded don't have a problem because Windows won't shuffle threads around...

THAT GIVES ME AN IDEA!!

Someone needs to run a heavy, very low-priority process that loads up every core and see if gaming performance improves - that should trick the Windows kernel into not load-leveling, as it won't find a lesser-used core.
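A rough sketch of that experiment (hypothetical and untested against a real game; it uses POSIX nice() for the low priority, where Windows would use SetPriorityClass with IDLE_PRIORITY_CLASS):

```python
import multiprocessing as mp
import os

def _spin(stop):
    try:
        os.nice(19)            # lowest priority: real work always preempts us
    except (AttributeError, PermissionError):
        pass                   # os.nice is POSIX-only
    while not stop.is_set():
        pass                   # burn cycles so no core ever looks idle

def load_all_cores():
    # One spinner per logical CPU; the hope is the kernel then finds no
    # lesser-used core to migrate the game's threads onto.
    stop = mp.Event()
    procs = [mp.Process(target=_spin, args=(stop,), daemon=True)
             for _ in range(os.cpu_count() or 4)]
    for p in procs:
        p.start()
    return stop, procs

def stop_load(stop, procs):
    stop.set()
    for p in procs:
        p.join()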
 

Atari2600

Golden Member
Nov 22, 2016
1,409
1,655
136
AMD wants developers to go back to an old way of programming; devs don't babysit threads anymore, that's the scheduler's job.

I thought it was pretty clear that AMD wants the scheduler updated to consider the effect of thrashing the cache across the CCXes.

Obviously it wasn't clear enough.

Maybe AMD need to break out the crayons next time to explain it...