Official AMD Ryzen Benchmarks, Reviews, Prices, and Discussion


looncraz

Senior member
Sep 12, 2011
It looks like Windows 10 is already handling CCX correctly.

Core 0, Core 1, Core 2, and Core 3 are shown as sharing one L3.

Core 4, Core 5, Core 6, and Core 7 are shown as sharing the other L3.

It's definitely not handling it correctly:

https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-8#post-38775716

Windows 7 is performing WORLDS better... in fact, THAT is the performance you would expect given the rest of Ryzen's results. Something is seriously wrong with how Microsoft handles Ryzen on Windows 10.
 

Mockingbird

Senior member
Feb 12, 2017
That may or may not affect relative FPS, but why would it affect SMT?

His copy of Windows 10 is either not up-to-date or incorrectly configured.

Coreinfo shows that his copy of Windows 10 thinks each thread is a separate core and that each core has its own L3, which is not the case.

starheap and iBoMbY have copies of Windows 10 that correctly detected that two threads share each core and four cores share each L3.

It's definitely not handling it correctly:

https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-8#post-38775716

Windows 7 is performing WORLDS better... in fact, THAT is the performance you would expect given the rest of Ryzen's results. Something is seriously wrong with how Microsoft handles Ryzen on Windows 10.

Aside from what I mentioned above, he also mentioned that he didn't use the same application to measure FPS.
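For anyone who wants to sanity-check their own topology dump, here is a minimal sketch of what correct vs. broken L3 detection looks like for an 8-core/16-thread Ryzen. It assumes Coreinfo's usual `*`/`-` mask format (exact output details vary by version), and the map strings below are hypothetical illustrations, not captured output:

```python
# Parse Coreinfo-style cache maps: each map line is a mask of logical CPUs,
# '*' = this logical CPU belongs to the cache, '-' = it does not.
# On a correctly detected 8-core/16-thread Ryzen, each L3 map should cover
# 8 logical processors (4 cores x 2 SMT threads per CCX).

def l3_sharing(cache_lines):
    """Return how many logical CPUs each L3 map line covers."""
    return [line.count('*') for line in cache_lines]

# Hypothetical maps in the shape Coreinfo prints for 16 logical CPUs:
correct = ['********--------',   # L3 #0: CCX 0 (cores 0-3, threads 0-7)
           '--------********']   # L3 #1: CCX 1 (cores 4-7, threads 8-15)
broken = ['*' + '-' * 15 for _ in range(16)]  # one "private L3" per thread

print(l3_sharing(correct))  # [8, 8] - four cores (8 threads) per L3
print(l3_sharing(broken))   # sixteen 1s - the misdetected case
```

If each "L3" line only covers one logical processor, the OS is treating every thread as its own core with a private L3, which is the misconfiguration being described.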
 

looncraz

Senior member
Sep 12, 2011
His copy of Windows 10 is either not up-to-date or incorrectly configured.

Coreinfo shows that his copy of Windows 10 thinks each thread is a separate core and that each core has its own L3, which is not the case.

starheap and iBoMbY have copies of Windows 10 that correctly detected that two threads share each core and four cores share each L3.



Aside from what I mentioned above, he also mentioned that he didn't use the same application to measure FPS.

As always, more testing is required. I have enough SSDs to create clean Windows 7 and Windows 10 installs just for testing (I plan to attempt to migrate my existing Windows 10 install for permanent usage... I'd hate to start all over).

Now if only my motherboard would come in...
 

Majcric

Golden Member
May 3, 2011
Hardware Unboxed detailed Ryzen GPU Bottleneck Testing:


Around the 2min mark, you can read some of the extended testing done that hasn't been published yet...

+1

Nice find. One of the better videos explaining how CPUs/GPUs work.
 

USER8000

Golden Member
Jun 23, 2012
Yes, this is AMD's own fault for claiming superiority or equality in gaming while it is only true in a few cases.

On the whole, though, the 1800X is slightly faster than the 6900K (once you remove outliers).

http://www.anandtech.com/bench/product/1853?vs=1729

In productivity, AMD probably underhyped it, since you can see even a Core i7 6950X get a good seeing-to in a number of instances. The gaming was not a good idea - they should not have done the direct comparisons with a Core i7 6900K, and instead shown it off in a more general sense against what they had before, i.e. an FX-9590, to show how much better Ryzen is compared to the FX CPUs.

I think some of the disappointment was more towards it not exactly matching a Core i7 6900K for gaming; in some reviews it can get close enough, whilst in others it really does not get that close.

But it wouldn't be surprising if, once we get some Windows updates and more stable motherboards, things look better on that front anyway.
 

Mockingbird

Senior member
Feb 12, 2017
Looks like a big portion of the SMT performance deficit is from a Windows 10 issue... look at The Stilt's numbers comparing Windows 10 vs Windows 7.

https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-8#post-38775716

How many times do I have to mention this???

His copy of Windows 10 is either not up-to-date or incorrectly configured.

Coreinfo shows that his copy of Windows 10 thinks each thread is a separate core and that each core has its own L3, which is not the case.

starheap and iBoMbY have copies of Windows 10 that correctly detected that two threads share each core and four cores share each L3.

Aside from what I mentioned above, he also mentioned that he didn't use the same application to measure FPS.
 

Mockingbird

Senior member
Feb 12, 2017
Even if that were true, The Stilt is in the top 0.01% in terms of technical ability. If he isn't using the correct configuration, then how would we expect reviewers (who are on average considerably less technical than The Stilt) to know?

...just look at the Coreinfo

It's not exactly hard to read.
 

Puffnstuff

Lifer
Mar 9, 2005
This release was like a wedding gone awry with the bride showing up wearing sweat pants and tennis shoes with her bridal gown. Close but not quite there yet.
 

Agent-47

Senior member
Jan 17, 2017
In productivity, AMD probably underhyped it, since you can see even a Core i7 6950X get a good seeing-to in a number of instances. The gaming was not a good idea - they should not have done the direct comparisons with a Core i7 6900K, and instead shown it off in a more general sense against what they had before, i.e. an FX-9590, to show how much better Ryzen is compared to the FX CPUs.
Thing is, they only showed 4K gaming, and used words like "keeping up with Intel" - meaning the CPU was not the bottleneck. Nothing wrong with that statement. They never claimed that Zen had a higher bottleneck threshold. They did not do anything wrong.

The only people beating the drums on this are the ones in the blue camp. The rest of us are waiting for a rebench, and the good thing is, benchmarks will be redone when Ryzen 5 comes out.

I think some of the disappointment was more towards it not exactly matching a Core i7 6900K for gaming; in some reviews it can get close enough, whilst in others it really does not get that close.

People should be patient. It's a new architecture. The cache is different. The hyperthreading is different. Why is it so hard to comprehend that code has to be optimized?
 

nathanddrews

Graphics Cards, CPU Moderator
Aug 9, 2016
I'd like to see independent validation of AMD's original live stream where they compared three machines playing Dota 2 and broadcasting simultaneously - one Ryzen, one i7, and one HEDT. Ryzen and HEDT were perfect; the i7 stuttered and struggled.

THAT is the specific use case for gaming that AMD originally made for Ryzen. I don't recall them claiming better pure gaming performance.
 

itsmydamnation

Diamond Member
Feb 6, 2011
I guess I will write more later upon finding time...

Pure performance-wise, AMD has done an amazing job in every respect. No one can sniff at that. It is AMD's Conroe, albeit only vs AMD themselves.
Only if you ignore practically all the review data: throughput per core is massive, throughput per watt is massive. This market (HEDT) is the one AMD will find hardest to compete in. In 6 months this core and infrastructure design is going to be in every market AMD and Intel compete in. The scariest one for Intel is actually laptops: an 8-core Zen idles amazingly well, a 4-core will do even better, and it isn't going to require a chipset while offering 2x NVMe and 4x USB 3.1, all with a significantly better GPU.

This is far more than "Conroe'd themselves". If you believe that, care to make a wager on what the market looks like in 12 months? By your logic AMD's market share across the board won't move, and neither will Intel's prices/SKUs, because they have only Conroe'd themselves...........


For anything Media/DAC/Encryption related, Ryzen is a sure bet. MT performance in a lot of applications is absolutely awesome. You have got to admit. It gives BDe a very good challenge at a fraction of the cost.

So the contradictions in logic start.

The cache, coherence
Any actual data to back this one up? I already know of OEMs (6 months ago, in fact) saying that the 32-core part is extremely impressive; how does that work across 8 CCXs if cache coherence is poor between 2?


and mem latencies are very poor.
The jury is still out on this one; I have seen no good methodical data to back it up. Separating what software tests report, how that changes with memory clock and latency, and the actual impact on performance hasn't even begun to be tested.

Along with SMT and driver problems, they will produce sucky results in many mem sensitive and nonoptimized benchmarks. Games, for certain. Going forward this should improve somewhat but don't hold your breath on it. AMD should've known all of this.
So they did, and in the places where they can "fix" it themselves it's all patched, like in the Linux kernel. But what's funny here is that you have taken two completely unverified, completely contestable points you can't back up and then fed them into a straw man of "non-optimized benchmarks".

But then we "shouldn't hold our breath", because it's apparently so hard for MS to change the scheduler and the behaviour that's causing such a performance difference between the High Performance and Balanced power settings.


Debunk That Hype
More like Debunk your post.....

AMD has a strong internet fanbase. I consider myself a fan.
Really? Because you look like a white-anter. Making the generic "I'm a fan" claim, but then at every point taking not even the neutral position but the negative one.


Then there are what we call AMD fanatics, far removed from reality. Like their counterparts from Intel, they don't understand science, data or reason. Their purpose is just to try and spin everything AMD in the best, craziest light imaginable.
Much like you, because you have made a whole bunch of claims so far that you can't back up with facts!

The fact is that simply disabling SMT nets you back 40% of the average gaming performance difference between Ryzen and the 6900K/7700K.
The fact is, in the very few reviews that did gaming power consumption where AMD's results were poor, system power was massively down, and then you find reviews like this: https://youtu.be/V5RP1CPpFVE?t=4m41s (videos of the benchmark runs here: https://www.youtube.com/watch?v=BXVIPo_qbc4&t=0s) where AMD is completely competitive across the board.
Then you find interesting data like this (from the man you credit with generating awesome data): https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-8#post-38775732

All of a sudden things don't look so bad.


Posting support in hordes doesn't bolster the accuracy of your belief.
And yet look at the data.............


They were seriously delusional on many fronts for the past 4 months on here, creating this hugely wishful hype that has in turn made Ryzen look average upon release. They pushed unrealistic expectations in everyone's face, which has hurt AMD's image in the end.
Hardly; the only disappointing part has been overclocks, and those will hopefully improve over time. I think the amount of time AMD gave reviewers to test is also a decision they might be regretting, because in the space of 2 days the issues, which are largely in the software/infrastructure space, are being found, and the doom and gloom of the 1080p gaming reviews wouldn't have happened.


Upon reviews, they post frenetically trying to make the same excuses to defend AMD, excuses we've heard since Phenom. This is a sorry state.
It's going to be interesting to look back on your post in 2 weeks, 1 month, 1 year, 5 years (because that's how long many people have had their Sandy Bridge CPUs for) and see just where the dice land.



1. Blender/POVRay was AMD's best case. Selected marketing. All of Horizon was pure marketing.
Complete BS not supported by actual data. AMD never gave POV-Ray results; how about 7-Zip or the many benchmarks The Stilt ran? But we all know only WinRAR and LINPACK matter, right, because everyone uses those all the time......

2. Doing everything altogether in one uarch, it's obvious the platform has A LOT of teething issues, and clocks were problematic. No wonder the delays. The platform is a beta. End users and reviewers NEVER have to wait for all this to be sorted. It is judged how it is sold
Are you just as judgmental of Intel (apparently not)? Because DDR4 memory support was a mess across multiple platforms for them too..... Just more white-anting from you.


3. Low Power Plus is Low Power Plus! I heard so much irrational pseudoscience nonsense in the buildup here. Every one of it has been debunked by data now.
Yep, 4.1GHz single-core turbo on their first 14nm CPU. The data in your post doesn't support your position of being an "AMD fan", because clocking at stock just as high as Intel's 8-core parts is clearly a complete fail.......


It is obvious power or process is absolutely no where close to Intel. That 1800X is choking being pumped +30W from the model below.
What does the Broadwell-E power/clock curve look like, mate? The highest-clocked chip is always going to have the worst perf per watt. For someone who constantly looks down on people, the fact you think this is somehow relevant just shows you either have no clue or are being deliberately dishonest.

Clocks/volts/currents are ceiling, OC minimal, XFR a gimmick suited to mobile and power way above 90W, and above Intels 140W chips when properly tested.
That's the point of XFR - more goalpost-moving white-ant BS. Either you choose to be ignorant of what XFR was designed to do or you are being deliberately dishonest: XFR ignores TDP and power consumption if system stability and cooling are in order.


No, sorry to all irrational pseudoscience. No magic 0.9v 4GHz at less than 80W because of a Neon FPU.
And no data to back up anything in your post so far either.


4. Piledriver vs Exc tests for IPC show 2% average difference now.
Complete BS, showing either 1. your lack of knowledge or 2. your continual white-anting, goalpost-moving, dishonest BS! I'll let you pick which of the two it is!

It is completely workload dependent: Piledriver benefits massively from having an L3 cache (look at things like SPEC results) and Excavator benefited massively from anything that can be kept within its L1/L2 (look at things like Prime95, etc.). Equating them as the same is to ignore the actual data.


And Ryzen ST isn't 1-7% like the hype, but 10-20% behind Intel.
But again you continue with your white-anting, goalpost-moving BS ("AMD fan", hey!). Maybe you should go look at The Stilt's ST data, all at 3.5GHz.
[chart: The Stilt's single-threaded results, all CPUs at 3.5GHz]


So against Haswell, across all tests, Zen does very well; not quite as well against Skylake. But remove the results for tests where 256-bit ops make a difference and Zen looks even better; remove the few statistical outliers from both sides (2 in Zen's favour, 2 in Skylake's) and look where we end up.......

Funny what happens when you actually analyse data. Then remember that there has been next to zero optimization for Zen so far, compared to the aggregate of Haswell + Skylake optimization.


5. For the average guy, Ryzen is certainly not the gamers CPU. 4C, high IPC is still king. Intel has better buys, especially for futureproofing. Excuses don't mitigate that CPU load tests - which give a proper picture at all ranges - show it well behind.

You may want to check your data again,

And seriously. Argue all you like but...110fps vs 100fps is NO DIFFERENCE to a gamer! I played competitive FPS for years since Quake. Charts showing +100FPS are only good to ascertain the technical 'better' but not for actual playability.
Look at that goalpost-moving and white-anting again - now all gamers are competitive gamers... ROFL. Also look how, after just 2 days, gaming performance data is trending upwards........
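For what it's worth, the 110 vs 100 fps point in the quote above is easy to put in frame-time terms (using just the two figures from the post, nothing else):

```python
# Convert a frame rate to per-frame time in milliseconds.
def frame_time_ms(fps):
    return 1000.0 / fps

# The two figures quoted above:
t100 = frame_time_ms(100)        # 10.0 ms per frame
t110 = frame_time_ms(110)        # ~9.09 ms per frame
print(round(t100 - t110, 2))     # under a millisecond saved per frame
```

Less than a millisecond per frame is the whole difference being argued over, which is the quoted post's point about playability vs. the technically "better" number.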


6. BitsandChips fed all the wrong zealous hype trains. It seems apparent they just wanted to cash in. Their latest, linking a 1% runtime variation in CB to "Neural Net Prediction", is equally ludicrous. It's called margin of error, for Christ's sake.
I can't stand BitsandChips, but you just look desperate here: there is a very clear trend line for Zen while the 6700 is having 1% variances. Either way it will be very easy to disprove, so why don't you disprove it?
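On the margin-of-error point: one quick way to judge whether a ~1% swing means anything is to compare it to the run-to-run coefficient of variation. The scores below are hypothetical, purely to illustrate the arithmetic:

```python
from statistics import mean, stdev

# Hypothetical Cinebench-style scores from five repeated runs
# on the same machine (not real measurements).
runs = [1620, 1635, 1628, 1612, 1641]

# Coefficient of variation: run-to-run noise as a % of the mean.
cv = stdev(runs) / mean(runs) * 100
print(round(cv, 2))  # noise of roughly this size swallows a 1% delta
```

If the benchmark's own noise floor is of the same order as the observed 1% variation, the variation tells you nothing without many more runs.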


7. HEDT doesn't care for price or power. It cares about absolute performance. Which is, still, ruled by Intel.
Yes, we are all running 6950X's, are we..... ROFL

You should have been around for the K7/Prescott days; then you would know that's simply not true. The number of 2500+/2600+ chips AMD sold (best perf per $) while Intel held the performance crown with the 3.2GHz HT P4 is what kept AMD alive while K8 was delayed.

I am a hardware and gaming enthusiast - go look at some of the data I have provided in threads and the hours I spend generating it. My last CPU was an IVB 3770K @ $450 AUD; there is no way I am spending $1469 on a 6900K. So that leaves me at $485 for a 7700K, or $469/$569 for a 1700 or 1700X. I know that over the next 3-5 years games are only going to use more threads, not fewer, and I'm happy to overclock, so in my opinion the CPU to recommend to gamers, especially those keeping it for 3-5 years, has to be the 1700 by a significant margin.

Any serious gamer would put the money saved into more SSDs (who wants to load games from spinning rust!) and more GPU!


AMD has now given Intel a challenger for certain workloads, however.
If by that you mean basically everything that isn't benefiting from 256-bit ops, then yes.

8. TheStilt did an awesome job! Should be renamed TheKanterStilt.



Now that's actually a quality data point.

Maybe you should actually look at his data then......

Fing hell this post took me 3 hours to write........lol
 

imported_jjj

Senior member
Feb 14, 2009
How many times do I have to mention this???

His copy of Windows 10 is either not up-to-date or incorrectly configured.

Coreinfo shows that his copy of Windows 10 thinks each thread is a separate core and that each core has its own L3, which is not the case.

starheap and iBoMbY have copies of Windows 10 that correctly detected that two threads share each core and four cores share each L3.

Aside from what I mentioned above, he also mentioned that he didn't use the same application to measure FPS.


Even if his Win 10 were borked, he still gets a boost in Win 7 with SMT enabled over SMT disabled.
Review sites saw a huge penalty with SMT enabled under Win 10 in this game.
 

french toast

Senior member
Feb 22, 2017
@itsmydamnation; the world record for longest post goes to you :)


LOL.
Zen has a couple of severe bottlenecks I can see: it can only dispatch 2 on-screen characters per core, but SMT enables "double pumping" of on-screen characters, though they must be half the size.
Intel's superior uarch has 2x more throughput of on-screen characters - not 8 half-size characters as such, but a whopping 4 full-size characters per core.
Ryzen's neural net predictor, however, enables harder game modes: it knows exactly what the player will do next, using XFR to turbo the enemies' reaction times.

It's a tough pick as the two GAME IPCs aren't really comparable. I think Intel's higher throughput is better for the large multiplayer maps that I play. :D
Evidence of Nehalem game IPC I posted earlier.
 

USER8000

Golden Member
Jun 23, 2012
Thing is, they only showed 4K gaming, and used words like "keeping up with Intel" - meaning the CPU was not the bottleneck. Nothing wrong with that statement. They never claimed that Zen had a higher bottleneck threshold. They did not do anything wrong.

The only people beating the drums on this are the ones in the blue camp. The rest of us are waiting for a rebench, and the good thing is, benchmarks will be redone when Ryzen 5 comes out.



People should be patient. It's a new architecture. The cache is different. The hyperthreading is different. Why is it so hard to comprehend that code has to be optimized?

To put it this way: I would rather give AMD my money just for the crappy way Intel seems to lock down and monetise every little feature they sell, and changes sockets more often than they change their underwear - meaning you had better hope your motherboard does not die a year or two after the socket has been replaced (unless you want to pay stupid money for a NOS replacement).

However, we heard the same thing about Bulldozer: it's a new design and it will take time to optimise. You know what, as time progressed it did actually move ahead of the Phenom II in many respects, but it took time.
The main concern is not whether the improvements will come (they will, in my view), but when, and how long it will take.

That is the only concern I have; Intel and Microsoft have a very close relationship and seem to treat AMD as a second-tier partner, so it does worry me when people promise it will definitely be better, with no real time-frame. We all live in hope that the post I made about the update coming in a month (remember, I made that post a while back) will be the silver bullet that solves a lot of these SMT problems.

At the same time, if people need to do a build and can't wait for the platform to be sorted at the OS and BIOS level, then I don't see why anyone should jump on them if they do get a Core i5 or Core i7 for their gaming rig, especially with a game like Mass Effect: Andromeda being released in a few weeks.
 

Crumpet

Senior member
Jan 15, 2017
People keep mentioning Bulldozer like Ryzen isn't a really sweet cpu...

Yes, AMD messed up with Bulldozer. Ryzen looks like a pretty good processor to me, even with its foibles.
 

looncraz

Senior member
Sep 12, 2011
AMD released it a month or two too early IMO. Otherwise they have a strong foundation to work with. Much better than BD.

Zen will end up a strong perf/w beast for sure.

This is a way to force others to help you solve your problems.

Microsoft can't be convinced to apply a patch from AMD without hardware in the market, and motherboard companies aren't going to invest anywhere near as heavily in a flawless launch when there are weeks or months to go, compared to a product which is already in the market and receiving complaints from customers... not to mention the absolute need for objective data points and the many different configurations that have to be tested.

There's an old adage: "We'll test it in production."

This lets you know where the biggest issues - for the customers - really are. Otherwise AMD techs would be focusing on their own little pet areas of the product while potentially more important areas lag behind.
 

USER8000

Golden Member
Jun 23, 2012
AMD released it a month or two too early IMO. Otherwise they have a strong foundation to work with. Much better than BD.

Zen will be a strong perf/w beast for sure.

In agreement here, and I think it also does not help that, in the UK at least, many who bought the CPUs are still waiting on motherboards too. AMD really should have given a few more weeks for the motherboards to mature and the Windows update to be released; it would also have given reviewers more time to test Ryzen properly. From what we are hearing, the reviewers were pushed to the last second, especially with new updates being pushed to the motherboards daily, meaning test results needed to be redone a few times.
 

DeeJayBump

Member
Oct 9, 2008
...Fing hell this post took me 3 hours to write........lol...

3 hours of your life that you can't get back. Do yourself a favor: place these types of forum posters on ignore and move on with your forum life. Their sole purpose in tech forums is not to acquiesce to truth/facts/reason, but to push a singular agenda no matter what.
 

USER8000

Golden Member
Jun 23, 2012
This lets you know where the biggest issues - for the customers - really are. Otherwise AMD techs would be focusing on their own little pet areas of the product while potentially more important areas lag behind.

It is quite easy to see what AMD was concentrating on - it wasn't gaming, it was productivity. Have you noticed how, under Linux or Windows, Ryzen seems to be relatively bug-free and has great performance there?? What BIOS issues?? What SMT issues?? It seems to work well regardless.

I have a feeling AMD looked at gaming performance, said "it's better than what we have now, it will do for now, we can work on it over time", and was more worried about getting non-gaming software performance in order first.

If anything, I would say this is the opposite of the order they tend to do things in! :p