AMD Ryzen (Summit Ridge) Benchmarks Thread (use new thread)


sirmo

Golden Member
Oct 10, 2011
1,012
384
136
You don't think they would have run it higher if they could? Think about it.
Production is still ramping. Retail will obviously be a higher bin; these samples come from small batches with very little opportunity to harvest higher-clocked parts. There are also things like steppings and process tweaks that can increase clocks.

Lisa seemed confident. She said "base speeds of 3.4GHz and higher", so there will be parts with base clocks higher than 3.4GHz.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Amazingly, the guy who did the Zen test at CanardPC (and who once merged his site with CanardPC) is the very one who got an Athlon 64 sample (at 1.4GHz) eight months before the CPU launched, so history seems to repeat itself...

http://www.x86-secret.com/popups/articleswindow.php?id=67

That chip may have had other issues making it underperform, because otherwise it should have been beating other similarly clocked chips. It's generally accepted that the Athlon 64 outperformed its predecessors by 30% per clock.
 

Doom2pro

Senior member
Apr 2, 2016
587
619
106
First AMD RYZEN Review Leaked – Aggregate Performance 46% Faster Than FX-8370 with 93 Watt Power Consumption

http://wccftech.com/amd-ryzen-review-leaked/

http://www.cpchardware.com/cpc-hardware-n31-debarque-kiosque/

That's pretty bold for WCCFTech to indirectly call Lisa Su a liar... She CLEARLY said AT LEAST 3.4GHz base (if not more at launch). Here they are claiming there isn't time to get clocks higher than that review/engineering sample (which was 3.15GHz).
 

bjt2

Senior member
Sep 11, 2016
784
180
86
You don't think they would have run it higher if they could? Think about it.

They overvolted it to avoid crashing during such a critical event, and they still managed to stay under 95W (actually under 80W of difference between load and idle, once you account for PSU and VRM efficiency), so no, I don't think so.
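As a rough illustration of that arithmetic, here is a minimal C sketch; the ~90% PSU and ~90% VRM efficiency figures are assumptions for the example, not measured values from the demo:

[CODE]
/* Back-of-envelope: wall-power delta -> estimated CPU package power.
 * Efficiency figures are illustrative assumptions, not measurements. */
#include <stdio.h>

int main(void)
{
    double wall_delta_w = 95.0; /* load-minus-idle draw at the wall (assumed) */
    double psu_eff      = 0.90; /* typical PSU efficiency at this load (assumed) */
    double vrm_eff      = 0.90; /* typical motherboard VRM efficiency (assumed) */

    double cpu_power_w = wall_delta_w * psu_eff * vrm_eff;
    printf("Estimated CPU package power: %.1f W\n", cpu_power_w); /* ~77 W */
    return 0;
}
[/CODE]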
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
That's pretty bold for WCCFTech to indirectly call Lisa Su a liar... She CLEARLY said AT LEAST 3.4GHz base (if not more at launch). Here they are claiming there isn't time to get clocks higher than that review/engineering sample (which was 3.15GHz).

I think that was in the article (the review) from the magazine they are talking about.
They didn't do the review themselves.
 

Abwx

Lifer
Apr 2, 2011
10,968
3,496
136
That chip may have had other issues making it underperform, because otherwise it should have been beating other similarly clocked chips. It's generally accepted that the Athlon 64 outperformed its predecessors by 30% per clock.

Well, that was quite an early DT sample: the review is dated January 26, the server parts were released in late March, and the desktop parts in October. Times have changed, apparently...
 

Doom2pro

Senior member
Apr 2, 2016
587
619
106
I think that was in the article (the review) from the magazine they are talking about.
They didn't do the review themselves.

Nope, it was in the WCCFTech portion:

This shows that the review was probably conducted before the Ryzen event and is probably an engineering sample sent out to press to test under NDA. While it is possible (read: improbable) that the final product will have higher clocks, at this point it is starting to look very unlikely since AMD would have made sure that any outdated engineering samples were replaced before the reviews went live. Without any further ado, let’s dig into the review:
 

Sven_eng

Member
Nov 1, 2016
110
57
61
Nobody in the press has an official Ryzen sample from AMD, ES or final. The Canard Ryzen sample probably came from a motherboard company.
 

Doom2pro

Senior member
Apr 2, 2016
587
619
106
Nobody in the press has an official Ryzen sample from AMD, ES or final. The Canard Ryzen sample probably came from a motherboard company.

It seems WCCFTech is making a lot of bold assumptions. Calling this a review instead of a leak is one of them, and stating that the 3.15GHz base is likely the final clock is even bolder, considering Lisa Su herself said at least 3.4GHz base.
 

.vodka

Golden Member
Dec 5, 2014
1,203
1,537
136
That's pretty bold for WCCFTech to indirectly call Lisa Su a liar... She CLEARLY said AT LEAST 3.4GHz base (if not more at launch). Here they are claiming there isn't time to get clocks higher than that review/engineering sample (which was 3.15GHz).

"Before the reviews went live" LOL, it hardly qualifies for a "review". What a clueless site that is. This guy took an ES, ran it through some software, took some data points and calculated some relative numbers. Same way as he did back then with the 1.4GHz Athlon 64 ES in 2003. The only way you do that without breaking the law is by not signing an NDA and having some serious contacts in the industry... still, someone's gonna have to answer for this, AMD is sure not happy. This got spoiled somewhat, only two weeks away.

It seems wccf didn't pay any attention to what AMD stated at New Horizon (a 3.4+ GHz base clock, and exceeding the initial 40%-over-Excavator (XV) IPC target). It is now very clear that it wasn't a marketing statement hiding, for example, a 41% increase. No, this is much higher. This 3.15/3.4GHz sample scores 5-10% below the 6900K; that gap can easily be closed by getting the base clocks into the 3.4/3.5GHz range, which is probably what AMD is working on right now until launch. It's insane how they managed to jump from faildozer to BW-E class with a new uarch and a shoestring budget. This is AMD's Conroe moment.

I think wccf was too fixated on the 40% figure, expecting SB IPC as we all were, and wrote that out of shock/denial. Oh, the following days and weeks are going to be hilarious. I love this. Finally, some excitement in the x86 CPU world. Competition, I'm glad you're back!
 

jpiniero

Lifer
Oct 1, 2010
14,629
5,246
136
Production is still ramping. Retail will obviously be a higher bin; these samples come from small batches with very little opportunity to harvest higher-clocked parts. There are also things like steppings and process tweaks that can increase clocks.

I'm sure they will do another stepping; I just don't think they want to wait for it before releasing something. I guess the question is whether this is good enough to really tempt people at, say, $700 if it can only do 3.4 and not much more.
 

Doom2pro

Senior member
Apr 2, 2016
587
619
106
Someone in the WCCFTech comments said this in reference to the ES code (AMD 2D3151A2M88E):

Rev 2D
Clock 315 (3.15GHz)
2M = 2MB L2 cache
88E = 8 cores, 8 threads, eco
This is a business CPU with a 65W TDP and no HT.
It's not an SR7/SR5 consumer product.

Can anyone confirm this?
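For what it's worth, here is a minimal C sketch that slices the string the way that commenter describes; the field boundaries and meanings are that commenter's guess, not a documented AMD OPN format, and the "1A" portion is left unexplained:

[CODE]
/* Sketch: decode the ES string "2D3151A2M88E" using the field layout
 * claimed in the WCCFTech comment. Purely illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *es = "2D3151A2M88E";

    char rev[3] = {0}, clock[4] = {0}, l2[3] = {0}, config[4] = {0};

    memcpy(rev,    es + 0, 2); /* "2D"  -> silicon revision (claimed)        */
    memcpy(clock,  es + 2, 3); /* "315" -> 3.15 GHz base clock (claimed)     */
    /* es[5..6] ("1A") is not explained in the comment                       */
    memcpy(l2,     es + 7, 2); /* "2M"  -> 2 MB L2 cache (claimed)           */
    memcpy(config, es + 9, 3); /* "88E" -> 8 cores, 8 threads, eco (claimed) */

    printf("rev=%s clock=%.2f GHz L2=%s config=%s\n",
           rev, atoi(clock) / 100.0, l2, config);
    return 0;
}
[/CODE]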
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
I'm sure they will do another stepping; I just don't think they want to wait for it before releasing something. I guess the question is whether this is good enough to really tempt people at, say, $700 if it can only do 3.4 and not much more.

/facepalm
The 5960X can do 4.8GHz under full load (all cores).
 

nismotigerwvu

Golden Member
May 13, 2004
1,568
33
91
It's insane how they managed to jump from faildozer to BW-E class with a new uarch and a shoestring budget. This is AMD's Conroe moment.
Competition, I'm glad you're back!

Let's not get ahead of ourselves too much here. Yes, the early signs are looking good, if not great, but we still don't know what pricing or availability will be like. Once final hardware is on the shelves, tested and given the thumbs up we can start that conversation.
 
  • Like
Reactions: Burpo

DrMrLordX

Lifer
Apr 27, 2000
21,640
10,858
136
I think AMD will have to price 4C/8T at USD 160-170, 6C/12T at USD 260-270, 8C/16T at USD 350, and the 8C/16T flagship SKU at USD 450-500.

Seems rational. The top-end SKU pricing you estimate here falls in line with my expectations.

CanardPC is oldskool.

Rollin down the street, smokin Ryzen, sippin on gin and juice.

So, AMD actually managed to get close to Broadwell-level IPC. That is much better than any of us thought they would get.

I am not . . . not really. It falls in line with the Blender/Handbrake demo.

As a general rule of thumb, if you OC your chip 10% you should see close to that much performance improvement, maybe a percentage point less.

Eh depends on RAM/cache scaling. Fortunately bus speeds (or ring/mesh speeds) are fast enough that those aren't a big bottleneck anymore, but if you just jack up CPU clock without the memory/cache subsystem being able to catch up, then you do not get 100% scaling.

That's why I miss bclk/htt OC like from my Stars days. Raise the clockspeed on everything!
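For what it's worth, here is a rough Amdahl-style sketch of that scaling in C; the core-bound fractions are illustrative guesses, not measurements of any real workload:

[CODE]
/* Why a 10% core overclock rarely yields a full 10%: only the core-bound
 * fraction of runtime speeds up; the memory/uncore-bound remainder does not. */
#include <stdio.h>

static double oc_speedup(double core_bound_frac, double clock_gain)
{
    return 1.0 / (core_bound_frac / (1.0 + clock_gain) + (1.0 - core_bound_frac));
}

int main(void)
{
    double fractions[] = {1.00, 0.95, 0.90, 0.80}; /* assumed core-bound share */
    for (int i = 0; i < 4; i++) {
        double s = oc_speedup(fractions[i], 0.10);
        printf("core-bound %3.0f%% -> +%.1f%% from a 10%% OC\n",
               fractions[i] * 100.0, (s - 1.0) * 100.0);
    }
    return 0;
}
[/CODE]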

I guess Broadwell has higher IPC than Skylake, given that 8C/16T Broadwell-E is 13.3% faster than 4C/4T Skylake-S in the games tested (3.2-3.7 GHz vs 3.2-3.6 GHz). Fair comparison. :D:eek:

When you give Broadwell lots of extra L3 cache (or L4, heh heh), then yes, it will sport higher IPC in games.

No, Broadwell-E (the 6900K) has higher clockspeeds and bigger caches than Skylake-S in this test while being just ~3% behind Skylake IPC-wise (per the AT review). The games they tested are clock/IPC-taxing, except BF1, which can tax more cores.

Yeah what you said.

I can't wait for retail top-SKU reviews and Francois Piednoel's reactions on Twitter :D

Oh goody.

Amazingly, the guy who did the Zen test at CanardPC (and who once merged his site with CanardPC) is the very one who got an Athlon 64 sample (at 1.4GHz) eight months before the CPU launched, so history seems to repeat itself...

Well, they ARE old school. Sometimes it pays to stay in the game.

Why do you think that? Base clock means what it says: the minimum guaranteed clock for a SKU. I think you are just confused.

XFR may surprise us. If my guesstimations are correct, it may fluidly downclock below the "base" clock as it approaches the thermal throttling point. So there may not be a hard-and-fast set of clockspeeds or p-states like in previous AMD products. In other words, there may not be the behavior where it will sit at base clock until it crosses some arbitrary temperature threshold beyond which it throttles to 1.7 GHz immediately or . . . whatever. It'll probably follow a curve.

There will likely be a few points on the temperature vs. power consumption curve where it will sit at 3.4 GHz, which is where they expect the CPU to sit in an average case with the stock Wraith cooler during "normal" usage.
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,776
3,164
136
This is going to be my last post on the matter; I'm not wasting any more time.

Lazy or incompetent developers who haven't bothered to take advantage of old instructions that have been in the market since 2011 and which drastically boost performance is a rebuttal to my point?

This is a complete load of crap. Do you actually understand what you just said? It's not about just setting some flags; you have to write/refactor your code so that unrelated variables can be packaged together and executed with the same operation. It's so HARD that compiler developers spend a massive amount of time and money making their compilers do auto-vectorisation, and guess what, it still sucks and doesn't work well. Those lazy-ass compiler developers, who do they think they are with their fancy Comp Sci degrees and PhDs; if only they'd stop being so lazy :rolleyes:.

I guess all game devs are lazy for not using AVX either. You realize you now expect the developer to maintain two completely different sets of code for one application: one factored in a way that lets the dev take advantage of AVX/AVX2/FMA operations, and another to keep all the Pentiums, Celerons, K10s, Westmeres, etc. usable.

Or are you advocating for software to not run well, or at all, on hardware that is currently being sold?

Just using a different compiler or two causes a huge performance increase and we're supposed to believe that it makes sense to not just use the better compiler(s)?
You obviously don't understand. Compilers have many, many flags that can be set, and there are always trade-offs to be made. Then there is the version of the compiler, which can have a big impact, and the COST of the compiler can also be a factor. Choose the wrong optimization and things can go horribly wrong depending on how your code operates.

It's also a red herring to mention other software because Blender is what AMD chose to use as its benchmark and not all other software needs to be a benchmark or claims to be one.
No it isn't; you completely missed the point. All the things that try to just be benchmarks get gamed to hell and back (SPEC, for example). All applications are benchmarks, even if they are completely crap internally; they all tell you something if you take the time to understand them. Blender is telling us that Bulldozer has internal bottlenecks that other cores don't have.


This is fascinating technical talk that dodges what I'm talking about and uses the old "Bulldozer has a bad design" chestnut to get an emotional response (prompting people to forget what we were actually talking about).

No it isn't; it's the exact point that you choose to ignore while dreaming up other crap, like it being the app's and the devs' fault. Those lazy devs (shakes fist).
Bulldozer has serious technical issues; why do you think its performance per clock is so bad? Go look at an ARM A73 core: it's a narrower design, yet it smokes Bulldozer in performance per clock. Why?

What's the point here? The Stilt's builds offer much better performance on Intel and Piledriver — with current Blender code. 2.75a is a lot slower.

Blender's official builds aren't a problem on Lynnfield but it's not 2009 anymore.
The point is it's not just setting a flag; you have to make your code fit the model the execution units use. If you took the 2.75 code base and compiled it with AVX/AVX2/FMA etc., you would see that. And then, what does refactoring your code like that do to products that don't support those optional instruction sets?

This is not an easy thing to do, and sometimes it isn't obvious. You realize Blender is already using 128-bit SSE operations; it's not like those lazy developers didn't know how to use SIMD :rolleyes: (there's a short scalar-vs-SSE sketch at the end of this post that illustrates the point).


The FACT is Bulldozer has issues with large numbers of FP operations in flight.
The FACT is Bulldozer has issues with large numbers of FP stores in flight.
The FACT is Bulldozer/Piledriver has to round-robin its instruction decode between the two cores of a module.
The FACT is if you vectorise something (pack four 32-bit ops into one 128-bit op) you reduce:
the amount of instruction decode from 4 to 1
the number of scheduled ops from 4 to 1
the amount of stored data from 4 to 1

There is no conspiracy here: if Bulldozer sees a larger gain from the REFACTORING of code than other products do, then Bulldozer has a bottleneck in a space where no other core does! And one of the Zen architects explicitly called out this very issue with Bulldozer...

If it looks like a duck, walks like a duck, and sounds like a duck, IT'S A BLOODY DUCK!

By your logic we should all be using Cell-based CPUs; those lazy devs should just refactor their code... Maybe it's time for you to join the real world. If you create a core that doesn't run well with the software that's available, that's your problem; AMD even made this admission.
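To illustrate the 4-to-1 point, here is a minimal C sketch of the same loop written scalar and with baseline 128-bit SSE intrinsics; it is only a toy, and the hard parts in real code (data layout, alignment, dispatch paths for older CPUs) are exactly what it glosses over:

[CODE]
/* Scalar vs. packed: the SSE version issues roughly one decode, one add and
 * one store for every four elements, which is the reduction described above. */
#include <xmmintrin.h> /* 128-bit SSE, baseline on every x86-64 CPU */

void add_scalar(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i++)      /* n decodes, n adds, n stores */
        out[i] = a[i] + b[i];
}

void add_sse(const float *a, const float *b, float *out, int n)
{
    int i = 0;
    for (; i + 4 <= n; i += 4) {     /* ~n/4 decodes, adds and stores */
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
    for (; i < n; i++)               /* scalar tail for leftover elements */
        out[i] = a[i] + b[i];
}
[/CODE]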
 

bjt2

Senior member
Sep 11, 2016
784
180
86

XFR may surprise us. If my guesstimations are correct, it may fluidly downclock below the "base" clock as it approaches the thermal throttling point. So there may not be a hard-and-fast set of clockspeeds or p-states like in previous AMD products. In other words, there may not be the behavior where it will sit at base clock until it crosses some arbitrary temperature threshold beyond which it throttles to 1.7 GHz immediately or . . . whatever. It'll probably follow a curve.

There will likely be a few points on the temperature vs. power consumption curve where it will sit at 3.4 GHz, which is where they expect the CPU to sit in an average case with the stock Wraith cooler during "normal" usage.

At the New Horizon event, Zen ran steadily at 3.4GHz under a fairly heavy load, drawing about 80W, with quite an overvolt (to be safely stable), and it was a preproduction chip. What makes you think a retail chip at the correct voltage can't stay at 3.4GHz+ within 95W?
 

nismotigerwvu

Golden Member
May 13, 2004
1,568
33
91
The FACT is Bulldozer has issues with large numbers of FP operations in flight.
The FACT is Bulldozer has issues with large numbers of FP stores in flight.
The FACT is Bulldozer/Piledriver has to round-robin its instruction decode between the two cores of a module.
The FACT is if you vectorise something (pack four 32-bit ops into one 128-bit op) you reduce:
the amount of instruction decode from 4 to 1
the number of scheduled ops from 4 to 1
the amount of stored data from 4 to 1

There is no conspiracy here: if Bulldozer sees a larger gain from the REFACTORING of code than other products do, then Bulldozer has a bottleneck in a space where no other core does! And one of the Zen architects explicitly called out this very issue with Bulldozer...

If it looks like a duck, walks like a duck, and sounds like a duck, IT'S A BLOODY DUCK!

Occam's razor to a T. And it shouldn't be surprising that a unique design like the construction cores sees unique quirks in its behavior. Maybe if there were other CMT-based designs with extensive sharing of units we might see similar outcomes. It also isn't like Blender was designed as a benchmark tool anyway. While performance is nice, I'm sure most of the team is more concerned with stability and portability, considering the number of platforms they support and the type of work typically done with the program. This is even more reason why we should take any of these pre-release results with a grain of salt; we simply have no way of knowing where they sit on the bell curve (assuming performance is normally distributed, of course).
 

DrMrLordX

Lifer
Apr 27, 2000
21,640
10,858
136
At the New Horizon event, Zen ran steadily at 3.4GHz under a fairly heavy load, drawing about 80W, with quite an overvolt (to be safely stable), and it was a preproduction chip. What makes you think a retail chip at the correct voltage can't stay at 3.4GHz+ within 95W?

First, it's obvious that they disabled XFR completely for that demo, knowing full well that it would not enter dangerous operating territory at the specified voltages.

Secondly, I never said it wouldn't be able to stay at 3.4 GHz at the "correct" voltage. What I am saying is that the entire operation of the chip may be more fluid than in past products. 3.4 GHz will be one point on a long curve on a temp vs. power consumption scale . . . or that would be my guess, rather.

If you stuff the thing in a restrictive case and let temps climb it will probably go below 3.4 GHz. If you set up a good case with great airflow (or an open-air bench) my guess is that it will run above 3.4 GHz through most workloads. It will probably not spend a lot of time at 3.4 GHz. It could get complicated since XFR would need to be able to dynamically adjust voltage and clockspeed to perform as advertised, giving it potentially very complex behavior.

To put it differently:

A 7890K will chug along happily at any temperature with a clockspeed of 4.1 GHz until thermal margin reaches 0, at which point it instantly takes a dump and drops to 1.7 GHz or . . . something really slow (that is actually the VRM temperature throttle speed, but it may be the same for CPU overheating). It will take a corresponding drop in voltage, since that's what is mandated by the p-state.

Let's say Summit Ridge has a max core temp of 72C (assuming AMD isn't using thermal margin; let's hope not). Approaching that threshold may cause the CPU to subtly downclock AND undervolt itself to avoid overheating without suffering a precipitous drop in performance. So at 70C it may go -xxx V on vcore (where xxx is some arbitrary value) and downclock to 3.1-3.2 GHz or so. Under normal circumstances, AMD is projecting that the stock cooler in a decent case will never let the CPU get that hot. The speed they think most people will see in "normal" use for the referenced Summit Ridge chip will possibly be 3.4 GHz.
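To make the contrast concrete, here is a toy C sketch of a hard p-state step versus a gradual XFR-style taper; the knee temperature and the clock values are guesses for illustration, not anything AMD has published:

[CODE]
/* Hard step throttle vs. hypothetical gradual taper near the thermal limit. */
#include <stdio.h>

/* Old-style behaviour: full clock until the limit, then a hard drop. */
static double step_throttle_ghz(double temp_c)
{
    return (temp_c < 72.0) ? 4.1 : 1.7;
}

/* Hypothetical XFR-like curve: hold boost below a knee, then taper to base. */
static double curve_throttle_ghz(double temp_c)
{
    const double knee_c = 65.0, limit_c = 72.0; /* assumed knee and max temp */
    const double boost = 3.6, base = 3.1;       /* assumed clock endpoints   */
    if (temp_c <= knee_c)  return boost;
    if (temp_c >= limit_c) return base;
    double t = (temp_c - knee_c) / (limit_c - knee_c); /* 0..1 along the taper */
    return boost - t * (boost - base);
}

int main(void)
{
    for (double t = 60.0; t <= 74.0; t += 2.0)
        printf("%5.1f C  step: %.1f GHz  curve: %.2f GHz\n",
               t, step_throttle_ghz(t), curve_throttle_ghz(t));
    return 0;
}
[/CODE]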
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
If AMD's SMT efficiency is better than Intel's, we could see closer to Haswell-level single-core performance, but if the AMD SMT implementation is worse, it could be better than Broadwell's.

The game benchmarks include ARMA 3, Anno 2070 and GRID, which do horrendously on AMD chips as these games tend to prioritise single-core performance:
http://www.techspot.com/articles-info/849/bench/Gaming_08.png
http://www.hardware.fr/medias/photos_news/00/39/IMG0039213.png
http://www.gamegpu.com/images/stori...RID_Autosport/test/GRIDAutosport_proz_amd.jpg

The FX8350 barely matches a Core i3 4330 in Far Cry 4:

http://www.gamegpu.com/images/stories/Test_GPU/Action/Far_Cry_4/nv/test/fc_proz.jpg

That hints more at per-core performance increasing by a considerable amount, since these games tend not to scale very well with SMT.
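A quick back-of-the-envelope in C shows why those two scenarios pull the single-thread estimate in opposite directions; the multithreaded score and the SMT yields below are made-up illustration values, not leaked results:

[CODE]
/* For a fixed 8C/16T multithreaded score, the implied per-core (single-thread)
 * performance moves inversely with the assumed SMT yield. */
#include <stdio.h>

int main(void)
{
    double nt_score     = 1600.0;             /* hypothetical 8C/16T result     */
    double smt_yields[] = {0.20, 0.30, 0.40}; /* assumed per-core gain from SMT */
    int    cores        = 8;

    for (int i = 0; i < 3; i++) {
        /* nT ~= ST * cores * (1 + SMT yield), ignoring MT scaling losses */
        double st = nt_score / (cores * (1.0 + smt_yields[i]));
        printf("SMT yield %2.0f%% -> implied per-core score %.0f\n",
               smt_yields[i] * 100.0, st);
    }
    return 0;
}
[/CODE]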
 