
Zen ES Benchmark from French hardware magazine


Tup3x

Senior member
Dec 31, 2016
It's unknown to perform better, too.

Sent from HTC 10
(Opinions are own)
True, but I seriously doubt they'll pull a Phenom this time too. They'd definitely delay rather than release another bubblegum fix that would hurt performance a lot. If they want to sell their CPUs...
 

KTE

Senior member
May 26, 2016
True, but I seriously doubt they'll pull a Phenom this time too. They'd definitely delay rather than release another bubblegum fix that would hurt performance a lot. If they want to sell their CPUs...
I hope so, too. I think delays are better than releasing a half-botched product, as long as, AFTER the delays, the product is fully functional upon release.

Sent from HTC 10
(Opinions are own)
 

ecogen

Golden Member
Dec 24, 2016
I hope so, too. I think delays are better than releasing a half-botched product, as long as, AFTER the delays, the product is fully functional upon release.

Sent from HTC 10
(Opinions are own)
If we take the Blender test they did at face value, isn't it safe to say the SMT bug, at the very least, is fixed? If it wasn't, wouldn't that mean they matched the 6900K with broken SMT (which would mean insane levels of IPC), or SB-level IPC (which would make their SMT significantly better than Intel's)?
Both scenarios sound fairly unrealistic.

Please correct me if I'm wrong.
 

cytg111

Lifer
Mar 17, 2008
If we take the Blender test they did at face value, isn't it safe to say the SMT bug, at the very least, is fixed? If it wasn't, wouldn't that mean they matched the 6900K with broken SMT (which would mean insane levels of IPC), or SB-level IPC (which would make their SMT significantly better than Intel's)?
Both scenarios sound fairly unrealistic.

Please correct me if I'm wrong.
Why do you assume that Blender should trigger an unspecified SMT bug? There are a billion scenarios that could trigger a bug other than Blender... I would think.
 

ecogen

Golden Member
Dec 24, 2016
Why do you assume that Blender should trigger an unspecified SMT bug? There are a billion scenarios that could trigger a bug other than Blender... I would think.
The article made it seem like SMT was bugged in general; maybe I misunderstood.
 

KTE

Senior member
May 26, 2016
If we take the Blender test they did at face value, isn't it safe to say the SMT bug, at the very least, is fixed? If it wasn't, wouldn't that mean they matched the 6900K with broken SMT (which would mean insane levels of IPC), or SB-level IPC (which would make their SMT significantly better than Intel's)?
Both scenarios sound fairly unrealistic.

Please correct me if I'm wrong.
I couldn't correct anyone on an unknown.

Both scenarios seem plausible, but there are other scenarios too. I am certain some workloads will favor the Zen uarch enough to put it near BDW-E.

But whether Blender proves anything that can be generalized is highly debatable.

Back in the Phenom days, K10 2.5GHz was around Kentsfield 2.4GHz in this bench: http://www.anandtech.com/show/2754/12

Clarkdale was miles ahead, near 80% faster per clock than Regor, which has no L3. Regor 3.0GHz was about equal to K8 and Conroe 2.4GHz. http://www.anandtech.com/show/2775/7

(AT's Deneb Blender benches seem bugged: they have Deneb 2.8GHz = Deneb 3.1GHz, yet their 3.0GHz (X4 940) result is faster than their 3.4GHz (X4 965) result... So I'll skip those low-performing samples, like where Deneb 970BE 3.5GHz is 25% faster than Deneb 965BE 3.4GHz.)

http://www.anandtech.com/bench/CPU/42

But look at the rest. Llano 2.9GHz is faster than a 2.8GHz Thuban.

Bulldozer and Piledriver gain immensely over K10 in Blender. Where else did BD outperform K10? Apart from via the huge L2+L3 cache.

Piledriver 3M/6C 3.5GHz 6MB L2/8MB L3 is equal to Thuban 6C 3.3GHz 2MB L2/6MB L3, which is equal to Deneb 4C 3.6GHz 2MB L2/6MB L3! Can you see the effect cache per core is having?

The instruction profiling I posted on an AMD and an Intel chip confirmed the same: it is an L1-L3 bound test. Remember Excel 07 and WinRAR? I remain unconvinced that these are anything more than cache prowess benchmarks.
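The per-clock comparisons above boil down to a simple normalization. A minimal sketch, with made-up render times and clocks as placeholders (not the actual AnandTech results):

```python
def per_clock_perf(render_time_s: float, clock_ghz: float) -> float:
    """Throughput (scenes per second) normalized per GHz; higher is better."""
    return (1.0 / render_time_s) / clock_ghz

# Hypothetical example: chip A renders a scene in 100 s at 2.9 GHz,
# chip B renders the same scene in 110 s at 2.8 GHz.
a = per_clock_perf(100.0, 2.9)
b = per_clock_perf(110.0, 2.8)
print(f"A vs B per clock: {a / b:.2f}x")  # → 1.06x
```

This is the same arithmetic used when saying one chip is "faster per clock" than another from raw benchmark times.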

Sent from HTC 10
(Opinions are own)
 

Abwx

Diamond Member
Apr 2, 2011
Both scenarios seem plausible

Wrong, that's not what CPC said about their tests. Indeed, I posted the info, but for some reason you prefer to ignore what CPC published and are instead making up misleading info.

CPC stated that their measurements should be taken as an "a minima" case, which means minimal values to expect from the chip, so it can't go any other way than an eventual improvement...
 

Doom2pro

Senior member
Apr 2, 2016
We don't know the extent of the SMT or uop$ bugs; they could be performance bugs or stability bugs... All we know is that A0 has the bugs, with a BIOS (likely microcode) fix that disables SMT and the uop$, and/or causes a huge performance penalty by soft-fixing them.

Leaving the buggy ES fully operational could cause a performance hit, could cause the platform to hang, or could even cause code to execute improperly; we don't know.
 

Abwx

Diamond Member
Apr 2, 2011
A buggy ES won't have improved performance relative to the final silicon, but it seems some people are supporting this nonsense.

Indeed, CPC's platform didn't support Nvidia GPUs, and we know that at New Horizon the platform used by AMD did, so that's a hint that CPC didn't have access to the latest setups. Probably their tests were done well before that event but were delayed opportunistically, because their paper is published on the 20th of the month...
 

DrMrLordX

Lifer
Apr 27, 2000
A buggy ES won't have improved performance relative to the final silicon, but it seems some people are supporting this nonsense.
That goes back to the Agena days, when AMD couldn't really fix the TLB issue (they had it licked by Deneb), so they just disabled the TLB altogether as a "fix", which did hurt performance.

For a lot of users that were not affected by the bug, disabling the fix in the BIOS improved performance.

We probably won't see another "fix" like that again, at least not with Summit Ridge.
 

Abwx

Diamond Member
Apr 2, 2011
As posted by Abwx on SA: Possible new Ryzen Benchmark (Looks like another A0, June stepping)

http://ranker.sisoftware.net/show_r...d5e3d5e3d1e2d0f684b989afcaaf92a284f7cafa&l=fr
That's the right thread, since this is the very chip (or its brother) that was tested by Canard PC. From the SiSoftware numbers we can see that there are some limitations in some loads, and that those limitations didn't exist to this extent in the submitted server platforms, whose results are available at Dresdenboy's site.

Compared to the server variants, the chip tested by CPC displays the following behaviour (I bolded the improvements):

INT32 throughput/Thread/Hz is 30% higher.

INT64 is 10% lower.

INT128 is 20% higher.

FP32 is 20% lower.

FP64 is 28% lower.

FP128 is 10% lower.

Aggregated score is 10% lower per thread per Hz, but more importantly the absolute score is 60% higher than that of an FX-8350/8370, which is about the percentage measured by CPC in their tests.
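The per-thread/per-Hz normalization used above can be sketched as follows. The scores, thread counts, and clocks are hypothetical placeholders, not the real SiSoftware submissions:

```python
def normalized(score: float, threads: int, clock_ghz: float) -> float:
    """Aggregate benchmark score normalized per thread, per GHz."""
    return score / (threads * clock_ghz)

# Hypothetical example: a 16-thread desktop ES at 3.3 GHz vs a
# 32-thread server ES at 2.8 GHz, with made-up aggregate scores.
desktop = normalized(300.0, 16, 3.3)
server = normalized(500.0, 32, 2.8)
print(f"desktop vs server, per thread per Hz: {desktop / server:.2f}x")  # → 1.02x
```

Normalizing this way lets chips with different core counts and clocks be compared on architecture alone, which is the point of the INT/FP percentages in the post.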
 

cytg111

Lifer
Mar 17, 2008
A buggy ES won't have improved performance relative to the final silicon, but it seems some people are supporting this nonsense.
I can't decode this properly. Are you saying that fixing bugs in beta silicon won't hurt performance? How can you possibly know? And to the extent of calling it nonsense? Evidence?
 

The Stilt

Golden Member
Dec 5, 2015
I can't decode this properly. Are you saying that fixing bugs in beta silicon won't hurt performance? How can you possibly know? And to the extent of calling it nonsense? Evidence?
Not all errata workarounds hurt performance; that is obviously erratum-specific.
 

Abwx

Diamond Member
Apr 2, 2011
I can't decode this properly. Are you saying that fixing bugs in beta silicon won't hurt performance? How can you possibly know? And to the extent of calling it nonsense? Evidence?
For one, CPC said that (at the date of their tests...) a fix was already in the works; besides, they stated that their numbers should be taken as a worst-case scenario.

Now, looking at the Sandra 2015 numbers, it is obvious that the FP throughput is not what it should be, since the A4 stepping posts much higher numbers per Hz than CPC's A2 stepping.

That's surely the reason why Zen trails the 6900K in their test. I would agree that Blender could be a best case for Zen, but I don't think it would be the only one on the CPC list, which comprises five such FP benches and only two that are integer.
 

JDG1980

Golden Member
Jul 18, 2013
Despite the concerted effort in these forums to discredit it, power consumption does matter, and Polaris was a dud in this area. It was especially disappointing after all the hype from AMD (and the fans in this forum) about a 2.5x performance-per-watt improvement, how it would beat Nvidia in efficiency, blah, blah, blah, when the majority of cards failed to even come close. In fact, despite AMD's backtracking that it was for only one model, I don't know if any model ever reached that efficiency.
The Radeon Pro WX 5100 (Polaris 10) is good for 3.9 TFlops of compute performance and consumes 75W. Its predecessor, the FirePro W5100 (Bonaire), used the same 75W in the same form factor and only did 1.43 TFlops. That's a ~2.7x improvement in performance per watt.

Nvidia's Quadro M5000 (Maxwell, GM204) does 4.3 TFlops at a 150W TDP. That means the WX 5100 (and WX 7100) actually have better perf/watt in compute applications than Nvidia's professional card. It's true that the consumer RX 480 offered subpar perf/watt, for two reasons: first, a lot of marginal silicon went in that should have gone into the trash can, and second, AMD lags behind Nvidia in DX11. Fortunately for AMD, Apple doesn't care about DX11, which is why they are sticking with AMD GPUs for the foreseeable future.
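Redoing the perf/watt arithmetic from the figures quoted in the post (TFlops and power numbers as stated there, treating TDP as the power figure for all three cards):

```python
def perf_per_watt(tflops: float, watts: float) -> float:
    """Compute throughput per watt of board power."""
    return tflops / watts

wx5100 = perf_per_watt(3.9, 75)   # Radeon Pro WX 5100 (Polaris 10)
w5100 = perf_per_watt(1.43, 75)   # FirePro W5100 (Bonaire), previous gen
m5000 = perf_per_watt(4.3, 150)   # Quadro M5000

print(f"WX 5100 over W5100: {wx5100 / w5100:.1f}x")  # → 2.7x generational gain
print(f"WX 5100 over M5000: {wx5100 / m5000:.2f}x")  # → 1.81x vs Nvidia
```

The 2.7x generational figure in the post checks out from these numbers; TDP vs measured draw is, of course, a simplification.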
 

cytg111

Lifer
Mar 17, 2008
For one, CPC said that (at the date of their tests...) a fix was already in the works; besides, they stated that their numbers should be taken as a worst-case scenario.
I missed where they stated that?
I thought they ran the tests *with* the bugs present and either didn't trigger them, or at least not in a fatal way.
 

bjt2

Senior member
Sep 11, 2016
Precisely what I would expect.
The fact that New Horizon showed the same performance in Blender and higher performance in Handbrake, whereas in the CPC review it was lower, even accounting for clocks (and even though the average also includes POV-Ray and other software), makes me think the bugs impair performance somehow. Otherwise, the CPC review results should have been higher than, or comparable to, the 6900K...
 

KTE

Senior member
May 26, 2016
Bugs requiring CPU parts to be disabled always carry a performance penalty (a la Agena).

I'm not sure if this is a minor or major CPU erratum, or a BIOS/board bug. Either way, it's not clear how much they even affect a benchmark yet.

Sent from HTC 10
(Opinions are own)
 

Abwx

Diamond Member
Apr 2, 2011
The fact that New Horizon showed the same performance in Blender and higher performance in Handbrake, whereas in the CPC review it was lower, even accounting for clocks (and even though the average also includes POV-Ray and other software), makes me think the bugs impair performance somehow. Otherwise, the CPC review results should have been higher than, or comparable to, the 6900K...
That's spot on. Even if Zen had just matched BDW in Blender and the two x264/x265 encodes, this would still have required the 6900K to outmatch Zen by 25% on average in the four remaining benches (POV-Ray/Corona/3DSMax MR/WPrime), and as you point out that's unlikely, since AMD's previous FPU is very good in POV-Ray.

Indeed, the SiSoftware results for this very chip point to subpar FP performance relative to the later revision used in the Naples submission at the same site. As already posted, FP32 perf is 20% below the later revision and FP64 is 30% below; that's if the chip ran at 3.14GHz, but if it worked up to 3.3GHz as in the CPC tests, those percentages are 25% and 35% respectively.

Assuming the numbers are 20/30% (i.e. the Sandra tests were made at 3.14GHz), the impact on the CPC tests would be 40% better perf in POV-Ray, 25% in the other FP benches, and about nothing in the encoding tests; the whole average would be 20% higher than what CPC got (and 25% higher if the Sandra tests were done at 3.3GHz).

Of course I'm not saying the final silicon will perform this way, but that's the apparent theoretical margin left if we compare the DT and server chips' throughputs in the SiSoftware submissions; how much of it shows up in actual tests is left to speculation.
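A rough sketch of the averaging implied here, applying the stated per-bench gains (40% in POV-Ray, 25% in the other FP benches, about nothing in the encoders) to a hypothetical seven-test suite; the test list and gain assignments are illustrative, not CPC's actual methodology:

```python
# Hypothetical per-bench speedup factors for the final silicon over the
# buggy ES, as multipliers (1.25 = 25% faster).
gains = {
    "POV-Ray": 1.40,
    "Corona": 1.25, "3DSMax MR": 1.25, "WPrime": 1.25, "Blender": 1.25,
    "x264": 1.00, "x265": 1.00,
}

# Simple arithmetic mean over the suite, as the post's 20% figure implies.
avg = sum(gains.values()) / len(gains)
print(f"average gain: {(avg - 1) * 100:.0f}%")  # → 20%
```

With these placeholder weights the arithmetic mean does land on roughly the 20% overall figure claimed, though a review's actual average depends on which tests it includes and how they are weighted.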
 

cytg111

Lifer
Mar 17, 2008
I guess that is a bit too much 'assuming', fiddling with numbers that are somewhat questionable to begin with (plus OR minus), for my taste. I get where you are coming from, and IMO it is a valid maybe. :)
 

iBoMbY

Member
Nov 23, 2016
The fact that New Horizon showed the same performance in Blender and higher performance in Handbrake, whereas in the CPC review it was lower, even accounting for clocks (and even though the average also includes POV-Ray and other software), makes me think the bugs impair performance somehow. Otherwise, the CPC review results should have been higher than, or comparable to, the 6900K...
That's because, I'm pretty sure, they used the 1D3201A2M88F3_35/32_N sample, at least for most of the preview.
 

bjt2

Senior member
Sep 11, 2016
That's because, I'm pretty sure, they used the 1D3201A2M88F3_35/32_N sample, at least for most of the preview.
I would not be so suspicious... Probably the problem was solved relatively recently... For electrical validation an old ES would suffice, and then they would have to look around, maybe on eBay or through other means...
 

Atari2600

Golden Member
Nov 22, 2016
Bugs requiring CPU parts to be disabled always carry a performance penalty (a la Agena).
Correct. The disabling is usually done in microcode.

"Real" fixes (i.e. without penalties) are done in re-spins.

So an early release that has a number of errata fixed in microcode should be more crippled than a later spin that has most of them caught and fixed on the die (and thus does not require disabling anything).
 
