AMD Ryzen (Summit Ridge) Benchmarks Thread (use new thread)

May 11, 2008
22,224
1,414
126
The 6700K is a higher-clocked quad core. If Zen is at Broadwell-E class, who's to say a quad-core version wouldn't clock higher and close this "gap"? Besides, you are ignoring the lower clocks and early-silicon bugs behind the benchmarked 20% gap you are going off of.

The problem with large-core-count SKUs is their shared package TDP: all those cores have to fit in the same power budget, so clocking them all at a high base clock without an insane TDP is very hard.

This is why Intel quad cores are kings at gaming: they have roughly half the cores of the 8- and 10-core SKUs within the same TDP, so they clock much higher, and since most games have trouble utilizing more than four cores, the effect is amplified.
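To put rough numbers on that shared-TDP argument, here is a back-of-the-envelope sketch in Python. Every figure in it (the 95 W budget, the 20 W at 4 GHz reference core, the cubic power-versus-frequency assumption) is an illustrative assumption, not measured data for any real SKU:

```python
# Back-of-the-envelope sketch (illustrative numbers, not Intel/AMD data) of
# why doubling the cores inside the same package TDP forces lower clocks.
# Assumes per-core dynamic power scales with f * V^2 and that voltage tracks
# frequency near the top of the V/f curve, so per-core power ~ f^3.

def max_all_core_freq(n_cores, package_tdp_w, p_ref_w=20.0, f_ref_ghz=4.0):
    """Highest all-core frequency (GHz) that fits n_cores into package_tdp_w,
    given a reference core drawing p_ref_w at f_ref_ghz."""
    per_core_budget = package_tdp_w / n_cores
    return f_ref_ghz * (per_core_budget / p_ref_w) ** (1.0 / 3.0)

if __name__ == "__main__":
    tdp = 95.0  # same hypothetical package TDP for every SKU
    for cores in (4, 8, 10):
        print(f"{cores:2d} cores in {tdp:.0f} W -> ~{max_all_core_freq(cores, tdp):.2f} GHz all-core")
    # Roughly: 4 cores ~4.2 GHz, 8 cores ~3.4 GHz, 10 cores ~3.1 GHz, which is
    # why a quad core can sit much closer to its single-core turbo.
```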

You can't compare an 8c/16t ENGINEERING sample to a production-ready QUAD CORE 6700K. Wait for the 4c/8t Ryzen benchmarks before making any determination.

I wonder if it is possible for AMD to let an 8-core model power down 4 cores on software demand so the 4 remaining cores can clock higher.
The powered-down cores could then be chosen in an even pattern to spread the dissipation hotspots over the die.
Let's say the 8 cores are configured like this:


1256
3478

If cores 2, 3, 6 and 7 are powered down, there is idle silicon between the active cores to absorb some of the heat:

1-5-
-4-8

I wonder if that is possible. I mean, Zen is hyped for all its sensors and on-chip measurement, so why not make practical use of it?
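As a toy illustration of that selection (using the hypothetical 2x4 layout above, not AMD's actual floorplan), a simple checkerboard pick leaves no two active cores adjacent:

```python
# Tiny sketch of the idea above: on a hypothetical 2x4 grid of cores, gate
# every other core so the active ones are surrounded by idle silicon.

layout = [[1, 2, 5, 6],
          [3, 4, 7, 8]]

def checkerboard(grid):
    """Return the set of core IDs to keep active in a checkerboard pattern."""
    keep = set()
    for r, row in enumerate(grid):
        for c, core in enumerate(row):
            if (r + c) % 2 == 0:  # alternate cells in each row and column
                keep.add(core)
    return keep

active = checkerboard(layout)
for row in layout:
    print(" ".join(str(core) if core in active else "-" for core in row))
# 1 - 5 -
# - 4 - 8   <- matches the pattern in the post: cores 2, 3, 6 and 7 gated
```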
 

Doom2pro

Senior member
Apr 2, 2016
587
619
106
I wonder if it is possible for AMD to let an 8-core model power down 4 cores on software demand so the 4 remaining cores can clock higher.
The powered-down cores could then be chosen in an even pattern to spread the dissipation hotspots over the die.
Let's say the 8 cores are configured like this:

1256
3478

If cores 2, 3, 6 and 7 are powered down, there is idle silicon between the active cores to absorb some of the heat:

1-5-
-4-8

I wonder if that is possible. I mean, Zen is hyped for all its sensors and on-chip measurement, so why not make practical use of it?

It might be possible, but an 8-core with 4 cores soft-disabled, or even disabled in the BIOS, will still draw more power than a pure quad core... Still, it should be better than leaving the decision to the OS. I'm confident AMD has ways to fuse off cores to create a 4C SKU from the 8-core parts, which should keep the disabled cores from eating into the TDP headroom, unless they plan on using their 4c/8t APU model with the GPU fused off, or both.
 

The Stilt

Golden Member
Dec 5, 2015
1,709
3,057
106
I wonder if it is possible for AMD to let an 8-core model power down 4 cores on software demand so the 4 remaining cores can clock higher.
The powered-down cores could then be chosen in an even pattern to spread the dissipation hotspots over the die.
Let's say the 8 cores are configured like this:

1256
3478

If cores 2, 3, 6 and 7 are powered down, there is idle silicon between the active cores to absorb some of the heat:

1-5-
-4-8

I wonder if that is possible. I mean, Zen is hyped for all its sensors and on-chip measurement, so why not make practical use of it?

Such behavior has been a standard feature since... K10 (Pharaohound), I believe?
For example, on Family 15h CPUs and APUs the highest turbo frequencies (states) had activation conditions, which usually required half of the CUs to be power gated before they could fire.

If you have an 8-core CPU with a TDP of 45W or more, your maximum single-core boost is obviously never power-limit bound.
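Purely to illustrate the shape of such an activation condition (a sketch of the idea, not AMD's actual firmware logic or documented P-state tables; the multipliers are borrowed from the A10-6700 figures discussed just below):

```python
# Illustrative sketch of a boost-state selection with an activation condition:
# the top state only fires once enough compute units are power gated and the
# package is inside its power and thermal limits. Not real firmware logic.

def pick_boost_state(gated_cus, total_cus, package_watts, tdp_watts, temp_c, tj_max_c=90):
    """Return a CPU multiplier based on simple activation conditions."""
    within_limits = package_watts < tdp_watts and temp_c < tj_max_c
    if within_limits and gated_cus >= total_cus // 2:
        return 43  # highest boost, e.g. the A10-6700's 4.3 GHz single-CU turbo
    if within_limits:
        return 42  # all-core boost (4.2 GHz)
    return 37      # base clock (3.7 GHz) once a power or thermal limit is hit

# Example: 1 of 2 CUs gated, well inside a 65 W TDP -> top turbo multiplier
print(pick_boost_state(gated_cus=1, total_cus=2, package_watts=40,
                       tdp_watts=65, temp_c=60))  # prints 43
```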
 
May 11, 2008
22,224
1,414
126
Such behavior has been a standard feature since... K10 (Pharaohound), I believe?
For example, on Family 15h CPUs and APUs the highest turbo frequencies (states) had activation conditions, which usually required half of the CUs to be power gated before they could fire.

If you have an 8-core CPU with a TDP of 45W or more, your maximum single-core boost is obviously never power-limit bound.

Okay, but I noticed when doing some tests with my Piledriver-based A10-6700 that all cores always jump up to multiplier 43, not just one core. Maybe Windows is juggling the thread from core to core in a round-robin fashion, I do not know. It is only after the thermal limit is reached that I see all cores lower the multiplier to stay under it.

edit:
Maybe my testing method is flawed.
I tested it with Prime95 running one torture thread (small FFT): all cores go up to 4300 MHz and no single core is utilized 100%, all cores sit at ~50%.
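One way to separate "the scheduler is rotating the thread" from "all cores genuinely boost together" is to pin the stress test to a single logical core and watch the multipliers again. A minimal sketch using psutil; the process name and the choice of core 0 are assumptions for illustration:

```python
# Pin an already-running stress process (e.g. Prime95) to one logical core so
# the Windows scheduler cannot bounce its worker thread between cores.
# Requires the psutil package; the process name is an assumption, adjust it.

import psutil

def pin_to_core(process_name="prime95.exe", core=0):
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == process_name:
            proc.cpu_affinity([core])  # restrict the process to one logical CPU
            print(f"Pinned {process_name} (pid {proc.pid}) to core {core}")
            return proc
    print(f"{process_name} not found")
    return None

if __name__ == "__main__":
    pin_to_core()
    # With the load pinned, per-core clocks in a monitoring tool should show
    # whether only that core holds the single-CU turbo multiplier.
```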
 
May 11, 2008
22,224
1,414
126
It might be possible, but an 8-core with 4 cores soft-disabled, or even disabled in the BIOS, will still draw more power than a pure quad core... Still, it should be better than leaving the decision to the OS. I'm confident AMD has ways to fuse off cores to create a 4C SKU from the 8-core parts, which should keep the disabled cores from eating into the TDP headroom, unless they plan on using their 4c/8t APU model with the GPU fused off, or both.

It would be nice if the OS had a feature where you could tell it to adjust the number of active cores on the fly.
 

The Stilt

Golden Member
Dec 5, 2015
1,709
3,057
106
Okay, but I noticed when doing some tests with my Piledriver-based A10-6700 that all cores always jump up to multiplier 43, not just one core. Maybe Windows is juggling the thread from core to core in a round-robin fashion, I do not know. It is only after the thermal limit is reached that I see all cores lower the multiplier to stay under it.

edit:
Maybe my testing method is flawed.
I tested it with Prime95 running one torture thread (small FFT): all cores go up to 4300 MHz and no single core is utilized 100%, all cores sit at ~50%.

The A10-6700 should hit 4.2 GHz on all cores and 4.3 GHz when a single CU is utilized. The frequency can naturally be lower if the TDP limit is reached.
 
May 11, 2008
22,224
1,414
126
The A10-6700 should hit 4.2 GHz on all cores and 4.3 GHz when a single CU is utilized. The frequency can naturally be lower if the TDP limit is reached.

It does, but from what you posted I kind of expected that only one core would jump up to multiplier 42 or 43 while the other 3 cores stay at 3.7 GHz or lower. I vaguely remember that this was the case with Windows 7 64-bit, but I am not 100% sure.
 

bjt2

Senior member
Sep 11, 2016
784
180
86
I wonder if it is possible for AMD to let an 8-core model power down 4 cores on software demand so the 4 remaining cores can clock higher.
The powered-down cores could then be chosen in an even pattern to spread the dissipation hotspots over the die.
Let's say the 8 cores are configured like this:

1256
3478

If cores 2, 3, 6 and 7 are powered down, there is idle silicon between the active cores to absorb some of the heat:

1-5-
-4-8

I wonder if that is possible. I mean, Zen is hyped for all its sensors and on-chip measurement, so why not make practical use of it?

The Windows core-rotation algorithm for balancing the heat should be enough... If you need 4 cores' worth of work, you will see all 16 threads occupied at 25%... Only if you have core parking activated might you get some overheating on certain cores... But the auto-OC feature should sense this and lower the clock... So with core parking disabled I think the auto OC would push the clock slightly higher... With core parking enabled, some cores would become hotspots and so the clock probably would not go as high...
 
May 11, 2008
22,224
1,414
126
The Windows core-rotation algorithm for balancing the heat should be enough... If you need 4 cores' worth of work, you will see all 16 threads occupied at 25%... Only if you have core parking activated might you get some overheating on certain cores... But the auto-OC feature should sense this and lower the clock... So with core parking disabled I think the auto OC would push the clock slightly higher... With core parking enabled, some cores would become hotspots and so the clock probably would not go as high...

Ok.
I always wondered how Windows would do that. Especially Windows 10, which I expect to be state of the art at extracting maximum performance from a given processor.
But that indeed fits with what I see when running one small-FFT thread in Prime95 on an A10-6700.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
Okay, but I noticed when doing some tests with my Piledriver-based A10-6700 that all cores always jump up to multiplier 43, not just one core. Maybe Windows is juggling the thread from core to core in a round-robin fashion, I do not know. It is only after the thermal limit is reached that I see all cores lower the multiplier to stay under it.

My 6800K routinely runs at 4.4 GHz on all cores at full load (base is 4.1); it's only when you load up the GPU that it starts to throttle if the temperature gets too high, which it does occasionally due to the cramped case I'm using.
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
The Windows core-rotation algorithm for balancing the heat should be enough... If you need 4 cores' worth of work, you will see all 16 threads occupied at 25%... Only if you have core parking activated might you get some overheating on certain cores... But the auto-OC feature should sense this and lower the clock... So with core parking disabled I think the auto OC would push the clock slightly higher... With core parking enabled, some cores would become hotspots and so the clock probably would not go as high...

If core parking is causing a core to overheat you have a defective CPU.
 

bjt2

Senior member
Sep 11, 2016
784
180
86
If core parking is causing a core to overheat you have a defective CPU.

I was talking about Zen. Since the auto OC pushes the limits of the CPU, if you have 1 thread spread over 8 cores (or 16 threads), then each core receives 1/8 of the heat and the auto-OC feature will probably clock it higher. If you have core parking enabled, there will be one core loaded at 100%, and we know there is a limit to power density, so there will be a hotspot that demands lower clocks...
 

DrMrLordX

Lifer
Apr 27, 2000
22,702
12,652
136
How long did we hold onto those free "Half Life 2!" vouchers with GPU bundles before the game was actually released? 1, 1.5 years? :D I know it's not the same thing, but there have been such delays with free vouchers for hardware in the past, right?

Oh Valve. Still making people wait on Half-Life 3. Pity anyone who has a voucher for that.

I always dislike how reviews have to recommend a new product, or skip the recommendation altogether, even though you may be able to buy older products that are now far cheaper and hence a better performance/value proposition.

For instance, I would rather pay $500 for a Core i7-5960X today than for any of the 7x00/6xx0-series chips (I've just bought one on sale for slightly less).

Too bad Intel is still charging $999 for the 5960x.
 

KTE

Senior member
May 26, 2016
478
130
76
Oh Valve. Still making people wait on Half-Life 3. Pity anyone who has a voucher for that.



Too bad Intel is still charging $999 for the 5960x.
Yeah, but older chips can easily be had for cheaper, and OEMs/ODMs also get refurbs cheaper.

Manufacturers will price-gouge to make sure they don't kill the sales of their new chips.

Sent from HTC 10
(Opinions are own)
 

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
Why is the Handbrake demo from New Horizon so overlooked? Ryzen was actually ~16% faster in this test. Is this irrelevant?
I looked into running it, but I didn't see AMD putting up a simple file to use. Instead, I found larger files to download that needed to be clipped somehow to 60 or 90 seconds (I don't recall which). Someone said Handbrake can be set to encode only the necessary amount, but I don't know how to do that and haven't taken the time to find out.

Putting up a simple file that doesn't need much fussing, as they did with the Blender benchmark, makes it easier for people to get interested enough to run the bench. The inconvenience of not having such a file for Handbrake is likely a big reason it has received less attention.
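For what it's worth, HandBrake's command-line build can be told to encode only a slice of the source, which would sidestep the pre-clipping problem. A small sketch that shells out to HandBrakeCLI; the --start-at/--stop-at duration flags are from memory of the CLI options and the file names are placeholders, so double-check them against HandBrakeCLI's help output:

```python
# Encode only the first N seconds of a source with HandBrakeCLI instead of
# pre-clipping the file. Flag syntax is my recollection of the HandBrake CLI
# and should be verified; paths are placeholders.

import subprocess

def encode_clip(source, output, seconds=90):
    cmd = [
        "HandBrakeCLI",
        "--input", source,
        "--output", output,
        "--start-at", "duration:0",          # begin at the start of the source
        "--stop-at", f"duration:{seconds}",  # encode only this many seconds
    ]
    subprocess.run(cmd, check=True)

# encode_clip("amd_sample_source.mkv", "handbrake_bench_clip.mp4", seconds=90)
```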

I wonder if Handbrake has a similar compiler bottlenecking issue. I doubt it. I do remember that when I was encoding H.265 I got poor thread utilization on Piledriver, in contrast to H.264, but that's likely a program (not compiler) optimization issue. I did the testing quite a while ago. Also, I've read that video encoding is not optimal on GPUs because the processing gets progressively harder to parallelize as quality settings are raised, which is apparently why GPU encoding is so limited in terms of quality. Since I was using high-quality settings with H.265, I suppose that could explain some of the poor utilization. However, H.264, as I recall, did a lot better in Handbrake at loading the Piledriver CPU at higher settings. This may be a compiler and/or program optimization issue. I think The Stilt has spoken about Handbrake optimization, but I don't recall the specifics.
 

DrMrLordX

Lifer
Apr 27, 2000
22,702
12,652
136
Yeah, but older chips can easily be had for cheaper

Okay, we're sort-of getting off topic here. There really isn't an older chip that can do what the 5960x can do, other than the 5960x itself. So yeah, I'm sure people would rather buy that, but the point is that it hasn't depreciated in value enough to occupy that price point yet. It's possible that supplies will dry up before it ever reaches that point (unless Summit Ridge leads to instant devaluation).

Also some of the older chips, like the 4790k, are still rather expensive:

http://pcpartpicker.com/product/7p98TW/intel-cpu-bxf80646i74770k

http://pcpartpicker.com/product/zmyFf7/intel-cpu-bxf80646i74790k

You might get a better deal off of eBay for the 4770 or 4790k but caveat emptor. Best price I see for an allegedly factory-sealed 4770k on eBay is $294, while the 4790k is $324.

Why would those chips cost so much when you can score a 6700k for $330? :

http://pcpartpicker.com/product/tdmxFT/intel-cpu-bx80662i76700k

Simple: people with older LGA1150 boards who want an upgrade are keeping the prices up. That'll go on for a while. It wasn't so long ago that even Sandy Bridge CPUs had inflated prices (now you can finally get an SB quad for under $100).

When it comes time to review Summit Ridge, it would be interesting to see if reviewers would compare it to older Intel or AMD designs and let users decide for themselves if there's bang/buck in getting the new chip. A $500 Summit Ridge may or may not hold up all that well compared to a ~$300 6700k depending on the needs of the end-user. Compared to used Intel 8c/16t chips though? It'll be a bargain.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
AMD has to be, AMD MUST be at the very top of the gaming charts in order to break its way back into the HEDT and server markets. We can argue that the workloads are different and yada yada blah blah, but it's just the way it is. HEDT wouldn't exist if everyone had to lose 10-20% fps just to game on them. And AMD isn't producing any data to suggest they have captured the gaming crown.

Lol, I don't see how you can possibly believe this; you have no data to suggest it's the case. Every decision maker I've ever worked with in the IT industry (the guy who actually rubber-stamps the order) couldn't care less about games and gaming benchmarks. AMD does not need to win in games to win in the server market. If AMD can bring the price / performance / watt in the latest buzzword benchmarks printed on the latest trade-industry "come get free stuff at our booth" brochure, then they can get sales in the market.

If you really believe gaming benchmarks make any difference in the server market, you're completely out of touch with how that industry works. It's all about wining and dining the decision makers, not about bringing any real deliverables. See Cisco and the 3850.
 

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
When it comes time to review Summit Ridge, it would be interesting to see if reviewers would compare it to older Intel or AMD designs and let users decide for themselves if there's bang/buck in getting the new chip.
Not just compare but use useful modern comparisons (unlike the stock Blender builds for Windows) that don't leave a lot of performance on the table from 2011/2012 architectures.

So far, Anandtech has used inferior APUs for comparisons with Broadwell and Skylake. I don't expect they'll use a Piledriver at 4.4 GHz or so for comparison. One reviewer stated in the comments that no one who reads Anandtech's reviews is interested in Piledriver's performance and that comment is from around when Broadwell-C came out.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
We won't know the full potential of the chip until it's in our hands. Just look at Polaris and the voltage snafu there. I bet users will figure out a way to get more performance, and then we'll figure out how well the chip can actually perform. Same with Intel chips.
 

KTE

Senior member
May 26, 2016
478
130
76
Okay, we're sort-of getting off topic here. There really isn't an older chip that can do what the 5960x can do, other than the 5960x itself. So yeah, I'm sure people would rather buy that, but the point is that it hasn't depreciated in value enough to occupy that price point yet. It's possible that supplies will dry up before it ever reaches that point (unless Summit Ridge leads to instant devaluation).

Also some of the older chips, like the 4790k, are still rather expensive:

http://pcpartpicker.com/product/7p98TW/intel-cpu-bxf80646i74770k

http://pcpartpicker.com/product/zmyFf7/intel-cpu-bxf80646i74790k

You might get a better deal off of eBay for the 4770 or 4790k but caveat emptor. Best price I see for an allegedly factory-sealed 4770k on eBay is $294, while the 4790k is $324.

Why would those chips cost so much when you can score a 6700k for $330? :

http://pcpartpicker.com/product/tdmxFT/intel-cpu-bx80662i76700k

Simple: people with older LGA1150 boards who want an upgrade are keeping the prices up. That'll go on for a while. It wasn't so long ago that even Sandy Bridge CPUs had inflated prices (now you can finally get an SB quad for under $100).

When it comes time to review Summit Ridge, it would be interesting to see if reviewers would compare it to older Intel or AMD designs and let users decide for themselves if there's bang/buck in getting the new chip. A $500 Summit Ridge may or may not hold up all that well compared to a ~$300 6700k depending on the needs of the end-user. Compared to used Intel 8c/16t chips though? It'll be a bargain.

I am venting just for the sake of venting

Sent from HTC 10
(Opinions are own)
 

cytg111

Lifer
Mar 17, 2008
25,663
15,162
136
Waiting for Ryzen to officially launch feels like being a child on Christmas Eve!

PLEASE SANTA, when are you going to get here! :fearscream:

I know! It has been ages since we've been given new presents. Sure, Intel has been delivering like clockwork, but we always know what lies beneath the wrapping paper (good stuff, but no surprises). I am excited to see this system put through hordes of tests and benchmarks and to read and analyze them here with you guys, no matter what direction it takes, good or bad.
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
Lol, I don't see how you can possibly believe this; you have no data to suggest it's the case. Every decision maker I've ever worked with in the IT industry (the guy who actually rubber-stamps the order) couldn't care less about games and gaming benchmarks. AMD does not need to win in games to win in the server market. If AMD can bring the price / performance / watt in the latest buzzword benchmarks printed on the latest trade-industry "come get free stuff at our booth" brochure, then they can get sales in the market.

If you really believe gaming benchmarks make any difference in the server market, you're completely out of touch with how that industry works. It's all about wining and dining the decision makers, not about bringing any real deliverables. See Cisco and the 3850.

Out of touch or not, I am 100% confident that if AMD stays 20% behind Intel in games, then 5 years from now AMD will hold less than 2% of the server market. No need to argue about it. I will simply bookmark this statement, like I bookmarked my 13000-multithread / 1800-single-thread Passmark prediction for Zen.
 

Doom2pro

Senior member
Apr 2, 2016
587
619
106
Out of touch or not, I am 100% confident that if AMD stays 20% behind Intel in games, then 5 years from now AMD will hold less than 2% of the server market. No need to argue about it. I will simply bookmark this statement, like I bookmarked my 13000-multithread / 1800-single-thread Passmark prediction for Zen.

Whatever, we'll see soon enough... They are literally opposite ends of the spectrum: gaming favors leaky, low-core-count, high-clocking parts, while servers favor low-leakage, high-core-count, low-clocking parts.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
Out of touch or not, I am 100% confident that if AMD stays 20% behind Intel in games, then 5 years from now AMD will hold less than 2% of the server market. No need to argue about it. I will simply bookmark this statement, like I bookmarked my 13000-multithread / 1800-single-thread Passmark prediction for Zen.

That makes absolutely no sense. AMD could be 20% behind Intel in games, and AMD could be less than 2% of the server market in 5 years, but correlation does not imply causation. Seriously, it does not take any extraordinary thought process to realize that one does not imply the other. You already said you won't argue about it, and I assume that's because you have no real evidence to provide.

Seriously though, if the worst happens and both of the above occur, bringing this back up with that implied "if / then" connection will just get you laughed at, because being right about two facts has nothing to do with your assumed reasons for being right.

I ate more bacon this year, therefore I play more video games this year. <-- That's what you're doing right now.
 