Ryzen: Strictly technical

Page 74 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.
Status
Not open for further replies.

Timur Born

Senior member
Feb 14, 2016
277
139
116
I've looked into Darktable, but the problem with it is that using the CPU for the processing is pretty much pointless.
Tasks like this are up to an order of magnitude faster when done on an OpenCL-capable GPU.

In practice this is still mostly done by CPUs. In Adobe Lightroom you can enable GPU acceleration, but it's (mostly?) only used for the preview in the Develop module (when sliders are moved). The final output is still CPU-rendered and may even differ slightly from the GPU-accelerated preview.

So with CPU-based raw/image processing still being the norm, it seems like a legit test case. I like Puget Systems' reviews for these kinds of comparisons:

https://www.pugetsystems.com/labs/a...erformance-AMD-Ryzen-2-vs-Intel-8th-Gen-1136/
 

Abwx

Lifer
Apr 2, 2011
10,926
3,414
136
So with CPU-based raw/image processing still being the norm, it seems like a legit test case. I like Puget Systems' reviews for these kinds of comparisons:

https://www.pugetsystems.com/labs/a...erformance-AMD-Ryzen-2-vs-Intel-8th-Gen-1136/

Dunno how Puget got the 2700X running slower than an 8700K in the Cinema 4D renderer, given that it is the very engine used in Cinebench R15. Definitely that's marketing benching because they are system sellers; it would be like Dell releasing benchmarks...



https://www.pugetsystems.com/labs/a...Comparison-AMD-Ryzen-2-vs-Intel-8th-Gen-1137/
 

Timur Born

Senior member
Feb 14, 2016
277
139
116
It's not just rendering, it's FX + rendering. So the bottleneck likely is the After Effects FX pipeline that comes before the final rendering pipeline.

And they specifically use openly available projects, so you can download and test them yourself.

One user commented: "Users often open multiple After Effects (or AE render engine) instances and render the AE project to an image sequence, then import these images into Pr to create the video. In this way all CPU threads can be used and Ryzen may perform much better. Network multi-computer rendering also takes this approach."
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,691
136
@The Stilt Are the USB 3.1 ports on the 2700X Gen2 or Gen1 (aka 3.0)?

Are you referring to the on-chip or the chipset-provided ones? In the first case, they're Gen1. The Ryzen chipset (even the lowly A320) has a Gen2 controller, but whether Gen2 is implemented depends on the specific board in question.

The chipset-provided Gen2 ports are very, very decent compared to 3rd-party add-in cards/chips. That likely comes from the PCIe 3.0 x4 interface. I recently got around to getting an adaptor cable to use the chipset-provided Gen2 on my Crosshair VI* and I'm impressed.

*The Crosshair VI features two Gen2 controllers: the chipset and an ASMedia 1142. The chipset one is only hooked up to the front-panel connector, hence the need for an adaptor cable.
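For perspective on the Gen1 vs. Gen2 gap, the nominal payload ceilings follow from line rate times encoding efficiency (8b/10b for Gen1, 128b/132b for Gen2). A quick back-of-the-envelope sketch, ignoring protocol overhead:

```python
# Theoretical USB 3.1 payload bandwidth: line rate times encoding efficiency.

def usb_throughput_mb_s(line_rate_gbps: float, enc_data_bits: int, enc_total_bits: int) -> float:
    """Raw payload ceiling in MB/s, before any protocol overhead."""
    return line_rate_gbps * 1e9 * enc_data_bits / enc_total_bits / 8 / 1e6

gen1 = usb_throughput_mb_s(5.0, 8, 10)      # Gen1: 5 Gbps, 8b/10b encoding
gen2 = usb_throughput_mb_s(10.0, 128, 132)  # Gen2: 10 Gbps, 128b/132b encoding

print(f"Gen1: {gen1:.0f} MB/s, Gen2: {gen2:.0f} MB/s")
```

So Gen2 is a bit better than a straight doubling, since the newer encoding wastes far less of the link.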
 

Abwx

Lifer
Apr 2, 2011
10,926
3,414
136
It's not just rendering, it's FX + rendering. So the bottleneck likely is the After Effects FX pipeline that comes before the final rendering pipeline.

The graph I posted is for the rendering task only; you should read the sites you're quoting more carefully before answering...

Anyway, that's OT...
 

The Stilt

Golden Member
Dec 5, 2015
1,709
3,057
106
So which core has the potential to clock better?

And with widely different power consumption it's not surprising that reviews are all over the place; it really depends on the cooler used.

In the case of Pinnacle Ridge, the higher-SIDD specimens seem to be the way to go.
That is actually rather unusual, and it appears to be down to the process characteristics.
Basically the power draw increases so rapidly with voltage (far beyond the nominal V² scaling) that, despite their significantly higher SIDD, these specimens still end up with the lower total power draw.
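That tradeoff can be sketched with a toy model: total power is static leakage (which itself grows with voltage) plus the dynamic C·V²·f term. Every constant below is invented purely for illustration; this is not measured Pinnacle Ridge data.

```python
# Toy model of total CPU power vs. core voltage for two silicon bins.
# Every constant here is invented for illustration, not measured data.

def total_power(v, sidd_amps, freq_ghz, k_dyn=8.0):
    """Static + dynamic power (watts) at core voltage v.

    Static leakage current is modelled as growing quadratically with
    voltage, so static power scales ~V^3; that is why total power
    climbs far faster than the nominal V^2 dynamic term alone.
    """
    static = sidd_amps * v * v ** 2        # leakage current rises with V
    dynamic = k_dyn * v ** 2 * freq_ghz    # classic C*V^2*f term
    return static + dynamic

# Hypothetical high-SIDD (leaky) bin reaching 4.2 GHz at 1.35 V:
p_leaky = total_power(v=1.35, sidd_amps=20.0, freq_ghz=4.2)

# Hypothetical low-SIDD bin needing 1.50 V for the same clock:
p_tight = total_power(v=1.50, sidd_amps=12.0, freq_ghz=4.2)

print(f"high-SIDD bin @ 1.35 V: {p_leaky:.1f} W")
print(f"low-SIDD bin  @ 1.50 V: {p_tight:.1f} W")
```

With these made-up numbers the leakier bin comes out ahead overall: the extra voltage the low-SIDD part needs costs more than the leakage it saves.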


With the release of the Polaris GPUs from AMD, GPU-Z got this option about ASIC quality.
Since both Ryzen and Polaris are made on the same process, that got me wondering:
is there such a number for Ryzen as well?
I never really understood what ASIC quality meant.
Was it not: the higher the ASIC quality number, the lower the leakage and the lower the maximum overclock?
Or was it that more leakage (static power consumption) means more overclock?
Maybe I have it mixed up. I am not sure.

The "ASIC Quality" field displays the fused leakage value. The older AMD GPUs used this value to calculate the voltages for the base and boost clocks (a simple LUT method).
On AMD cards the value varies between 0 and 1023; the higher the value, the higher the SIDD of that ASIC specimen.

The leakage itself has very little to do with the maximum frequency capability of the silicon. Only if there is a voltage limit, which the lower-leakage pieces of silicon are likely to run into, will there be a difference.
 

Timur Born

Senior member
Feb 14, 2016
277
139
116
Are you referring to the on-chip or the chipset-provided ones? In the first case, they're Gen1. The Ryzen chipset (even the lowly A320) has a Gen2 controller, but whether Gen2 is implemented depends on the specific board in question.

I meant the CPU USB controller. I wasn't sure if Zen+ changed from Gen1 to Gen2.
 
May 11, 2008
19,466
1,157
126
In the case of Pinnacle Ridge, the higher-SIDD specimens seem to be the way to go.
That is actually rather unusual, and it appears to be down to the process characteristics.
Basically the power draw increases so rapidly with voltage (far beyond the nominal V² scaling) that, despite their significantly higher SIDD, these specimens still end up with the lower total power draw.




The "ASIC Quality" field displays the fused leakage value. The older AMD GPUs used this value to calculate the voltages for the base and boost clocks (a simple LUT method).
On AMD cards the value varies between 0 and 1023; the higher the value, the higher the SIDD of that ASIC specimen.

The leakage itself has very little to do with the maximum frequency capability of the silicon. Only if there is a voltage limit, which the lower-leakage pieces of silicon are likely to run into, will there be a difference.

What does SIDD stand for?
That is interesting about the lookup-table method of calculating the voltage.
You mentioned older cards. Do newer models like Polaris and Vega do it differently?
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,686
1,220
136
What does SIDD stand for?

SIDD (static VDD power-supply current).
"SIDD(Temp(SIDD)) – feedback between static leakage and temperature can induce self-heating." - Salishan Conference on High-Speed Computing 2007 (April 26, 2007: "V & V @ AMD: Ensuring a Solid HPC Foundation and Data Confidence from Core to Cache and Beyond...")

https://i.imgur.com/A752J7G.png
It is related to static leakage.

High SIDD is better because of electron ballistics and Ti-states (XFR/Boost). The hotter the transistor, the more it can be overclocked at a given voltage, etc. Complex stuff.
"The key is to counterbalance leakage power increase at higher temperatures with dynamic power reduction by the Ti-states." - "Ti-states: processor power management in the temperature inversion region", MICRO-49 (The 49th Annual IEEE/ACM International Symposium on Microarchitecture), Taipei, Taiwan, October 15-19, 2016
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,149
136
While we don't have a lot of data points yet, it looks like 12nm is very consistent. Pretty much all chips do 4.2GHz at nearly the same voltage.
 

Abwx

Lifer
Apr 2, 2011
10,926
3,414
136
High SIDD is better because of electron ballistics

Lol, that's a first...

FYI, the drift speed of electrons in a conductor is about 0.02 mm/s, so with alternating currents they are practically motionless; it's the electromagnetic signal that travels at about 2/3 of light speed in said conductors. And it's a good thing there's no mass displacement: just imagine, if it were the electrons themselves moving, our cables and anything conducting would be torn apart by their kinetic energy...
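The ~0.02 mm/s figure is indeed the right order of magnitude; drift velocity follows from v = I / (n·q·A). A quick check, assuming a copper conductor with a 1 mm² cross-section carrying 1 A (both assumed values):

```python
# Electron drift velocity in a wire: v = I / (n * q * A).
# Assumed: copper conductor, 1 mm^2 cross-section, 1 A of current.

I_amps = 1.0        # current (A), assumed
n = 8.5e28          # free-electron density of copper (1/m^3)
q = 1.602e-19       # elementary charge (C)
area = 1.0e-6       # 1 mm^2 cross-section in m^2, assumed

v_drift = I_amps / (n * q * area)   # metres per second
print(f"drift velocity: {v_drift * 1000:.3f} mm/s")
```

That lands well under 0.1 mm/s, consistent with the claim that individual electrons barely move even though the signal propagates near light speed.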
 

Hitman928

Diamond Member
Apr 15, 2012
5,229
7,745
136
SIDD (static VDD power-supply current).
"SIDD(Temp(SIDD)) – feedback between static leakage and temperature can induce self-heating." - Salishan Conference on High-Speed Computing 2007 (April 26, 2007: "V & V @ AMD: Ensuring a Solid HPC Foundation and Data Confidence from Core to Cache and Beyond...")

https://i.imgur.com/A752J7G.png
It is related to static leakage.

High SIDD is better because of electron ballistics and Ti-states (XFR/Boost). The hotter the transistor, the more it can be overclocked at a given voltage, etc. Complex stuff.
"The key is to counterbalance leakage power increase at higher temperatures with dynamic power reduction by the Ti-states." - "Ti-states: processor power management in the temperature inversion region", MICRO-49 (The 49th Annual IEEE/ACM International Symposium on Microarchitecture), Taipei, Taiwan, October 15-19, 2016

No offense intended, but I am quite certain you don't understand the material you are referencing as your interpretation is way off.
 

The Stilt

Golden Member
Dec 5, 2015
1,709
3,057
106
What does SIDD stand for?
That is interesting about the lookup-table method of calculating the voltage.
You mentioned older cards. Do newer models like Polaris and Vega do it differently?

A few pieces to check out, if you're interested in the subject.

https://www.cs.utexas.edu/~skeckler/pubs/UTCS_tr2001_18.pdf

https://community.jmp.com/t5/Discov...erformance-Sensitive-Semiconductor/ta-p/24111

The LUT method was used on older cards. Bonaire was the first ASIC to support the newer EVV method.
The LUT was extremely inefficient, as there were usually only a few (IIRC eight) different voltages for the whole range of ASICs with different levels of leakage.
On cards which support EVV, the correct voltage at each frequency is calculated by the SMU.
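As a rough sketch of why the LUT method was so coarse: the fused 0-1023 leakage value indexes into only a handful of fixed voltages, so dies with quite different leakage land on the same VID. The eight-entry table below is made up for illustration; real fused tables differ per ASIC.

```python
# Sketch of the old LUT voltage method: the fused leakage value
# (0..1023, higher = leakier silicon) picks one of only a few
# pre-programmed voltages. Table values are invented.

VOLTAGE_LUT = [1.175, 1.150, 1.125, 1.100, 1.075, 1.050, 1.025, 1.000]

def lut_voltage(fused_leakage: int) -> float:
    """Map a 0..1023 fused leakage value to one of 8 voltage bins.

    Leakier dies (higher value) get a lower voltage, since they
    typically reach the target clock with less voltage headroom.
    """
    if not 0 <= fused_leakage <= 1023:
        raise ValueError("fused leakage must be in 0..1023")
    bin_index = fused_leakage * len(VOLTAGE_LUT) // 1024
    return VOLTAGE_LUT[bin_index]

# Two dies with noticeably different leakage land in the same bin,
# so one of them is inevitably over- or under-volted:
print(lut_voltage(130), lut_voltage(250))
print(lut_voltage(1000))   # very leaky die gets the lowest VID
```

EVV, by contrast, computes a per-die voltage at each frequency point, which is why it wastes far less margin than this binning.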
 
May 11, 2008
19,466
1,157
126
A few pieces to check out, if you're interested in the subject.

https://www.cs.utexas.edu/~skeckler/pubs/UTCS_tr2001_18.pdf

https://community.jmp.com/t5/Discov...erformance-Sensitive-Semiconductor/ta-p/24111

The LUT method was used on older cards. Bonaire was the first ASIC to support the newer EVV method.
The LUT was extremely inefficient, as there were usually only a few (IIRC eight) different voltages for the whole range of ASICs with different levels of leakage.
On cards which support EVV, the correct voltage at each frequency is calculated by the SMU.

Ah, thank you.
That will be a good read.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
While we don't have a lot of data points yet, it looks like 12nm is very consistent. Pretty much all chips do 4.2GHz at nearly the same voltage.

I think most 2700X chips are able to do 4.2 GHz at 1.35-1.4V. The fact that AMD chose to take just the 12nm transistor improvements and port the same 14LPP Zen design (same cell libraries) is reflected in the small clock gains. AMD chose the route with the least risk and work. AMD is completely focusing its resources on 7nm Zen 2, which is a smart strategy. Hopefully AMD and GF can deliver a crackerjack Zen 2 :)
 

Gyronamics

Junior Member
Apr 22, 2018
5
1
36
Hello @The Stilt

I was looking at the power consumption section and found the highlighted lines odd.

  • R7 2700X
  • ASUS Crosshair VII Hero (bios 0505, PinnaclePI 1.0.0.2)
  • G.Skill FlareX 3200C14 2x8GB, 2933MHz CL14-14-14-32
  • Windows 10 Enterprise x64 16299.248 / 16299.334
  • PR Notes: "ASUS Performance Enhancement" == "Default", "Precision Boost Override" & "Precision Boost Override Scalar" == "Auto" (Enabled).

The power consumption

When comparing the new flagship 2700X SKU against its predecessor the 1800X, the 2700X provides on average 6.1% higher single-threaded performance and 10.2% higher multithreaded performance when using a 400-series motherboard. The improvement however doesn't come without a cost: although the advertised power rating of the 2700X has only increased by 10W (10.5%), the actual power consumption has increased by significantly more, over 24%. At stock, the CPU is allowed to consume up to 141.75W of power and, more importantly, that is a sustainable limit and not a short-term burst-like limit as on Intel CPUs (PL1 vs. PL2).

The chart below illustrates what this means in practice.

(Chart: 2700X vs. 1800X power consumption)


Personally, I think that AMD should have rated these CPUs at 140W TDP instead of the 105W rating they ended up with. The R7 2700X is the first AMD CPU I've ever witnessed to consume more power than its advertised rating, and honestly, I don't like that one bit. Similar practices are being exercised on the Ryzen Mobile line-up, however with one major difference: the higher-than-advertised power limit (e.g. a 25W boost on 15W-rated SKUs) is not sustainable, but instead a short-term limit like on Intel CPUs. The way I see it, either these CPUs should have been rated at 140W from the get-go, or alternatively the 141.75W power limit should have been a short-term one and the advertised 105W figure the sustained one.

Your claim is that the 2700X is consuming 140W at stock when it is advertised at 105W.

I noted that you have PBO enabled, which lets the CPU run out of spec and above the normal TDP.

Over on Tom's Hardware they ran the 2700X with PBO disabled and hit the 105W TDP in AVX Prime95. I'm taking this as evidence that without PBO boosting it does stick to the TDP.

But you said this is on AMD. Are "stock" settings which have PBO enabled by default not actually on the motherboard manufacturer?

For example: https://www.youtube.com/watch?v=LiFPHdXLiPc at 7:00



A YouTube video about PBO using a Gigabyte board: he has to agree with a warning that PBO exceeds spec in order to enable it.

Either way, it didn't seem right that AMD was being blamed for an OC setting being on and calling that "stock".
 

Space Tyrant

Member
Feb 14, 2017
149
115
116
Bit of a stab in the dark, but does anyone know if disabling SMT still breaks S3 with PR? (Specifically a problem on the CH6 maybe? Not sure about other boards.)

If S3 is a standby state, yes, it's broken on my MB. I run with SMT disabled, so no sleep for me!
 

The Stilt

Golden Member
Dec 5, 2015
1,709
3,057
106
Hello @The Stilt

I was looking at the power consumption section and found the highlighted lines odd.



Your claim is that the 2700X is consuming 140W at stock when it is advertised at 105W.

I noted that you have PBO enabled, which lets the CPU run out of spec and above the normal TDP.

Over on Tom's Hardware they ran the 2700X with PBO disabled and hit the 105W TDP in AVX Prime95. I'm taking this as evidence that without PBO boosting it does stick to the TDP.

But you said this is on AMD. Are "stock" settings which have PBO enabled by default not actually on the motherboard manufacturer?

For example: https://www.youtube.com/watch?v=LiFPHdXLiPc at 7:00

A YouTube video about PBO using a Gigabyte board: he has to agree with a warning that PBO exceeds spec in order to enable it.

Either way, it didn't seem right that AMD was being blamed for an OC setting being on and calling that "stock".

The PBO settings are under the CBS ("Common Board Setting") menu and are provided by AMD on all boards (i.e. no ODM-specific code is required).
I haven't accepted any warnings in order to see the power figures I'm seeing.

Also, 141.75W is the PPT limit, per the infrastructure definitions for 105W-rated SKUs.
When Precision Boost Override is enabled, the PPT limit becomes 1000W.

All reviews in which the 2700X scored >= 1800 in Cinebench R15 nT had PBO enabled.
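As a sanity check, the 141.75W PPT limit is exactly 1.35x the 105W TDP; assuming that ratio holds across standard AM4 SKUs (an assumption, derived here only from the 105W/141.75W pair), the sustained limits for other ratings follow directly:

```python
# PPT (Package Power Tracking) sustained limit, assuming the
# PPT = 1.35 * TDP ratio implied by 141.75 W for a 105 W SKU.

def ppt_limit(tdp_watts: float, scale: float = 1.35) -> float:
    """Sustained package power limit for a given TDP rating."""
    return tdp_watts * scale

print(f"105 W TDP -> {ppt_limit(105):.2f} W PPT")
print(f" 65 W TDP -> {ppt_limit(65):.2f} W PPT")
```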
 

Gyronamics

Junior Member
Apr 22, 2018
5
1
36
The PBO settings are under the CBS ("Common Board Setting") menu and are provided by AMD on all boards (i.e. no ODM-specific code is required).
I haven't accepted any warnings in order to see the power figures I'm seeing.

Also, 141.75W is the PPT limit, per the infrastructure definitions for 105W-rated SKUs.
When Precision Boost Override is enabled, the PPT limit becomes 1000W.

All reviews in which the 2700X scored >= 1800 in Cinebench R15 nT had PBO enabled.

I see.

But on a different board you would not have the same default settings (as the Gigabyte video shows, it has to be specially enabled). So the motherboard manufacturer is dictating what these CPUs are doing at "stock".

Also, could you say why the Tom's Hardware power draw stopped exactly at the TDP if all he did was disable PBO?
 

The Stilt

Golden Member
Dec 5, 2015
1,709
3,057
106
I see.

But on a different board you would not have the same default settings (as the Gigabyte video shows, it has to be specially enabled). So the motherboard manufacturer is dictating what these CPUs are doing at "stock".

Also, could you say why the Tom's Hardware power draw stopped exactly at the TDP if all he did was disable PBO?

Ever considered that it's Gigabyte (who have the worst BIOSes in the industry) who are playing with the stock settings?

TechPowerUp, who did their tests on an MSI X470 M7, also got > 1800 pts in Cinebench R15, meaning that PBO was definitely enabled.

https://www.techpowerup.com/reviews/AMD/Ryzen_7_2700X/9.html

Also, these CPUs don't have any 105W limit.
The minimum limit is 141.75W, even when you disable PBO manually.
 