Ryzen: Strictly technical

Status
Not open for further replies.

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
It's Global Foundries that claimed 10% improvement.

These CPUs (Ryzen/Coffee Lake/SKL-X) are all running at their limits. Process changes won't result in much gain, if any. The 10% claim may be realistic for lower-frequency parts such as server or mobile.

The CPU is a solid improvement. Nice job by AMD.

And thanks to @The Stilt for the benchmarks. I think you are doing a better job than most review sites.

Indeed, it was GloFo with those claims, not AMD directly. I stand corrected. AMD themselves claimed a 10% performance uplift, which on the whole I think they achieved.
 

scannall

Golden Member
Jan 1, 2012
1,946
1,638
136
Makes me wonder about those 5+ GHz claims for 7nm by GloFo, if it ever shows up in H2 2018/Q1 2019.
Their 7nm is essentially an IBM node that came along with the package. IBM has a long history of doing high clock speeds, and doing them well. Of course we won't know for sure until the parts are actually on the market, but it is more than plausible.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136

The IBM chips have very high TDPs to reach those clocks, though.

And max conventional-cooling overclocks only went up by 200MHz between 14nm+ (KBL) and 14nm++ (CFL).

Sandy Bridge at 32nm could do 4.5GHz overclocks. The 14nm++ transistors in Coffee Lake perform 50-80% better, yet that resulted in less than a 15% frequency improvement. The base clock gains are higher, but Intel has just been eating into overclocking headroom to deliver the base increases.

Netburst chips with their ridiculously deep pipelines were cancelled because at very high clocks the heat would be so concentrated that engineers said it would approach the thermal density of the sun. Back then the point where scaling became drastically harder was 4-5GHz. Nothing has changed.
 

BeepBeep2

Member
Dec 14, 2016
86
44
61
A good 2600K was able to do 4.8-5GHz+ on water cooling. I had one that was able to bench single-threaded at 5.2GHz on the stock dinky heatsink. Frequency-wise, Intel has gone nowhere since 32nm.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136

I don't count water, as that's not representative of a commercially available setup. Frequency did advance somewhat, but nowhere near what was expected. That's why people expecting big increases have unrealistic expectations. It's no coincidence that the people working hard on these things have themselves admitted clock scaling stopped years ago.

It's not just Intel; it's an industry-wide thing. Thermals increase greatly after some point, which causes a great increase in leakage current, which increases thermals again, and so on. Intel just happens to be at the forefront of the problem because their chips are already at the limits.
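That feedback loop can be sketched numerically. This is only a toy model, with invented coefficients (ambient temperature, thermal resistance, leakage growth rate), not real silicon data:

```python
# Toy model: die temperature raises leakage power, which raises die
# temperature again. Either the loop settles or it runs away.

def settle_temperature(p_dynamic_w, t_ambient_c=40.0, theta_c_per_w=0.3,
                       leak_base_w=10.0, leak_growth=1.02, max_steps=200):
    """Return the steady-state die temperature, or None on thermal runaway."""
    t = t_ambient_c
    for _ in range(max_steps):
        # Leakage grows roughly exponentially with temperature.
        p_leak = leak_base_w * leak_growth ** (t - t_ambient_c)
        t_new = t_ambient_c + theta_c_per_w * (p_dynamic_w + p_leak)
        if t_new > 1000.0:           # diverging: thermal runaway
            return None
        if abs(t_new - t) < 1e-6:    # converged to a stable operating point
            return t_new
        t = t_new
    return t

print(settle_temperature(100.0))  # settles to a stable temperature
print(settle_temperature(400.0))  # None: the feedback never settles
```

With these made-up numbers, 100W of dynamic power settles around 76°C while 400W never stabilizes; the point is only the shape of the feedback, not the absolute values.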
 

StinkyPinky

Diamond Member
Jul 6, 2002
6,762
783
126
The 2700X seems like a nice boost to me, despite its almost non-existent overclocking headroom.

I am interested to see which direction AMD focuses on with Zen 2: clock speed or more cores. With Intel's woes with 10nm and with getting better IPC/clock speed, I am thinking AMD may go for additional cores.
 

B-Riz

Golden Member
Feb 15, 2011
1,482
612
136
So has overclocking become a bug or a feature now? If CPUs have enough sensors to manage themselves more intelligently than manual tweaking can, this is a new era. If someone buys a CPU *just* to overclock it, when they could get another that doesn't need to be overclocked but could be, is that a misguided purchase?

Intel's product stack feels so meh and blah after AMD decided to leave all CPUs unlocked and let a mid-tier chipset overclock them.
 
  • Like
Reactions: Lodix

The Stilt

Golden Member
Dec 5, 2015
1,709
3,057
106

The situation we're currently in is rather unique. Never before have we been in a situation where the manufacturers ship silicon at higher frequencies than it can actually sustain.
Basically everyone is currently manufacturing-process limited, either in terms of the actual Fmax, the power consumption, or the voltage reliability.

Modern CPUs being smarter than ever obviously makes it a lot easier for the manufacturers to ship their products with smaller margins than ever before.
Not because the standards are any worse than before, but because the CPUs are smarter than ever before.
 

DisEnchantment

Golden Member
Mar 3, 2017
1,599
5,765
136
But if we are Fmax limited, how much wider can we go? Or rather, how wide are we already that we can't benefit from going wider?
For desktop use, day-to-day computing... not for specialized workloads.
Have we eliminated all stalls? Can we still gain by reducing latencies? (Although this is not strictly core related.)
Time to switch to new materials? Or would we gain by going VLIW style :D
Or straight to quantum? We are still some ways off from quantum, though.

The CPU space needs innovation. At least for now we are scaling core-wise, but we will hit that wall soon too.
 
  • Like
Reactions: Drazick
May 11, 2008
19,466
1,157
126

We are waiting for gallium nitride chips, perhaps...
Many people in the field have touted this as the next step.
Alex Lidow claims it is a matter of time.
But I am sure there are many hurdles to be taken.
Right now GaN RF power transmitters in the GHz range are very common, but that is totally different from billions of tiny MOSFETs crammed together to form a modern CPU.
https://venturebeat.com/2015/04/02/move-over-silicon-gallium-nitride-chips-are-taking-over/

The advantage is that current manufacturing technologies can be used, though.
But it is wait and see...
 

Hitman928

Diamond Member
Apr 15, 2012
5,229
7,745
136

I do designs on GaN from time to time (RFIC) and I don't see it taking over anytime soon (or ever) in the consumer digital world (i.e. where x86 CPUs and most ARM CPUs live). The Fmax of GaN isn't even really higher than Si at the same node, but it has much higher voltage/thermal tolerances, can have much higher power density, and is more efficient. Of course there are downsides to GaN as well, not the least of which is cost. It is interesting to read his claim that modern GaN-on-Si is cheaper than a standard CMOS process once packaging is included. That has not been my experience at all, at least not when comparing against a standard CMOS process at the same node.

He is probably comparing it against power MOSFETs (e.g. LDMOS), which takes away the economy of scale that makes silicon cheap in the first place, so it is not really a valid comparison for digital processors. The other reason GaN is more expensive is that it is simply difficult to make compared to standard Si (again, at the same node), and yields have only recently (the last 10 years or so, with another corner turned in the last 5) started to look good enough to consider using GaN in lower-cost applications (e.g. automotive). Even then, I think it is still going to cost more, but it will be used where GaN's advantages are worth it and where trying to get Si to perform as well would actually cost more, like Lidow says. Additionally, enhancement-mode GaN is even less mature and less supported, and I imagine it would be required for traditional digital design. In the end, GaN can have some great advantages in things such as RF PAs and power converters like he mentions, but a GaN-based consumer CPU is probably not going to happen.

TL;DR: GaN is great for high-power and power-management devices and is successfully expanding into those areas, but it is not suitable for CPUs compared to other options at this time, and probably never will be.
 
Last edited:
May 11, 2008
19,466
1,157
126

Interesting.
As an alternative, IBM is doing serious research into carbon sheets; what do you think will happen?
Usually when IBM takes a direction, we can be sure that the rest of the tech world is going in that direction as well.
IBM Research is way ahead in modelling and in the theoretical approach of how atoms really behave.
The thing is, though, they use huge laboratory equipment to line up atoms into defect-free crystal lattices to get a specific behavior. Of course, that is for theoretical research.
 

Hitman928

Diamond Member
Apr 15, 2012
5,229
7,745
136

I'm not very familiar with the carbon sheet / nanotube research. I read up on it every few years, and it seems like a two steps forward, one step back kind of thing that is still far from usable on a chip requiring millions if not billions of gates. That's not the only alternative being researched, though; there are others, such as other III-V semiconductors, and even things like selenides.

I think Si has lasted a lot longer than anyone thought it would 20 years ago. Because so much R&D has gone into getting more life out of Si, and has been successful, there hasn't been as much money put into alternatives, which has delayed the advent of a viable alternative beyond what many experts predicted back then. It's understandable, though, why companies want to push Si as long as possible. All of the foundry tools and infrastructure, all of the modeling software, and probably even the way we design processors will need to be adjusted if not completely revamped to support a new material (depending on what it ends up being). You're talking about an over $1 trillion industry (consumer electronics). Granted, a lot of devices that don't live on the bleeding edge won't need to make that adjustment, but still, that's a major disruption. It's pretty crazy to think about. What eventually happens, and how it will affect the current market players and even the economies of the world, is way beyond me, but it will be an exciting ride.
 

IRobot23

Senior member
Jul 3, 2017
601
183
76
Doesn't ASUS have an option for auto OC on the C7H, at 4.5GHz ST and 4.3GHz for MT?

@The Stilt you said you had a problem with the TDP rating?
der8auer measured power per MHz along with voltage, and he found that at 4050MHz on all cores in CB R15 it uses around 105W.

Ryzen is a very power-efficient CPU; I don't know how people conclude that the R7 2700X uses something like 2x more power than the i7 8700K.
Saying that the i5 8400 is 65W TDP and uses the same power as the i5 7600 is also a big claim. I have locked an i5 8400 to its 65W TDP on one PC, and even in non-AVX loads it started to throttle to ~3.5GHz... yeah, you heard right! With AVX at full load it goes way down to 3.2GHz.

The biggest thing is expecting an i7 8700 to run at 4.3GHz within a 65W TDP. I mostly don't care about power, but a lot of people do care over in the GPU section. There are a lot of people who will argue about perf/W, but then they run an i7 7700K or i7 8700K at 5GHz+. Kinda weird.

Back to Ryzen and TDP. Well, my Ryzen goes above TDP out of the box too. At pure stock it won't exactly, but with the features the MB (C6H) has and RAM overclocked to 3200MHz it uses around ~75W per HWiNFO64 while clocked at 3.2GHz (AIDA64 - cache/fpu/core).

EDIT:
Anyway, here are the power readings from HWiNFO64/AIDA64.

R7 1700 (definitely not golden) - ASUS C6H results (HWiNFO64 readings)

Auto BIOS - performance boost ON (3.2GHz all core)
Prime95 small FFT :
- 65W package power
- 68W SOC+CPU

AIDA64 (cache/fpu/core) :
- 75W package power
- 78W SOC+CPU

BIOS - performance boost OFF (3GHz all core)
Prime95 small FFT :
- 55W package power
- 51W SOC+CPU

AIDA64 (cache/fpu/core) :
- 65W package power
- 50W SOC+Cores?

I would say impressive. Can you compare this to the R7 2700X?

I am waiting for 7nm. AMD made a good improvement in one year; as you can see they are NOT late, they are improving where they should, and I like it.
 
Last edited:
  • Like
Reactions: Drazick

The Stilt

Golden Member
Dec 5, 2015
1,709
3,057
106

I don't know how (or with what) der8auer measured those readings; however, if they're SMU-reported figures then they're not comparable.
My measurements are based on controller telemetry (DCR).

Based on the tests I made today on different samples, the power consumption seems to vary quite a lot (up to 12%) between specimens.
 

IRobot23

Senior member
Jul 3, 2017
601
183
76

Yeah, that should explain it... up to 12%, you say. He hit 4.3GHz on all cores with LLC5 at 1.425V, got 4.2GHz at 1.3V, and reached 4GHz at 1.125V (LLC5).

So...

Do you think that Zen+/Zen could hit higher clocks on a different node?

For games and apps that need really low latency, AMD is still far behind in L1/L2/L3 cache and DRAM latency: with some optimization/OC you can hit ~40ns on Intel versus ~58ns on Ryzen (yes, even lower on both). AMD did improve it, but it is still around 50% higher.

Which means that even if AMD improves IF latency by ~10% and we could run the IMC at 2000MHz, it is still quite far from Intel.

When I heard that AMD mentioned HBM (or a next generation of fast on-package RAM) for CPUs, I got excited. That could be a real breakthrough for wider cores :).
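Those three operating points show why the last few hundred MHz get so expensive. A quick sketch using the classic CMOS approximation that dynamic power scales with f·V² (leakage ignored, so the real deltas are even larger):

```python
# Relative dynamic power from der8auer's reported voltage/frequency points,
# assuming the usual CMOS approximation P_dyn ~ f * V^2.

def relative_dynamic_power(f_ghz, v_core, f_ref_ghz, v_ref):
    """Dynamic power relative to a reference operating point."""
    return (f_ghz / f_ref_ghz) * (v_core / v_ref) ** 2

points = [(4.0, 1.125), (4.2, 1.300), (4.3, 1.425)]  # (GHz, volts)
f_ref, v_ref = points[0]
for f, v in points:
    ratio = relative_dynamic_power(f, v, f_ref, v_ref)
    print(f"{f} GHz @ {v} V -> {ratio:.2f}x power")
```

In this approximation the last 300MHz (4.0 to 4.3GHz, +7.5% clock) costs roughly 72% more dynamic power.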
 
Last edited:

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106


So power consumption is based on the quality of the silicon?
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
SIDD (static leakage) mostly, it seems.


So which core has the potential to clock better?

And with such widely varying power consumption, it's not surprising that the reviews are all over the place; it really depends on the cooler used.
 

Abwx

Lifer
Apr 2, 2011
10,926
3,414
136
OMF overclocking tests: 62W @ 3.7GHz with Handbrake as the load...

Perf/watt improvement of 12nm vs 14LPP can be as high as 30% at 3.9GHz and above.

[Charts: Ryzen 7 2700X OC measured voltage, power consumption, and overclocking results]


https://www.overclockingmadeinfrance.com/test-amd-ryzen-7-2700x/15/
 
Last edited:
May 11, 2008
19,466
1,157
126
SIDD (static leakage) mostly, it seems.

With the release of the Polaris GPUs from AMD, CPU-Z gained an option showing ASIC quality.
Since both Ryzen and Polaris are made on the same process, that got me wondering: is there such a number for Ryzen too?
I never really understood what ASIC quality meant.
Was it not: the higher the ASIC quality number, the lower the leakage and the lower the maximum overclock?
Was it not that more leakage (static power consumption) means more overclock?
Maybe I have it mixed up. I am not sure.
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106


Yes, lower leakage means a lower overclock, and vice versa.

AMD used to sell CPUs with very high leakage under the TWKR series; they were really CPUs just for overclockers, but the drawback was that the higher leakage came with higher temperatures and power consumption.
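The trade-off can be sketched as a simple power budget: a leaky die pays a higher static floor but, having faster transistors, reaches a given clock at lower voltage headroom. Everything here is invented for illustration; real leakage behavior is far messier:

```python
# Total package power as dynamic power (C_eff * V^2 * f) plus a static
# leakage floor. Units work out: nF * V^2 * GHz = W. Numbers are invented.

def total_power(f_ghz, v_core, leakage_w, c_eff_nf=15.0):
    """Rough total power: switching power plus static leakage."""
    return c_eff_nf * v_core ** 2 * f_ghz + leakage_w

# A tight (low-leakage) die vs a leaky die that clocks 300 MHz higher
# at only slightly more voltage, but with triple the static floor.
tight = total_power(4.0, 1.20, leakage_w=5.0)
leaky = total_power(4.3, 1.25, leakage_w=15.0)
print(f"tight die: {tight:.0f} W, leaky die: {leaky:.0f} W")
```

With these made-up numbers the leaky part clocks higher but finishes hotter and hungrier (~91W vs ~116W), which is the TWKR trade-off described above.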
 
  • Like
Reactions: William Gaatjes

Timur Born

Senior member
Feb 14, 2016
277
139
116
@The Stilt Thanks for the detailed review and write-up. Especially the IPC vs. IPC part allows us overclocking users to get a better perspective on performance differences. Well appreciated.

Going from an 1800X overclocked to 3950MHz with 3333-CL14 memory to a 2700X doesn't seem like such a necessary investment. I will likely keep the cash and wait for Z390 to come around, especially since I am interested in testing its native USB 3.1 (Gen 2) ports anyway.

Are the USB 3.1 ports on 2700X Gen2 or Gen1 (aka 3.0)?
 
  • Like
Reactions: CatMerc

Timur Born

Senior member
Feb 14, 2016
277
139
116
All benchmarks and reviews out there test single-core and all-core performance. In my experience a more practically useful benchmark would be two-core performance, because this is what is mostly used in daily operation (even when more cores are used, their utilization often sums to only about two cores' worth). And when one process occupies a full single core, all the other low-usage processes usually occupy at least part of a second core.

The times when true single-core XFR bins are active are usually short and rare, unless you change the Windows power profile to use more aggressive core parking. I don't know about the latest Intel Turbo implementations, but in the past the same was true for Intel CPUs (or rather for how the Windows power profiles handled things).
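One way to make that "two cores' worth" idea concrete is to sum per-core utilization percentages into whole-core equivalents. The sample numbers below are invented, just to show the shape of a typical desktop moment:

```python
# Express per-core utilization as "whole cores' worth" of work.

def cores_worth(per_core_util_pct):
    """Sum of per-core utilization percentages, in units of full cores."""
    return sum(per_core_util_pct) / 100.0

# One busy core plus background scraps on a hypothetical 8-core machine:
sample = [95, 40, 12, 8, 5, 3, 2, 1]   # per-core utilization in %
print(cores_worth(sample))             # 1.66 cores' worth of load
```

So even a "busy" desktop moment often lands under two cores' worth of total load, which is why a two-core benchmark would track daily use better than single-core or all-core runs.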
 
Last edited: