AMD Ryzen (Summit Ridge) Benchmarks Thread (use new thread)

Status
Not open for further replies.

Crumpet

Senior member
Jan 15, 2017
745
539
96
So.. R9 390 is a completely different graphics card to the R9 290? I mean, it's faster and has 8GB of VRAM.
 
Mar 10, 2006
11,715
2,011
126
So.. R9 390 is a completely different graphics card to the R9 290? I mean, it's faster and has 8GB of VRAM.
AMD sure thought so, that's why they called it R9 390, instead of R9 290 Rev. 2. I don't even think the changes in Grenada compared to Hawaii were as significant as the changes from SKL -> KBL. Did the R9 390/390X clock better than their 290/290X counterparts?
 
  • Like
Reactions: Phynaz

CatMerc

Golden Member
Jul 16, 2016
1,114
1,149
136
AMD sure thought so, that's why they called it R9 390, instead of R9 290 Rev. 2. I don't even think the changes in Grenada compared to Hawaii were as significant as the changes from SKL -> KBL. Did the R9 390/390X clock better than their 290/290X counterparts?
They did actually. They also had lower power consumption when at the same clockspeed as 290x/290 due to updated power management. The difference was slight, but it was there.

Kaby Lake is no more of an update than Grenada was.
 

bjt2

Senior member
Sep 11, 2016
784
180
86
This is why I asked for the table details.
There's always a table, and usually it will be accessible... overwritable.

How often it is updated and what exact conditions it needs for an upshift/downshift are key.

K10Stat Dev needs to come out of hiding

Different VID/FIDs and power planes have been on AMD CPUs for a long time.

Typically, FPUs are instrumented with the largest number of, and most sensitive, digital sensors, as the region heats up faster than any other, hence a single point of failure that is well monitored.

But this region is also typically far hotter than the rest of the chip. So I'm wondering if anything has been done to mitigate this.

30W for a core should be perfectly allowable as it has been previously, providing leakage currents are well controlled.

There will always be a priority weighting for such table-based schemes, and VID bands are usually inflexible and strict, erring too far on the side of caution (causing more heat than necessary).

I'll have a full read of the papers now.

Sent from HTC 10
(Opinions are own)
I think that TDP, power and Vmax offset knobs are all that is needed for safe or unsafe overclocking. No need to overwrite the tables, because those are the minimum stable voltages. Increasing them only increases power; lowering them loses stability... So I don't see a reason to tweak the tables...

To go over specifications, just increase TDP, Temperature and Vmax limits and voilà...

For the single-core turbo: 30W for a core can be a limit, but it seems that Zen has a soldered IHS, so coupled with a good cooler maybe the heat can be removed fast enough to stay at a safe temperature and the single-core turbo can skyrocket...
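The VID/FID tables discussed above map P-state entries to voltage/frequency pairs. As a concrete sketch, here is how a K10-era AMD P-state entry decodes, using the encodings community tools like K10Stat documented; the constants are assumptions carried over from that older family, and Zen's actual tables may well differ.

```python
# Sketch: decoding an AMD K10-era P-state entry (FID/DID/VID) into a
# frequency and a voltage. Encodings follow community documentation of
# the K10 P-state MSRs (as used by tools like K10Stat); treat the
# constants as illustrative assumptions, not Zen's real format.

def decode_pstate(fid: int, did: int, vid: int):
    """Return (core_mhz, vcore_volts) for one P-state table entry."""
    core_mhz = 100 * (fid + 0x10) / (2 ** did)   # core clock-out frequency
    vcore = 1.550 - 0.00625 * vid                # serial VID, 6.25 mV steps
    return core_mhz, vcore

# Example: FID=0x10, DID=0, VID=0x20 -> 3200 MHz at 1.35 V
print(decode_pstate(0x10, 0, 0x20))
```

Overwriting a table entry in such a scheme just means writing a different VID for the same FID/DID, which is exactly the "minimum voltage plus safety buffer" knob being debated here.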
 
Mar 10, 2006
11,715
2,011
126
They did actually. They also had lower power consumption when at the same clockspeed as 290x/290 due to updated power management. The difference was slight, but it was there.

Kaby Lake is no more of an update than Grenada was.
Found this:

AMD is pleased to bring you the new R9 390 series which has been in development for a little over a year now. To clarify, the new R9 390 comes standard with 8GB of GDDR5 memory and outpaces the 290X.

Some of the areas AMD focused on are as follows:

1. Manufacturing process optimizations allowing AMD to increase the engine clock by 50MHz on both 390 and 390X while maintaining the same power envelope

2. New high density memory devices allow the memory interface to be re-tuned for faster performance and more bandwidth
- Memory clock increased from 1250MHz to 1500MHz on both 390 and 390X
- Memory bandwidth increased from 320GB/s to 384GB/s
- 8GB frame buffer is standard on ALL cards, not just the OC versions

3. Complete re-write of the GPU's power management micro-architecture
- Under “worst case” power virus applications, the 390 and 390X have a similar power envelope to 290X
- Under “typical” gaming loads, power is expected to be lower than 290X while performance is increased

So, yes, R9 390/390X were new GPUs in my book :)
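The quoted memory figures check out arithmetically: Hawaii/Grenada use a 512-bit GDDR5 bus, and GDDR5 transfers 4 bits per pin per memory clock, so the 1250→1500MHz bump accounts exactly for the 320→384GB/s claim.

```python
# Verifying AMD's quoted bandwidth numbers for the 290X -> 390X bump.
# Hawaii/Grenada: 512-bit GDDR5 bus, 4 data transfers per pin per clock.

def gddr5_bandwidth_gbs(mem_clock_mhz: float, bus_width_bits: int = 512) -> float:
    bits_per_second = mem_clock_mhz * 1e6 * 4 * bus_width_bits
    return bits_per_second / 8 / 1e9   # bits/s -> GB/s

print(gddr5_bandwidth_gbs(1250))  # R9 290X memory clock -> 320.0 GB/s
print(gddr5_bandwidth_gbs(1500))  # R9 390X memory clock -> 384.0 GB/s
```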
 
  • Like
Reactions: poofyhairguy

Agent-47

Senior member
Jan 17, 2017
290
249
76
So, yes, R9 390/390X were new GPUs in my book :)
what? o_O:eek:o_O

Re-synthesising an architecture to achieve better timing and power characteristics is nothing but a plain recompile of C++ code with a different optimization flag.

When synthesizing an architecture, there are three different optimization targets to choose from: 1. minimize die area at the expense of performance; 2. better timing at the expense of power; 3. better power management at the expense of timing.

SKL was made with option 3, KBL with option 2, but the base code was the same.

Both are rebrands. End of file! :mad:
 
Last edited:
  • Like
Reactions: Crumpet

KTE

Senior member
May 26, 2016
478
130
76
I think that TDP, power and Vmax offset knobs are all that is needed for safe or unsafe overclocking. No need to overwrite the tables, because those are the minimum stable voltages. Increasing them only increases power; lowering them loses stability... So I don't see a reason to tweak the tables...

To go over specifications, just increase TDP, Temperature and Vmax limits and voilà...

For the single-core turbo: 30W for a core can be a limit, but it seems that Zen has a soldered IHS, so coupled with a good cooler maybe the heat can be removed fast enough to stay at a safe temperature and the single-core turbo can skyrocket...
They won't be the minimum stable voltage... They'll be minimum+safety buffer.

In every live system where thorough stability testing cannot be performed, manufacturers have to provide this buffer.

Unless the tables are initially populated by calibration via factory testing of voltages...

Sent from HTC 10
(Opinions are own)
 

bjt2

Senior member
Sep 11, 2016
784
180
86
They won't be the minimum stable voltage... They'll be minimum+safety buffer.

In every live system where thorough stability testing cannot be performed, manufacturers have to provide this buffer.

Unless the tables are initially populated by calibration via factory testing of voltages...

Sent from HTC 10
(Opinions are own)
There is an n-sigma safety margin on the Gaussian. If n=3, there is roughly a 0.1% probability of failure, etc... You can lower the factor at your own risk (if it is configurable), because the sigma also includes sampling error, but you must test how far you can lower the factor n...
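The n-sigma margin above maps to a one-sided Gaussian tail probability (the chance a given part's true minimum stable voltage sits more than n sigma above the table value), which can be computed directly:

```python
# One-sided Gaussian tail probability for an n-sigma voltage margin:
# the chance a part's true minimum stable voltage exceeds the margin.
import math

def tail_probability(n: float) -> float:
    return 0.5 * math.erfc(n / math.sqrt(2))

for n in (1, 2, 3):
    print(n, tail_probability(n))
# n=3 gives ~0.0013, i.e. roughly a 0.1% chance of instability
```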
We'll see what will be configurable or not.
Even if there is "only" TDP and temperature, XFR can hardly be bested by a manual overclock...

Another factor that could be configurable is the alpha coefficient for exponential smoothing of the power drawn, to allow longer peaks over the TDP...
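The smoothing idea can be illustrated with a toy model: if the limiter acts on an exponentially smoothed power estimate rather than instantaneous power, a smaller alpha tolerates longer bursts above TDP before throttling. The numbers (alpha, TDP, power draws) below are invented for illustration, not AMD's actual parameters.

```python
# Toy model of the exponential smoothing described above: count how many
# samples of a power burst the limiter tolerates before the smoothed
# estimate crosses the TDP. All parameters here are illustrative.

def burst_samples_before_throttle(alpha: float, tdp: float,
                                  idle: float, burst: float) -> int:
    """Samples of `burst` power (starting from `idle`) until the
    smoothed estimate first exceeds `tdp`."""
    est, n = idle, 0
    while est <= tdp:
        est = alpha * burst + (1 - alpha) * est   # exponential moving average
        n += 1
    return n

# A smaller alpha lets the chip ride a 130W burst over a 95W TDP longer:
print(burst_samples_before_throttle(0.5, tdp=95, idle=20, burst=130))  # 2 samples
print(burst_samples_before_throttle(0.1, tdp=95, idle=20, burst=130))  # 11 samples
```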
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,574
126
This XFR deal is confusing.

If it's really good at what it does, it doesn't seem like manually overclocking the chips will gain us much.

On the other hand, if we can do a lot better than XFR with the clocks manually, then what's the point of XFR?

Also, it sort of sounds like XFR is just the speed the chip decides it can run under the given conditions, and really doesn't involve "overclocking" at all.

Has overclocking just been relegated to a marketing term?
 
  • Like
Reactions: The Stilt

CatMerc

Golden Member
Jul 16, 2016
1,114
1,149
136
Really all I want from XFR is optimal per-core control. I don't need all of my cores to operate at max clockspeed if I can have a single core clock a few hundred MHz higher at the expense of the others in a single-threaded application.

Applications load cores differently, so if the chip is smart enough to clock itself accordingly, then I'll be a happy camper.
 
  • Like
Reactions: Magic Hate Ball

itsmydamnation

Platinum Member
Feb 6, 2011
2,423
2,394
136
This XFR deal is confusing.

If it's really good at what it does, it doesn't seem like manually overclocking the chips will gain us much.

On the other hand, if we can do a lot better than XFR with the clocks manually, then what's the point of XFR?

Also, it sort of sounds like XFR is just the speed the chip decides it can run under the given conditions, and really doesn't involve "overclocking" at all.

Has overclocking just been relegated to a marketing term?
Now AIs are taking our overclocking as well! Won't someone please think of the children!

It will be interesting to see what XFR controls we get: can we set how aggressive the guard bands are, control voltage limits, etc.? If we can push XFR right to the leading edge of instability, then I can see XFR being able to exceed user overclocking, since it should adjust based on workload, something a manual overclock can't.
 

Erithan13

Senior member
Oct 25, 2015
218
79
66
I'd speculate that XFR is designed to automatically produce the maximum stable 'overclock', within certain limitations of power, vcore, heat etc. In other words it's an official feature that guarantees the CPU continues to operate accurately and can be relied upon for critical work/calculations. A further manual overclock may well be possible but then you run the risk of instability and glitches if you push it too far.
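The behaviour speculated about here amounts to a simple control loop: step the clock up while every limit (power, temperature) still has headroom, and stop one step before a limit is hit. This sketch uses invented power and thermal models purely for illustration; real XFR behaviour is undisclosed.

```python
# Toy illustration of an XFR-style auto-boost loop: raise the clock in
# fixed steps while a crude power/thermal model stays within limits.
# The models and limits are invented for illustration only.

def max_boost_mhz(base_mhz: int, step: int, limits: dict) -> int:
    def within_limits(mhz: float) -> bool:
        power = 0.00002 * mhz ** 1.7      # crude power-vs-clock model (made up)
        temp = 30 + power * 0.6           # crude thermal model (made up)
        return power <= limits["power_w"] and temp <= limits["temp_c"]

    mhz = base_mhz
    while within_limits(mhz + step):      # stop one step before a limit trips
        mhz += step
    return mhz

print(max_boost_mhz(3400, 25, {"power_w": 30, "temp_c": 75}))
```

Because the loop re-evaluates the limits continuously, a better cooler (raising `temp_c` headroom) directly raises the achieved clock, which matches the speculation that XFR scales with cooling.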
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,149
136
Also, XFR, being an automatic feature of the CPU, shouldn't void your warranty like a manual overclock would.
 

ZipSpeed

Golden Member
Aug 13, 2007
1,302
169
106
If XFR works as it should, this would be a great feature for those who participate in Distributed Computing. Especially for those who find overclocking to be a daunting task. And if prices are as reasonable as it looks, I can finally retire some older rigs in my DC arsenal.
 
