Official AMD Ryzen Benchmarks, Reviews, Prices, and Discussion

Page 142

flash-gordon

Member
May 3, 2014
123
34
101
I am interested in AMD Raven Ridge, but is AMD really increasing the size of the integrated GPU that much? So far, AMD has only gone from 400 to 512 shaders.

Code:
| Processor              | CPU (modules/cores)           | GPU shaders             | GFLOPS | Memory        |
|------------------------|-------------------------------|-------------------------|--------|---------------|
| A8-3870K Llano         | 4C 10h 3.0 GHz                | 400 TeraScale 2 600 MHz |    480 | 2ch DDR3-1866 |
| A10-5800K Trinity      | 2M/4C Piledriver 3.8-4.2 GHz  | 384 TeraScale 3 800 MHz |    614 | 2ch DDR3-1866 |
| A10-6800K Richland     | 2M/4C Piledriver 4.1-4.4 GHz  | 384 TeraScale 3 844 MHz |    648 | 2ch DDR3-2133 |
| A10-7890K Kaveri       | 2M/4C Steamroller 4.1-4.3 GHz | 512 GCN 2 866 MHz       |    887 | 2ch DDR3-2133 |
| A12-9800 Bristol Ridge | 2M/4C Excavator 3.8-4.2 GHz   | 512 GCN 3 1108 MHz      |   1135 | 2ch DDR4-2400 |
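For reference, the GFLOPS column is just shaders × 2 FLOPs per clock (counting an FMA as two) × GPU clock. A quick sketch to check the table:

Code:
# Peak GFLOPS = shaders * 2 FLOPs/clock (FMA) * GPU clock in GHz
apus = {
    "A8-3870K Llano":         (400, 0.600),
    "A10-5800K Trinity":      (384, 0.800),
    "A10-6800K Richland":     (384, 0.844),
    "A10-7890K Kaveri":       (512, 0.866),
    "A12-9800 Bristol Ridge": (512, 1.108),
}
for name, (shaders, ghz) in apus.items():
    print(f"{name:24s} {shaders * 2 * ghz:7.1f} GFLOPS")
# Prints 480.0, 614.4, 648.2, 886.8, 1134.6 -- matching the table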
There's a big difference between those 400 TeraScale 2 shaders and 512 GCN 3 shaders, yet from the first iteration to the last AMD only went from 32 nm to 28 nm. If RR is made on GloFo's 14 nm LPP, that's a ~55% reduction in feature size relative to Llano's 32 nm.

Also, with the scale that a much better overall CPU brings, they can now develop two RR dies with different core and CU counts: one mid-to-low-end die with fewer cores and CUs, and another, bigger one.

The bigger one is where the HBM goes, I believe.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
AMD doesn't need to create a high-end HBM2 APU for 2017; all they need to do is sell 35W TDP 6-8 core Ryzen chips paired with 35-50W TDP Polaris 11 dGPUs and completely destroy any Intel 4C/8T + iGPU + eDRAM chip. Zero money, time, or resources spent: they already have both the CPU and the dGPU ready and on the market. We could have 35-45W TDP 6-8 core Ryzen laptops with the X300 chipset + a 35W TDP Polaris 11 dGPU before summer.

And then upgrade the laptop designs with 35-45W TDP Vega for high-end gaming laptops. Intel has no answer to that (8 cores / 16 threads), not even in 2018, when Intel will have a 6+2 mobile part.
 

Dygaza

Member
Oct 16, 2015
176
34
101
I would really love to see more testing done with specific affinities. And I don't even mean general affinity, but thread-specific affinity: run all 8 cores and 16 threads, but bind threads to specific cores (especially critical threads like the game's main thread and the driver thread).
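A minimal sketch of the idea using psutil (this pins a whole process, which is as far as portable Python goes; true per-thread binding needs OS calls like pthread_setaffinity_np or SetThreadAffinityMask, and the core IDs here are just placeholders):

Code:
import psutil

# Pin the current process to two chosen cores. On Ryzen you would pick
# cores within a single CCX to avoid cross-CCX cache latency.
p = psutil.Process()
print("Before:", p.cpu_affinity())
p.cpu_affinity([0, 1])  # illustrative core IDs, not a recommendation
print("After: ", p.cpu_affinity())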
 

unseenmorbidity

Golden Member
Nov 27, 2016
1,395
967
96
Joker, the guy whose review showed a 1700 @ 3.9 GHz roughly matching a 7700K @ 5 GHz at 1080p ultra, testing SMT on and off.

 

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
The 35W 4C/8T + 11 CU mobile engineering sample (a 12 CU design with one CU disabled) has 3.0/3.3 GHz core clocks on the CPU.
Checks out so far.
Polaris 11 (full, 1024 GCN cores) uses 18W of power at 907 MHz (just for the GPU).
Yes, that checks out too.
So this kinda puts it all in a little picture for you.
Yes, power consumption with Polaris quickly explodes with frequency because, just like with Ryzen, 14nm LPP is pushed way above its comfort zone in retail Polaris cards. You suggest they will push it even higher in Vega and somehow reduce power consumption at the same time. Do you mind if I call it what it is, a dream? OTOH, something's gotta give for the ridiculous size of the Vega die.
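A back-of-envelope illustration of why: dynamic power scales roughly with f·V², and the clock/voltage pairs below are made-up illustrative points, not measured Polaris figures:

Code:
# Dynamic power ~ C * f * V^2; C held constant, so compare f * V^2.
# Voltage values are illustrative guesses, NOT measured Polaris data.
low  = (0.907, 0.80)   # an efficient clock at modest voltage
high = (1.266, 1.00)   # a retail-like boost clock at pushed voltage

ratio = (high[0] * high[1]**2) / (low[0] * low[1]**2)
print(f"{high[0]/low[0] - 1:.0%} more clock -> {ratio:.1f}x dynamic power")
# ~40% more clock costs ~2.2x the power once voltage rises with it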
AMD doesn't need to create a high-end HBM2 APU for 2017; all they need to do is sell 35W TDP 6-8 core Ryzen chips paired with 35-50W TDP Polaris 11 dGPUs and completely destroy any Intel 4C/8T + iGPU + eDRAM chip.
The only eDRAM parts Intel produces are 15W and 28W; literally no one was interested in the GT4e parts, because having a GT2 part plus a 35-50W dGPU beats them in most metrics. You suggest AMD competes with that using a 35W CPU without an iGPU plus a 30-50W dGPU. Do I need to spell out how that will work in real life? Yes, so badly that I would rather have the HBM2 APU in a laptop.
 

french toast

Senior member
Feb 22, 2017
988
825
136
We are talking about gaming, not situations where memory bandwidth REALLY matters, like machine learning and data analysis. That is a completely different scenario. We also still do not know how much memory bandwidth Vega needs to excel in gaming.

Even though it may look like a waste, it genuinely depends on how much the GPU's throughput has improved, and having overkill memory bandwidth in a situation where it does not cost much may not be a bad thing. For example, getting more than 512 GB/s of memory bandwidth out of GDDR5 on Polaris 10 would be a huge waste, because it would cost so much power. Having the same amount of bandwidth while using a fraction of the power, even on 16 CUs, may not be wasted. Everything depends on the picture we are looking at, and what scenario is in it.
Exactly, it's gaming, and we roughly know what a GPU of similar size/efficiency needs to run optimally; it's nowhere near 256 GB/s (the 1050 Ti manages on 112 GB/s) for gaming.
Also, AMD's ~470 mm² flagship GPU only has 8 GB of HBM at 512 GB/s, so we have to get things into perspective here and keep expectations in check. I can guarantee you that if we get HBM with RR, we won't be seeing more than 1-2 GB of HBM running any faster than 128 GB/s; you can hold me to that.
Expense, die size, and especially power consumption: I think you are grossly underestimating the power consumption of HBM. As stated, in low quantities its power draw is small, but at high speeds and in large volumes it would easily blow a 35W APU's power budget; such an APU couldn't afford more than 5W for VRAM (maybe less). Above a certain point it's more efficient to add execution units rather than bandwidth, since as you know scaling is far from linear. I would speculate 100 GB/s would be that point for 11 CUs, with diminishing returns above that.
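The budget math behind that, as a sketch (the GB/s-per-watt efficiency is my assumed figure for illustration, not a datasheet number):

Code:
# Rough VRAM power budget for a 35W APU. gbps_per_watt is an assumed
# HBM-class efficiency figure, not a published spec.
vram_budget_w = 5.0    # the ceiling suggested above
gbps_per_watt = 20.0   # assumption

print(f"{vram_budget_w:.0f} W -> ~{vram_budget_w * gbps_per_watt:.0f} GB/s")
# ~100 GB/s, in line with the 11 CU sweet spot suggested above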
 

Glo.

Diamond Member
Apr 25, 2015
5,704
4,548
136
Yes, power consumption with Polaris quickly explodes with frequency because, just like with Ryzen, 14nm LPP is pushed way above its comfort zone in retail Polaris cards. You suggest they will push it even higher in Vega and somehow reduce power consumption at the same time. Do you mind if I call it what it is, a dream? OTOH, something's gotta give for the ridiculous size of the Vega die.
The thing that let down previous versions of GCN, on both power efficiency and core clocks, was how the register files feeding the CUs were handled. AMD appears to have changed this, because if the information they gave on Vega is correct, it clocks over 1.5 GHz, which previous versions were not able to do even in ridiculous cooling scenarios. I assume this is what they managed to change to increase the core clocks within the same thermal envelope.
Exactly, it's gaming, and we roughly know what a GPU of similar size/efficiency needs to run optimally; it's nowhere near 256 GB/s (the 1050 Ti manages on 112 GB/s) for gaming.
Also, AMD's ~470 mm² flagship GPU only has 8 GB of HBM at 512 GB/s, so we have to get things into perspective here and keep expectations in check. I can guarantee you that if we get HBM with RR, we won't be seeing more than 1-2 GB of HBM running any faster than 128 GB/s; you can hold me to that.
Expense, die size, and especially power consumption: I think you are grossly underestimating the power consumption of HBM. As stated, in low quantities its power draw is small, but at high speeds and in large volumes it would easily blow a 35W APU's power budget; such an APU couldn't afford more than 5W for VRAM (maybe less). Above a certain point it's more efficient to add execution units rather than bandwidth, since as you know scaling is far from linear. I would speculate 100 GB/s would be that point for 11 CUs, with diminishing returns above that.
It is low-clocked memory compared to GDDR5, and it also uses lower voltage. Do the maths ;). GDDR5 memory chips use 1.35V at the lowest; HBM uses 1.2V. And you have to bear in mind that the higher the clocks, the higher the voltage you need.

Overall there should be no big difference in power consumption between a GDDR5 memory chip and an HBM stack. So if 80 GB/s of GDDR5 made from 4 chips consumes around 16W, that is around 4W of power per memory chip; an HBM2 stack should consume a similar amount of power.
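Taking those figures at face value (the 4W-per-HBM2-stack number is the estimate from this post, and 256 GB/s is the spec peak per stack, so treat this as illustrative):

Code:
# Bandwidth per watt from the numbers above.
gddr5_bw, gddr5_w = 80.0, 16.0   # 4 chips, 80 GB/s, ~16 W
hbm2_bw,  hbm2_w  = 256.0, 4.0   # one stack at spec peak, ~4 W estimate

print(f"GDDR5: {gddr5_bw / gddr5_w:4.1f} GB/s per W")
print(f"HBM2:  {hbm2_bw / hbm2_w:4.1f} GB/s per W")
# 5.0 vs 64.0 GB/s per W, if the 4W estimate holds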
 

french toast

Senior member
Feb 22, 2017
988
825
136
The thing that let down previous versions of GCN, on both power efficiency and core clocks, was how the register files feeding the CUs were handled. AMD appears to have changed this, because if the information they gave on Vega is correct, it clocks over 1.5 GHz, which previous versions were not able to do even in ridiculous cooling scenarios. I assume this is what they managed to change to increase the core clocks within the same thermal envelope.

It is low-clocked memory compared to GDDR5, and it also uses lower voltage. Do the maths ;). GDDR5 memory chips use 1.35V at the lowest; HBM uses 1.2V. And you have to bear in mind that the higher the clocks, the higher the voltage you need.

Overall there should be no big difference in power consumption between a GDDR5 memory chip and an HBM stack. So if 80 GB/s of GDDR5 made from 4 chips consumes around 16W, that is around 4W of power per memory chip; an HBM2 stack should consume a similar amount of power.
So you are saying a single 4-high stack (4 GB) @ 256 GB/s consumes just 4W?
 

Joric

Junior Member
Mar 4, 2017
14
6
16
He made a mistake there, but for the most part his results match computerbase.de, which is one of my go-tos for trusted reviews (next to AnandTech and PCPer).

I like the charts over at computerbase.de, but I don't agree that their results were the same as Joker's (at least not in his 1700 vs 7700K videos). They were testing at stock frequencies, whereas he was supposedly testing at max obtainable overclocks.
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,980
136
I still don't see the APU you describe happening. Yes, AMD could make it, but I don't think they will.

Building a 16 CU APU with one Ryzen CCX is going to be as big as Ryzen itself, and then you need to add in the HBM and interposer cost; suddenly you have an APU with a higher manufacturing cost that you can't reasonably sell for more than Ryzen parts, so less profit.
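Rough area math behind that claim, using commonly cited approximations (one Zen CCX ≈ 44 mm², Polaris 10 ≈ 232 mm² for 36 CUs, and a guessed uncore allowance):

Code:
# Back-of-envelope die area for a 1 CCX + 16 CU APU on 14nm LPP.
# All inputs are commonly cited approximations, not official figures.
ccx_mm2    = 44.0           # one 4-core Zen CCX
cu_mm2     = 232.0 / 36.0   # Polaris 10: ~232 mm2 / 36 CUs ~= 6.4 mm2
uncore_mm2 = 50.0           # guess: memory controller, I/O, display

total = ccx_mm2 + 16 * cu_mm2 + uncore_mm2
print(f"~{total:.0f} mm2 vs ~213 mm2 for the 8-core Zeppelin die")
# ~197 mm2 -- roughly Ryzen-sized before adding the interposer and HBM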

On the other hand, you have a CPU architecture that screams server market. If you want to make large dies, you make server chips and a lot of money, because that's where Zen appears to be most competitive.

That means you want small, easy-to-fab APUs, because you have limited wafers and more profit comes from using them for the server market.

I think APUs go HBM with Navi, which has been rumored to be designed for that modular approach. Maybe they make Zen+ that way too, with server chips on an interposer as well. I definitely see it happening by the time they move to the 7 nm node, because yields will likely be a mess and you'll need small chips to make it economical.
 

Joric

Junior Member
Mar 4, 2017
14
6
16
Actually, Hardware.fr noted in their review that SMT off yielded 10% more performance in 1080p games with the 1800X (all other settings stock, benched with a GTX 1080).

Yes, I was commenting on the idea that SMT on "crushes" SMT off in Tomb Raider.
 

sirmo

Golden Member
Oct 10, 2011
1,012
384
136
I like the charts over at computerbase.de, but I don't agree that their results were the same as Joker's (at least not in his 1700 vs 7700K videos). They were testing at stock frequencies, whereas he was supposedly testing at max obtainable overclocks.
His 1700 is overclocked, but look at the computerbase.de 1800X benches (which should be close to Joker's 1700 OC): they show very similar performance to what Joker found.
 

Joric

Junior Member
Mar 4, 2017
14
6
16
His 1700 is overclocked, but look at the computerbase.de 1800X benches (which should be close to Joker's 1700 OC): they show very similar performance to what Joker found.

Yes, but computerbase.de was comparing their 1800X to a 7700K at STOCK clocks, whereas Joker was supposedly comparing to a 7700K @ 5 GHz.
So the fact that their results look the same says it all about Joker's results.
 

unseenmorbidity

Golden Member
Nov 27, 2016
1,395
967
96

Glo.

Diamond Member
Apr 25, 2015
5,704
4,548
136
I still don't see the APU you describe happening. Yes, AMD could make it, but I don't think they will.

Building a 16 CU APU with one Ryzen CCX is going to be as big as Ryzen itself, and then you need to add in the HBM and interposer cost; suddenly you have an APU with a higher manufacturing cost that you can't reasonably sell for more than Ryzen parts, so less profit.

On the other hand, you have a CPU architecture that screams server market. If you want to make large dies, you make server chips and a lot of money, because that's where Zen appears to be most competitive.

That means you want small, easy-to-fab APUs, because you have limited wafers and more profit comes from using them for the server market.

I think APUs go HBM with Navi, which has been rumored to be designed for that modular approach. Maybe they make Zen+ that way too, with server chips on an interposer as well. I definitely see it happening by the time they move to the 7 nm node, because yields will likely be a mess and you'll need small chips to make it economical.
Even if it costs more to manufacture than a Ryzen CPU, it will cost less to manufacture than a Polaris 10 GPU: PCB, GDDR5, and shroud versus CPU die, package, interposer, and HBM2. Let's say a 2304-shader Polaris 10 costs $199 and the HBM2 APU costs $199. The manufacturing costs for the APU will be lower, and the market is bigger for the APU. MUCH, MUCH bigger.
 

Head1985

Golden Member
Jul 8, 2014
1,864
686
136
OMG yes, he accidentally forgot to hit a checkbox. Clearly untrustworthy!

It did in that one scene, but that one scene was an outlier.
His results are way off. I watched his 720p low-detail video test and tried Tomb Raider on my 6700K @ 4.5 GHz, and I get 50-100 FPS more than he does with a 7700K @ 5 GHz.

His 7700K is underperforming pretty badly. He has something wrong with it.
 

unseenmorbidity

Golden Member
Nov 27, 2016
1,395
967
96
His results are way off. I watched his 720p low-detail video test and tried Tomb Raider on my 6700K @ 4.5 GHz, and I get 50-100 FPS more than he does with a 7700K @ 5 GHz.

His 7700K is underperforming pretty badly. He has something wrong with it.
He also did a 6800K at 1080p ultra and got the same results as with the 7700K.

If anything, those Aussie "tech reviewers" aren't the ones to be trusted.

EDIT: Joker might have some settings changed in the Nvidia control panel (whatever it's called), or you could be testing different areas. Either way, it doesn't really invalidate his results.

More reviews showing good gaming results,


 

french toast

Senior member
Feb 22, 2017
988
825
136
Guys, read before you post stuff. This same link already appeared in other threads. Read the text and chart: all CPUs are running at 4 GHz in that test, including the 7700K, which is therefore downclocked. Normally you would be running it at 4.8 GHz, i.e. 20% higher clocks and performance...
Am I missing something? He linked a review; he didn't state they were not running at 4 GHz, did he?
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
Am I missing something? He linked a review; he didn't state they were not running at 4 GHz, did he?

He did not, but most users will just glance and see "oh, the 1700 and the 7700K perform the same." Most won't really notice the 4 GHz thing. The 7700K can easily gain 20% more performance from an OC over that chart.
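That 20% is just the clock ratio, assuming performance scales roughly linearly with clock at these CPU-bound settings:

Code:
# Headroom a 7700K leaves on the table in a fixed 4.0 GHz test.
tested_ghz, typical_oc_ghz = 4.0, 4.8
print(f"{typical_oc_ghz / tested_ghz - 1:.0%}")  # -> 20%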