Official AMD Ryzen Benchmarks, Reviews, Prices, and Discussion


Head1985

Golden Member
Jul 8, 2014
1,863
685
136
I have posted in several threads about the possibility of raising UCLK on Ryzen (it is listed in the OC & Ryzen Master section of AMD's official website). However, I have yet to hear whether the option is hidden or simply not present in current AMD BIOS options for motherboards.

I believe these 10% deficits after the SMT modifications can be reduced to zero against the 6900K if UCLK can be raised to at least 2.8GHz, if not higher, say 3.2 or 3.3GHz (the highest frequency achievable on a normal core before voltage bumps).

Currently, Ryzen with 2400MHz memory runs the IMC at 1200MHz; with 3200MHz memory the IMC is at 1600MHz. The highest I saw among non-LN2 world records was 4.1GHz all-core with an 1800MHz uncore, so they must have had 3600MHz memory. If this can be decoupled, or at least forced into 2x mode (i.e. full memory speed rather than half), I honestly believe all of Ryzen's deficits outside of AVX differences (deficits I'm not sure even exist in benchmarks, despite the narrower units) will disappear completely. In fact, due to its faster memory fabric, better SMT, lower-loss data conversions, and the 1800X's high base, turbo, and XFR clocks, I would expect it to outperform the 6900K by at least around 5% once all the BIOS/Windows updates are sorted out.

However, no one has responded to my comments, so either someone has read my messages and is currently testing, or my messages are being ignored.
It would be great, but why does it run at half speed in the first place? They must have a reason for doing this.
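The half-speed coupling described above can be sketched as a quick check (a sketch using only the poster's reported numbers; the actual BIOS coupling behavior is an assumption):

```python
# Sketch of the memory-clock-to-UCLK relation reported above.
# Assumption: in the default "half speed" coupling, the memory fabric
# (UCLK/uncore) runs at half the effective DDR memory speed.
def uclk_mhz(mem_speed_mhz: int, half_speed: bool = True) -> float:
    """Return the UCLK implied by an effective DDR memory speed."""
    return mem_speed_mhz / 2 if half_speed else float(mem_speed_mhz)

print(uclk_mhz(2400))  # 1200.0 MHz, as reported for 2400MHz memory
print(uclk_mhz(3200))  # 1600.0 MHz
print(uclk_mhz(3600))  # 1800.0 MHz -- the uncore in the world-record run
```

The hypothetical "2x mode" the poster asks for would correspond to `half_speed=False`, i.e. UCLK running at the full memory speed.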
 

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
Question:
If I want a Ryzen just for gaming and do what the reviewers say, turn off SMT and overclock it to 3.9GHz, will that make it into a true 8 core (8 thread) chip? Because if it does I would assume there would be less gaming issues and lower heat as a bonus (HT was always hotter and worse at gaming). My point is I would rather have 8 real cores than 16 fake ones that are really 8 pretending to be 16. So for the gamer, 8 real cores would seem to be more than enough and the way to go with Ryzen/SMT off, and then in a year or two when games actually use more than 8 cores/threads turn SMT back on. Seems like we can have the best of both worlds now and later. Only rendering and ripping guys need the fake 16 cores over real 8 cores it seems to me. And 8 real Ryzen cores with SMT off is better than 8 fake cores on an i7-7700 since only 4 are real and the other 4 are hyperthreaded.
Show me a real-world situation where disabling SMT gives a tangible advantage right now.

My last CPU was a 3570, and there was the same SMT talk back then. But it was damn stupid not to get the 3770; I have regretted that many times.
It cost me several hundred lives.

This time it's 8C/16T. I know where it's heading, and I already get the benefit over a 7700 in BF1 several times a week. IMO it's a no-brainer even for gaming if you intend to keep your CPU for more than two years.
 
Last edited:

sirmo

Golden Member
Oct 10, 2011
1,012
384
136
I don't care what anyone says. The Ryzen launch was the best product launch ever! I've never seen this much drama, with reviewers going at each other and being schooled left and right. It's been glorious entertainment.

This is why we need AMD in CPU (and GPU) space!
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
HBC helps with capacity-restricted situations, not bandwidth-restricted ones. It uses the high bandwidth to do that, but if you have low bandwidth it won't help.

The 50%/100% result was from testing Vega artificially restricted to 2GB of VRAM versus the full capacity of Vega.

With Vega they're switching to a tiled rasterizer, which should help immensely with bandwidth.

HBC, like all caches, can help in both situations.

It smooths out valleys in performance by handling the peaks of bandwidth requirements.
 
  • Like
Reactions: Drazick

looncraz

Senior member
Sep 12, 2011
722
1,651
136
Thanks! I have a similar CPU to yours (i7-2700K @ 4.4GHz). It will be very interesting to see the results. I thought I would just shamelessly request that you test several other games as well (GTA5, Just Cause 3, Fallout 4, Skyrim SE) if you have any of those. :)

Admittedly I hadn't been following the Ryzen journey very closely before release. Just heard "competitive with Intel" and "AMD is back". So my slight disappointment in the gaming benches was all on me really. But the prospect of smoother frame rates and devs starting to optimize for Ryzen is exciting. I think I will hold on to my setup for now and see how things pan out during the year.

I don't own a single one of those games - and I've already spent far too much money on this endeavor.
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,149
136
HBC, like all caches, can help in both situations.

It smooths out valleys in performance by handling the peaks of bandwidth requirements.
But the HBC is not an actual cache, at least not in the traditional on-die sense...

HBC is just what AMD's marketing renamed VRAM to, while the HBCC (High Bandwidth Cache Controller) is what is responsible for those performance gains.
It's basically a smarter, programmable, hardware-controlled way of managing data sets. It clears up the waste in memory usage that exists due to the nature of legacy GPU memory management. And the bigger reason it exists is to handle the absolutely gigantic data sets you see in HPC these days.

HBC = HBM2 in Vega
HBCC = What makes HBM2 more effective
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
But the HBC is not an actual cache, at least not in the traditional on-die sense...

HBC is just what AMD's marketing renamed VRAM to, while the HBCC (High Bandwidth Cache Controller) is what is responsible for those performance gains.
It's basically a smarter, programmable, hardware-controlled way of managing data sets. It clears up the waste in memory usage that exists due to the nature of legacy GPU memory management. And the bigger reason it exists is to handle the absolutely gigantic data sets you see in HPC these days.

HBC = HBM2 in Vega
HBCC = What makes HBM2 more effective


NO, it's an actual cache (SRAM on die or on package):

slides-16.jpg
 
  • Like
Reactions: Drazick

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
It would be great, but why does it run at half speed in the first place? They must have a reason for doing this.
Why not run the entire processor at 8GHz in the first place and make it twice as fast as a 6900K?
They must have a reason for doing this.

J/k.
 

CatMerc

Golden Member
Jul 16, 2016
1,114
1,149
136
NO, it's an actual cache (SRAM on die or on package):

slides-16.jpg
From Anandtech:
Anandtech said:
Meanwhile it’s very interesting to note that with Vega, AMD is calling their on-package HBM stacks “high-bandwidth cache” rather than “VRAM” or similar terms as was the case with Fiji products.
This is a term that can easily be misread – and it’s the one area where perhaps it’s too much of a tease – but despite the name, there are no signals from AMD right now that it’s going to be used as a cache in the pure, traditional sense. Rather, because AMD has already announced that they’re looking into other ideas such as on-card NAND storage (the Radeon Pro SSG), they are looking at memory more broadly.

From Pcper
Pcper said:
Bold enough to claim a direct nomenclature change, Vega 10 will feature a HBM2-based high bandwidth cache (HBC) along with a new memory hierarchy to call it into play. This HBC will be a collection of memory on the GPU package, just like we saw on Fiji with the first HBM implementation, and will be measured in gigabytes. Why the move to calling it a cache will be covered below. (But can't we all get behind the removal of the term "frame buffer"?) Interestingly, this HBC doesn't have to be HBM2, and in fact I was told that you could expect to see other memory systems on lower-cost products going forward; cards that integrate this new memory topology with GDDR5X or some equivalent seem assured.

And while I won't hunt down the timestamp now, either Raja or Scott Wasson basically explained that they're attempting to change consumers' views on what's important when purchasing a card. Instead of people looking at 16GB and thinking that's better than 8GB, they want to print the bandwidth on the box, because with the HBCC that's what matters most.

I understand your confusion; it took me a while to wrap my head around exactly what AMD is doing here, because that slide is quite confusing. It's supposed to represent the GPU package as a whole, rather than the die.
 
  • Like
Reactions: french toast

DisEnchantment

Golden Member
Mar 3, 2017
1,590
5,722
136
I think that although AdoredTV did convey the right message, if I may nitpick the percentage differences, he should have used the 2500K or the FX-8350 as the baseline.
The problem with using the latest CPU as the baseline is that its fps value is much higher, so a small percentage of that big value corresponds to a similar absolute fps difference as in the older tests, where the percentage difference was higher but the baseline fps value was much lower.

But of course the 2017 test drives the final nail into the coffin, and that was his point.
If you take the FX-8370 as the baseline, the 2500K is 10% slower; if you take the 2500K as the baseline, the FX-8370 is 11% faster.
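To illustrate how the same gap reads as a different percentage depending on the baseline (the fps figures here are made up for the example, not taken from the video):

```python
# Hypothetical fps figures chosen only to illustrate baseline dependence.
fx8370_fps = 45.0
i2500k_fps = 40.5

# With the FX-8370 as baseline, the 2500K is 10% slower...
slower_pct = (fx8370_fps - i2500k_fps) / fx8370_fps * 100
# ...but with the 2500K as baseline, the FX-8370 is ~11% faster.
faster_pct = (fx8370_fps - i2500k_fps) / i2500k_fps * 100

print(f"2500K slower by {slower_pct:.1f}%")    # 10.0%
print(f"FX-8370 faster by {faster_pct:.1f}%")  # 11.1%
```

Same 4.5 fps gap, two different percentages, which is exactly why the choice of baseline matters when comparing old and new CPUs.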

I'm getting a 1700 anyways, I need the cores for my VMs and compiles.
 
Last edited:
  • Like
Reactions: sirmo

DisEnchantment

Golden Member
Mar 3, 2017
1,590
5,722
136
My experience moving from 2C/4T to 4C/8T for VMs and compiles was so good I felt super productive. I hope I get the same experience moving to 8C/16T; I am literally drooling.
I run multiple VMs to build and develop my SW with different toolchains and target uarchs.

Could anyone with a 1700 and experience with the 6900K running VMs chime in? I always dreamed of having a 6900K, but I guess the 1700 would be fine.
 
  • Like
Reactions: sirmo

guachi

Senior member
Nov 16, 2010
761
415
136
I feel good after watching that video by AdoredTV, whom I've never seen before.

I've looked at exactly the stuff he did: gone through the computerbase.de website (because it's good) and done the same comparisons a few days ago.

It's just I don't have a web channel or an awesome accent.

I'd also consider myself an AMD fan, so I couldn't tell if I was reading things into the data that weren't there.

He's far harsher than I was about other reviewers. Probably because he knows of what he speaks and I'm just some guy on the internet.

But the more I read the more I'm convinced that Intel has no CPU worth buying over $300 for a desktop. Intel is priced out of the market and is nothing but a budget brand (that cheap Pentium is a steal!)
 
  • Like
Reactions: sirmo and Crumpet

thepaleobiker

Member
Feb 22, 2017
149
45
61
Mine is a FreeSync 144Hz 1440p 27" monitor. Love it.
Didn't realize how big the 24in would be (TBH), but it's great now, after adjusting from the 15" 1080p on my laptop (my previous main PC for two years...).

I kinda wish Ryzen 3/5 were out so I could have built my PC around one instead of the Pentium G4560 (longevity with more cores), but I'm keeping my fingers crossed that Ryzen 3/5 will bring down KL i7 prices (I'd pay $225-$250 for a non-K i7-7700). Worst case, I'll sell my m-ITX mobo & CPU on eBay when Ryzen 5 comes into the $200 range (Ryzen 5 1400X: 4 cores, 8 threads).

Go Ryzen! :) And go Intel! (reduce those MSRPs!!)

Regards,
Vishnu