Official AMD Ryzen Benchmarks, Reviews, Prices, and Discussion


Dante80

Junior Member
Jun 3, 2009
8
14
81
Ryzen Platform Affected by RTC Bias; W8/8.1/10 Not Allowed on Select Benchmarks

In a statement issued by Head of Moderation Christian Ney, we have confirmed that the AM4 platform is affected by the Windows 8/8.1/10 RTC bias. The bias occurs when the reference clock is adjusted at run-time; it affects the Windows timer, causing benchmarks to perceive time as passing slower (or faster) than it really does. As a result, the reported benchmark scores do not reflect real performance.

The RTC bias is referenced in the ROG Crosshair VI Hero Extreme Overclocking guide available on Overclocking.guide: "Timer is skewed when changing REFCLK in Windows 8+. Additionally the default systimer has issues with OS ratio changes unless HPET is enabled. To summarize, always enable HPET on this platform."

We described the behavior of the RTC bias in an article published on August 18, 2013 (see below). We also issued rules updates for the Skylake platform in an article published on November 5, 2015.

"The concept of time on a PC configuration is, if not synced via network or internet, an arbitrarily defined constant designed to ensure that the configuration is running in sync with the real world. In other words: hardware and software engineers ensure that 'one second' on your PC equals 'one second' in real time. One of the reasons why it's so important to have the PC's timer line up with the real world time is to ensure that your PC can produce accurate measurements and predictions." The points we brought up in that editorial are relevant again. To ensure that the arbitrarily defined constant of "time" is the same on everyone's benchmark system, we rely on the OS and hardware. This worked quite well, until Windows 8 came around.
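As a rough illustration of that skew (an editor's sketch, not HWBOT code or data; the 6% figure is hypothetical): if the OS timer runs slow relative to the real world, any score computed as work divided by measured time comes out inflated.

```python
# Illustrative sketch (not HWBOT code): how a skewed OS timer inflates a score.
real_seconds = 10.0              # wall-clock duration of the benchmark run
operations = 1_000_000           # work actually completed in that time

timer_skew = 0.94                # hypothetical: OS timer runs ~6% slow after a BCLK change
measured_seconds = real_seconds * timer_skew

true_score = operations / real_seconds          # 100,000 ops/s
reported_score = operations / measured_seconds  # ~106,383 ops/s

print(f"true score:     {true_score:,.0f} ops/s")
print(f"reported score: {reported_score:,.0f} ops/s "
      f"({(reported_score / true_score - 1) * 100:.1f}% inflated)")
```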

The problem compounds the issues we faced with Heaven. When downclocking the system under Windows 8, the Windows RTC is affected as well. The biggest difference between Windows 7 and Windows 8 is that now all benchmarks (without exception) are affected.

Let us make this more practical. On our Haswell test system we downclocked the BCLK frequency by about 6%, from 130 MHz to 122 MHz, while raising the CPU ratio from 32x to 34x so that the resulting CPU frequency stayed essentially the same, at roughly 4.15 GHz. Then we ran comparison benchmarks.
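A quick arithmetic check of those two configurations (the exact BCLK values in the original run may have been fractionally different):

```python
# Quick arithmetic check of the two test configurations (BCLK in MHz x CPU ratio).
configs = {"stock BCLK": (130, 32), "lowered BCLK": (122, 34)}
for name, (bclk, ratio) in configs.items():
    print(f"{name:>12}: {bclk} MHz x {ratio} = {bclk * ratio} MHz")
# 130 x 32 = 4160 MHz and 122 x 34 = 4148 MHz: near-identical core clocks,
# while the ~6% BCLK difference is what skews the Windows 8+ RTC.
```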


With immediate effect, we no longer accept AM4-based overclocking result submissions with Windows 8/8.1/10-based Operating Systems for benchmarks listed in the General Rules, Section 1.6. You are allowed to use those operating systems with approved benchmarks such as the entire 3DMark suite, GPUPI, HWBOT X265 Benchmark, Y-Cruncher, Realbench and CPU-Z. We will keep you updated on any changes to this list.

For reliable performance measurements with run-time overclocking, we recommend enabling the High Precision Event Timer (HPET). Alternatively, you can opt to use a Windows 7 based operating system. Note that run-time overclocking using the CPU multiplier does not appear to result in RTC bias (still under investigation).
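One rough way to sanity-check which timer source a given Windows install is using (a heuristic editor's sketch, not part of the HWBOT rules; the thresholds are assumptions):

```python
# Heuristic sketch: check which source Windows' QueryPerformanceCounter is using.
# When HPET is forced (e.g. "bcdedit /set useplatformclock true" plus a reboot),
# QueryPerformanceFrequency typically reports ~14.318 MHz; otherwise Windows
# usually reports 10 MHz (TSC-backed) or another platform-specific value.
import ctypes

freq = ctypes.c_int64()
ctypes.windll.kernel32.QueryPerformanceFrequency(ctypes.byref(freq))

print(f"QPC frequency: {freq.value:,} Hz")
if abs(freq.value - 14_318_180) < 50_000:
    print("HPET appears to be the performance-counter source.")
else:
    print("Likely TSC/ACPI timer; HPET may not be forced.")
```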

We are also investigating whether the "Ryzen sleep bug" affects benchmark integrity.

The HWBOT Staff.
 
  • Like
Reactions: lightmanek

Shivansps

Diamond Member
Sep 11, 2013
3,835
1,514
136
I was only saying that the gaming performance relative to Intel isn't what is the problem with Ryzen - it's the gaming performance relative to how it performs in practically everything else.

The nature of that disparity gives us insight into what is likely to be causing it.

We only compare the relative performance that Intel gives in gaming versus other tasks to the same relative performance Ryzen gives in gaming.

Ryzen can match or beat the 6900k clock-for-clock, core-for-core, watt-for-watt... sometimes all three at the same time. It would then be assumed that gaming tasks would follow suit - the fact that they don't is what is of interest.

It basically comes down to gaming being unusually sensitive to the very areas where Ryzen is most different from / weakest relative to its Intel counterpart. Those weaknesses can, in theory, be fully overcome through software. But it will require more than just a simple kernel scheduler patch. Game patches, AGESA code updates, and system library updates are all going to need to become aware of how Ryzen behaves. Tall order, to be fair.

Why do we always have to change the whole world in order to use AMD products properly? It should be the other way around.

The problem here is that Ryzen is a server design running mainstream desktop software; game patches will do nothing. Games today launch 100 threads or more, and it's the scheduler's job to know where to place them. MT is not the problem; the main thread IS the problem, and there is no easy way to fix that, maybe in a few years. Period.
And btw, when that happens, I fully expect it to benefit BDW-E more than Ryzen.

The second problem is that game threads aren't as linear as production software threads; there is no way to fix that. Some game threads and system API/library threads, like DX12/Vulkan, will tend to want to exchange information with each other, so it's not a good idea to separate them.

So for Ryzen the most efficient approach for gaming will be to place almost everything on one module, including all game threads and the system-side threads exchanging info with them, and to use the 2nd module for background tasks. Again, there is no way to work around that.
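As a rough illustration of that kind of manual placement (a hypothetical editor's sketch: the PID and the logical-CPU-to-CCX mapping are assumptions, and psutil is just one convenient way to set affinity):

```python
# Hypothetical illustration (not an official fix): manually pinning a game
# process to one CCX. Assumes an 8C/16T Ryzen where logical CPUs 0-7 sit on
# the first CCX; the real mapping depends on BIOS/OS enumeration and SMT.
import psutil

game_pid = 1234                      # hypothetical PID of the game process
first_ccx = list(range(0, 8))        # logical CPUs assumed to belong to CCX 0

game = psutil.Process(game_pid)
game.cpu_affinity(first_ccx)         # keep game threads off the cross-CCX fabric
print("affinity now:", game.cpu_affinity())
```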

On the lower end, a 6C 3+3 could get really bad because of this, and if the 4C is 2+2... oh man, I really hope that one is a single module.

This is Ryzen, a modular server design with pros and cons. When the scheduler is fixed, things will get better (or worse in some cases), but don't expect much more than that for gaming; games aren't production software, and the MT and programming work is not remotely the same. Also, if games start to use AVX more, that will just be worse for Ryzen. That is also called "optimizing", btw.

But knowing AMD, I expect them to use Stardock to make some ridiculous, unrealistic benchmark/demo to show how good Ryzen is for games.
 
Last edited:
  • Like
Reactions: CHADBOGA and Sweepr

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
The CPU load differs from low res to high res, especially when you have different numbers of cores.
With a GTX 1080 at 1080p, 7700K vs 6900K, you have the 7700K ahead by 9%. At 1440p it's 7%, but if you go to 4K, the 6900K takes the lead.
Low res favors clocks over cores right now; that's the problem.
The difference is that as the resolution increases, the FPS goes lower, removing any bottleneck that may have existed prior. At that point, they become nearly the same.
 

piesquared

Golden Member
Oct 16, 2006
1,651
473
136
Why do we always have to change the whole world in order to use AMD products properly? It should be the other way around.

The problem here is that Ryzen is a server design running mainstream desktop software; game patches will do nothing. Games today launch 100 threads or more, and it's the scheduler's job to know where to place them. MT is not the problem; the main thread IS the problem, and there is no easy way to fix that, maybe in a few years. Period.
And btw, when that happens, I fully expect it to benefit BDW-E more than Ryzen.

The second problem is that game threads aren't as linear as production software threads; there is no way to fix that. Some game threads and system API/library threads, like DX12/Vulkan, will tend to want to exchange information with each other, so it's not a good idea to separate them.

So for Ryzen the most efficient approach for gaming will be to place almost everything on one module, including all game threads and the system-side threads exchanging info with them, and to use the 2nd module for background tasks. Again, there is no way to work around that.

On the lower end, a 6C 3+3 could get really bad because of this, and if the 4C is 2+2... oh man, I really hope that one is a single module.

This is Ryzen, a modular server design with pros and cons. When the scheduler is fixed, things will get better (or worse in some cases), but don't expect much more than that for gaming; games aren't production software, and the MT and programming work is not remotely the same. Also, if games start to use AVX more, that will just be worse for Ryzen. That is also called "optimizing", btw.

But knowing AMD, I expect them to use Stardock to make some ridiculous, unrealistic benchmark/demo to show how good Ryzen is for games.

Of course optimizations are coming for Ryzen, just like they have to for other hardware. And since AMD has worked so closely with developers on consoles, they'll come rather quickly. Personally, I'm perfectly satisfied with the performance in games. More than good enough for me considering the massive multithreaded performance. :)
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
I gave up on trying to guess and will just wait for more details.
I do want to ask again: when do you expect to have some Win 7 results with SMT on and off, in games where SMT is a penalty under Win 10?
Since seeing that result for a single game, I am very eager to see more.

The Fury arrives tomorrow, the motherboard arrives Friday - I ordered a new Samsung 960 Evo as well, and that should be here Thursday. I have the image setup (with ALL Ryzen and NVMe drivers and patches slipped into Windows 7) as well as all the benchmarking tools and what-not ready to go.

Tomorrow I will install the Fury into my machine and do a whole bunch of benchmarks - including gaming in multiplayer with BF4 and BF1. I will also install my RX 480 into the wife's machine (since she's going to be keeping it - not that she knows that :p) and do a bunch of testing with the Phenom II X4 955 @ stock. You know, to see what this CPU bottle-necking is all about ;-) I don't have to do CPU benchmarks on either of these machines - I have those results from clean images from last year.

That's probably all for tomorrow, but Thursday will see me running a couple game benchmarks with the Fury in the wife's computer... in case I didn't have enough CPU bottlenecks to experience... and then reinstalling the Fury into my case and backing up her data for migration to a clean Windows 7 install with Ryzen (she wouldn't dare have me upgrade the HDTV or her system to Windows 10 - she actually likes Debian, so I'll have to dual boot her eventually :p).

Friday, I assemble the Ryzen 1700X and the AB350 motherboard into the wife's computer, install Windows 7 on the NVMe drive, configure Steam and Origin to use my own game SSD (which I'll need to backup before getting too crazy...), do stability testing, and then begin pumping out Windows 7 results with the RX 480 and then the Fury again. That will probably spill over into Saturday given the many different modes I need to test and the teething issues I expect... maybe even Sunday.

Windows 10 results won't come for comparison until my Asus C6H arrives, which will hopefully be early next week. Inventory was originally expected on March 9th at a couple of vendors, but I have no confirmation of that. If it looks like it will take too long, I'll just install Windows 10 on the NVMe drive and run the tests on the wife's computer (with the Ryzen 1700X and AB350 board, both video cards, numerous RAM configurations, etc...).

Gaming results should be available by the end of next week, then, at the latest... Wish I could do better.

Imagine if I had a job to go to :eek:
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
Why we always have to change to whole world in order to use AMD products properly? it should be the other way around.

The problem here is that Ryzen is a server design, working on mainstream desktop software, game paches nothing, games today launch like 100 threads, or more, its the scheduler job to know were to place them. MT is not the problem, Main thread IS the problem, there is no easy way to fix that, maybe in a few years. Period.
And btw, when that happens, i fully expect it to benefict BDW-E more than Ryzen.

The second problem is that game threads arent as linear as producction software threads, no way to fix that. Some game threads and system api library threads, like DX12/Vulkan will tend to want to exchange information each other, no good idea to separate them.

So for Ryzen the most efficient gaming thing will be to place almost everything on one module, incluiding all game threads and system side threads exchaging info with it, and the 2nd module for background tasks. Again no way to workaround that.

On the lower end, a 6C 3+3 could get really bad because of this, and if the 4C is 2+2 oh man, i really hope that one is 1 module.

This is Ryzen, a modular server design with pro and cons, when the scheduler is fixed things will get better (or worse on some cases), but dont expect much more than that for gaming, games arent production software the MT and programing work is not remotely the same. Also if games start to use AVX more that will just be worse for Ryzen. That is also called "optimizing" btw,

But knowing AMD, i expect them to use Stardock to make some ridiculous unrealistic benchmark/demo to show how good Ryzen is for games.

We always have to change the whole world for how Intel wants to do things. Now it's AMD's turn :p

The main thread of a game (or, really, the heavy threads) is hurting on Ryzen because its data, in effect, gets erased between half of the context switches and needs to be fetched anew (a Windows problem), the cache-aware algorithms treat the cache as a monolithic whole (a game code problem), and memory latency is strange (an AMD/BIOS problem).
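A tiny sketch of what treating the cache as a monolithic whole means in practice (illustrative numbers and a hypothetical helper; the point is simply that 16 MB of total L3 is really 8 MB per CCX on this design):

```python
# Illustrative sketch of the "monolithic cache" assumption: a cache-aware
# routine that sizes its working set from total L3 picks tiles twice as large
# as what a single Ryzen CCX can actually keep local.
TOTAL_L3_BYTES = 16 * 1024 * 1024    # what "total L3" looks like to software
PER_CCX_L3_BYTES = 8 * 1024 * 1024   # what one CCX actually serves locally

def tile_elements(cache_bytes, element_size=4, occupancy=0.5):
    """Elements per tile that fit comfortably within the given cache budget."""
    return int(cache_bytes * occupancy) // element_size

print("monolithic assumption:", tile_elements(TOTAL_L3_BYTES), "elements/tile")
print("per-CCX assumption:   ", tile_elements(PER_CCX_L3_BYTES), "elements/tile")
```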

I don't share your concerns about CCX configurations of 2+2 or 3+3. The extra L3 cache that would presumably be available per core should help hide some of the costs - and all the fixes for Ryzen will, ultimately, erase the issue entirely.

What I do know is that locking performance is somehow fantastic, leading to great MT scaling. AMD must be using some kind of magic for that. Then again, I haven't had a chance to directly test this... it might just not be bad enough to cause a noticeable problem in benchmarks.
 
  • Like
Reactions: Drazick and CatMerc

french toast

Senior member
Feb 22, 2017
988
825
136
AMD Pinnacle Ridge could very well launch in Q1 2018, roughly around this time next year. I am just curious to know if AMD and GF have an improved process for their 2018 CPUs. 14LPP and the current Summit Ridge physical design are just not capable of 4+ GHz operation at safe voltages for 24x7 use. I would like to see a Zen+ design optimized for 4+ GHz which can easily overclock to 4.5+ GHz. I think AMD and GF need to significantly improve the process with custom tweaks and improve the physical design to make that happen.
Everybody is forgetting that Samsung has had 14nm LPU online for months already. This 4th-generation FinFET process is an HP process optimised for high-frequency designs. AMD has the option under the WSA agreement to use Samsung, though it is more likely that GlobalFoundries updates its tooling for this better process.

Honestly, I can't see why Samsung would even bother developing this process if it didn't have a large customer for it. I feel that large customer in need of an HP FinFET process is AMD, probably through GlobalFoundries.

"interesting news concerning a fourth-generation 14nm product, 14LPU. For those keeping score at home,Samsung released 14nm Low Power Early (14LPE) first, followed by 14nm Low Power Plus (14LPP), which was broader ramp with more customers and up to 10% improved performance. Earlier this year, the company announced it would build a lower-cost variant of 14nm that didn’t sacrifice on power or performance, 14LPC. This fourth-generation 14LPU is meant explicitly for customers who are building “high performance, compute-intensive” applications. 14LPU is said to offer better performance than 14LPC."
https://www.extremetech.com/extreme...ans-10nm-improvements-shows-off-7nm-euv-wafer
 
  • Like
Reactions: lightmanek

imported_jjj

Senior member
Feb 14, 2009
660
430
136
The Fury arrives tomorrow, the motherboard arrives Friday - I ordered a new Samsung 960 Evo as well, and that should be here Thursday. I have the image setup (with ALL Ryzen and NVMe drivers and patches slipped into Windows 7) as well as all the benchmarking tools and what-not ready to go.

Tomorrow I will install the Fury into my machine and do a whole bunch of benchmarks - including gaming in multiplayer with BF4 and BF1. I will also install my RX 480 into the wife's machine (since she's going to be keeping it - not that she knows that :p) and do a bunch of testing with the Phenom II X4 955 @ stock. You know, to see what this CPU bottle-necking is all about ;-) I don't have to do CPU benchmarks on either of these machines - I have those results from clean images from last year.

That's probably all for tomorrow, but Thursday will see me running a couple game benchmarks with the Fury in the wife's computer... in case I didn't have enough CPU bottlenecks to experience... and then reinstalling the Fury into my case and backing up her data for migration to a clean Windows 7 install with Ryzen (she wouldn't dare have me upgrade the HDTV or her system to Windows 10 - she actually likes Debian, so I'll have to dual boot her eventually :p).

Friday, I assemble the Ryzen 1700X and the AB350 motherboard into the wife's computer, install Windows 7 on the NVMe drive, configure Steam and Origin to use my own game SSD (which I'll need to backup before getting too crazy...), do stability testing, and then begin pumping out Windows 7 results with the RX 480 and then the Fury again. That will probably spill over into Saturday given the many different modes I need to test and the teething issues I expect... maybe even Sunday.

Windows 10 results won't come for comparison until my Asus C6H arrives, which will hopefully be early next week. Inventory was originally expected on March 9th at a couple of vendors, but I have no confirmation of that. If it looks like it will take too long, I'll just install Windows 10 on the NVMe drive and run the tests on the wife's computer (with the Ryzen 1700X and AB350 board, both video cards, numerous RAM configurations, etc...).

Gaming results should be available by the end of next week, then, at the latest... Wish I could do better.

Imagine if I had a job to go to :eek:


Then you must leak some Win 7 SMT on and off results before everything else is done.

Seen that reddit rumor the other day with a 4.4GHz Zen?
The weirdest part made me wonder if it's not true.
The guy claimed a larger cache, and that doesn't make any sense, but what if something got lost in translation and it was larger area-wise? If they used high-perf SRAM instead of the current high-density SRAM and decoupled it from the core, it might be interesting. That and a few smaller changes could work, but it does seem like too much of an investment for a stopgap solution, so maybe it's more about Pinnacle Ridge and Zen+.
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
Everybody is forgetting that Samsung has had 14nm LPU online for months already. This 4th-generation FinFET process is an HP process optimised for high-frequency designs. AMD has the option under the WSA agreement to use Samsung, though it is more likely that GlobalFoundries updates its tooling for this better process.

Honestly, I can't see why Samsung would even bother developing this process if it didn't have a large customer for it. I feel that large customer in need of an HP FinFET process is AMD, probably through GlobalFoundries.

"interesting news concerning a fourth-generation 14nm product, 14LPU. For those keeping score at home,Samsung released 14nm Low Power Early (14LPE) first, followed by 14nm Low Power Plus (14LPP), which was broader ramp with more customers and up to 10% improved performance. Earlier this year, the company announced it would build a lower-cost variant of 14nm that didn’t sacrifice on power or performance, 14LPC. This fourth-generation 14LPU is meant explicitly for customers who are building “high performance, compute-intensive” applications. 14LPU is said to offer better performance than 14LPC."
https://www.extremetech.com/extreme...ans-10nm-improvements-shows-off-7nm-euv-wafer

Do we even have a genuine confirmation as to which process AMD is using? I know we've been assuming 14nm LPP for over a year, but I don't think it has ever been officially confirmed by either AMD or anyone else.

With both 14nm LPP fabs being in the U.S., "Diffused in U.S.A." is not helpful.

I would be curious to learn about any differences discovered between the "Made in China" and "Made in Malaysia" Ryzen CPUs. My 1700X is a Malaysian part. Back in the day, you always wanted the "Malay" CPUs for optimal sub-ambient overclocking... oh, how I miss seeing -40C on my CPU core temp. Ran that way for probably two years - with a ~100% overclock. Just a Celeron... but, still :p
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
Then you must leak some Win 7 SMT on and off results before everything else is done.

Seen that reddit rumor the other day with a 4.4GHz Zen?
The weirdest part made me wonder if it's not true.
The guy claimed a larger cache, and that doesn't make any sense, but what if something got lost in translation and it was larger area-wise? If they used high-perf SRAM instead of the current high-density SRAM and decoupled it from the core, it might be interesting. That and a few smaller changes could work, but it does seem like too much of an investment for a stopgap solution, so maybe it's more about Pinnacle Ridge and Zen+.

I'll probably release some data - have to get the website data going, anyway. Guess I'll make it live with slow updates... pretty much what I did with the Excavator Interrogation. Will just have to include Windows 10 results in various updates.

Might have to refactor the website some, I didn't expect OS choice to be a consideration :p

http://zen.looncraz.net/

My host has been undergoing upgrades lately - so it may be unreachable at times.
 
  • Like
Reactions: Drazick

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
I'll probably release some data - have to get the website data going, anyway. Guess I'll make it live with slow updates... pretty much what I did with the Excavator Interrogation. Will just have to include Windows 10 results in various updates.

Might have to refactor the website some, I didn't expect OS choice to be a consideration :p

http://zen.looncraz.net/

My host has been undergoing upgrades lately - so it may be unreachable at times.


Can you please try overclocking with the multiplier vs BCLK? Set them to the same clock and run game benchmarks with both; I'm really curious which one will benefit games the most.
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
Can you please try overclocking with the multiplier vs BCLK? Set them to the same clock and run game benchmarks with both; I'm really curious which one will benefit games the most.

I already have a section dedicated to just that purpose :p
 
  • Like
Reactions: Drazick

PotatoWithEarsOnSide

Senior member
Feb 23, 2017
664
701
106
Does this RTC Bias invalidate pretty much every benchmark up until now?
If it does, it also seems to invalidate the argument that Ryzen gaming is not living up to its suggested potential, in which case current gaming performance may actually be indicative of overall performance.
#spannerintheworks

Still, there are questions about why W7 seemingly offers greater performance than W10.
 

zinfamous

No Lifer
Jul 12, 2006
110,512
29,098
146
Why do we always have to change the whole world in order to use AMD products properly? It should be the other way around.

The problem here is that Ryzen is a server design running mainstream desktop software; game patches will do nothing. Games today launch 100 threads or more, and it's the scheduler's job to know where to place them. MT is not the problem; the main thread IS the problem, and there is no easy way to fix that, maybe in a few years. Period.
And btw, when that happens, I fully expect it to benefit BDW-E more than Ryzen.

The second problem is that game threads aren't as linear as production software threads; there is no way to fix that. Some game threads and system API/library threads, like DX12/Vulkan, will tend to want to exchange information with each other, so it's not a good idea to separate them.

So for Ryzen the most efficient approach for gaming will be to place almost everything on one module, including all game threads and the system-side threads exchanging info with them, and to use the 2nd module for background tasks. Again, there is no way to work around that.

On the lower end, a 6C 3+3 could get really bad because of this, and if the 4C is 2+2... oh man, I really hope that one is a single module.

This is Ryzen, a modular server design with pros and cons. When the scheduler is fixed, things will get better (or worse in some cases), but don't expect much more than that for gaming; games aren't production software, and the MT and programming work is not remotely the same. Also, if games start to use AVX more, that will just be worse for Ryzen. That is also called "optimizing", btw.

But knowing AMD, I expect them to use Stardock to make some ridiculous, unrealistic benchmark/demo to show how good Ryzen is for games.

Intel never required "the whole world to change" in order to use their products properly. Good thing we have Intel always looking out for our interests.
 

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
IMO the 1700 is for those who OC; the 1700X and 1800X are for those who don't want to OC. As a benefit, they keep the nice efficiency features that we OCers lose.

I bought the 1800x as a gift to AMD for launching this chip. My hope was that it would wind up being the best bin available, but . . . 1700 might be better, at least for speeds up to 3.9 GHz anyway.

Finally got my chip in, just no board, so . . . hey, nice box.

Good thing we have Intel always looking out for our interests.

They ain't doin me no favors.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,248
136
Does this RTC Bias invalidate pretty much every benchmark up until now?
If it does, it also seems to invalidate the argument that Ryzen gaming is not living up to its suggested potential, in which case current gaming performance may actually be indicative of overall performance.
#spannerintheworks

Still, there are questions about why W7 seemingly offers greater performance than W10.

Looks like the same applies to Intel as far as benchmarks go on HWBOT.

I wouldn't say it discredits all benchmarks so far.

On another note, my record still stands. Best OC chip I've ever had... so far.

http://hwbot.org/submission/2294151_kenmitch_cpu_frequency_core_i5_2550k_5604.37_mhz
 
  • Like
Reactions: lightmanek

Udgnim

Diamond Member
Apr 16, 2008
3,662
104
106
Does this RTC Bias invalidate pretty much every benchmark up until now?
If it does, it also seems to invalidate the argument that Ryzen gaming is not living up to its suggested potential, in which case current gaming performance may actually be indicative of overall performance.
#spannerintheworks

Still, there are questions about why W7 seemingly offers greater performance than W10.

the W10 scheduler is supposed to be more sophisticated & superior to the W7 scheduler

it might currently be the case that more is less with Ryzen until Microsoft releases the Ryzen scheduler update for W10
 

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
... The benchmarks at low resolutions historically were NEVER about actual realistic gaming performance. A good logical writer might be able to draw some conclusions from the data and say that if GPUs gained x amount of power in the next year, or the next 2 or 3 years, that your CPU would still be amazing in THAT game.
...
This tells me that you do not understand the FX8350 and i5-2500 example.
It was shown in the video that when GPUs gained power, the FX8350 became the better gaming CPU.
Why? Because the GPUs were gaining power also through the addition of multi-threading.
This is the reason why comparing the gaming performance of 8C and 4C CPUs at low resolution is useless. It fails to account for GPU development.
 
Last edited:
  • Like
Reactions: guachi and inf64

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
This tells me that you do not understand the FX8350 and i5-2500 example.
It was shown in the video that when GPUs gained power, the FX8350 became the better gaming CPU.
Why? Because the GPUs were gaining power also through the addition of multi-threading.
This is the reason why comparing the gaming performance of 8C and 4C CPUs at low resolution is useless. It fails to account for GPU development.
Edit:

I reread your post, and there is some right and wrong here.

First off, the video shows the gap being almost the same for several years with small increments, then instantly, it switched massively in favor of the FX8350. It would seem something changed at that point.

Second of all, GPUs do not use CPU threads in any direct way. New GPUs do not use multithreading better, as they don't use it at all. The CPUs that are feeding the GPUs use multithreading to supply information to the GPU faster.

I'm not sure what the change is. I'm going to guess driver improvements made the big difference, or maybe Windows improvements. He's making a fairly large leap in his video in figuring out the reason, but he's right about one thing: we don't know what will happen in 4 years.

But the differences matter today, despite what his video claims. Those benchmarks are showing non-CPU-bound areas of games which have CPU-bound areas.

Edit: I just noticed that the big leap came when several new games were added, which simply shows that multithreading in newer games helped the CPU with more cores. That still does not invalidate low-res testing, unless it is used as a predictor of the future. People use low-res testing for gaming today.
 
Last edited:

Janooo

Golden Member
Aug 22, 2005
1,067
13
81
I think you misunderstand. Multithreading helps gaming equally at low and high resolution. If anything, it helps low resolutions more, as it is another way to remove CPU bottlenecks. The difference is simply a matter of newer games being more multithreaded than older games. The GPU has nothing to do with it.

The question is whether in 5 years games will be better threaded on average than today (probably).

The real problem is games are bottlenecked by CPUs now, at higher resolutions, just not in the benchmarks they use. Though if you don't game past 60 Hz, you probably won't run into it much, but as an 80+ FPS gamer, it happens constantly.
Well, check the video.
The point is: the same game on a newer GPU after a couple of years, and the FX8350 is the faster gaming CPU. No other changes. The only changed factor is the GPU.
The new games' multi-threading is icing on the cake that emphasizes the point that testing 8C and 4C CPUs at low resolution is useless.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Well, check the video.
The point is: the same game on a newer GPU after a couple of years, and the FX8350 is the faster gaming CPU. No other changes. The only changed factor is the GPU.
The new games' multi-threading is icing on the cake that emphasizes the point that testing 8C and 4C CPUs at low resolution is useless.
Sorry, I was in a hurry to type and messed up some things. Reread the edit. As far as the video goes, the author is making a few assumptions which I doubt are true. The reason for the big leap is more likely drivers or the OS (edit: it seems they just added a bunch of new multithreaded games). For years the difference only made very small changes in favor of the FX, until one year later there was a massive leap forward. That seems more like something else is going on. And multithreading has nothing to do with what GPU is used, unless we are talking about compute, which is more of an issue in new games. (edit: seems they added a bunch of new games that made the difference).
 
Last edited:

Shivansps

Diamond Member
Sep 11, 2013
3,835
1,514
136
Intel never required "the whole world to change" in order to use their products properly. Good thing we have Intel always looking out for our interests.

Actually, they could have used Broadwell-EP "ring modules" instead on Broadwell-E for the 6800 and 6900, but that would have resulted in the same problems Ryzen has today.
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Edit: I just noticed that the big leap came when several new games were added, which simply shows that multithreading in newer games helped the CPU with more cores. That still does not invalidate low-res testing, unless it is used as a predictor of the future. People use low-res testing for gaming today.

But that is what it is being used for. People in just about every single type of gameplay play on a system with a GPU bottleneck. People want the best graphics they can get at playable settings on their hardware. I have mentioned this before, but people don't buy a new card two years later just to get more frames in the game they were playing when they first bought the system, unless it was already performing below their comfort level and eye-candy settings. It takes shifts in software to drive hardware sales. Knowing that, it's a useless predictor, as that video showed. It didn't get worse with better video cards, and it got better with newer games. Not just as a CPU for these games - it actually became a better CPU than its price competitor back in the day. That's in CPU-bottlenecked areas.

Even in a situation where people are buying new video cards for new games but still play the old games, the baseline performance is going to be that of the old card. So the worst-case scenario is that in the future it doesn't get better; what probably happens is that future game development keeps down this same path, and by the nature of the design the R7 has the core advantage to lengthen its usefulness. Best case (and highly unlikely) is that, due to the popularity of and relationship with the consoles (especially if the Xbox gets an 8C Zen/Vega chip), people write their code for Ryzen and its CCXs, and it pulls within 5% of the 7700K for now and takes the lead as MT usage grows.
 

french toast

Senior member
Feb 22, 2017
988
825
136
Do we even have a genuine confirmation as to which process AMD is using? I know we've been assuming 14nm LPP for over a year, but I don't think it has ever been officially confirmed by either AMD or anyone else.

With both 14nm LPP fabs being in the U.S., "Diffused in U.S.A." is not helpful.

I would be curious to learn about any differences discovered between the "Made in China" and "Made in Malaysia" Ryzen CPUs. My 1700X is a Malaysian part. Back in the day, you always wanted the "Malay" CPUs for optimal sub-ambient overclocking... oh, how I miss seeing -40C on my CPU core temp. Ran that way for probably two years - with a ~100% overclock. Just a Celeron... but, still :p
Well, it's not LPE and it's not LPU, so that leaves LPP and LPC.
I'm certain it's LPP.
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
Intel never required "the whole world to change" in order to use their products properly. Good thing we have Intel always looking out for our interests.

Hyper-threading, MCM dual core, MCM quad core, MMX, IA64 (failed), RDRAM (failed), socket after socket, 0.5mm mounting hole changes every generation, Broadwell L4, ... the list is endless.