Official AMD Ryzen Benchmarks, Reviews, Prices, and Discussion


DrMrLordX

Lifer
Apr 27, 2000
21,627
10,841
136
Wait wait wait, if I put a 3200 stick in a Ryzen rig, memory runs at 1600 MHz. Does the memory controller/fabric run at 800 MHz, then?

Um

First off, no:

https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/page-11#post-38776985

Secondly, DDR4 runs at much lower clockspeeds than you would imagine:

https://en.wikipedia.org/wiki/DDR_SDRAM

https://en.wikipedia.org/wiki/DDR2_SDRAM#Chips_and_modules

https://en.wikipedia.org/wiki/DDR3_SDRAM#Modules

https://en.wikipedia.org/wiki/DDR4_SDRAM#JEDEC_standard_DDR4_module.5B55.5D.5B56.5D

DDR = 2x IC clockspeed
DDR2 = 4x IC clockspeed
DDR3/4 = 8x IC clockspeed

That's why we keep getting bandwidth improvements without major improvements to latency. If we had DDR4 IC clockspeeds at 1/2 their rating, latency would be a lot better.
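
To put numbers on those multipliers: the module's data rate is the prefetch factor times the internal memory-array (IC) clock, while the I/O bus always runs at half the data rate. Here is a minimal sketch of that relationship (my own illustration using the nominal JEDEC-style ratios, not taken from the linked articles; the function name and example values are just for demonstration):

Code:
# Rough sketch of the data-rate / clock relationship described above.
# data rate (MT/s) = prefetch factor x internal memory-array (IC) clock,
# and the I/O bus clock is half the data rate (it is double data rate).
PREFETCH = {"DDR": 2, "DDR2": 4, "DDR3": 8, "DDR4": 8}

def ddr_clocks(generation, data_rate_mt_s):
    """Return (I/O bus clock MHz, internal array clock MHz) for a rated module."""
    io_clock = data_rate_mt_s / 2                         # DDR4-3200 -> 1600 MHz bus
    array_clock = data_rate_mt_s / PREFETCH[generation]   # DDR4-3200 -> 400 MHz array
    return io_clock, array_clock

print(ddr_clocks("DDR4", 3200))  # (1600.0, 400.0)
print(ddr_clocks("DDR", 400))    # (200.0, 200.0)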
 

Dygaza

Member
Oct 16, 2015
176
34
101
Games generally do not do such stuff, since they are not exactly parallel workloads.

At least Ashes of the Singularity does. Run it on a 4C/8T CPU and the game runs with 8 threads. Run it on a 12C/24T Xeon and the game runs with 16 threads.
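
For illustration, that scaling pattern is what you would see from an engine that sizes its worker pool from the available hardware threads but caps it. This is purely a hypothetical sketch, not Ashes of the Singularity's actual code; the 16-worker cap is an assumption inferred from the numbers above:

Code:
# Hypothetical sketch only -- not Ashes of the Singularity's actual code.
# An engine that sizes its worker pool from hardware threads with a cap of 16
# would reproduce the behaviour above: 8 workers on a 4C/8T CPU, 16 on a 12C/24T Xeon.
import os

MAX_WORKERS = 16  # assumed engine-side cap

def worker_count():
    hw_threads = os.cpu_count() or 1
    return min(hw_threads, MAX_WORKERS)

print(worker_count())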
 

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
First off, no:
Yeah, that's what I thought, because DDR4 clocks are a confusing mess.
Secondly, DDR4 runs at much lower clockspeeds than you would imagine:
Well, the I/O bus runs at the clocks in question; I guess I really did mean the bus clock.
That's why we keep getting bandwidth improvements without major improvements to latency. If we had DDR4 IC clockspeeds at 1/2 their rating, latency would be a lot better.
And here I was thinking that latency is approaching its physical limit; I mean, how fast can a signal travel across those meters of wiring?
At least Ashes of the Singularity does. Run it on a 4C/8T CPU and the game runs with 8 threads. Run it on a 12C/24T Xeon and the game runs with 16 threads.
Well, it does have some parallel stuff in it; that's why it is a benchmark, not a game :p
 

looncraz

Senior member
Sep 12, 2011
722
1,651
136
I am interested in AMD Raven Ridge, but is AMD really increasing the size of the integrated GPU that much? So far, AMD has gone from 400 to 512 shaders.

Code:
| Processor              | CPU cores                 | GPU shaders             | GFLOPS | Memory      |
|------------------------|---------------------------|-------------------------|--------|-------------|
| A8-3870K Llano         | 4 10h 3.0 GHz             | 400 TeraScale 2 600 MHz |    480 | 2 DDR3-1866 |
| A10-5800K Trinity      | 2 Piledriver 3.8-4.2 GHz  | 384 TeraScale 3 800 MHz |    614 | 2 DDR3-1866 |
| A10-6800K Richland     | 2 Piledriver 4.1-4.4 GHz  | 384 TeraScale 3 844 MHz |    648 | 2 DDR3-2133 |
| A10-7890K Kaveri       | 2 Steamroller 4.1-4.3 GHz | 512 GCN 2 866 MHz       |    887 | 2 DDR3-2133 |
| A12-9800 Bristol Ridge | 2 Excavator 3.8-4.2 GHz   | 512 GCN 3 1108 MHz      |   1135 | 2 DDR4-2400 |

It would be unwise for AMD to use a lot of CUs/SPs in an APU... they're already fully limited by memory bandwidth - DDR4 will help, but is not a magic bullet.

Vega has 20%+ higher IPC than the GCN used in Carrizo and has even higher memory bandwidth requirements and higher frequencies.

That means it makes more sense to use fewer CUs - perhaps only four (256 SPs) and include the high-bandwidth cache (or just a very large L2). I'm sure they could fit that into 44mm^2. Running at 1.3Ghz with 20%+ better performance per SP would be akin to having ~640SPs on the old APU, but the additional bandwidth afforded will result in higher net performance than 640SPs of GCN power.
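
For reference, the GFLOPS column in the quoted table is just shaders x 2 FLOPs per clock (FMA) x clock in GHz. A quick sketch of that formula, with the hypothetical 4-CU (256 SP) Vega case from the paragraph above added as an assumption rather than a real product:

Code:
# Peak single-precision GFLOPS = shaders x 2 FLOPs/cycle (FMA) x clock in GHz.
# The first two lines reproduce rows of the quoted table; the last is the
# hypothetical 256-SP Vega at 1.3 GHz discussed above, not an announced part.
def peak_gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz

print(peak_gflops(400, 0.600))  # ~480  (A8-3870K Llano)
print(peak_gflops(512, 1.108))  # ~1135 (A12-9800 Bristol Ridge)
print(peak_gflops(256, 1.300))  # ~666  (hypothetical 4-CU Vega)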
 
  • Like
Reactions: Drazick

unseenmorbidity

Golden Member
Nov 27, 2016
1,395
967
96
That word does not mean what you think it means. Duplicating results would be literally validating them. You are just claiming that nobody has validated his results.

I meant that you would have to duplicate the parameters of his experiment to validate or invalidate them. Nice logical fallacy troll jab...

4k is as relevant in CPU review as 480p in modern GPU review. Would you read a GPU review that tested games at 480p in 2017? Answer honestly.
Nonsense argument... You didn't even address my point about the relevance of 720p testing.

I never said that it was a good CPU benchmark test for games. I said that it is relevant to someone playing 4k games. Notice he included 1080p as well, so there is no justifiable reason to complain that he also included 4k. You want to dismiss him as a valid source, so you can dismiss his results.

It's not the same thing, because games are almost always GPU bound. Also, that is a totally unrealistic setting.

Just like 4k test is a GPU benchmark that tells you nothing about current and future gaming performance. For all you could conclude from it is that in 2-3 years on this CPU the game will stutter like hell with 100+ fps. And who will you have to blame? Yourself for getting the wrong CPU, like all those folks that bought i5s for their high end gaming rigs when BattleField 1 came out.
What? Again, it's relevant to people that play at 4k....

It's odd you understand the limitations of the 4C i5, yet don't see the merit in the additional cores from the R7.


In a single moment, and at this point we can't even be sure if he ran 7700k at 5Ghz at all. For all we know he ran it at 4Ghz.
That's conspiratorial... Any proof of this? No, I didn't think so.

Again, others have similar results... Are they all conspiring together?

You know what is the actual issue with all those reviews, however? Only 1 or 2 of them even bothered to test games that may actually be CPU limited on reasonable hardware configs. Guess what, Ryzen got rolled in those. You could argue all day if results it got were bad (mostly not), but it got rolled in those few, straightforward as that.

That's nonsense... All the reviewers got wildly different results. The bad results that you want to cherry pick, clutch in your fist, and shake in everyone's face are the least reliable due to the extreme lack of stability of the platform right now.

What game do you want tested? Starcraft2 & World of Warcraft?
 
Last edited:

Agent-47

Senior member
Jan 17, 2017
290
249
76
4k is as relevant in CPU review as 480p in modern GPU review. Would you read a GPU review that tested games at 480p in 2017? Answer honestly.

No, I would not read a GPU review at 480p. But I also will not read a 720p/1080p benchmark of a CPU paired with a high-end GPU, with a combined cost of 1000 dollars.

Answer honestly: why is it relevant when I will never game at less than 1440p on a 1000 dollar CPU+GPU?

If you say that a lower score at 720p means the CPU will hit a bottleneck at 4K before the 7700K in the future, well, you do not know that at all, given that games are becoming increasingly multithreaded and the 7700K is already hitting 100% usage at 1080p, sometimes even at 1440p in BF1 multiplayer mode.

Stop banging on about it like a broken radio, please :)
 
Last edited:

hotstocks

Member
Jun 20, 2008
81
26
91
A lot of you are not realistic. A TON of people game at 1080p, like almost everyone; look at the Steam stats. For one, if you don't have a lot of money and want a big monitor, a lot of people go with a 32" 1080p HDTV, and there are billions of those out there. Gaming is done on those monitors/TVs regardless of CPU and GPU. I have a 4.7 GHz i5 that has seen 4 graphics cards and am now on an Nvidia 1080 on the same 1080p monitor. And unless you are talking about competitive gaming, most people are gaming at 60 Hz or 60 fps. So when I see a Ryzen/1080 review at 1080p, it is VERY relevant to me and most gamers who want all the eye candy from their GPU at 1080p.

Now, my i5 beats Ryzen in most games at average and max fps, but Ryzen usually wins in MINIMUM fps, which is more important, because it doesn't matter if you are over 60 fps, but dips to 37 fps suck. In a few games both Ryzen and an overclocked 7700K still dip below 60 fps at 1080p with a 1080, BELIEVE IT OR NOT. So neither CPU is "good" enough for 60-fps-or-above gaming with NEVER going below 60 fps. I think in the near future Ryzen may be able to maintain over 60 fps in every game once the kinks are worked out and mobos/Windows/games use all cores. I think the next Intel chip out will also be able to do it, whether 4 or 8 cores with HT.

So right now I have a hard decision, because to me Ryzen would be a sidegrade for gaming and of course an upgrade for media/server stuff, which I never do. But I do leave browsers, chat apps, antivirus, etc. running while I game, so I am thinking a Ryzen overclocked to 3.9 GHz still may end up being better for me right now, and I might build a system. But I want 32 GB of RAM, and I want it at 3200 MHz, low latency, and working. I'm not spending $350 on a DDR4 kit and then having problems with mobos. So I will wait till AMD and Asus get their shit together and everyone can reproduce the same results on multiple stable mobos and RAM kits, as well as a Windows fix for the scheduler, and then I will build it.

Anyone building now (and believe me, I have been there many times) is going to be frustrated and have to deal with beta testing every component and Windows for AMD. AMD dropped the f#cking ball, because they should have delayed Ryzen a month and given mobo and RAM vendors as well as Microsoft time to get their shit working right. But no, they were too scared of Intel's consumer 6 core coming out soon, and they sure should be with what is going on in this shitshow. If Ryzen doesn't get all this fixed within a month, no one is gonna care, as they will wait another month for Intel and not have ANY problems for the same price.
 

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
I meant that you would have to duplicate the parameters of his experiment to validate or invalidate them. Nice logical fallacy troll jab...
Then use proper language next time.
Nonsense argument... You didn't even address my point about the relevance of 720p testing.
No, I addressed it perfectly fine: 720p testing of CPUs is like GPU testing at 4k. You remove the other parameters from the equation and leave only the relevant parameters to test.
I said that it is relevant to someone playing 4k games
Actually, no, it is about as relevant to someone playing 4k games as a 720p benchmark. In both cases you know that your performance at 4k at the time will depend on the GPU now and on the CPU 10 years later. But in one case you waste time, and in the other you do not. Now, if your implication is that reviewers are to play babysitters and spoonfeed viewers information, not relying on their cognitive abilities to draw conclusions from the numbers they are seeing... well, I concede, you are right; most viewers are like that, as can be seen by the existence of this debate.
Notice he included 1080p as well
I did, but the effect of seeing a 4k benchmark in a CPU review was enough for me to equate that channel with LTT.
It's odd you understand the limitations of the 4C i5, yet don't see the merit in the additional cores from the R7.
Because there is no merit in extra cores when careless usage of them can lead to a performance penalty. Didn't we already establish that the R7 is best treated as a dual quad-core, not an eight-core?
That's conspiratorial... Any proof of this? No, I didn't think so.
The Aussies' word against the word of a man who compared DX11 against DX12 after the same exact mistake was pointed out to him. Frankly, I will wait for a third party to chime in, because so far it is "pick who you believe more", and we both have bias, as much as we like to pretend otherwise.
Again, others have similar results... Are they all conspiring together?
Show me other reviewers who ran 720p low settings in games with a built-in benchmark and got similar results. A single one will do.
All the reviewers got wildly different results.
Did they? Guru3D, HWC and GamersNexus certainly mostly got matching results. So did PCGN.de. [H]ardOCP did too, but their gaming results were the most exaggerated, since Kyle was the only one to run canned gaming benchmarks the right way. Computerbase.de is slightly different, but I did not manage to compare the methodology thoroughly. The only clear outliers I have seen were benches that were too obviously GPU bottlenecked, like that PC Gamer review where all the Intel CPUs posted the same FPS down to the single digit, the high-res benches in Guru3D's and GN's tests, and of course that video.
What game do you want tested? Starcraft2 & World of Warcraft?
Why, yes. You want real-world testing; I, and every person who cares about CPU performance, want CPU testing. The only valid solution for both is to end up using games that are played in a manner where CPU bottlenecks will happen by design, not by deliberate setup.

Answer honestly: why is it relevant when I will never game at less than 1440p on a 1000 dollar CPU+GPU?
Do you upgrade CPU and GPU at the same time? Do you only play at the highest settings possible? Answer these two, apply a Peirce's arrow (NOR), and you will have an answer relevant to you.

well, you do not know that at all, given that games are becoming increasingly multithreaded and the 7700K is already hitting 100% usage at 1080p, sometimes even at 1440p in BF1 multiplayer mode.
I do not even compare it to the 7700K most of the time. I compare it to a similarly clocked Broadwell-E, which, if we are to believe CB.de, is already beating the 7700K in modern games. Ryzen is damn far behind it. Besides, Ryzen hits 100% usage on one of its cores in BF1 multiplayer as well on a regular basis. And each time it does, that's a CPU bottleneck in action.

Stop banging on about it like a broken radio, please :)
What else is there to discuss? That Ryzen's AVX design choice may backfire in 2 years' time for everyone who got it for media production purposes? Intel would not even need to cheat with compilers, benchmarks or OEMs this time. Just submitting the right patches to the right software would make their CPUs as old as 2014's a better offer for productivity to people who are not constrained by upfront cost.

Trolling is not allowed
You are also treating people with NO respect and talking down to them, not allowed.
Markfw
Anandtech Moderator
 
Last edited by a moderator:
  • Like
Reactions: CHADBOGA

Trender

Junior Member
Mar 4, 2017
23
1
16
A lot of you are not realistic. A TON of people game at 1080p, like almost everyone; look at the Steam stats. [...] If Ryzen doesn't get all this fixed within a month, no one is gonna care, as they will wait another month for Intel and not have ANY problems for the same price.
So is that real? I mean if Intel is going to release a 6 Core for the 7700K price I can wait til August or so
 

sirmo

Golden Member
Oct 10, 2011
1,012
384
136
It's hypocritical to say "1080p on low settings" is an end-all-be-all CPU gaming test without also acknowledging the advantage when it comes to low frames and being more future-proof by having more cores. Because that's what you're simulating: how future-proof the CPU is. How can you only look at draw call rate and not consider the core advantage?

These aren't some esoteric concepts. These things are happening. Games are already multithreaded, and Ryzen consistently shows fewer dips in frames across a variety of games.

To say 1080p on low settings is THE test is myopic at best. Sure, if you're a CS:GO pro player on a 240 Hz monitor, then yeah, get the 7700K, but for most other uses the R7 1700 has a lot going for it.
 
Last edited:

Edgemeal

Senior member
Dec 8, 2007
211
57
101
Ryzen is looking very impressive! I may just jump ship again!

Not sure this has been posted, but thought it was interesting,
http://www.pcworld.com/article/3172...-preview-ryzen-7-outperforms-intels-best.html

“AMD actually met with the MAXON developers at the end of last year. One of the changes to CineBench 15.038 was a result of this meeting.”
Maxon declined to say what exactly the changes were with the new version, but the spokeswoman did say performance is comparable.

Interestingly, demonstrations by AMD used a slightly older version of CineBench, which would not contain the fixes in the program.

Hmmn, I'm seeing 9~12 point differences between v15.037 and .038 on i7-3770 @ 4GHz. <shrug>

https://forums.anandtech.com/threads/cinebench-r15-benchmark-thread.2345574/page-11#post-38775737
 

Agent-47

Senior member
Jan 17, 2017
290
249
76
1. Do you upgrade CPU and GPU at the same time? Do you only play at the highest settings possible? Answer these two, apply a Peirce's arrow (NOR), and you will have an answer relevant to you.


2. I do not even compare it to the 7700K most of the time. I compare it to a similarly clocked Broadwell-E, which, if we are to believe CB.de, is already beating the 7700K in modern games. Ryzen is damn far behind it. Besides, Ryzen hits 100% usage on one of its cores in BF1 multiplayer as well on a regular basis. And each time it does, that's a CPU bottleneck in action.


3. What else is there to discuss? That Ryzen's AVX design choice may backfire in 2 years' time for everyone who got it for media production purposes? Intel would not even need to cheat with compilers, benchmarks or OEMs this time. Just submitting the right patches to the right software would make their CPUs as old as 2014's a better offer for productivity to people who are not constrained by upfront cost.

1. Wow! You totally avoided my questions. But just to entertain you: I update the GPU more often, and Ryzen with its 8 cores is a better deal. And I do play at the highest settings on an FX-6300 + 470 at 1080p.

2. Did you know that AMD requested reviewers to turn off some settings on Intel, which the reviewers did not honor, understandably so. One of those settings enables a full-core turbo of 4 GHz on Intel. So no, you did not compare a similarly clocked BWE. A Ryzen 1800X at stock is 5.5% behind a stock BWE with all-core turbo to 4 GHz and SMT on in BF1, according to GN. Guess what the difference in clock is: 0.3/4.0 x 100 = 7.5%.

3. AVX? Now we are jumping boats? But AVX1 is yet to be widely adopted; I'm sure that by the time AVX2/3 gain traction in the mid-2020s, AMD and we consumers will have moved on.

EDIT: corrected the BF1 data, which I misread. Conclusion still the same.
 
Last edited:

Bouowmx

Golden Member
Nov 13, 2016
1,138
550
146
So is that real? I mean if Intel is going to release a 6 Core for the 7700K price I can wait til August or so
There is Intel Skylake-X (6-10 (12?) cores) for socket R4, coming Q3, and Coffee Lake (max. 6 cores) for socket H(4?), coming H2, probably Q4. No confirmation of pricing yet.
See also:
https://forums.anandtech.com/threads/intel-skylake-kaby-lake-thread.2428363/page-371#post-38770215 (not exactly the charts, but that post is the first closest to AMD Ryzen's launch)
https://forums.anandtech.com/threads/coffelake-thread-rumors-and-specs.2500407/
 

Spartak

Senior member
Jul 4, 2015
353
266
136
A 35W 4C/8T + 11 CU mobile engineering sample (12 CU design, but one CU disabled) has 3.0/3.3 GHz core clocks on the CPU. Polaris 11 (full, 1024 GCN cores) uses 18W of power at 907 MHz (just for the GPU). AdoredTV has tested this with a Polaris 11 (14 CU design) downclocked to 850 MHz, and the numbers are pretty much in line.

Is this a known engineering sample? What's the source?

Does this mean the CPU part of the APU will use only ~17W, or does it mean the CPU may use more and the GPU will throttle when the CPU runs at 100%?

Edit: wait, you deduce from Polaris 11 that the GPU part will use about 11-12W, IIUC?
 
Last edited:

Trender

Junior Member
Mar 4, 2017
23
1
16
There is Intel Skylake-X (6-10 (12?) cores) for socket R4, coming Q3, and Coffee Lake (max. 6 cores) for socket H(4?), coming H2, probably Q4. No confirmation of pricing yet.
See also:
https://forums.anandtech.com/threads/intel-skylake-kaby-lake-thread.2428363/page-371#post-38770215 (not exactly the charts, but that post is the first closest to AMD Ryzen's launch)
https://forums.anandtech.com/threads/coffelake-thread-rumors-and-specs.2500407/
lol, I've been waiting like 6 years to change the CPU, and now that I've saved up (I'm studying) I'm not going to buy the 7700K only to get 6 cores for the same price in 3 months, or get stuck with the same performance (no improvement from the 8 cores of Ryzen). I think I'll just wait till summer :/
 

imported_jjj

Senior member
Feb 14, 2009
660
430
136
It would be unwise for AMD to use a lot of CUs/SPs in an APU... they're already fully limited by memory bandwidth - DDR4 will help, but is not a magic bullet.

Vega has 20%+ higher IPC than the GCN used in Carrizo and has even higher memory bandwidth requirements and higher frequencies.

That means it makes more sense to use fewer CUs - perhaps only four (256 SPs) and include the high-bandwidth cache (or just a very large L2). I'm sure they could fit that into 44mm^2. Running at 1.3Ghz with 20%+ better performance per SP would be akin to having ~640SPs on the old APU, but the additional bandwidth afforded will result in higher net performance than 640SPs of GCN power.

Don't you start too lol, the HBM APU is absurd.
Some folks want a pink elephant to keep in their bathroom and refuse to accept that it won't work.

Take an 11 CU APU pushing towards 2 TFLOPS at X TDP vs a 16 CU APU + HBM at the same X TDP.
The memory BW is not that limiting, especially if Vega saves on it, so in practice you go wider but at lower clocks, and maybe you gain some perf from the memory, but overall the gain is relatively small. I do expect that you won't agree on how limiting the BW is.
The comparison gets worse from a perf perspective if both are 16 CU.
From a cost perspective, you more than double the manufacturing costs and you have very high dev costs.

For an APU with a large GPU, the memory BW is the barrier, so you would think a monolithic APU with HBM would work, but only if you don't consider alternatives.
They could just use a traditional MCP to pair an existing CPU die and a Vega 11 with minimal costs. There is higher latency between CPU and GPU, but they compete with discrete cards, so the latency here is still much lower.
It would have a TDP penalty with both CPU and GPU in the same package, but there are form factor and design cost advantages for OEMs.

In high-end GPUs it works from the classical perspective of perf, power and cost, but to use it in any other area you need a good reason.
The chiplets angle for lower costs and increased flexibility works if you have cheap enough packaging and preferably relatively affordable memory (the OS being able to take advantage of it and save on DRAM costs would help too). A Si interposer and HBM are not that, and scale doesn't help much.

Aside from that, AMD can't afford to do different implementations for high-end desktop and server: they can't do a native quad in desktop or a native 16-core in server. They can't do an HBM APU in server either.
They are not gonna invest in a consumer APU that makes no sense.

When they gain access to packaging solutions that make financial sense, they'll use them to pair CPUs, GPUs, memory and so on - the chiplets angle.
 

unseenmorbidity

Golden Member
Nov 27, 2016
1,395
967
96
A lot of you are not realistic. A TON of people game at 1080p, like almost everyone; look at the Steam stats. [...] So right now I have a hard decision, because to me Ryzen would be a sidegrade for gaming and of course an upgrade for media/server stuff, which I never do.
I agree. 1080p is a good resolution to test.

If you have an old i5, then Ryzen will not be a sidegrade.

Old i5s aren't particularly good at pushing high fps in new titles. You need a new i5 to have a chance, and even that obviously won't last.
 

Glo.

Diamond Member
Apr 25, 2015
5,707
4,551
136
Is this a known engineering sample? What's the source?

Does this mean the CPU part of the APU will use only ~17W, or does it mean the CPU may use more and the GPU will throttle when the CPU runs at 100%?

Edit: wait, you deduce from Polaris 11 that the GPU part will use about 11-12W, IIUC?
No. It appears you are not up to speed with the information we already have. The Polaris 11 16 CU design in the Radeon Pro 460 from the MacBook Pro consumes around 18W of power under load (GPU only). It is clocked at 907 MHz, and the TDP, or rather the power gate, in the BIOS of the GPU is 35W. The rest is consumed by memory, made up of 4 memory chips on a 128-bit memory bus. AdoredTV has tested how a Polaris 11 with a 14 CU design behaves downclocked to 850 MHz, and it was around 15-18W of power under load, so there is a pattern here.

Raven Ridge APUs are not using Polaris GPUs, but Vega. Vega GPUs have been optimized to work at higher core clocks in the same thermal envelope compared to previous generations of GPUs.

And now we have a Raven Ridge mobile engineering sample with a 4C/8T design clocked at 3.0/3.3 GHz, and 12 CUs with 1 CU disabled, totalling 11 CUs, at a 35W TDP for the whole APU package. Canard PC reported on this engineering sample on Twitter some time ago already.
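
As a rough cross-check of those numbers, you can scale the quoted Polaris 11 figure down by CU count and clock and see how much of the 35W package TDP would be left for the CPU cores. This is only a back-of-envelope sketch under my own assumptions (roughly linear scaling with CU count and clock, and an 850 MHz iGPU clock borrowed from the downclocked test mentioned above):

Code:
# Back-of-envelope estimate, assuming GPU power scales roughly linearly with
# CU count and clock from the quoted Polaris 11 data point (16 CU at 907 MHz
# ~ 18 W, GPU only). The 850 MHz Raven Ridge iGPU clock is an assumption.
P11_WATTS, P11_CUS, P11_MHZ = 18.0, 16, 907   # Radeon Pro 460 reference point
RR_CUS, RR_MHZ, RR_TDP = 11, 850, 35.0        # Raven Ridge ES assumptions

gpu_estimate = P11_WATTS * (RR_CUS / P11_CUS) * (RR_MHZ / P11_MHZ)
cpu_and_uncore = RR_TDP - gpu_estimate

print(round(gpu_estimate, 1))    # ~11.6 W for the 11 CU iGPU
print(round(cpu_and_uncore, 1))  # ~23.4 W left for CPU cores + uncore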
 
  • Like
Reactions: Drazick

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
It's hypocritical to say "1080p on low settings" is an end-all-be-all CPU gaming test without also acknowledging the advantage when it comes to low frames and being more future-proof by having more cores.
More cores do not make it more future-proof when it is one core that limits you. But you all have a point: testing purely what draw calls limit is not the brightest thing to do; letting the limitations of other parts of the engine chime in is very useful too.
1. Wow! You totally avoided my questions. But just to entertain you: I update the GPU more often, and Ryzen with its 8 cores is a better deal. And I do play at the highest settings on an FX-6300 + 470 at 1080p.
Perform the NOR and we land at: you do not care about CPU performance in games. The fact that you use an FX-6300 confirms it, and as such we come to the conclusion that Ryzen is indeed a better deal for you. Logic!
2. Did you know that AMD requested reviewers to turn off some settings on Intel, which the reviewers did not honor, understandably so. One of those settings enables a full-core turbo of 4 GHz on Intel. So no, you did not compare a similarly clocked BWE. A Ryzen 1800X at stock is 5.5% behind a stock BWE with all-core turbo to 4 GHz and SMT on in BF1, according to GN. Guess what the difference in clock is: 0.3/4.0 x 100 = 7.5%.
A full-core turbo of 4 GHz on what CPU? If you really want to know, Turbo Boost Max gives it 4 GHz on a single core, and MCE gives it 3.7 GHz on all cores, if you were talking about the 6900K. Not 4 GHz, as you seem to think. By the way, thanks for reminding me that the BF1 results in the GN review were really not that bad; /r/AMD had me thinking otherwise for some reason.
3. AVX? Now we are jumping boats? But AVX1 is yet to be widely adopted; I'm sure that by the time AVX2/3 gain traction in the mid-2020s, AMD and we consumers will have moved on.
It took Intel 2 months to devise a patch that by some dark sorcery adds AVX support to Blender, 2 months after seeing a Ryzen ES compete with the 6900K in a workload that did not use AVX at the time. I give it 2 years before Intel spends its dollars on getting all the big productivity software to use AVX actively. It won't affect me, since it is borderline impossible to make compilers use it in a meaningful sense. But rendering, photo editing, encoding, hell, even some rare games? It has already started going this way, and it would be more effective than any amount of bribing Intel would otherwise do.

And now we have a Raven Ridge mobile engineering sample with a 4C/8T design clocked at 3.0/3.3 GHz, and 12 CUs with 1 CU disabled, totalling 11 CUs, at a 35W TDP for the whole APU package. Canard PC reported on this engineering sample on Twitter some time ago already.
2M3001C3T4MF2_33/30_N, that one? It does not sound like it has any iGPU on it, but then again, AMD has ditched its logic for QS already. In fact, it sounds like a very early Ryzen quad-core sample, the one AMD Polaris on this forum would talk about.
 

Glo.

Diamond Member
Apr 25, 2015
5,707
4,551
136
2M3001C3T4MF2_33/30_N, that one? It does not sound like it has any iGPU on it, but then again, AMD has ditched its logic for QS already. In fact, it sounds like a very early Ryzen quad-core sample, the one AMD Polaris on this forum would talk about.
2M3001C3T4MF2_33/30_N with AMD 15DD iGPU

Full name.
 
  • Like
Reactions: Drazick

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
Full name.
Yeah, but there are a few confusing things about it:
1. I really doubt that this iGPU is Vega-based. The big-die Vega was barely finalized by the time we heard of this sample.
2. F2 revision? Seriously!?
 

hotstocks

Member
Jun 20, 2008
81
26
91
Did you notice I said a 4.7 GHz old i5? At that speed it keeps up with new games at 1080p more than fine, and beats Ryzen. I know that in future games that don't heavily favor one core, I may need an upgrade to more cores. I thought Ryzen would be it, but going from a 4.7 GHz i5 to a 3.9 GHz Ryzen is just a sidegrade at best for gaming NOW.
 

sirmo

Golden Member
Oct 10, 2011
1,012
384
136
Did you notice I said a 4.7 GHz old i5? At that speed it keeps up with new games at 1080p more than fine, and beats Ryzen. I know that in future games that don't heavily favor one core, I may need an upgrade to more cores. I thought Ryzen would be it, but going from a 4.7 GHz i5 to a 3.9 GHz Ryzen is just a sidegrade at best for gaming NOW.
Not when it comes to stutter and min-frames.
 
  • Like
Reactions: hotstocks

guachi

Senior member
Nov 16, 2010
761
415
136
For one, if you don't have a lot of money and want a big monitor, a lot of people go with a 32" 1080p HDTV, and there are billions of those out there.

If you don't have a lot of money you aren't gaming with an R7, 6900, 7700, or a 1080 GPU, are you? At that point the review is more for amusement than actual useful information. Your opinion on the usefulness of the review as a potential buyer is zero. You'd never buy the CPU no matter how it performed because "you don't have a lot of money".

I own a 4k monitor. I use my lowly CPU heavily but refuse to spend $1000 for an Intel equivalent. I want to know how the R7 handles 4k gaming. 1080 gaming is interesting, I suppose, but I've never, ever, ever gamed at 1920x1080 and have no desire in 2017 to ever do so in the future.

I have a 4.7 GHz i5 that has seen 4 graphics cards and am now on an Nvidia 1080 on the same 1080p monitor. And unless you are talking about competitive gaming, most people are gaming at 60 Hz or 60 fps. So when I see a Ryzen/1080 review at 1080p, it is VERY relevant to me and most gamers who want all the eye candy from their GPU at 1080p.

The claim "all the eye candy" and "1080/60Hz" monitor can't belong in the same sentence. If you are at 1080 or 60Hz you don't have "all the eye candy". Resolution and refresh rate are "eye candy" as well.

And we don't care what "most people" game on. Since the tests were of a high end GPU and CPU, what we care about are what monitors those people (or people who might buy these things in the future) game on.

Further, you can't claim that we should test at 1080/60 because that's what most people own while simultaneously not complaining about using a 1080 GPU for that testing. Almost no one owns a GPU that powerful. If we test at 1080/60Hz because "most people are gaming at 60 Hz" shouldn't we test with some mid-level GPU?

Also, four GPUs and you haven't upgraded your monitor? Why not skip a cycle and get a 144Hz 1920x1080 Gsync monitor or something and actually enjoy all that extra fps you paid for?