ZGR
Platinum Member
The old AMD marketing department would love you!
I'm glad this game supports multithreading at least.
And when the "3080 Ti" is released, it might perform at 4K or 1440p like the 2080 Ti performs at 1080p, so there is something there...
with AMD CPUs you pay $300 for the 1600, then $300 more for the 2600, then $350 more for Zen 2
Yeah, every Intel customer who bought the i5 7500 just before AMD introduced this Ryzen 1600 milking scheme must be swimming in joy right now. /s
Granted, I'm no expert on GPU drivers or graphics pipelines, but I didn't think draw calls or driver overhead had anything to do with resolution. Are you sure about that?
Simply lowering resolution to remove the GPU bottleneck is going to skew results away from driver overhead/draw calls and towards other parts of game logic.
Imagine what a Ryzen 1600 looks like with a 2080 Ti! It will be CPU limited at 1440p.
People complained about the 8700K price, but you only pay once. With AMD CPUs you pay $300 for the 1600, then $300 more for the 2600, then $350 more for Zen 2, just to get the performance of an 8700K at 5.0 GHz when you upgrade your GPU at the high end.
Do you think a 2600 will push a 3080 Ti that's 35% faster next year at 1440p the way an 8700K can? I think there is no way. And how about the 1600X? Not a chance.
1080p scores DO MATTER! They tell us how your CPU will perform when you upgrade your GPU.
Even 1440p is starting to matter more now that we have the 2080 Ti.
I hope we finally get some CPU reviews that take a good look at how the 1600 and 2600 compare to the 8700K with a 2080 Ti, because next year's 7nm RTX 3070 is this year's 2080 Ti.
https://www.computerbase.de/2018-09...e-rtx-2080-ti/#diagramm-performancerating-fps
The 8700K is 30% faster at 1080p. That's the biggest difference so far.
Prices aside (as mentioned above), that's actually a good point and why I went with an 8700K over an 8600K or Ryzen.
To put it into perspective, the 8700K price is about 1/3 that of a high-end GPU like a 2080 Ti, so in the grand scheme of things it doesn't change the overall cost of a high-end gaming system much to invest in a better CPU (rough numbers sketched below). While it may be 'overkill' in today's games, an 8700K will outlast a first- or second-gen Ryzen chip in terms of gaming usefulness, so you can most likely squeeze out an extra generation or two of GPU upgrades before the CPU becomes the bottleneck.
This was my point, thanks.
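A quick sanity check on that 1/3 figure (my own rough street prices from memory, plus an assumed $800 for the rest of the build, purely illustrative):

```python
# Rough build-cost arithmetic; all prices are assumptions, not quotes.
cpu = 370    # i7-8700K, approx. USD at the time
gpu = 1200   # RTX 2080 Ti, approx. USD
rest = 800   # board, RAM, storage, PSU, case (assumed figure)

total = cpu + gpu + rest
print(f"CPU share of the whole build: {cpu / total:.0%}")  # ~16%
```

Even doubling the CPU budget only moves the total a few hundred dollars on a roughly $2,400 build.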
Driver overhead almost always refers to draw calls, and they're independent of resolution. What affects draw calls is the position of the camera. In other words, changing the rendered scene will change the number of draw calls being issued.
That being said, different aspect ratios could indeed change the number of draw calls being issued, due to frustum culling (culling objects outside of the camera's field of view). A 21:9 resolution could render more objects than a 16:9 resolution, and a 16:9 resolution more than 5:4.
What can also affect the number of objects being rendered, is the actual camera FOV itself being changed. Smaller FOV = smaller vertical & horizontal length of the camera = smaller scene being rendered.
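A toy sketch of that idea (nothing from a real engine; names and numbers are made up): the visibility test depends only on camera position, direction, and FOV, so render resolution never enters it.

```python
import math

def visible_draw_calls(objects, cam_pos, cam_dir_deg, fov_deg):
    """Count objects inside a simple 2D view frustum (pure angle test)."""
    half_fov = fov_deg / 2
    count = 0
    for x, y in objects:
        angle = math.degrees(math.atan2(y - cam_pos[1], x - cam_pos[0]))
        delta = (angle - cam_dir_deg + 180) % 360 - 180  # wrap to [-180, 180)
        if abs(delta) <= half_fov:
            count += 1  # one draw call per visible object in this toy model
    return count

# Ring of 100 objects around the camera.
scene = [(10 * math.cos(2 * math.pi * i / 100),
          10 * math.sin(2 * math.pi * i / 100)) for i in range(100)]

# A wider FOV (or wider aspect ratio) sees more objects -> more draw calls...
print(visible_draw_calls(scene, (0, 0), 0, 90))   # 25
print(visible_draw_calls(scene, (0, 0), 0, 120))  # 33
# ...while 1080p vs 4K never appears anywhere in the calculation.
```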
Ryzen 2 release test:
Intel: Average 98 FPS, Percentile 71 FPS
AMD: Average 90 FPS, Percentile 65 FPS

This test:
Intel: Average 93 FPS, Percentile 67 FPS
AMD: Average 78 FPS, Percentile 45 FPS

What is the cause of this?
So given a hypothetical infinitely-powerful GPU that can always hit some arbitrary framerate at any resolution with any quality setting, would you expect CPU usage to change at different resolutions, given the same FoV and camera angle?
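For what it's worth, here is the crude model behind that thought experiment (my own simplification, with made-up numbers): per-frame CPU cost (game logic, draw calls) is resolution-independent, per-frame GPU cost scales with pixel count, and delivered FPS follows the slower of the two.

```python
# Crude frame-time model; every number below is invented for illustration.
def fps(cpu_ms, gpu_ms_per_mpix, megapixels):
    # CPU cost is flat; GPU cost scales with rendered pixels.
    frame_ms = max(cpu_ms, gpu_ms_per_mpix * megapixels)
    return 1000 / frame_ms

for res, mpix in [("1080p", 2.07), ("1440p", 3.69), ("4K", 8.29)]:
    print(res, round(fps(cpu_ms=8.0, gpu_ms_per_mpix=2.5, megapixels=mpix)))
# 1080p 125  <- CPU-bound (8 ms of CPU work vs ~5.2 ms of GPU work)
# 1440p 108  <- GPU-bound (~9.2 ms)
# 4K     48  <- GPU-bound (~20.7 ms)
```

Set gpu_ms_per_mpix to 0 (the "infinitely powerful GPU") and every resolution returns the same 125 FPS, fully determined by the CPU.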
Possibly the reviewer was running a poorly configured 2700X system, with DDR4-2133 or similar.
I mean, I don't really know what you're getting at here. Testing a CPU at some resolution I will not use does not help me. I will still want to know what differences there will be at my chosen resolution. Maybe there will be none at all.
Sure it does help you (maybe not you, but people in general): if one CPU is x% faster, then at the same FPS (due to resolution, a weaker GPU, or any other reason) the faster CPU will have that much more idle capacity available for background tasks or streaming or whatever, which seems to be a big deal on this forum.
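A quick illustration of that headroom argument (all numbers invented): two CPUs held to the same FPS by the GPU, where the faster one simply spends less of each second on the game.

```python
# Invented numbers: both CPUs are held to 100 FPS by the GPU (or a cap).
fps_cap = 100  # frames/s actually delivered
cpu_max = {"faster CPU": 150, "slower CPU": 120}  # uncapped frames/s each could prepare

for name, peak in cpu_max.items():
    busy = fps_cap / peak  # fraction of each second spent preparing frames
    print(f"{name}: {busy:.0%} busy, {1 - busy:.0%} free for streaming etc.")
# faster CPU: 67% busy, 33% free
# slower CPU: 83% busy, 17% free
```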
The RAM speeds aren't in there at all. What is troublesome is seeing what has been shown many times as a 12-14 percent deficit all of a sudden become 30, with limited information on the test beds. It seems more like clickbait than anything to me. Intel currently enjoys a 7-ish percent advantage in IPC, a small disadvantage in SMT, and a large clock speed advantage. To sum it up, this 30% is an outlier, by quite a bit, with too little information to dig into it.
Yup, ~7% IPC and about 20% (if not more) clock speed at all-core turbo, but 30% is an outlier...
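Compounding those two rough figures shows why (my arithmetic on the numbers quoted above, nothing more):

```python
ipc = 1.07     # ~7% IPC advantage
clocks = 1.20  # ~20% all-core turbo clock advantage
print(f"best-case compound advantage: {ipc * clocks - 1:.0%}")  # ~28%
```

Even with perfect scaling, ~7% IPC times ~20% clocks only compounds to about 28%, so a 30% average gap suggests something else on the test bed, like slow memory, is in play.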
At lower res and high FPS, the benchmark is generally dictated by the memory subsystem, as can also be seen from cache differences.
As the years go by, the benchmark becomes bound by throughput.
The idea of evaluating GPU longevity by testing at lower res has been debunked. It's not that simple.
Got a recent link to those claims?
Something that has an 8700K vs a 1700X vs a 2700X with a 2080 Ti and a 2080 at 1440p? No need to use 1080p, really.
My guess is the 8700K will still be pulling away from the AMD CPUs, and next year the next-gen GPUs will look even worse for AMD, maybe even at 4K.
See the pattern?
1080p looked bad for the 1700X vs the 8700K last year with the 1080 Ti, 1440p looks bad for the 2700X vs the 8700K this year with the RTX 2080, and maybe next year with the RTX 3080 the 2700X will look even worse vs the 8700K at 5.0 GHz at 4K.
That's the way it's looking.
The 9900K and 9700K will pull away even more in games that use many threads.