Official AMD Ryzen Benchmarks, Reviews, Prices, and Discussion


Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Minor regressions are characteristic of nV DX12 performance and are not anywhere near as dramatic as they used to be a few months ago, with losses in performance comparable to what Ryzen + nV posts presently.
Isn't that my point? I don't get what this is stating. Much like my next answer, this is about Nvidia and DX12; Ryzen just happens to be caught in the crossfire. It's about the performance jump AMD GPUs get in DX12 versus Nvidia's at-best-neutral performance.
Of course it did; the dim Scot forgot to enable Crossfire in the DX11 drivers, and as a result he compared a single 480 in DX11 against two 480s in DX12. So his comparisons for AMD bear no relevance. At all.
Like I said above, it's about the relative DX12 vs. DX11 performance between the GPU manufacturers. AMD's results show that we should be seeing a boost in DX12 performance regardless of CPU choice. What the video also shows is that DX12 is particularly bad on AMD CPUs with Nvidia cards. I don't want to call those two things facts, but they are observable. So while Nvidia could "optimize" their DX12 performance for AMD, the real issue is that their DX12 support, whether hardware or software related, is bad; if they did it properly, Intel CPUs would see a boost too, and they do not. I specifically said we can't use the end numbers (the ones with the 1800X within 6% of the 7700K) as proof of Ryzen performance. Whether or not CF is enabled means nothing compared to what the video shows about the effects of DX12 on the two GPU brands.

Dx12 is the future? To me it looks like the past, if anything. So no, it does not invalidate anything except making sure that Dx12 benches are presently irrelevant.
I like Vulkan, which I assume is what you are talking about. But promoting Vulkan ignores several things: 1. 90% of desktops use Windows. 2. Scorpio is rumored to use a Zen+Vega APU. 3. Only two companies have extensively used non-DX APIs: id and DICE. With DICE it was still mostly DX, with Mantle enhancements when detected, whereas id has always prioritized non-DX APIs.

The reason these matter: Scorpio, if it is Zen+Vega, basically becomes a competent desktop system, and the Xbox OS will still require a decent amount of DX for games. So while it might seem best to write for Vulkan specifically because the PS4 has sold so much, the PS4 is going to be so far behind Scorpio in performance, especially CPU power, that it would almost be easier to write to the metal/Vulkan for the PS4 and do branching development in DX12 for PC/Xbox. The PS4 will still be so IPC-starved that you have to write for its use case specifically. Add to that the poor history of non-MS APIs, and there is little reason to believe that PC gaming will be anything but DX-based. If that's the case, DX12 will be the primary API going forward. I'll add another point: DX12 shows a great increase in CPU performance on Ryzen when GPU drivers are working correctly, and this is an API that is at best CPU-agnostic, if not skewed towards Intel's CPU design. Vulkan is heavily spearheaded by AMD; while it is open and available to everyone, that implies it is even more balanced towards AMD's design process than DX12. Which means proper DX12 should be a glimpse into what Vulkan can accomplish if driver support is done correctly. I don't know whether the Nvidia Vulkan driver is as borked as the DX12 one, but DX12 would still give a better feel for the future than DX11 would.
 

sushukka

Member
Mar 17, 2017
52
39
61
I like Vulkan, which I assume is what you are talking about. But promoting Vulkan ignores several things: 1. 90% of desktops use Windows. 2. Scorpio is rumored to use a Zen+Vega APU. 3. Only two companies have extensively used non-DX APIs: id and DICE. With DICE it was still mostly DX, with Mantle enhancements when detected, whereas id has always prioritized non-DX APIs.

The reason these matter: Scorpio, if it is Zen+Vega, basically becomes a competent desktop system, and the Xbox OS will still require a decent amount of DX for games. So while it might seem best to write for Vulkan specifically because the PS4 has sold so much, the PS4 is going to be so far behind Scorpio in performance, especially CPU power, that it would almost be easier to write to the metal/Vulkan for the PS4 and do branching development in DX12 for PC/Xbox. The PS4 will still be so IPC-starved that you have to write for its use case specifically. Add to that the poor history of non-MS APIs, and there is little reason to believe that PC gaming will be anything but DX-based. If that's the case, DX12 will be the primary API going forward. I'll add another point: DX12 shows a great increase in CPU performance on Ryzen when GPU drivers are working correctly, and this is an API that is at best CPU-agnostic, if not skewed towards Intel's CPU design. Vulkan is heavily spearheaded by AMD; while it is open and available to everyone, that implies it is even more balanced towards AMD's design process than DX12. Which means proper DX12 should be a glimpse into what Vulkan can accomplish if driver support is done correctly. I don't know whether the Nvidia Vulkan driver is as borked as the DX12 one, but DX12 would still give a better feel for the future than DX11 would.

Hope the Vulkan API will flourish, as there would be one less reason to rely on Microsoft. Also, I'm not sure what you mean by "90% of desktops use Windows"; Vulkan runs on Windows just as it runs on many other OSes. Vulkan support is spreading all the time, and not only at the two default non-DX houses you mentioned. Here is a snippet from Wikipedia:
  • Source 2 – In March 2015, Valve Corporation announced the Source 2 engine, the successor engine to the original Source engine, would support Vulkan.
  • Serious Engine 4 – In February 2016, Croteam announced that they were supporting Vulkan in their Serious Engine.
  • Unreal Engine 4 – In February 2016, Epic Games announced Unreal Engine 4 support for Vulkan at Samsung's Galaxy S7 Unpacked event.
  • id Tech 6 – In May 2016, id Software announced Doom, running the id Tech 6 engine, would support Vulkan.
  • CryEngine – Crytek has plans to include Vulkan support in CryEngine.
  • Unity – The engine currently has a beta version supporting Vulkan; it is on track to be included in version 5.6 targeted for March 2017.
  • Xenko – Vulkan support was added in July 2016.
  • Intrinsic – A free Vulkan based cross-platform game engine published on GitHub.
  • Torque 3D – In April 2016, the developers community announced they will include Vulkan support.
To me it seems that Vulkan will be well supported in many upcoming games, since the most-used game engines are listed above.
 
Last edited:

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
Like I said above, it's about the relative DX12 vs. DX11 performance between the GPU manufacturers
Except that the video in question shows little of it, for the reason stated.
AMD's results show that we should be seeing a boost in DX12 performance regardless of CPU choice.
Quite wrong; AMD had its own performance regressions in a bunch of DX12 titles as well. AMD GPUs only really hit the jackpot with DX12 when a CPU bottleneck is certain and the developer does not mess up the implementation (i.e., had some major assistance from AMD).
Whether or not CF is enabled means nothing compared to what the video shows about the effects of DX12 on the two GPU brands.
Except that the lack of CF data means we can't even compare the effects of DX12 on two of the four GPU+CPU combos. For all I care, I could use the [H] data I have linked and claim that AMD GPU + Intel CPU performs worse in DX12 than in DX11, but by a much smaller margin than Nvidia GPU + AMD CPU. We can only speculate how that would work in his case.

Your book will not sell many copies at this rate.
It's not for sale anyways, because it costs more this way.
 

imported_jjj

Senior member
Feb 14, 2009
660
430
136
Data compiled from the Computerbase Ryzen review:
https://www.computerbase.de/2017-03.../#diagramm-battlefield-1-dx11-multiplayer-fps

BF1 720p, DX11 → DX12
6900K: 143.8 → 122.4 (down 14.9%)
1800X: 122.4 → 90.7 (down 25.9%)
7700K: 116.4 → 127.6 (up 9.6%)

Deus Ex 720p, DX11 → DX12
6900K: 106.7 → 83.0 (down 22.2%)
1800X: 80.5 → 63.6 (down 21%)
7700K: 87.1 → 83.6 (down 4%)

Rise of the Tomb Raider 720p, DX11 → DX12
6900K: 165.7 → 172.5 (up 4.1%)
1800X: 135.7 → 117.5 (down 13.4%)
7700K: 152.0 → 168.2 (up 10.65%)

Total War: Warhammer 720p, DX11 → DX12
6900K: 45.5 → 34.6 (down 24%)
1800X: 40.3 → 30.7 (down 23.8%)
7700K: 43.3 → 42.4 (down 2.1%)
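If anyone wants to sanity-check those percentages, a few lines of Python over the quoted averages reproduce them; this is just the arithmetic on the numbers above, nothing more:

```python
# DX11 -> DX12 averages (fps) quoted above, from the Computerbase 720p charts.
results = {
    "BF1":                     {"6900K": (143.8, 122.4), "1800X": (122.4, 90.7),  "7700K": (116.4, 127.6)},
    "Deus Ex":                 {"6900K": (106.7, 83.0),  "1800X": (80.5, 63.6),   "7700K": (87.1, 83.6)},
    "Rise of the Tomb Raider": {"6900K": (165.7, 172.5), "1800X": (135.7, 117.5), "7700K": (152.0, 168.2)},
    "Total War: Warhammer":    {"6900K": (45.5, 34.6),   "1800X": (40.3, 30.7),   "7700K": (43.3, 42.4)},
}

for game, cpus in results.items():
    for cpu, (dx11, dx12) in cpus.items():
        delta = (dx12 - dx11) / dx11 * 100  # positive = faster under DX12
        print(f"{game}: {cpu} {dx11:.1f} -> {dx12:.1f} fps ({delta:+.1f}%)")
```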
 
Last edited:

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Hope the Vulkan API will flourish, as there would be one less reason to rely on Microsoft. Also, I'm not sure what you mean by "90% of desktops use Windows"; Vulkan runs on Windows just as it runs on many other OSes. Vulkan support is spreading all the time, and not only at the two default non-DX houses you mentioned. Here is a snippet from Wikipedia:
  • Source 2 – In March 2015, Valve Corporation announced the Source 2 engine, the successor engine to the original Source engine, would support Vulkan.
  • Serious Engine 4 – In February 2016, Croteam announced that they were supporting Vulkan in their Serious Engine.
  • Unreal Engine 4 – In February 2016, Epic Games announced Unreal Engine 4 support for Vulkan at Samsung's Galaxy S7 Unpacked event.
  • id Tech 6 – In May 2016, id Software announced Doom, running the id Tech 6 engine, would support Vulkan.
  • CryEngine – Crytek has plans to include Vulkan support in CryEngine.
  • Unity – The engine currently has a beta version supporting Vulkan; it is on track to be included in version 5.6 targeted for March 2017.
  • Xenko – Vulkan support was added in July 2016.
  • Intrinsic – A free Vulkan based cross-platform game engine published on GitHub.
  • Torque 3D – In April 2016, the developers community announced they will include Vulkan support.
To me it seems that Vulkan will be well supported in many upcoming games, since the most-used game engines are listed above.
The reason I say 90% use Windows is that it means 90% of PCs have access to DirectX. That is no different than it has ever been. So right away you're talking about one development platform/API that 90% of the PC market has access to. Scorpio then adds to that. The Xbone, like the original PS4, doesn't have room for API overhead; the PS4, its OS not being Windows-based, has games written specifically for the hardware, and the Xbone, again not having the headroom, is also pretty much written to the metal with some DX handles to fit OS requirements. Scorpio basically takes the 2013 mid-level graphics and archaic netbook cores and turns them into a 2017 mid-level desktop APU. That doesn't mean it will give a Titan setup a run for its money, but it has room to take a PC-developed game, port it over, and optimize it for the Scorpio hardware. I think it's more likely that Scorpio-supported games will actually come from the PC development tree and not the PS4/Xbone tree. The PS4 Pro, being just a faster refresh of the APU in the previous version, will just get a PS4/Xbone release optimized for the faster performance. In the end, Scorpio will be easier to write for by taking a DX12 port of whatever development platform they are using and optimizing it, instead of having to maintain a PS4/Xbone/PS4 Pro metal version, a Scorpio metal/DX version, and a PC Vulkan release.

id is the only company I know of that has ever sat back and developed its engine for a non-DX setup from the beginning, and even with id Tech 4 they were still forced to throw in DX support. While it's great to see more industry-wide support for Vulkan, I doubt it gets as much penetration as you would hope, considering it would require yet another code break from the consoles, including Scorpio, which without Vulkan could use a DX port with optimization. So developers could either design for the Xbone/PS4/Pro and then work with a DX package for PC/Scorpio; one path requires three development branches and the other two. Then add the fact that they aren't going to do just DX11/Vulkan; what most will do until DX11 is dead is probably DX11, DX12, and Vulkan, hedging their bets. What Vulkan needs to do is be head and shoulders better than DX. This is where AMD bites themselves in the ass. AMD has always tried to support every API equally; they kind of have to, because if they pulled what Nvidia has with DX12, it would just be yet another reason not to get an AMD GPU. Nvidia has always held DX and Microsoft in contempt. They will work on DX12 when the pressure is on, but they have always been slow to optimize for DX and even actively pushed for OpenGL adoption in the past. If Nvidia competently supported DX12 and AMD gave it the shaft, Nvidia would be the king in DX12; when Vega came out it could do well in DX11 and Vulkan while its DX12 looked like crap, Nvidia would still be king of the mountain, and they would use Vulkan to finally push MS off the API mountaintop by continuing to optimize for Vulkan over DX12. The problem is that if Vega is within even 10% of the 1080 (non-Ti) in DX11, it will blow away a Titan in DX12. That will force Nvidia to fix their DX12 performance, which means both vendors' cards will be closer to optimally tuned for DX12, and that raises the hurdle a properly optimized Vulkan implementation has to overcome. Once you add Scorpio on top of that, it's hard for me to see Vulkan making much of a push past the initial marketing implementations. It has to be noticeably better.

I have seen this story play out dozens of times, going all the way back to DX6. Every other release, someone tries to unseat it. This time there is a bigger chance than ever before, but Scorpio becomes the last hurdle, and I doubt Vulkan can overcome it. If you are going to sell Scorpio games, you are going to write for DX12, and if you are going to do DX12 for Scorpio, you are going to offer it on the PC version. AMD with Vega will force Nvidia to fix their DX12 performance, which will close much of any potential gap between DX12 and Vulkan. Vulkan can still be better, but will it be worth the development time?
 

Topweasel

Diamond Member
Oct 19, 2000
5,436
1,654
136
Except that the video in question shows little of it, for the reason stated.
No, not for the reason stated. You obviously just skipped ahead to the results screens to comment. He goes through the numbers to show why he thought something was up and what he was testing for before he gets to the payoff, which touches on the point you seem to wish to ignore.

Quite wrong; AMD had its own performance regressions in a bunch of DX12 titles as well. AMD GPUs only really hit the jackpot with DX12 when a CPU bottleneck is certain and the developer does not mess up the implementation (i.e., had some major assistance from AMD).
My point was more about whether this is fully representative of DX12 performance on Nvidia cards. I think it is, because with the AMD GPU setup you see better CPU usage in DX12 on both CPUs, and it's a GameWorks game, which implies the exact opposite: RotTR had Nvidia come in and assist, not AMD. That means the DX12 gains you see with the AMD cards are entirely GPU-driver based, not the result of the game being coded better for one vendor.

Except that the lack of CF data means we can't even compare the effects of DX12 on two of the four GPU+CPU combos. For all I care, I could use the [H] data I have linked and claim that AMD GPU + Intel CPU performs worse in DX12 than in DX11, but by a much smaller margin than Nvidia GPU + AMD CPU. We can only speculate how that would work in his case.
Sorry, I haven't seen your link to HOCP. But is it just one game? That matters: the more samples we have, the better we can figure out what is going on. It could be that RotTR, or whatever HOCP benched, is poorly optimized for DX12. The big problem is that DX12 should be performance-neutral at worst, and it isn't for Nvidia even in a game optimized for their cards and drivers. The CF data, if it's disabled as you say, is irrelevant, because it's not about the maximum potential of Ryzen or the 7700K; it's about the shift in performance. I don't care if the 7700K would be 100 FPS ahead of Ryzen in DX12 in a world where AMD offered a card that wouldn't bottleneck on a 7700K at that resolution. The performance shift between DX11 and DX12 invalidates any CPU conclusions from RotTR in DX12 and calls into question any DX12 result until we have an AMD card that doesn't bottleneck the 7700K in DX11 or DX12, or more test data for AMD GPUs in DX12 in other games.

Again, this isn't "AMD closes the gap on the 7700K." We still don't know whether, in DX12 with a better card and better drivers, Ryzen would stay close to the 7700K, and that doesn't matter. AdoredTV's point, and now mine, is that the DX12 performance numbers from RotTR and possibly other games are borked and shouldn't be used to figure out what Ryzen's gaming performance really is.

Here is another sample.

https://www.computerbase.de/2017-03.../#diagramm-battlefield-1-dx11-multiplayer-fps

In DX11 performance, AMD actually beats everything but the 6900K and 6950X.

https://www.computerbase.de/2017-03.../#diagramm-battlefield-1-dx12-multiplayer-fps

In DX12, AMD falls off the wagon. It goes from neck and neck with the 6950X and decently behind the 6900K to far behind everything. The 7700K, 7600K, and 6850K go from getting a small boost to getting no boost, respectively. The 6900K and 6950X take decent-sized hits, all the others take slight hits, and Ryzen takes a huge one. The trend is there and easy to see. DX12 is supposed to help with multithreading and increase performance with more cores, and otherwise be neutral. But with Nvidia's drivers it seems to cap at 4C in DX12, whereas DX11 seems to scale with both cores and clock speed, allowing Ryzen to keep up with the 6950X through clock speed while the 6900K, not having as much of a clock deficit, pulls away the way you would expect Broadwell's IPC to let it.

Obviously a test of AMD GPUs with both CPUs in BF1 is needed. But the picture starting to develop is that any benchmark of any game using DX12 needs to be taken with a grain of salt, not just for Ryzen's sake but for all CPUs. Ryzen seems the worst off for it, and Vega is going to be a rude awakening in DX12 if Nvidia doesn't fix this, but right now you can't bottleneck the CPU on an Nvidia card in DX12 and get a picture of future performance. Those benches are useless.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
L1 on Ryzen is half the width of L1 on Haswell, as you pointed out, hence half the bandwidth as measured by AIDA64 is what you'd expect from it. However, Ryzen has twice the L2 per core (512KB vs 256KB), and both L2 and L3 are denser than what Intel can achieve on Skylake; all these details are in the ISSCC presentation. L3 on Ryzen runs on its own clock domain as well. Thus the faster L2 and L3 on Ryzen, in terms of raw bandwidth, are in line with the specs.

Remember, the reason we were even having this conversation in the first place is SIMD performance. Bandwidth is extremely important for sustained wide-vector SIMD throughput, and the primary reason Intel doubled the L1 cache bandwidth in Haswell was to make using AVX2 worth it.

And we should all know by now that AMD's AVX2 implementation is greatly subpar compared to Intel's.

There are differences in the implementations of the caches, and you cannot call one implementation better than the other at this moment, given that details on Ryzen's implementation are sparse.

The only limitation AMD has with regard to the size of the caches has to do with the CCX design.

I never said that Intel's is better, or that AMD's is worse. We were talking about SIMD performance and I mentioned that cache bandwidth is crucial to its performance. Intel's cache design philosophy seems to be focused on making their L1 cache as fast and as accurate as possible, and using the L2 cache for support only. The L3 is used for interprocessor communications and reducing memory traffic, as it's completely unified.

AMD's cache setup is obviously a lot different and I don't know what their focus is, but it's probably not SIMD performance.
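To put rough numbers on the "half width, half bandwidth" point: a back-of-the-envelope sketch, assuming the commonly cited per-cycle L1 load widths (two 32-byte loads on Haswell, two 16-byte loads on Zen) and an example 4 GHz clock, neither of which comes from this thread:

```python
# Back-of-the-envelope peak L1D load bandwidth per core.
# Assumed widths: Haswell = 2 x 32-byte loads/cycle, Zen = 2 x 16-byte loads/cycle.
# The 4.0 GHz clock is only an example.
def l1_load_bw_gb_s(loads_per_cycle, bytes_per_load, clock_ghz):
    return loads_per_cycle * bytes_per_load * clock_ghz  # GB/s per core

haswell = l1_load_bw_gb_s(2, 32, 4.0)  # ~256 GB/s per core
zen     = l1_load_bw_gb_s(2, 16, 4.0)  # ~128 GB/s per core
print(haswell, zen, haswell / zen)     # the ~2x ratio is roughly what AIDA64 reflects
```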
 

tamz_msc

Diamond Member
Jan 5, 2017
3,774
3,597
136
Easy, https://www.hardocp.com/image/MTQ2ODIxNjQyNm9BYnNZQkhEdUdfNl8yX2wuZ2lm


Here is how two 480s stack up to a single 1070 in DirectX 11. Yes, 1440p, I know, but since his runs were clearly not CPU limited, it suffices.
Is that Geothermal Valley or Soviet Installation? There's no point if it isn't one of those. I know of a forum member who doubled the fps he was getting with dual 980 Tis and DX12 in those regions.
For what it's worth, nV used to have similar performance losses with DirectX 12 when running Intel CPUs a few months ago.
Their new DX12 driver does nothing except in Hitman DX12 and, to a lesser extent, Doom Vulkan, according to the same source.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,774
3,597
136
Remember, the reason we were even having this conversation in the first place is SIMD performance. Bandwidth is extremely important for sustained wide-vector SIMD throughput, and the primary reason Intel doubled the L1 cache bandwidth in Haswell was to make using AVX2 worth it.

And we should all know by now that AMD's AVX2 implementation is greatly subpar compared to Intel's.



I never said that Intel's is better, or that AMD's is worse. We were talking about SIMD performance and I mentioned that cache bandwidth is crucial to its performance. Intel's cache design philosophy seems to be focused on making their L1 cache as fast and as accurate as possible, and using the L2 cache for support only. The L3 is used for interprocessor communications and reducing memory traffic, as it's completely unified.

AMD's cache setup is obviously a lot different and I don't know what their focus is, but it's probably not SIMD performance.
Halved L1 bandwidth and halved AVX2 throughput don't seem to result in Ryzen 8-cores having half the performance of Broadwell-E 8-cores in the types of AVX2 workloads that are being benchmarked, like H.265.

Even in AIDA64 PhotoWorxx a 6900K isn't twice as fast as a 1700(X) or 1800X.

There's more to SIMD than AVX2: the only place where you would see peak AVX2 throughput in Haswell+ is in specific scientific applications.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Halved L1 bandwidth and halved AVX2 throughput don't seem to result in Ryzen 8-cores having half the performance of Broadwell-E 8-cores in the types of AVX2 workloads that are being benchmarked, like H.265.

Even in AIDA64 PhotoWorxx a 6900K isn't twice as fast as a 1700(X) or 1800X.

Well, obviously we're not going to get perfect scaling, especially since AVX2 is still a work in progress anyway. From what I understand, important instructions like gather were microcoded in Haswell and were thus extremely slow and practically unusable, but with every tick-tock cadence they have been improving it. At any rate, the differences between Intel and AMD in this particular area likely come down to philosophy: AMD wants to offload heavy processing to the GPU as much as possible, so it will probably minimize investment in wider vectors, whilst Intel seems to want to keep it on the CPU and so will do the opposite.
 

Elixer

Lifer
May 7, 2002
10,376
762
126
Scorpio, if it is Zen+Vega, basically becomes a competent desktop system
There is practically no way to make custom silicon for Scorpio based on Zen or Vega, assuming the launch window is still sometime in 2017.
I suspect something based on Polaris though, using modified Bristol Ridge CPUs.
They are most likely ramping up right now to meet the 2017 deadline.
 

lolfail9001

Golden Member
Sep 9, 2016
1,056
353
96
No, not for the reason stated. You obviously just skipped ahead to the results screens to comment. He goes through the numbers to show why he thought something was up and what he was testing for before he gets to the payoff, which touches on the point you seem to wish to ignore.
Oh, did he go through why his results imply a lack of CF in DX11? I would read an article if he wrote one; I am not going to waste time listening through a video for the explanation.
I think it is, because with the AMD GPU setup you see better CPU usage in DX12 on both CPUs, and it's a GameWorks game.
What if I told you it is GPU bottlenecked? Which it is, by the way.
The big problem is that DX12 should be performance-neutral at worst.
That assumes the DX12 version implements all the easy performance hacks DX11 drivers have accumulated over the years, and there are plenty of those. DX12 should be performance-neutral, but that assumes a level of expertise not too many game developers have.
The performance shift between DX11 and DX12 invalidates any CPU conclusions from RotTR in DX12 and calls into question any DX12 result until we have an AMD card that doesn't bottleneck the 7700K in DX11 or DX12.
Valid; that does invalidate what little predictive power DX12 results had. They never had any, but okay.
In DX12, AMD falls off the wagon.
BF1 has a bad DX12 implementation for both GPU vendors, as was tested a while ago. As a result, nobody should use DX12 in BF1, or test with it for anything but academic purposes.
DX12 is supposed to help with multithreading and increase performance with more cores, and otherwise be neutral.
No, DX12 is supposed to put more of the job into the developer's hands. Is it surprising that most developers require extensive vendor assistance to make use of it without screwing up? That's why I do not give much credit to DX12 or Vulkan.
Obviously a test of AMD GPUs with both CPUs in BF1 is needed.
Will you agree that a test with an OC'd Skylake will suffice? Because that's how it stacks up with an AMD GPU: https://www.hardocp.com/image/MTQ3NzMwNjY1NUIwZVV3aGVTaGNfMl8yX2wucG5n . Not well either, heh.

Is that Geothermal Valley or Soviet Installation? There's no point if it isn't one of those. I know of a forum member who doubled the fps he was getting with dual 980 Tis and DX12 in those regions.
They don't specify... though it has to be said that I'm more inclined to believe SLI was not functioning for him in DX11 :p
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,764
3,131
136
Remember, the reason we were even having this conversation in the first place is SIMD performance. Bandwidth is extremely important for sustained wide-vector SIMD throughput, and the primary reason Intel doubled the L1 cache bandwidth in Haswell was to make using AVX2 worth it.

And we should all know by now that AMD's AVX2 implementation is greatly subpar compared to Intel's.



I never said that Intel's is better, or that AMD's is worse. We were talking about SIMD performance and I mentioned that cache bandwidth is crucial to its performance. Intel's cache design philosophy seems to be focused on making their L1 cache as fast and as accurate as possible, and using the L2 cache for support only. The L3 is used for interprocessor communications and reducing memory traffic, as it's completely unified.

AMD's cache setup is obviously a lot different and I don't know what their focus is, but it's probably not SIMD performance.

You are so wrong and so clueless it's not even funny.

Intel's AVX/AVX2 implementation is not significantly better than AMD's.

1. Intel takes a big hit when executing 256-bit data while in 128-bit mode, versus a traditional 128-bit SIMD unit, until the top half of the 256-bit unit becomes active.
2. They both have around the same instruction latency/throughput, with some instructions better on AMD and some better on Intel.
3. They both decode similarly (this was an actual problem for Bulldozer).


Now on to the cache: I don't know what you are smoking, but it is so wrong.

The L2 in Intel's core is not for "support"; it is where the streaming prefetchers sit, and the L1 and L2 aren't inclusive or exclusive of each other but are both inclusive in the L3.
In Zen the L1 is write-back, so I assume it is exclusive of the L2 (it might be inclusive). The L3 holds L2+L1 tag data and maybe some inclusive lines, but is largely exclusive.

What's the difference between the two? Really it's about multi-core scaling and the handling of cache coherency. In terms of general performance, generally speaking, you can treat them largely as equal.
Now, one obvious difference is the width of the read/write ports. Intel has end-to-end 256-bit datapaths (execution, load/store, cache); AMD has end-to-end 128-bit datapaths (execution, load/store, cache).
Intel can most definitely hit higher peak throughput, but per-instruction throughput and latency aren't any better.

What this all actually means is that only in workloads where 128-bit load/store becomes a bottleneck does Intel's design offer an advantage. Go look at The Stilt's data to see how many apps across a large suite that actually is. If high-ILP 256-bit code were actually that common, they wouldn't be shutting off half of their SIMD units by default, would they?


So no, AMD's AVX/AVX2 is not greatly subpar compared to Intel's; that's just FUD from someone who has no clue what they are talking about!
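To attach rough numbers to the datapath-width point above, here is a sketch using the commonly cited peak FMA configurations for Haswell/Broadwell and Zen 1; these are assumptions for illustration, not measurements from this thread:

```python
# Peak FP32 FLOPs per core per cycle under the assumed configurations:
#   Haswell/Broadwell: 2 x 256-bit FMA pipes; Zen 1: 2 x 128-bit FMA-capable pipes.
# Peak is 2x apart on paper, but real workloads rarely sustain it, which is why
# the application-level gap is much smaller than the datapath width suggests.
def peak_fp32_flops_per_cycle(fma_pipes, vector_bits):
    lanes = vector_bits // 32      # FP32 lanes per pipe
    return fma_pipes * lanes * 2   # an FMA counts as 2 FLOPs per lane

print(peak_fp32_flops_per_cycle(2, 256))  # 32 (Haswell/Broadwell, assumed config)
print(peak_fp32_flops_per_cycle(2, 128))  # 16 (Zen 1, assumed config)
```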
 

moonbogg

Lifer
Jan 8, 2011
10,635
3,095
136
I don't know if this has been posted, but it addresses the huge Ryzen deficit in Rise of the Tomb Raider. The deficit seems to have been clearly caused by the use of an Nvidia GPU. He tested Ryzen and the 7700K with an RX 480 Crossfire setup, and the 1800X was only 6% behind, while when testing with a GTX 1070, Ryzen was something like a ridiculous 30%+ behind. An interesting watch for sure. Very interesting. My only question is, if the Nvidia DX12 driver is crippling Ryzen, why doesn't it also cripple the 7700K? No idea.

https://youtu.be/0tfTZjugDeg
 

Udgnim

Diamond Member
Apr 16, 2008
3,662
104
106
I don't know if this has been posted, but it addresses the huge Ryzen deficit in Rise of the Tomb Raider. The deficit seems to have been clearly caused by the use of an Nvidia GPU. He tested Ryzen and the 7700K with an RX 480 Crossfire setup, and the 1800X was only 6% behind, while when testing with a GTX 1070, Ryzen was something like a ridiculous 30%+ behind. An interesting watch for sure. Very interesting. My only question is, if the Nvidia DX12 driver is crippling Ryzen, why doesn't it also cripple the 7700K? No idea.

https://youtu.be/0tfTZjugDeg

Based on his DX12 comparison of the 480 and the 1070, where he confirmed that DX12 produced better frame rates than DX11 for the 1070 in certain areas, it will be interesting to see whether his conclusions for Vega vs. Nvidia under RotTR and DX12 end up being correct.
 

ryzenmaster

Member
Mar 19, 2017
40
89
61
My only question is, if the Nvidia DX12 driver is crippling Ryzen, why doesn't it also cripple the 7700K?

If it is true that the Nvidia driver allows for little to no parallelism, then when you are benchmarking DX12 games you are effectively benchmarking single-threaded performance, which is something the 7700K does better than Ryzen. Now keep in mind the 7700K still has three more cores sitting largely idle, so it is still being crippled as well.

There could be something to it, because AFAIK Nvidia still doesn't really support async compute at the hardware level. What could be happening instead is that they have implemented a driver hack that forces concurrent draw calls, etc., to be serialized.
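A toy Amdahl's-law sketch shows why a mostly serialized driver path would hurt an 8-core more than a higher-clocked quad; the serial fraction below is made up purely for illustration and is not a measured property of any driver:

```python
# Toy Amdahl's-law model of CPU-side frame work when the driver serializes submission.
def speedup(cores, serial_frac):
    return 1.0 / (serial_frac + (1.0 - serial_frac) / cores)

serial_frac = 0.6  # hypothetical: 60% of the work pinned to one thread
for cores in (4, 8, 16):
    print(f"{cores} cores -> {speedup(cores, serial_frac):.2f}x")
# ~1.43x, ~1.54x, ~1.60x: cores beyond four add very little, so single-threaded
# speed (where the 7700K leads) dominates the result.
```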
 

imported_jjj

Senior member
Feb 14, 2009
660
430
136
On the upside, Nvidia will have to fix the issue this year; Ryzen and Coffee Lake would badly hurt their sales if they can't properly scale beyond 4 cores with DX12.
Can they do it on Pascal, though, or do they need Volta?

Power consumption and efficiency numbers might be very interesting too: DX11 vs DX12, Radeon vs GeForce.
 
Last edited:

ryzenmaster

Member
Mar 19, 2017
40
89
61
Can they do it on Pascal, though, or do they need Volta?

To my knowledge, their current and previous-gen GPUs all suffer from hardware-level limitations. Their products are tuned for single-threaded APIs and it has served them well; certainly their DX11 performance tends to be better than AMD's.

When it comes to async compute, though, in titles like Gears of War on DX12 and Doom on Vulkan, we do tend to see some performance gains on AMD when it is enabled.
 

imported_jjj

Senior member
Feb 14, 2009
660
430
136
To my knowledge, their current and previous-gen GPUs all suffer from hardware-level limitations. Their products are tuned for single-threaded APIs and it has served them well; certainly their DX11 performance tends to be better than AMD's.

When it comes to async compute, though, in titles like Gears of War on DX12 and Doom on Vulkan, we do tend to see some performance gains on AMD when it is enabled.

I somewhat assume that it is hardware, but I'm not 100% certain, and it would be a big deal, as anyone going with more than 4 cores would avoid Nvidia. It would get so much worse if Volta has the same issue.
Of course, lots of folks already own Nvidia GPUs, and for them it would be much better if this can be solved in software.