Doom Vulkan Benchmarks


renderstate

Senior member
Apr 23, 2016
I thought Sontin was being sarcastic after people went crazy accusing id & NVIDIA of sabotaging the new Doom game on some AMD parts. I can't otherwise make rational sense of what he wrote :)
 

Red Hawk

Diamond Member
Jan 1, 2011
Interesting excerpt from the Eurogamer interview with the programming team:

Axel Gneiting: We are using all seven available cores on both consoles and in some frames almost the entire CPU time is used up. The CPU side rendering and command buffer generation code is very parallel. I suspect the Vulkan version of the game will run fine on a reasonably fast dual-core system. OpenGL takes up an entire core while Vulkan allows us to share it with other work.

Doom running on a fast dual-core machine, that's something I'd be interested in seeing.
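
What Gneiting describes, spreading command buffer generation across cores, maps directly onto Vulkan's threading model: command pools are externally synchronized, so the usual pattern is one pool per worker thread. A minimal sketch of that pattern (assuming a VkDevice, VkQueue, and queue family index created elsewhere; this is illustrative, not id Tech 6's actual code):

```cpp
// Sketch: parallel command buffer recording in Vulkan, one pool per thread.
// Error checking omitted for brevity.
#include <vulkan/vulkan.h>
#include <thread>
#include <vector>

void recordAndSubmitParallel(VkDevice device, VkQueue queue,
                             uint32_t queueFamily, uint32_t numThreads) {
    std::vector<VkCommandPool> pools(numThreads);
    std::vector<VkCommandBuffer> cmds(numThreads);

    for (uint32_t i = 0; i < numThreads; ++i) {
        // Pools are externally synchronized, so each thread gets its own.
        VkCommandPoolCreateInfo poolInfo{};
        poolInfo.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
        poolInfo.queueFamilyIndex = queueFamily;
        vkCreateCommandPool(device, &poolInfo, nullptr, &pools[i]);

        VkCommandBufferAllocateInfo allocInfo{};
        allocInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
        allocInfo.commandPool = pools[i];
        allocInfo.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
        allocInfo.commandBufferCount = 1;
        vkAllocateCommandBuffers(device, &allocInfo, &cmds[i]);
    }

    std::vector<std::thread> workers;
    for (uint32_t i = 0; i < numThreads; ++i) {
        workers.emplace_back([&cmds, i] {
            VkCommandBufferBeginInfo begin{};
            begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
            vkBeginCommandBuffer(cmds[i], &begin);
            // ... record this thread's share of the frame's draw calls ...
            vkEndCommandBuffer(cmds[i]);
        });
    }
    for (auto& w : workers) w.join();

    // Submission stays on one thread; only the recording ran in parallel.
    VkSubmitInfo submit{};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = numThreads;
    submit.pCommandBuffers = cmds.data();
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
}
```

This is exactly the work that OpenGL forces through a single driver thread, which is why Gneiting says Vulkan frees up a core on a dual-core machine.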

I thought Sontin was being sarcastic after people went crazy accusing id & NVIDIA of sabotaging the new Doom game on some AMD parts. I can't otherwise make rational sense of what he wrote :)

Yeah, that was my thought too.
 

HurleyBird

Platinum Member
Apr 22, 2003
Will it though? As shown by Digital Foundry and others, the Fury is keeping up with and surpassing the 8GB 1070 at 4K, even with TSSAA 8x and Ultra settings

If history is any indication, then yes, it will become a bottleneck. There comes a point where too small a frame buffer hurts performance beyond the usual need to lower details. It happened with the 1GB HD 5870, it happened with the 1.5GB GTX 480/580, by now everyone here knows it happened to the 2GB GTX 680/770 (which is very significant, because the Nvidia fanboys back then used all the same arguments about VRAM to defend the 680 that the AMD fanboys are using today to defend the Fury X), and it's just starting to happen to 3GB cards. Progress marches on.

If there were to be a pause in the trend, it would happen around ~8GB, because of the amount of memory current-generation consoles can dedicate to the GPU, but even that I think is unlikely.
 
Feb 19, 2009
If history is any indication, then yes, it will become a bottleneck. There comes a point where too small a frame buffer hurts performance beyond the usual need to lower details. It happened with the 1GB HD 5870, it happened with the 1.5GB GTX 480/580, by now everyone here knows it happened to the 2GB GTX 680/770 (which is doubly significant, because the Nvidia fanboys back then used all the same arguments about VRAM to defend the 680 that the AMD fanboys are using today to defend the Fury X), and it's just starting to happen to 3GB cards. Progress marches on.

This is certainly true, though it's going to happen faster at 4K than at 1440p and 1080p. 4GB was the only reason I didn't get 2x Fury X when it was launched.

But there's always the option to run one notch down to get very fast performance, rather than running on Hyper/Nightmare and being unplayable on a single GPU at 4K.
 

dogen1

Senior member
Oct 14, 2014
Interesting excerpt from the Eurogamer interview with the programming team:

Axel Gneiting: We are using all seven available cores on both consoles and in some frames almost the entire CPU time is used up. The CPU side rendering and command buffer generation code is very parallel. I suspect the Vulkan version of the game will run fine on a reasonably fast dual-core system. OpenGL takes up an entire core while Vulkan allows us to share it with other work.

Doom running on a fast dual-core machine, that's something I'd be interested in seeing.

It works fine on my G3258, even in OpenGL, though I do have an Nvidia card.

I haven't tried Vulkan because I got bored of the game and uninstalled it a while back.
 

RussianSensation

Elite Member
Sep 5, 2003
id Tech 6 is the best thing to happen to PC gaming in a very long time. Not being tied to an OS/D3D is :thumbsup: :thumbsup: :thumbsup: :thumbsup:

PC Games Hardware tested an i7-5820K @ 1.2GHz with a Titan X @ 1500/4200. At 1280x720 without AA/AF, under OpenGL they got 89 FPS, which increased to 152 FPS (+71%) when switching over to Vulkan.

It gets even better for budget gamers using powerful Nano/Fury X GPUs with slow CPUs, such as the FX-8350. The FX-8350 in power-saving mode at 1.8 GHz under OpenGL maxed out at only 45 FPS, but when the game was switched to Vulkan, the same 1.8 GHz CPU produced a smooth 64 FPS (+42%).
http://www.pcgameshardware.de/Doom-...Patch-bessere-Performance-Benchmarks-1201321/
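
For reference, those uplift figures are just the relative FPS gains:

$\frac{152 - 89}{89} \approx 0.71 \;(+71\%) \qquad \frac{64 - 45}{45} \approx 0.42 \;(+42\%)$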

That means even without GCN's shader intrinsics and async compute advantages, Vulkan still proves that OpenGL and DX11 were highly inefficient even for NV, despite their multi-threaded DX11 drivers. Of course no one with a Titan X will play the game at 720p, but it still goes to show that other PC gamers with lesser CPUs and GPUs stand to benefit tremendously from DX12 and Vulkan AAA games, should they be coded efficiently.

While it is true that NV had superior DX11/OpenGL optimized drivers overall, Vulkan still highlights just how inefficient and outdated the old APIs were to begin with. It's just a shame it took a whopping 5 years since GCN was introduced for the PC industry to realize this and start to move forward.

It also proves that Mantle wasn't a worthless investment by AMD, since both DX12 and Vulkan benefited from Mantle. Thus, the entire PC gaming industry owes it to AMD for pushing the switch over to these low-level APIs!
 

AnandThenMan

Diamond Member
Nov 11, 2004
It's just a shame it took a whopping 5 years since GCN was introduced for the PC industry to realize this and start to move forward.
I don't think the industry as a whole ever realized this, so to speak; they were dragged kicking and screaming into the future. Mantle forced Microsoft's hand: they couldn't let the likes of Vulkan be the only thin API out there.
It also proves that Mantle wasn't a worthless investment by AMD, since both DX12 and Vulkan benefited from Mantle. Thus, the entire PC gaming industry owes it to AMD for pushing the switch over to these low-level APIs!
Mantle is turning out to be AMD64 v2.0 with respect to its impact.
 

renderstate

Senior member
Apr 23, 2016
If your hardware takes five years to be fully utilized, you didn't design a forward-looking piece of hardware, you designed a dud.

Oh look, our OpenGL drivers were so rubbish that if we move all the work onto the developers' shoulders, this game gets 50% faster! Wow! What an achievement :)

Small developers must be absolutely thrilled. Actually, middleware companies must be thrilled for real, as the entry bar has been raised even more and their products are now even more important.

BTW, why stop here? Let's give access to the bare metal. Also, compilers are for losers; time to write those thousands and thousands of shaders in assembly :)


Threadcrapping and trolling are not allowed
Markfw900
 

sirmo

Golden Member
Oct 10, 2011
If your hardware takes five years to be fully utilized, you didn't design a forward-looking piece of hardware, you designed a dud.

Oh look, our OpenGL drivers were so rubbish that if we move all the work onto the developers' shoulders, this game gets 50% faster! Wow! What an achievement :)

Small developers must be absolutely thrilled. Actually, middleware companies must be thrilled for real, as the entry bar has been raised even more and their products are now even more important.

BTW, why stop here? Let's give access to the bare metal. Also, compilers are for losers; time to write those thousands and thousands of shaders in assembly :)
Someone should tell Sony, Microsoft and all those console developers they are doing it wrong.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
If your hardware takes five years to be fully utilized, you didn't design a forward-looking piece of hardware, you designed a dud.

Oh look, our OpenGL drivers were so rubbish that if we move all the work onto the developers' shoulders, this game gets 50% faster! Wow! What an achievement :)

Small developers must be absolutely thrilled. Actually, middleware companies must be thrilled for real, as the entry bar has been raised even more and their products are now even more important.

BTW, why stop here? Let's give access to the bare metal. Also, compilers are for losers; time to write those thousands and thousands of shaders in assembly :)
What proof do you have to back up your claims that coding for Vulkan is any more difficult than for OpenGL?

I'm sure it's just FUD.

When HTML was released, I'm sure it was confusing and a lot of work to learn. That doesn't mean it was some insurmountable task once a programmer learned the language. Vulkan and DX12 will become the standard and second nature for devs, just like any other type of programming that's new.

I can see you are upset that AMD is performing well with Vulkan, but give it up. You sound like an angry child who didn't get his candy bar at the checkout in the supermarket.
 

IllogicalGlory

Senior member
Mar 8, 2013
What is this about AMD's OpenGL drivers sucking? They certainly don't, not in this game at least.

[chart: Doom OpenGL benchmark results]


Some cards are lower than usual, but all of them are turning in respectable performance, with the Fury X beating the 980 and the 390 beating the 970.

[chart: Doom OpenGL benchmark results, 1440p]


Then at 1440p, Fury X > 980 Ti/Titan X.

:confused:

Digital Foundry: https://www.youtube.com/watch?v=WvWaE-3Aseg

As stated in the video:

970: 93.9 FPS
390: 90.9 FPS

As you can see, that's a 3% lead, not 30% (30% being the performance uplift the 390 gains from Vulkan). Unfortunately for the 970, it only gains around 3% itself.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
If your hardware takes five years to be fully utilized, you didn't design a forward-looking piece of hardware, you designed a dud.

Oh look, our OpenGL drivers were so rubbish that if we move all the work onto the developers' shoulders, this game gets 50% faster! Wow! What an achievement :)

Small developers must be absolutely thrilled. Actually, middleware companies must be thrilled for real, as the entry bar has been raised even more and their products are now even more important.

BTW, why stop here? Let's give access to the bare metal. Also, compilers are for losers; time to write those thousands and thousands of shaders in assembly :)

I don't know about that ...

The software ecosystem is very important, and it's the reason Nvidia's proponents keep saying that Nvidia wins because of software, so why chastise AMD for doing the same?

Microsoft could have been a lot less merciful to Nvidia and locked them out of D3D12 for a good amount of time, making it worthwhile for a certain IHV's hardware to bear fruit and giving it a head start. Even if you had to wait a little while, you'd at least have had an API that was exclusively for your customers for some time ...

Besides, transitioning to a new API is not as painful as you make it out to be when AAA engine developers have to support consoles too ...
 

dzoni2k2

Member
Sep 30, 2009
Exactly, most of the work is already done on consoles. So all this nonsense about poor devs having to work with thin APIs is ridiculous. Console APIs are even thinner.

What AMD needs to do now, and it looks like that's exactly what they are pushing for, is make porting from consoles to PC as easy and straightforward as possible.
 

IllogicalGlory

Senior member
Mar 8, 2013
If your hardware takes five years to be fully utilized, you didn't design a forward-looking piece of hardware, you designed a dud.
I thought...
GCN is a *great* architecture
It was great; then it turns out it can be 30% faster, and now it's a dud.

I guess we're getting to see how you really feel now.

By the way, what does it mean when an architecture takes three years before its performance drops below cards that weren't even competing with it? Does that mean it's awesome?
 

IllogicalGlory

Senior member
Mar 8, 2013
Doesn't that video show that CPU time in OpenGL for AMD is 10ms vs 6ms for Nvidia? More than 50% higher overhead kinda sucks, dude.
The video I linked does not, as it's just the regular OpenGL test, but you are correct. That does suck, but at the same time it doesn't seem to be holding back performance too much on average, as demonstrated by the video I linked.

But I suppose the overall point isn't incorrect.
 

dogen1

Senior member
Oct 14, 2014
The video I linked does not, as it's just the regular OpenGL test, but you are correct. That does suck, but at the same time it doesn't seem to be holding back performance too much on average, as demonstrated by the video I linked.

But I suppose the overall point isn't incorrect.

In some cases it really does. Maybe not in that game.
 

IllogicalGlory

Senior member
Mar 8, 2013
It's the only OpenGL game where such a bottleneck matters, as far as I can tell. Wolfenstein/RAGE are capped at 60 FPS, so CPU performance isn't as important, while Talos Principle has Vulkan and DX paths as well. Although maybe it's the reason I see 30 FPS in KotOR 2 on my 290 at times.

In Doom, AMD cards still turn in fine performance, about where they normally are. Plus, the latest video from Digital Foundry shows the Fury X exceeding 1070 performance at 4K, not merely achieving parity, as might be expected if the reduction of overhead were solely responsible for the gains.
 

dogen1

Senior member
Oct 14, 2014
What proof do you have to back up your claims that coding for Vulkan is any more difficult than for OpenGL?

I'm sure it's just FUD.

When HTML was released, I'm sure it was confusing and a lot of work to learn. That doesn't mean it was some insurmountable task once a programmer learned the language. Vulkan and DX12 will become the standard and second nature for devs, just like any other type of programming that's new.

lol, are you comparing HTML and Vulkan?
....

A hello-triangle in Mantle is something like 600 lines; Vulkan/DX12 is undoubtedly similar.

In OpenGL, it can be done in a few dozen.
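
For a sense of scale, a complete triangle really does fit in a couple dozen lines of legacy OpenGL (a minimal sketch using GLFW and the old fixed-function pipeline; a modern shader-based version would be longer, and the Vulkan equivalent needs hundreds of lines of setup):

```cpp
// Complete "hello triangle" in legacy OpenGL + GLFW.
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return 1;
    GLFWwindow* win = glfwCreateWindow(640, 480, "Triangle", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);                    // fixed-function, no shaders
        glColor3f(1, 0, 0); glVertex2f(-0.6f, -0.5f);
        glColor3f(0, 1, 0); glVertex2f( 0.6f, -0.5f);
        glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.6f);
        glEnd();
        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```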


It's the only OpenGL game where such a bottleneck matters, as far as I can tell. Wolfenstein/RAGE are capped at 60 FPS, so CPU performance isn't as important, while Talos Principle has Vulkan and DX paths as well. Although maybe it's the reason I see 30 FPS in KotOR 2 on my 290 at times.

In Doom, AMD cards still turn in fine performance, about where they normally are. Plus, the latest video from Digital Foundry shows the Fury X exceeding 1070 performance at 4K, not merely achieving parity, as might be expected if the reduction of overhead were solely responsible for the gains.

Maybe you're right that no other game is majorly affected, but I wasn't only talking about games. PCSX2 has a few accuracy-related options with OpenGL that result in major performance issues on AMD cards. The disparity seems unusually large too; some games that are unplayable on my friend's 290 are completely fine on my 950. And he has a faster CPU too.
 
Feb 19, 2009
What is this about AMD's OpenGL drivers sucking? They certainly don't, not in this game at least.

[chart: Doom OpenGL benchmark results]


Some cards are lower than usual, but all of them are turning in respectable performance, with the Fury X beating the 980 and the 390 beating the 970.

[chart: Doom OpenGL benchmark results, 1440p]


Then at 1440p, Fury X > 980 Ti/Titan X.

:confused:

Digital Foundry: https://www.youtube.com/watch?v=WvWaE-3Aseg

As stated in the video:

970: 93.9 FPS
390: 90.9 FPS

As you can see, that's a 3% lead, not 30% (30% being the performance uplift the 390 gains from Vulkan). Unfortunately for the 970, it only gains around 3% itself.

It's this mantra they keep repeating: that AMD was already so much worse in OpenGL, and that's why they gained so much.

These people ignore the results, with actual video evidence no less, that the 390 ~ 970 and 390X ~ GTX 980; it's actually on par. It's not like it was 25% behind.

Certainly no Project Cars.

It's the same FUD I see when people say AMD has terrible DX11. What? If it's so terrible, why the heck is it that the 390 > 970 and 390X ~ 980? Even the Fury X is ~980 Ti at 1440p and above, in mostly DX11 titles. That's terrible DX11?

------------------------

Despite AMD already having competitive performance in DX11, we keep seeing people repeat the FUD that they have worse DX11 driver overhead.

It seems strange, then, seeing these results for CPU usage in actual DX11 games (I dare you guys who claim AMD has worse DX11 to watch these examples!! It will blow your mind):

https://youtu.be/AOACge8JhNo?t=54s

https://youtu.be/PqgOfR-Oc4U?t=1m6s

GTA V, a CPU-heavy game even: https://www.youtube.com/watch?v=Ye2mumere4M (No difference)

Mirror's Edge, another big open-world game: https://www.youtube.com/watch?v=lnzI-LR9-cs (Again, no difference in CPU usage)

It seems to me that whenever AMD has a problem with DX11 games, it's actually related to GameWorks.

This very issue first came to light around the time of COD: AW, when Digital Foundry did an AMD/NV comparison and found AMD tanked in frame rates.

>> https://www.youtube.com/watch?v=lQzLU4HWw2U

Guess what? GameWorks & PhysX in that title.

In COD: Black Ops 3, with no GameWorks, suddenly there are zero issues, or even a reversal, with NV GPUs having higher CPU utilization and lower performance.

Again: Far Cry 4, AMD underperforms and CPU bottlenecks; Far Cry Primal, same engine and even reusing the same assets, no issues and AMD wins. DX11.

Fallout 3/New Vegas, neutral titles, great performance for everybody. Fallout 4, GameWorks arrives and AMD runs gimped.

It's quite clear that AMD does have a DX11 problem, but only when NV sponsors the game.
 

3DVagabond

Lifer
Aug 10, 2009
You can load up the CPU to create a bottleneck more easily on AMD hardware in DX11. That's what Project Cars does with CPU PhysX.
 
Feb 19, 2009
You can load up the CPU to create a bottleneck more easily on AMD hardware in DX11. That's what Project Cars does with CPU PhysX.

Yes, this is true. AMD's DX11 driver is limited to one rendering thread, so AMD recommends devs dedicate a single game thread to rendering only and run the game logic and other tasks on threads 2+.
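
A minimal sketch of that dedicated-render-thread pattern (generic C++ with hypothetical names; not AMD's guidance verbatim or any engine's actual code): game-logic threads enqueue work, and one thread owns every graphics API call.

```cpp
// Sketch: game logic threads enqueue work; a single dedicated thread
// issues all rendering/API calls, matching the one-render-thread advice.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class RenderThread {
    std::queue<std::function<void()>> jobs;
    std::mutex m;
    std::condition_variable cv;
    bool quit = false;
    std::thread worker;

public:
    RenderThread() : worker([this] { run(); }) {}
    ~RenderThread() {
        { std::lock_guard<std::mutex> lk(m); quit = true; }
        cv.notify_one();
        worker.join();
    }

    // Called from game-logic threads (threads 2+): just enqueue,
    // never touch the graphics API directly.
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m); jobs.push(std::move(job)); }
        cv.notify_one();
    }

private:
    void run() {  // the one thread that issues all draw calls
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [this] { return quit || !jobs.empty(); });
            if (quit && jobs.empty()) return;
            auto job = std::move(jobs.front());
            jobs.pop();
            lk.unlock();
            job();  // e.g. issue D3D11/OpenGL calls here
        }
    }
};
```

The point of the design is that contention stays in the job queue, while the driver only ever sees a single submitting thread.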

If a game is running mostly single-threaded, especially with CPU PhysX, it will pwn AMD big time, like in Project Cars, Call of Duty: Advanced Warfare, etc.

My point is that in CoD: Black Ops 3, same engine as the former game but without NV GameWorks or PhysX, suddenly AMD's CPU usage is lower while frame rates are higher.

This is what I mean when I say AMD has a DX11 problem. Their problem is NVIDIA's GameWorks & PhysX in games they sponsor.
 

3DVagabond

Lifer
Aug 10, 2009
That, and Nvidia making it as difficult as possible for AMD to optimize for their GameWorks features.