
guru3d Doom Vulkan Benchmarks

Page 5 - AnandTech Forums
Ha, I don't know. That's a good test of the theory!

Get Doom, test it and post results here 😀

[attached image: doom_1920_v.jpg]


Faster than OG Titan😱😎
 
Bit misleading, as the original FLOPS numbers are completely wrong.

The 1070 has a rating of 6.4 TFLOPS, and that's without taking boost into account. It could range anywhere from 6.4 to 7.2 TFLOPS (1.9 GHz).

Same story for the 970, 980, etc.

The 980 Ti has 6 TFLOPS, which can range up to 6.7 TFLOPS (1.2 GHz).

The AMD RX 480 number is also misleading, since 5.8 TFLOPS is its maximum.

edit: the 970 would push 3.9 TFLOPS (not 3.4), probably ranging up to 4.2 TFLOPS (1.25 GHz)
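For anyone wondering where these ranges come from: peak FP32 throughput is just shader count × 2 (one fused multiply-add counts as 2 ops) × clock. A quick sketch; the shader counts below are the published specs for these cards as I recall them, so double-check before quoting:

```python
# Peak FP32 throughput: shaders * 2 ops (FMA) * clock in GHz => GFLOPS; /1000 => TFLOPS
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000.0

# (shader count, reference clock, typical boosted clock) -- published specs, from memory
cards = {
    "GTX 970":    (1664, 1.178, 1.25),
    "GTX 980 Ti": (2816, 1.075, 1.20),
    "GTX 1070":   (1920, 1.683, 1.90),
    "RX 480":     (2304, 1.266, 1.266),
}

for name, (shaders, base, boost) in cards.items():
    print(f"{name}: {tflops(shaders, base):.1f}-{tflops(shaders, boost):.1f} TFLOPS")
```

Run it and the 970's 3.9–4.2 and the 1070's 6.5–7.3 ranges fall straight out of the formula.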

Thanks, was wondering why Nvidia #s were all over the place
 
Still doesn't explain why a game like Doom is so compute-hungry that it shows linear perf/TFLOPS scaling.

A lot of console game engines shifted graphical effects over to compute-based effects to make better use of the hardware they have. It started with Sony and the PS4, and now Microsoft and Xbox are on board with games like Quantum Break and Forza, among many others. id is just following suit with its graphics engine.

The upshot is AMD based PC gaming is also getting the *massive* performance gains from these optimizations originally started for consoles.

As far as I know everyone expected nvidia to also get these performance gains, but when developers tried to access the Async scheduler on nvidia hardware it didn't work. And that's where we are at today.
 
Sure, but it's scaling linearly up to the Fury X's 8 TFLOPS (at least)... doesn't that look a bit much for a game like this to you?
 
You understand that the point of compute is to take things that were done in the fixed-function pipeline and do them in compute shaders, because you want to do them differently from what the fixed-function hardware allows.

So what exactly is the problem?
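To make that concrete (in Python standing in for an actual GPU shader language): a compute kernel is just an arbitrary function run once per pixel/thread, so you aren't limited to the handful of blend equations fixed-function hardware exposes. The tone-map curve here is an invented example of "math the fixed pipeline can't express":

```python
# A compute dispatch is conceptually "run this function for every pixel".
# Fixed-function blending offers only preset equations (add, multiply, ...);
# a compute kernel can apply any per-pixel formula you like.

def tone_map(rgb):
    # Reinhard-style curve c / (1 + c): arbitrary math, not a blend preset.
    return tuple(c / (1.0 + c) for c in rgb)

def dispatch(kernel, image):
    # Mimic a GPU dispatch: one "thread" per pixel.
    return [[kernel(px) for px in row] for row in image]

hdr_image = [[(0.5, 1.0, 4.0), (2.0, 0.25, 0.0)]]
ldr_image = dispatch(tone_map, hdr_image)
print(ldr_image[0][0])  # every channel squashed into [0, 1)
```

Swap `tone_map` for any other kernel and the same dispatch machinery runs it, which is exactly the flexibility being argued for above.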
 
No. As long as the framerate can continue to increase (not capped) then it will continue to use the available resources to calculate more frames...
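That's easy to model: if a GPU-bound game needs a roughly fixed amount of compute per frame and nothing caps the frame rate, FPS scales linearly with throughput until a cap kicks in. A toy sketch; the work-per-frame figure is invented for illustration:

```python
from typing import Optional

# Invented figure: GPU work needed per frame, in TFLOP.
WORK_PER_FRAME_TFLOP = 0.06

def fps(tflops: float, cap: Optional[float] = None) -> float:
    # Uncapped: frame rate = available throughput / cost per frame.
    uncapped = tflops / WORK_PER_FRAME_TFLOP
    return min(uncapped, cap) if cap is not None else uncapped

for t in (4.0, 6.0, 8.0):
    print(f"{t} TFLOPS -> {fps(t):.0f} fps uncapped, {fps(t, cap=60):.0f} fps with a 60 fps cap")
```

Doubling TFLOPS doubles uncapped FPS, which is the linear scaling seen in the benchmarks; with a cap, extra TFLOPS past the cap buy nothing.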
 
Too much GPU power? That's akin to saying 640k is too much memory.

People always want more. That AMD was able to fit 8 TFLOPS into a card as compact as the Fury X is impressive, but I believe the timing was off: the software was far behind where AMD, years earlier when they started designing the Fury X chip, had expected it to be.

Software is finally catching up, and considering VR, 8 TFLOPS is certainly not too much. That's a major reason, I figure, for AMD giving away so many 16 TFLOPS Radeon Pro Duo cards to developers and content creators. I wager AMD has given away many times the number of Pro Duos they'll ever sell.

Perhaps even more importantly though, the flexibility compute based operations bring gives game developers many more options on how best to use and deploy GPU resources for their games.
 
Is Vulkan going to be the go-to API for programmers for the next generation of PC games? If so, I may hold off on a 1070 and see where the dust settles.
 
If you look at the major AAA games coming out in a few months and towards the end of the year, it's all DX12.

Vulkan is still rare. At the moment it's only Valve, id Software, and a smaller indie group.

DX12 has momentum and I am sure MS is sending its software engineers to studios to "help" them move to DX12. It maintains their ecosystem. Vulkan is a threat to MS.
 
Yes, the list of games that support Vulkan is very, very short.
https://en.wikipedia.org/wiki/List_of_games_with_Vulkan_support
 
That link has a link to DX12 supported games. The list looks much longer but the list of games with currently useful DX12 support is still pretty low, too.

Though, for me, Civ VI and Doom and TW:Warhammer are games I'd play to death so I guess that is enough games for me.

Hooray for franchises that have been around since forever! (1991, 1993, 2000)
 
There's a handful of DX12 games now.

Gears of War: Ultimate Edition
Quantum Break

Both of these were broken on release, but with patches are good now. I'll wait til QB is on sale to get it. 🙂

Forza Apex = great performance, no issues. Free too. 😵
Hitman = unfinished early access.
Ashes = niche RTS with great performance.
Total War: Warhammer = not-niche strategy/RTS, a top seller, an all-round awesome game (I've got 260 hours in it already)! Its DX12 mode is still in beta, but in big battles the improvement in minimum FPS from relieving the CPU bottleneck is great.

Rise of the Tomb Raider = buggy DX12, still buggy, but at least it's helping people on potato CPUs.

The big DX12 titles are coming in a few months. Basically, once BF1 and Civ 6 are out, nobody can say DX12 doesn't matter any more, not with a straight face.
 
FYI, since many people have been complaining that id worked only with AMD: you failed to read the patch announcement.

Since late March 2016 we started working daily with both AMD and NVIDIA. Both have been great partner companies, helping bring full DOOM and Vulkan driver support live to the community. There was a lot of work on all fronts but we are pleased with the results.

https://bethesda.net/#en/events/game/doom-vulkan-support-now-live/2016/07/11/156

Nvidia also had id on stage to do the initial reveal of Vulkan support with the 1080 announcement.
 
I wish there was more performance to be had (I still have a 750 Ti), but I don't think it'll come.
 
This could also be like The Talos Principle, where it took NVIDIA a while before performance improved under Vulkan. Time will tell, I guess.
 
You understand that the point of compute is to take things that were done in the fixed-function pipeline and do them in compute shaders, because you want to do them differently from what the fixed-function hardware allows.

I thought the point was to take stuff traditionally done on the CPU and do it on the GPU's shaders...
 