DOOM updated with Vulkan support

Feb 19, 2009
10,457
10
76
A lot of people say AMD is more forward looking / creates GPUs that do well for the future, but it seems like AMD is a company that is creating their own goddamn future. They invented Async compute, pushed Mantle/DX12/Vulkan down everyone's throat with the consoles (which wouldn't have even been possible without AMD's efforts in APUs, which began all the way back when they bought ATi), invented HBM, and now who knows what they're planning to do with multi-GPU in the future.

I am somewhat concerned for the new APIs though, what if AMD decides to do a huge arch change and suddenly GCN is left in the dust like Kepler is today given the low level nature of Vulkan and DX12?

Architecturally, as long as GPUs have separate engines, such as Compute Units/ALUs, rasterizers and DMA engines, they will be fine with DX12/Vulkan. Each of these subunits forms the core of rendering, because we're dealing with pixels after all and have been for decades. Iterations improve them, they become more efficient, faster, but they are still there.
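As an illustration (not from the thread), here is a minimal C sketch of what those separate engines look like from the API side: Vulkan exposes them as distinct queue families for graphics, compute-only and transfer (DMA) work, and that split is exactly what DX12/Vulkan are built around. Error handling is mostly omitted.

Code:
/* Minimal sketch: list the queue families the first GPU exposes through
 * Vulkan. Assumes the standard Vulkan headers and a working loader. */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance instance;
    if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS)
        return 1;

    uint32_t gpuCount = 1;
    VkPhysicalDevice gpu;
    vkEnumeratePhysicalDevices(instance, &gpuCount, &gpu);
    if (gpuCount == 0)
        return 1;

    uint32_t familyCount = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &familyCount, NULL);
    VkQueueFamilyProperties props[16];
    if (familyCount > 16)
        familyCount = 16;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &familyCount, props);

    /* On GCN-style hardware this typically shows one graphics+compute family,
     * one or more compute-only families (the ACEs) and a transfer/DMA family. */
    for (uint32_t i = 0; i < familyCount; ++i) {
        VkQueueFlags f = props[i].queueFlags;
        printf("family %u: %s%s%s(%u queues)\n", i,
               (f & VK_QUEUE_GRAPHICS_BIT) ? "graphics " : "",
               (f & VK_QUEUE_COMPUTE_BIT)  ? "compute "  : "",
               (f & VK_QUEUE_TRANSFER_BIT) ? "transfer " : "",
               props[i].queueCount);
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}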

The biggest limit on AMD's ability to change GCN will be the need to keep the ISA backwards compatible. That's required of them now that console compatibility is a core feature of MS's and Sony's ecosystems.

And yes, you are very right. AMD is creating a future that best suits their IP. It's been years in the making and will take a few more years for AMD to truly prosper. But it hinges on them not self-sabotaging. The RX power issue shouldn't even have been an issue.

Also, if NV & Intel are not pushing hard for an approach of multiple small dies on an interposer, they will be caught with their pants down in a few years' time, especially on 7nm and below, where the difficulties of large dies will be a major restriction for monolithic designs.
 
Feb 19, 2009
10,457
10
76
Silverforce11 said:
A few weeks ago you responded to a post of mine about the benefits of asynchronous compute. Anyway, you specifically mentioned the exact same situation that id Software has brought up here:
"When looking at GPU performance, something that becomes quite obvious right away is that some rendering passes barely use compute units. Shadow map rendering, as an example, is typically bottlenecked by fixed pipeline processing (eg rasterisation) and memory bandwidth rather than raw compute performance. This means that when rendering your shadow maps, if nothing is running in parallel, you're effectively wasting a lot of GPU processing power.

Just wanted to say that you were correct and these asynchronous compute deniers should apologize for all their misinformation.

Well, they did not apologize at all for spreading fud and misinformation about Maxwell's "superior DX12" when it was the hot topic in 2014 here. FL12_1, revolutionary, more advanced & future proof (yeah really!) than GCN!

They didn't even apologize for spreading more fud and jumping to the defense of NVIDIA over Maxwell's fake Async Compute support, nor over Maxwell's fake preemption support for VR (it's non-existent), which NV claimed it could do.
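To make the shadow-map scenario quoted above concrete, here is a hypothetical C sketch of how a Vulkan renderer might submit the shadow pass and an independent compute job on separate queues so the GPU can overlap them. The function and handle names are made up for illustration; this is not id's code, and all setup (device, command buffers, semaphore) is assumed to happen elsewhere.

Code:
/* Hypothetical sketch: the bandwidth/rasterizer-bound shadow pass goes on the
 * graphics queue, an independent compute job goes on a separate compute queue,
 * so the hardware is free to fill otherwise idle compute units. */
#include <vulkan/vulkan.h>

void submit_frame_async(VkQueue graphicsQueue,
                        VkQueue computeQueue,
                        VkCommandBuffer shadowPassCmd,
                        VkCommandBuffer asyncComputeCmd,
                        VkSemaphore computeDone)
{
    /* Shadow map pass: mostly fixed-function and memory-bandwidth bound. */
    VkSubmitInfo gfxSubmit = {
        .sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = 1,
        .pCommandBuffers    = &shadowPassCmd,
    };
    vkQueueSubmit(graphicsQueue, 1, &gfxSubmit, VK_NULL_HANDLE);

    /* Compute job on its own queue: no dependency on the shadow pass, so it
     * can run concurrently. It signals a semaphore that later graphics work
     * can wait on if it consumes the results. */
    VkSubmitInfo compSubmit = {
        .sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount   = 1,
        .pCommandBuffers      = &asyncComputeCmd,
        .signalSemaphoreCount = 1,
        .pSignalSemaphores    = &computeDone,
    };
    vkQueueSubmit(computeQueue, 1, &compSubmit, VK_NULL_HANDLE);
}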
 

faseman

Member
May 8, 2009
48
12
76
Yeah... once you enable TSSAA you end up with this..
[benchmark screenshots]

Lol wow look at the Fury X stretch its legs. Nearly double my 970
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
Slight tangent, but your name and magic carpet avatar sort of tie in with DOOM. Magic Carpet was the game that lost out to the standard corridor/ground shooter. It was apparently slower to run on Intel 486 hardware, but when I got my very first computer in 1996 it had a Cyrix 5x86 and ran Magic Carpet very well. And it was INFINITELY better than Doom at the time. You could fly around in free space, and you had varied spells that could not only blast your enemies to bits on the ground or in the sky; you could create castles and volcanoes, fire meteors, and CARVE tendrils into the earth better than any so-called GOD game ever made.

rip magic carpet
rip bullfrog
rip the franchise archetype that never was, because the easier-to-run Doom took over; that, or people have terrible taste.

Rose coloured glasses much? Magic Carpet was an interesting concept but a terrible game compared to Doom. Of course that's just my opinion :)
 

positivedoppler

Golden Member
Apr 30, 2012
1,103
171
106
I feel pretty bad for the 970 owners. Maxwell seemed like a home run when it was first released, but then came the 3.5 GB fiasco, and then the claim of async compute support, which is looking like it will be a critical part of future games.
 
Feb 19, 2009
10,457
10
76
wow, krumme deserves some kind of forum badge for the spot on prediction :D

Brings back memories. On just the first page alone you can see the usual suspects (including me, heh) chiming in with their predictions.

This is why I said that some users here have a constant history of false predictions, so if they say one thing, you can be certain it will turn out opposite/different.

This post is excellent:

http://forums.anandtech.com/showpost.php?p=35536990&postcount=42

I'll go ahead and leave these 2c here for posterity.

This API will only succeed if it's going to be cross-platform. À la OpenGL, this needs to be cross-platform for AMD hardware. Both devs and AMD don't [care] about performance. It is not an important metric for the consumer (i.e. not AT VCG forum members). All AMD and devs care about is revenue: how many GPUs and how many copies are sold. All the consumer cares about: can I play it on my device?

If AMD plays their cards right, they could have their GCN architecture in every gaming device out there: consoles (XB1/PS4/SteamBox) PCs (Windows/MacOS), tablets (Android/iOS/Windows). Maybe even phones. Devs would have to do far less work to get their games working on every one of these devices. Consumers would get their favorite game released on all these platforms, and they'd be able to play on day 1 regardless of what they own, which = more sales for devs.

Play it right AMD. Relegate it to Windows only, and it will die just like Glide.

And I am sure AMD knew it as well, hence they shoved Mantle for free at everybody. It forced Microsoft to act and the rest is history.

This sentiment rings even truer now, with x86/GCN consoles allowing for backwards compatibility in future consoles. It's basically a lock-in for AMD's ecosystem.

This one just made me LOL IRL:

http://forums.anandtech.com/showpost.php?p=35536346&postcount=8

Imagine seeing people with 7870's getting higher FPS than people that bought Nvidia GTX Titan,
the [redacted] storm this could kick up......

There's no backlash though, it's become acceptable to see Kepler taking dirt naps in modern games. "It's old, who cares..", the same fate awaits Maxwell later this year when the next wave of DX12 games hit. You have to ask yourself, why did NV not enable Async Compute in their drivers for Maxwell too if they can do it for Pascal? Why not give some love to Maxwell users?
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Lol. That thread is hilarious:

24601 said:
It's becoming obvious that [Mantle] is bulldozer 2.0

"Wait!!! Keep Waiting!!!!! It'll totally be awesome if you don't believe me you are evil/incompetent/lying/shill!!!!!"

And here we are: the low-level API war is in full force and it's changing PC gaming. Bad speculation is bad. I love how you can read through and find the brand-motivated comments, which ended up being false, and the evidence-based comments, which ended up being surprisingly clairvoyant.

I am certainly glad that it became a cross-vendor thing and not just an AMD proprietary tech.

Doom on Vulkan really completes the circle. OpenGL was junk. DX11 was better but still horribly inefficient. It's about time we stop wasting CPU and GPU cycles on pure, unadulterated waste spent making useless calculations for a bloated API.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
Doom on Vulkan really completes the circle. OpenGL was junk. DX11 was better but still horribly inefficient. It's about time we stop wasting CPU and GPU cycles on pure, unadulterated waste spent making useless calculations for a bloated API.

lol, right..
 

caswow

Senior member
Sep 18, 2013
525
136
116
Explain how anything he said is wrong. You can't.

I think he belongs to the group of people who say that NVIDIA doesn't need DX12 or low-level APIs in general to get good results. The problem is that this isn't true; even NVIDIA cards get boosts from low-level APIs, but because they don't gain as much as AMD cards, people claim NVIDIA already has good utilization. That's true, but there is good and there is better utilization, and async shaders are better than fine-grained preemption, because preemption still isn't asynchronous.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
Explain how anything he said is wrong. You can't.

DX11 and OpenGL aren't garbage when they allow hobbyists to more feasibly write their own rendering engines.

There are trade-offs, and these APIs are heavier for a reason.

There is nothing wrong with trading CPU time for less programmer time, especially if you don't need every last drop of performance.
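As a rough illustration of that trade-off, here is what it takes to get a triangle on screen with old-style OpenGL (a sketch assuming GLFW for the window and context); the same result in Vulkan typically takes hundreds of lines of explicit setup.

Code:
/* Sketch, not from the thread: an old-style OpenGL "hello triangle" using
 * GLFW. Immediate mode is the extreme case of trading CPU time (driver
 * bookkeeping on every call) for programmer time. */
#include <GLFW/glfw3.h>

int main(void) {
    if (!glfwInit())
        return 1;
    GLFWwindow *win = glfwCreateWindow(640, 480, "triangle", NULL, NULL);
    if (!win)
        return 1;
    glfwMakeContextCurrent(win);

    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);            /* the driver validates and batches */
        glVertex2f(-0.5f, -0.5f);         /* all of this for us, at a CPU cost */
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
        glEnd();
        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}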
 

dogen1

Senior member
Oct 14, 2014
739
40
91
I think he belongs to the group of people who say that NVIDIA doesn't need DX12 or low-level APIs in general to get good results. The problem is that this isn't true; even NVIDIA cards get boosts from low-level APIs, but because they don't gain as much as AMD cards, people claim NVIDIA already has good utilization. That's true, but there is good and there is better utilization, and async shaders are better than fine-grained preemption, because preemption still isn't asynchronous.

The word you are looking for is concurrent, not asynchronous.
 

dzoni2k2

Member
Sep 30, 2009
153
198
116
DX11 and OpenGL aren't garbage when they allow hobbyists to more feasibly write their own rendering engines.

There are trade-offs, and these APIs are heavier for a reason.

There is nothing wrong with trading CPU time for less programmer time, especially if you don't need every last drop of performance.

True. But you wouldn't call AAA-title devs hobbyists, would you? That's who thin APIs are primarily aimed at, because those teams push the graphics/performance envelope.
 

dzoni2k2

Member
Sep 30, 2009
153
198
116
Nope, it will get truly hilarious when 3GB 1060 launches and gets recommended over 970 due to better Async Compute capabilities (see Time Spy benchmarks) :)

I thought Nvidia was scrapping the 3GB version? Who in their right mind would release a 3GB card at this level of performance? It will get destroyed in any game that uses more than 3GB.
 

coercitiv

Diamond Member
Jan 24, 2014
6,203
11,909
136
I thought Nvidia was scrapping the 3GB version? Who in their right mind would release a 3GB card at this level of performance? It will get destroyed in any game that uses more than 3GB.
All the needed info is already in the 1060 thread.
 

LTC8K6

Lifer
Mar 10, 2004
28,520
1,575
126
I thought Nvidia was scrapping the 3GB version? Who in their right mind would release a 3GB card at this level of performance? It will get destroyed in any game that uses more than 3GB.

It has to have either 3GB or 6GB with the 192-bit bus (six 32-bit memory channels, so with the 4Gb and 8Gb GDDR5 chips available the options are 3GB or 6GB), and 3GB is what it has to have given its position in the lineup.

It's not for games that will use more than 3GB, though.

It will be great for its intended purpose.

Yes, it will get destroyed when used outside of that intended purpose, just like any video card will when used that way.

I think it will be sold as a 1050 though, and not a 1060.