renderstate
Senior member
- Apr 23, 2016
I thought Sontin was being sarcastic after people went crazy accusing id & NVIDIA of sabotaging the new Doom game on some AMD parts. I can't otherwise make rational sense of what he wrote.
Will it, though? As shown by Digital Foundry and others, the Fury is keeping up with and even surpassing the 8GB 1070 at 4K, even with TSSAA 8x and Ultra settings.
If history is any indication, then yes, it will become a bottleneck. There comes a point where too small a frame buffer hurts performance beyond the usual need to lower details. It happened with the 1GB HD 5870, it happened with the 1.5GB GTX 480/580, and by now everyone here knows it happened to the 2GB GTX 680/770 (which is doubly significant, because the Nvidia fanboys back then used all the same arguments about VRAM to defend the 680 that the AMD fanboys are using today to defend the Fury X), and it's just starting to happen to 3GB cards. Progress marches on.
Interesting excerpt from the Eurogamer interview with the programming team:
Axel Gneiting: We are using all seven available cores on both consoles and in some frames almost the entire CPU time is used up. The CPU side rendering and command buffer generation code is very parallel. I suspect the Vulkan version of the game will run fine on a reasonably fast dual-core system. OpenGL takes up an entire core while Vulkan allows us to share it with other work.
Doom running on a fast dual-core machine, that's something I'd be interested in seeing.
id Tech 6 is the best thing to happen to PC gaming in a very long time. Not tied to an OS/D3D is :thumbsup: :thumbsup: :thumbsup: :thumbsup:
I don't think the industry as a whole ever realized this on its own, so to speak; they were dragged kicking and screaming into the future. Mantle forced Microsoft's hand: they couldn't let the likes of Vulkan be the only thin API out there. It's just a shame it took a whopping five years after GCN was introduced for the PC industry to realize this and start to move forward.
Mantle is turning out to be AMD64 v2.0 in terms of impact. It also proves that Mantle wasn't a worthless investment by AMD, since both DX12 and Vulkan benefited from it. Thus, the entire PC gaming industry owes AMD for pushing the switch to these low-level APIs!
My card cannot do async, and I am bitter
Someone should tell Sony, Microsoft and all those console developers they are doing it wrong. If your hardware takes five years to be fully utilized, you didn't design a forward-looking piece of hardware, you designed a dud.
Oh look, our OpenGL drivers were so rubbish that if we move all the work onto the developers' shoulders, this game gets 50% faster! Wow! What an achievement.
Small developers must be absolutely thrilled. Actually, middleware companies must be thrilled for real, as the entry bar has been raised even further and their products are now even more important.
BTW, why stop here? Let's give access to the bare metal. Also, compilers are for losers; time to write those thousands and thousands of shaders in assembly.
What proof do you have to back up your claims that coding for Vulkan is any more difficult than for OpenGL?
If your hardware takes five years to be fully utilized, you didn't design a forward-looking piece of hardware, you designed a dud.
It was great, then it turns out that it can be 30% faster, now it's a dud. GCN is a *great* architecture.
What is this about AMD's OpenGL drivers sucking? They certainly don't, not in this game at least.
The video I linked does not, as it's just the regular OpenGL test, but you are correct. That does suck, but at the same time it doesn't seem to be holding back performance too much on average, as demonstrated by the video I linked.
Doesn't that video show that CPU time in OpenGL for AMD is 10ms vs 6ms for Nvidia? More than 50% higher overhead kinda sucks, dude.
But I suppose the overall point isn't incorrect.
What proof do you have to back up your claims that coding for Vulkan is any more difficult than for OpenGL?
I'm sure it's just FUD.
When HTML was released, I'm sure it was confusing and a lot of work to learn. Doesn't mean it was some insurmountable task after a programmer learned the language. Vulkan and DX12 will become the standard and second nature for devs, just like any other type of programming that's new.
It's the only OpenGL game where such a bottleneck matters, as far as I can tell. Wolfenstein/RAGE are capped at 60, so CPU performance isn't as important, while Talos Principle has Vulkan and DX paths as well. Although maybe it's the reason I see 30 FPS in KotOR 2 on my 290 at times.
In Doom, AMD cards still hand in fine performance, about where they normally are. Plus, the latest video from Digital Foundry shows the Fury X exceeding 1070 performance at 4K, not merely achieving parity, as might be expected if reduced overhead were solely responsible for the gains.
I'm sure you could produce a game without writing any source code at all. A hello triangle in Mantle is something like 600 lines; Vulkan/DX12 is undoubtedly similar.
In OpenGL, it can be done in a few dozen.
What is this about AMD's OpenGL drivers sucking? They certainly don't, not in this game at least.
Some cards are lower than usual, but all of them are handing in respectable performance: the Fury X beating the 980, and the 390 beating the 970.
Then at 1440p, Fury X > 980 Ti/Titan X.
Digital Foundry: https://www.youtube.com/watch?v=WvWaE-3Aseg
As stated in the video:
970: 93.9
390: 90.9
As you can see, that's a 3% lead, not a 30% one; 30% is the performance uplift granted to the 390 by using Vulkan. Unfortunately for the 970, it only gains around 3% itself.
You can load up the CPU to create a bottleneck more easily on AMD hardware in DX11. That's what Project Cars does with CPU PhysX.