Wow, I missed one symbol and you can't answer, even though you got the idea of what I'm saying.
Shaders? Nope. Shaders still get translated and get swapped out for others if an IHV wants it.
What the driver can't do is optimize memory management and other things; that is up to the developer. No improvement under DX12 points to sloppy work. The nVidia card isn't under full load, which doesn't make any sense for an engine designed for a low-level API. Look at the results from 1080p to 4K: the nVidia hardware loses only 33% while the pixel count quadruples (from ~2.07 million to ~8.29 million pixels).
"We don't know anything about Pascal's capabilities on DX12 at all. The only thing we have seen so far is some SP and DP performance claims, and that's all."

I think you are vastly overestimating how much the typical PC gamer cares. The minority of dGPU owners who even look at game benchmarks overlaps to some (hopefully large) degree with people who know enough to understand why AMD cards are aging better as DX12 games become common.
Regardless, your argument only applies in this very narrow window of time we are in currently, and Tomb Raider may be the only real victim.
Nvidia itself will make maxwell look bad with pascal since it will no doubt offer real async compute support (unless you believe nvidia is completely oblivious to the direction real time graphics are headed in). The type of person who buys the latest and greatest hardware is probably also more likely to be a vocal fanboy. If devs really need to keep nvidia fanboys happy, wouldn't the GTX 1070 and 1080 folks be the ones that matter?
Nvidia is already "sabotaging" kepler performance vs maxwell and it isn't hurting them one bit. Why do you assume gameworks dx12 games in 2017 won't have async stuff that cripples maxwell cards? Nvidia's biggest competitor at this point is old nvidia cards. Pushing new tech (of questionable value) to entice upgrades helps their business.
Axix,
Sure, they said this. But where do they say that they will use the modified branch?
Obviously they are trying to talk their way out of the marketing deal. The fact is they have only promoted their company, their engine, and this game with AMD, and they use only AMD hardware to program it. Sure it will run great on that hardware. And it is time-consuming to optimize for other hardware when you have never cared about it.
Does this game
"We believe our code is very typical of a reasonably optimized PC game." - Anno2205 looks much better and runs twice as good on nVidia hardware with DX11. Even games like the Total War series run better and can display hundreds of units. Marketing at best.
Why would anyone promote with Nvidia when their current hardware isn't built for modern APIs? AMD is the only horse in the race built for the future. I thought that part has been made crystal clear to you several times by several people over several weeks now.
Displaying hundreds of units? AoTS can do thousands, all of which have AI down to the per-turret level and produce individual lights. It's not even possible to do that in DX11 without a guaranteed slideshow.
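To make that claim concrete: in D3D12 an engine can record per-unit draw calls across many threads and submit them in a single call, which is what keeps thousands of individually driven units from turning into a CPU-bound slideshow. Below is a minimal, hypothetical C++/D3D12 sketch of that pattern; the Unit type, helper names, and thread setup are illustrative assumptions, not Oxide's Nitrous code, and per-list state setup (root signatures, render targets, PSOs) is omitted.

```cpp
// Hypothetical sketch of parallel draw-call recording in D3D12 (not Nitrous code).
// Assumes each command list in threadLists was reset against its own allocator
// and already has its pipeline state, root signature, and render targets bound.
#include <d3d12.h>
#include <algorithm>
#include <thread>
#include <vector>

struct Unit { /* per-unit transform, material, light, ... (illustrative) */ };

void RecordSlice(ID3D12GraphicsCommandList* cl, const Unit* units, size_t count)
{
    (void)units; // per-unit root constants / descriptors would be set here
    for (size_t i = 0; i < count; ++i)
        cl->DrawInstanced(36, 1, 0, 0); // one draw per unit (placeholder geometry)
    cl->Close();
}

void SubmitScene(ID3D12CommandQueue* queue,
                 std::vector<ID3D12GraphicsCommandList*>& threadLists,
                 const std::vector<Unit>& units)
{
    if (threadLists.empty())
        return;

    const size_t numThreads = threadLists.size();
    const size_t perThread  = (units.size() + numThreads - 1) / numThreads;

    // Each worker thread records its slice of the scene into its own command list.
    std::vector<std::thread> workers;
    for (size_t t = 0; t < numThreads; ++t) {
        const size_t begin = std::min(t * perThread, units.size());
        const size_t end   = std::min(begin + perThread, units.size());
        workers.emplace_back(RecordSlice, threadLists[t],
                             units.data() + begin, end - begin);
    }
    for (auto& w : workers)
        w.join();

    // One submission for thousands of draws recorded in parallel. DX11's
    // immediate context has no comparably cheap, parallel submission path.
    std::vector<ID3D12CommandList*> lists(threadLists.begin(), threadLists.end());
    queue->ExecuteCommandLists(static_cast<UINT>(lists.size()), lists.data());
}
```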
I think that by the time async compute is in enough games to be important, nvidia will have launched its full pascal line, except maybe the very low end.
They do not give an ETA because they will never be able to enable async compute on Kepler and Maxwell.

http://www.anandtech.com/show/10067/ashes-of-the-singularity-revisited-beta/6
"Update 02/24: NVIDIA sent a note over this afternoon letting us know that asynchornous shading is not enabled in their current drivers, hence the performance we are seeing here. Unfortunately they are not providing an ETA for when this feature will be enabled."
"I think that by the time async compute is in enough games to be important, nvidia will have launched its full pascal line, except maybe the very low end."

As the new benches show, it's not about async; async only makes a small difference.
"Displaying hundreds of units? AoTS can do thousands, all of which have AI down to the per-turret level and produce individual lights. It's not even possible to do that in DX11 without a guaranteed slideshow."
As the new benches show, it's not about async; async only makes a small difference.

It's about what despoiler said: this here is the computation killer, and it's why AMD, with the higher TFLOPS, gets better numbers. But you have to ask yourself how many game devs are going to go through this kind of trouble of displaying thousands of units with AI and lighting and all that stuff. This kind of work will raise development costs by a fair amount, and I doubt consoles could even pull it off, so doing this only for PC games? Yeah, right!
They don't need to give an ETA because Oxide has sabotaged their DX12 performance. Async will not change this. The whole path is just bad.
Reviewers need to stop using vaporware and a biased, paid benchmark to make such false claims. But I guess we need to wait for neutral games like Quantum Break and Fable Legends to put Oxide to rest.
It is like using 3DMark to determine the performance of the cards for real games.
Anno 2205. Oh, and this game looks and runs much better than this "DX12 showcase" benchmark.
"But I guess we need to wait for neutral games like Quantum Break and Fable Legends to put Oxide to rest."
"Only Radeon GPUs built on the GCN Architecture currently have access to a powerful capability known as asynchronous compute, which allows the graphics card to process 3D geometry and compute workloads in parallel."

"AMD's advanced Graphics Core Next (GCN) architecture, which is currently the only architecture in the world that supports DX12's asynchronous shading."
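For anyone unclear on what that marketing claim refers to at the API level: in D3D12 the application creates a compute queue alongside the normal direct (graphics) queue and submits work to both; whether the two streams actually overlap on the GPU is up to the hardware and driver. Here is a minimal, hypothetical C++ sketch, not Oxide's code; the function name and the fence usage are illustrative, and the command lists are assumed to be recorded and closed elsewhere.

```cpp
// Hypothetical sketch of app-side "async compute" in D3D12 (not Oxide's code).
// gfxList must be recorded on a DIRECT-type list, computeList on a COMPUTE-type
// one; both are assumed closed and ready to execute.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitFrame(ID3D12Device* device,
                 ID3D12CommandList* gfxList,
                 ID3D12CommandList* computeList)
{
    // A DIRECT queue accepts graphics, compute, and copy work; a COMPUTE queue
    // accepts compute and copy work only.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // With no fence between the two submissions, the GPU is free to run the
    // compute work concurrently with the graphics work (if it can).
    gfxQueue->ExecuteCommandLists(1, &gfxList);
    computeQueue->ExecuteCommandLists(1, &computeList);

    // A fence is only needed where one queue's results feed the other.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    computeQueue->Signal(fence.Get(), 1);
    gfxQueue->Wait(fence.Get(), 1); // graphics work queued after this waits on compute
}
```

(In real engine code the queues would be created once at startup; they are created inline here only to keep the sketch self-contained.)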
My question to you, since you're so actively hostile against Oxide because their DX12 implementation according to you is unfair... what if similar results are observed for other games?
There are other hardware features, like CR, ROV, etc., which will improve performance. Why is Oxide not using them to realize better graphics? Something such as Async Shading is not possible for NV GPUs; they gain no performance from enabling it.
Why not? Will you do the same? Will you come here to apologize to everyone you have slurred this past year?
"More than draw calls"? What exactly is this engine doing which is not improving the performance on nVidia hardware at all? There is no difference between Star Swarm and Ashes. Both showing a huge amount of units with lighting effects. And yet even in 720p Ashes doesnt scale on nVidia hardware and result in worse performance than Anno2205 with nearly ~1million citizens.Think back to Starswarm, it wasn't long ago that their DX12 drawcall implementation showed NV GPUs in a superior light, you happily used that to your agenda. Well, here's the thing, DX12 games are more than just raw draw calls.
"Only Radeon™ GPUs built on the GCN Architecture currently have access to a powerful capability known as asynchronous compute, which allows the graphics card to process 3D geometry and compute workloads in parallel."
@sontin
If I am wrong about Async Shading and NV GPUs can in fact do it on their hardware, then I will apologize, definitely.
Will you?
"I suggest, until NV has proven otherwise, you calm down with your smearing of developers who have gone above and beyond with providing full source code and even push DX12 multi-adapter, so NV GPU + AMD GPU can work together. If you really care, why don't you ask NV why they don't refute the claims from AMD? It's a public claim after all."

Why should nVidia do something like this? They are not responsible for the DX12 implementation.
"ps. Starswarm had no dynamic lights whatsoever. Many projectiles weren't even rendered. It was designed as a drawcall bottleneck, said so by their designers. In many ways, it's similar to the 3DMark DX12 drawcall benchmark."

No, it was designed to showcase the need for a low-level API:
http://www.oxidegames.com/star-swarm/

Star Swarm is a real-time demo of Oxide Games’ Nitrous engine, which pits two AI-controlled fleets against each other in a furious space battle. Originally conceived as an internal stress test, Oxide decided to release Star Swarm so that the public can share our vision of what we think the future of gaming can be. The simulation in Star Swarm shows off Nitrous’ ability to have thousands of individual units onscreen at once, each running their own physics, AI, pathfinding, and threat assessments.
Nitrous uses the power of its proprietary SWARM (Simultaneous Work and Rendering Model) technology to achieve incredible performance on modern, multi-core computer hardware. SWARM allows Nitrous to do things previously thought impossible in real-time 3D rendering, like Object Space Lighting — the same techniques used in the film industry — and having thousands of unique individual units onscreen at once.
http://www.oxidegames.com/directx-12/

Q. So…translation, please?
A. DirectX 12 basically gives you more performance without having to swap out for new hardware. Trust us, it’s a good thing!
Looks like this got lost between Star Swarm and Ashes.
Fun FACT of the day: Async Compute is NOT enabled on the driver-side with public Game Ready Drivers. You need app-side + driver-side!
Async compute is currently forcibly disabled on public builds of Ashes for NV hardware. Whatever performance changes you are seeing driver to driver doesn't have anything to do with async compute.
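In practice the app-side half of that switch can be as small as a vendor check that decides whether a dedicated compute queue is created at all. A hypothetical sketch of such a toggle follows; this is not Oxide's actual mechanism, and the names and the vendor-ID policy are purely illustrative.

```cpp
// Hypothetical app-side async-compute toggle (not Oxide's actual code).
// If no compute queue is created, the engine records its compute passes into
// the direct queue's command lists, so they serialize with the graphics work
// instead of potentially overlapping with it.
#include <d3d12.h>
#include <dxgi.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

constexpr UINT kVendorIdNvidia = 0x10DE; // PCI vendor ID for NVIDIA

bool AllowAsyncCompute(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    // Illustrative policy only: keep async compute off on NVIDIA hardware until
    // the driver side exposes it, leave it on elsewhere.
    return desc.VendorId != kVendorIdNvidia;
}

struct Queues {
    ComPtr<ID3D12CommandQueue> direct;   // graphics + compute + copy
    ComPtr<ID3D12CommandQueue> compute;  // stays null when async compute is disabled
};

Queues CreateQueues(ID3D12Device* device, bool asyncAllowed)
{
    Queues q;
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&q.direct));

    if (asyncAllowed) {
        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&q.compute));
    }
    return q;
}
```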
I can confirm that the latest shipping DX12 drivers from NV do support async compute. You'd have to ask NV how specifically it is implemented.