
Quantum Break: More like Quantum Broken.

I remember you said on Overclock.net that you worked for AMD recently and then left.

That's patently false, and a lie. I worked for ATi back in 2002–2004, then for HP, Compaq, Dell, and then Bell.

I have never worked for AMD. That is patently false.
 
Not sure why some people are surprised by these results...again. At least for the current generation, AMD hardware is at an advantage in newer games especially those with DX12. AMD has also been much better at releasing improved drivers and hot fixes over the last year further improving performance.
 
See the Neogaf QB performance thread. There are so many posts and complaints. Even the MS rep and Phil responded that they are aware of the issues and are monitoring them. They said in that thread that the VSync fix will not come until mid-May.

Do you have a link to that specific quote? You quoted it, so you should have a link, no?
 
See the Neogaf QB performance thread. There are so many posts and complaints. Even the MS rep and Phil responded that they are aware of the issues and are monitoring them. They said in that thread that the VSync fix will not come until mid-May.
Yes, there are tons of posts on Neogaf complaining, because Neogaf mostly uses Nvidia cards and Nvidia cards suck at DX12.

So yes, you're right: Nvidia has issues in DX12 games like Quantum Break, which leads to people complaining. So yes, Nvidia should fix their async performance.

Why does Nvidia suck so much at compute desperado?
 
Yes, there are tons of posts on Neogaf complaining, because Neogaf mostly uses Nvidia cards and Nvidia cards suck at DX12.

So yes, you're right: Nvidia has issues in DX12 games like Quantum Break, which leads to people complaining. So yes, Nvidia should fix their async performance.

Why does Nvidia suck so much at compute desperado?
CUDA, of course.
 
From QB rep.

" While the 390 is the faster card running that game - there is a problem there with VSync where the 970 is just missing the multiplier and the 390 is just hitting it - resulting in a massive but artificial performance differential"

Ouch.
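The "missing the multiplier" point is just VSync quantization: with strict double-buffered VSync on a 60 Hz display, any frame that takes longer than one refresh interval waits for the next refresh, so the displayed rate snaps down to 60/n. A rough sketch of that arithmetic (the frame rates below are illustrative, not the actual benchmark numbers):

```python
import math

def vsync_locked_fps(raw_fps, refresh_hz=60.0):
    """Effective FPS under strict double-buffered VSync.

    Each frame is held until the next vertical refresh, so the
    displayed rate snaps down to refresh_hz / n for integer n.
    """
    if raw_fps >= refresh_hz:
        return refresh_hz
    frame_time = 1.0 / raw_fps
    refresh_interval = 1.0 / refresh_hz
    # Number of whole refresh intervals each frame spans.
    n = math.ceil(frame_time / refresh_interval)
    return refresh_hz / n

# A card rendering just under 60 fps drops all the way to 30,
# while one just over 60 holds the full 60 -- a large but
# "artificial" gap between two otherwise close cards.
print(vsync_locked_fps(58))  # 30.0
print(vsync_locked_fps(62))  # 60.0
```

This is why a card barely missing the multiplier can look dramatically slower than one barely hitting it, even when their raw throughput is nearly identical.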
 
What I want, and am asking for, is a link to the quote you posted here. I am becoming increasingly curious as to why you don't appear to want to share it with us.

The forum in question is filled with complaints. I don't have the time to filter through all of the individual posts for context. So all I'm looking for is a link.
 
What I mean to say is that you are seeing the effect of a low-level API. There are too many complicated systems and different setups, which causes problems. On some benchmark sites you will see the GTX 980 Ti and Fury X on par with each other, while on other sites one leads the other by a large margin; the same goes for the R9 390 and GTX 970.


People think that DX12 is very easy to code. It is not, and it will take 10X more effort and 10X more time to port a stable, good DX12 game compared to DX11.
So what have the console guys been using all this time? I thought it was low level apis.

 
So what have the console guys been using all this time? I thought it was low level apis.

Bingo.

Just don't let the Unity or Unreal devs get hold of a low-level API; they will mess it up big time.

Console devs should be very used to it; they were always closer to the metal on that platform, since the beginning really.
 
So every benchmark for UWP is fake?

There's a distinction that you should be aware of.

http://www.pcper.com/reviews/Graphics-Cards/PresentMon-Frame-Time-Performance-Data-DX12-UWP-Games

Current overlays aren't compatible with DX12 & UWP: the combination means the OSDs in Afterburner, Fraps, and whatever else hooks into the DX11 engine fail to read it accurately, or at all.

Only Intel's PresentMon works, but even it isn't 100% accurate, as it monitors at a different position in the rendering path.

Until there's a better software tool to analyze it, or the game itself has a built-in benchmark or built-in FCAT support (Ashes has both!), the most accurate method remains actual video capture.
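Since PresentMon just logs per-present timing to CSV, the analysis side is easy to do by hand. A minimal sketch of summarizing a capture (assumes the `MsBetweenPresents` column name that PresentMon builds emit; the sample data here is synthetic):

```python
import csv
import io

def summarize_frametimes(csv_text):
    """Average FPS and worst-case frame time from PresentMon-style CSV.

    Assumes an 'MsBetweenPresents' column; column names may vary
    across PresentMon versions.
    """
    rows = csv.DictReader(io.StringIO(csv_text))
    times = sorted(float(r["MsBetweenPresents"]) for r in rows)
    avg_fps = 1000.0 / (sum(times) / len(times))
    p99 = times[min(len(times) - 1, int(len(times) * 0.99))]
    return avg_fps, p99

# Tiny synthetic capture: three 16.7 ms frames and one 33.4 ms hitch.
sample = "MsBetweenPresents\n16.7\n16.7\n16.7\n33.4\n"
fps, p99 = summarize_frametimes(sample)
print(round(fps, 1), p99)  # 47.9 33.4
```

Averages alone hide the hitching, which is why frame-time percentiles (and, for UWP titles, raw video capture) matter more than an FPS counter.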
 
Oh look...

That same poster retracted his statement..

"Seriously ignore it 😛 we have no way of knowing if they are actually missing the 30fps target as the VSync implementation is doing some really nasty things - the frametime graph highlights it - I'm surprised from the quick skipping through of the video I did that the DF people didn't pickup on it more or maybe I missed it. The whole test is broken and nothing more than a highlight of how bad MS and the UWP have become for PC gaming (and that nVidia are dropping the ball on their drivers)."

He based his views on the frametime graph which we know is wonky for reasons SilverForce already mentioned. UWP is not compatible with frametime metrics.
 
That i do not know.

But Nvidia supports Async correct? I've read it in their spec pages.

So why doesn't Nvidia update their drivers to utilize the Async they support?

What's your theory on why Nvidia continues to show regressions in Async performance despite supporting it? Do you think Nvidia should remove Async support from their spec page? Do you think that perhaps if Nvidia removed Async support from their spec pages, devs would use LESS async since both vendors don't support it now?
 
NVIDIA GPUs are just horrible at DX12. They lose performance relative to DX11 in every single title except for a few minute cases when a weak CPU is used.

Not really - http://www.dsogaming.com/pc-performance-analyses/hitman-pc-performance-analysis/

As we can see, and contrary to other titles, the benefits of DX12 are easily noticeable in HITMAN. In DX12, our GTX980Ti was used to its fullest during the built-in benchmark, and we got an average framerate of 88fps. For comparison purposes, the benchmark ran with 78fps in DX11.

Not saying nVIDIA is great in DX12, but it can get a boost here and there. Funny enough, this particular boost is in an AMD-sponsored game, one that is supposed to make heavy use of Async Shaders. 😀
 
Why does Nvidia suck so much at compute desperado?

Because they make GPUs for PCs, which are actually capable of running games and have no need for the GPU to run the whole game.
Look at that: even the i3-2100 hits a 56 FPS average, and that's with SLI, which is degrading performance.
[image: QB_proz.jpg]

And not only that, but the ancient dual core runs at only 60% usage.
[image: QB_intel.jpg]


For anyone who has complained about AI and degrading gameplay until now, the party is just starting: new games will be graphics pr0n with a minimum of gameplay.
 