Quantum Break GPU test (Gamegpu.com)


dogen1

Senior member
Oct 14, 2014
739
40
91
Probably. However, I think choosing MSAA, which is one of the less efficient forms of AA with today's rendering techniques, to save GPU resources is beyond me. I mean, they could have opted for a more efficient form of AA such as SMAA rather than MSAA if their intention was to save fillrate (and GPU resources) and increase quality in other areas.

That's partially why they used a light pre-pass renderer. Read the paper in that link from before if you want more information. There's a lot of detail on how the MSAA samples are used.

Anyway, they're not just flipping switches and hoping they work, you know. Engine programmers spend a lot of time researching, planning, and testing these things. If MSAA weren't a good choice, they most likely wouldn't have used it.
 
Last edited:

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
7,400
2,437
146
Pity so many games these days seem so broken. Hopefully they will at least put DX12 back in for those who want it; there's no reason not to give gamers the choice.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
I guess I'm just missing where they need to upscale. They reproject the previous frames and use a subpixel jitter to produce the additional samples needed for the higher resolution.

What you described in the latter half of this paragraph is quite literally upscaling. The fact that they do upscaling via reconstruction doesn't mean it isn't upscaling. There are dozens of different upscaling algorithms, and this is just another one.

Remedy themselves call this temporal upscaling.
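
If it helps to see what "reconstruction as upscaling" means in practice, here is a minimal toy sketch (my own illustration in Python/NumPy, not Remedy's actual pipeline): each low-res frame is rendered with a different subpixel jitter and scattered into a higher-res accumulation buffer. The frame sizes, jitter sequence, nearest-pixel scatter and blending are all simplifying assumptions; a real engine reprojects with motion vectors and rejects stale history.

import numpy as np

# Toy temporal upscaling: accumulate jittered low-res frames into a 2x target.
# Purely illustrative -- sizes, jitter pattern and blending are assumptions.
LOW_W, LOW_H, SCALE = 4, 4, 2                                # 4x4 source -> 8x8 target
JITTERS = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]   # subpixel offset per frame

history = np.zeros((LOW_H * SCALE, LOW_W * SCALE))           # high-res accumulation buffer
weight = np.zeros_like(history)

def render_low_res(frame_idx):
    """Stand-in for the renderer: a jittered low-res frame of made-up values."""
    rng = np.random.default_rng(frame_idx)
    return rng.random((LOW_H, LOW_W))

for frame, (jx, jy) in enumerate(JITTERS):
    lowres = render_low_res(frame)
    for y in range(LOW_H):
        for x in range(LOW_W):
            # Scatter each jittered sample to the nearest high-res pixel.
            hx = int(round((x + jx) * SCALE))
            hy = int(round((y + jy) * SCALE))
            if hx < history.shape[1] and hy < history.shape[0]:
                history[hy, hx] += lowres[y, x]
                weight[hy, hx] += 1.0

upscaled = history / np.maximum(weight, 1.0)
print(upscaled.shape)   # (8, 8): more output pixels than are rendered in any single frame

The point is simply that the extra pixels come from samples spread across frames, which is why it still counts as upscaling rather than native-resolution rendering.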
 

Piroko

Senior member
Jan 10, 2013
905
79
91
RX 480 vs GTX 1060 from another youtuber.
https://www.youtube.com/watch?v=DL7OknOJZJg
I see the following:
Nvidia still defaults to the reduced color range when outputting over HDMI.
There seems to be a slight difference in how reflective certain textures are. The ceiling in those screenshots has harsh reflections on the Nvidia side and soft reflections on the AMD side. There's also a positional difference in one light source in the Twitter pics, where it's located below the desk in the AMD screenshots and behind the desk in the Nvidia ones. Can't tell which one is accurate. Probably a knocked-over lamp?
In the video at timestamp 1:48 there's a difference in the strength of light beams, or at least that's what I think. There seem to be slightly brightened spots on the floor, and there's a hint of a light beam on the wall right next to the window on the AMD side, so I'm assuming the light beams themselves are there. Same for the third Twitter pic: the light beams are there, only much more subtle.
In the video at timestamp 2:02 there's an issue with the door lock shimmering on the Nvidia side and not on the AMD side. Seems like the temporal AA is tuned differently. Texture quality of those door handles is also off? Also, lol @ door not opening.
 
Last edited:

PowerK

Member
May 29, 2012
158
7
91
That's partially why they used a light pre-pass renderer. Read the paper in that link from before if you want more information. There's a lot of detail on how the MSAA samples are used.

Anyway, they're not just flipping switches and hoping they work, you know. Engine programmers spend a lot of time researching, planning, and testing these things. If MSAA weren't a good choice, they most likely wouldn't have used it.
Just like their DX12 version, huh?
No matter how you slice it, I sincerely think forcing 4xMSAA in a console-focused game is a bad design approach when more efficient and compatible AA methods are available.
 

SPBHM

Diamond Member
Sep 12, 2012
5,056
409
126
Their approach is probably well suited to the Xbone hardware, I mean the small ESRAM + not-so-fast DDR3 memory and so on.
 
  • Like
Reactions: Headfoot

Leadbox

Senior member
Oct 25, 2010
744
63
91
More than 1000 words:
UquanuI.jpg

CtjHt19WcAArOZA.jpg

Woah! Is there a video to accompany those screenies? You can see where the 1060's fps "advantage" comes from. I hope that isn't being presented as a valid performance comparison.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
Just like their DX12 version, huh?
No matter how you slice it, I sincerely think forcing 4xMSAA in a console-focused game is a bad design approach when more efficient and compatible AA methods are available.

Then you didn't read the presentation. MSAA is an important part of their pipeline, and they would have to restructure it to use anything else.

Anyway, their MSAA is even cheaper than normal since they downsample the MSAA'd geometry buffer before lighting is done.
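
To make that cost argument concrete, here is a rough toy sketch (Python/NumPy, my own illustration based on that description, not Remedy's code): if the 4xMSAA geometry buffer is resolved down to the output resolution before lighting, the expensive lighting pass runs once per pixel instead of once per sample. The fake G-buffer contents and the light() stand-in are assumptions.

import numpy as np

# Toy comparison of lighting cost: resolve the 4xMSAA G-buffer before lighting
# vs. lighting every sample. Only the ratio of lighting invocations matters here.
W, H, SAMPLES = 1280, 720, 4

gbuffer_msaa = np.random.rand(H, W, SAMPLES)   # pretend G-buffer, one value per MSAA sample

def light(values):
    """Stand-in for a (much more expensive) lighting evaluation."""
    return values * 0.5 + 0.1

# Option A: light every MSAA sample, then resolve.
lit_a = light(gbuffer_msaa).mean(axis=2)
cost_a = W * H * SAMPLES                       # lighting invocations

# Option B (what the presentation describes): resolve first, light once per pixel.
lit_b = light(gbuffer_msaa.mean(axis=2))
cost_b = W * H

print(f"lighting work ratio: {cost_a / cost_b:.0f}x")   # 4x fewer lighting evaluations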
 
Last edited:

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91

nurturedhate

Golden Member
Aug 27, 2011
1,742
673
136
More than 1000 words:
UquanuI.jpg

CtjHt19WcAArOZA.jpg


Massive difference in quality there. No wonder the 1060 performs better. Maybe we need to go back 15 years and start comparing IQ settings between vendors again. If those are the same settings, something is incredibly fishy.
 

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
Those images show DX11 for both. It should look basically identical.

He was asking for the source videos that the tweets and screenshots are based on, so I provided it. I don't think anyone has done a side-by-side DX11/DX12 IQ comparison yet. I know I'm not going to shell out the money for the Steam version as well just to do one :)
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
That comparison has got to be messed up. The FoV on the 1060 one is much narrower. Different configurations?
 

dogen1

Senior member
Oct 14, 2014
739
40
91
I don't understand the direction Remedy took with their rendering technology and artistic approach.
Their so-called "temporal reconstruction" (aka upscaling) seems to be another way of saying they failed at optimization, hence the 1280x720 base resolution.

I bought the Steam version (DX11) over the weekend. I knew the game was a mess, but I wanted to check it out.
Whether upscaling is ON or OFF, the image is blurry. Upscaling OFF actually does provide a cleaner look, but not by much (unless you take screenshots and do an ON/OFF comparison).
Upscaling on = blurry
Upscaling off = less blurry.

Turning off (temporal) anti-aliasing should help with the blurriness.
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
So, more bugs on Nvidia cards and fewer bugs on AMD cards? Any news about the upcoming patch? I saw light coming from the roof on the GTX 1060 (assuming there is no lamp there). That shouldn't be possible.
 
Last edited:

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
That's because they use whatever API they saw was faster

DX11 hasn't been faster on NVidia in AotS for a while now, so that can't be it. If anyone here is a native German speaker, I'd love to know their reasoning for using DX11 for NVidia and DX12 for AMD in AotS.
 

nurturedhate

Golden Member
Aug 27, 2011
1,742
673
136
Both are buggy. Lighting doesn't appear right on either side for those parts.
Things really need to be tested further on the Nvidia side. We've seen IQ issues and effects not being rendered properly in other games such as AotS, and now this.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
DX11 hasn't been faster on NVidia in AotS for a while now, so that can't be it. If anyone here is a native German speaker, I'd love to know their reasoning for using DX11 for NVidia and DX12 for AMD in AotS.

A lot of the time Computerbase.de's choice of API seems to be based more on assumptions than on reality, or at the very least they are very slow to update their choices over time.

That being said, they only use DX11 for Kepler, Maxwell and the GTX 1060 in that graph; the 1070 and the 1080 are using DX12 in AotS.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
@antihelten, thanks for the clarification! I guess the reason the 980 Ti is faster than the Fury X in HWC's review is the settings, which use more VRAM.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Joker's testing at 1440p on a GTX1080 reveals that turning just 2 settings -- (1) Volumetric Lighting from Ultra to Medium and (2) Screen Space Reflections to Off -- boosted performance from 34 fps to 58 fps (+71%).
1:48-2:20 min mark: https://www.youtube.com/watch?v=QTzsnm-PzT0

Remember how you stated during the GTX680 era that it would last throughout the entire PS4/XB1 generation? Then when the OG Titan came out, I believe you said that the $1,000 OG Titan would easily last throughout this entire generation. I completely disagreed with both of those assertions. When testing Forza Horizon 3, Digital Foundry found that with an i7 5820K ($390 CPU) and a GTX970 ($330 GPU), they had to lower certain IQ settings below the Xbox One S's and get rid of 4xMSAA entirely just to hit 60 fps at 1080p. Conversely, the Jaguar + HD7790-powered $200 Xbox One ran Forza Horizon 3 at roughly High-quality PC settings with 4xMSAA, albeit at 30 fps.

MS just announced that Gears of War 4 will have an 11GB (!) day 1 patch for physical disc owners.

The reason I am bringing this up is that, just like the last generation of consoles and the generation before that, this generation will continue to produce horribly optimized console-to-PC ports.

So what's your point? There'll always be poorly optimized titles, regardless of whether you play on consoles or PC. Nothing new about that. In the majority of multiplatform titles, you get a better gaming experience playing on PC than on consoles. There's no cure for developer incompetence, unfortunately. And most console titles still run at 30 FPS. 30 FPS is acceptable on consoles, but it has never been so for PC. Optimizing a game for 30 FPS is much easier than doing so for 60 FPS.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
The 'high-end' PC forums are so isolated from the reality of PC gaming nowadays that it's almost a disservice to the 90% of PC gamers. If a console port doesn't run well on a $200 i5-6500 and 2-3 year old $400-700 GPUs, what's the big deal? Just get a $350-$400 i7 6700-6800K and a $400-700 GTX1070/1080. "Everyone is doing it." If you aren't upgrading your GPU every 2 years, you are just not PC enthusiast enough, or are poor. Since PC gaming is a luxury, no one forces you to buy a $700 GPU and a $350 CPU to max out console ports (but if you start to complain about or criticize poorly optimized PC games, you aren't a PC enthusiast, since you're expected to brute-force your way through unoptimized turds with expensive shiny new PC hardware! The PC Master Race way!).

Then in the next 1-2 years, when the 1070/1080 (& Vega) are struggling to hit 60 fps at 1080p in 2017-2018 PS4/Xbox One games, who cares, just buy a $400-700 2018 Volta (to play AAA console ports!). If a 2016 Xbox One console port struggles at 1080p on a GTX970/390/780Ti/R9 290X, it's time to upgrade!!! Clearly, $300-700 2-3 year old GPUs with 3-4X the horsepower of an HD7790 are peasant PC hardware. The horribly unoptimized game engines and/or console-to-PC ports have little to do with it. /facepalm

During the Xbox 360 and PS3 era (which was perhaps the longest console cycle ever), most games could easily be maxed out (or near maxed) with midrange hardware. I remember I had my GTX 580 SLI setup for about 3 years during those days, which is very rare for me. I didn't see much use in upgrading, as the games were so easy to max out. But when the PS4 and Xbox One hit the deck, suddenly PC gamers required more powerful hardware to get an above-console experience.

But one could argue that is how it should be. This is the normal consequence of technological advancements in the gaming industry and PC hardware market. Take VRAM for instance. During the Xbox 360/PS3 days, one could get away with having a GPU with only 1GB of VRAM, with 2GB perhaps being optimal. But with the PS4 and Xbox One, you need perhaps a minimum of a 3GB framebuffer in most titles to use the same textures as the consoles, and depending on whether the developer uses higher grade textures for the PC version, you could require as much as 8GB for the framebuffer.

Is this an issue of optimization? No it isn't. It's just a consequence of technological advancement. That's what you don't seem to understand, RS. When it comes to bad game performance, not everything can be explained by poor optimization. Same thing with Kepler. NVidia misjudged the technological trajectory of the gaming market, and as a result, Kepler lost out big time compared to GCN due to its much weaker compute abilities. And yet many here like to blame NVidia for ceasing to optimize their drivers for Kepler, as though that could have prevented anything.

Generally speaking though, games are a lot more complex and sophisticated than they used to be, and thus require more hardware to get an "above console" experience. That's perhaps the biggest reason why people turn to PC gaming, alongside customization and modding.

The good news, though, is that DX12 should definitely help to lengthen the hardware cycles, as developers will be able to squeeze more performance out of the hardware than before. But when the Xbox Scorpio hits the deck, there will probably be another upheaval :grinning:
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
So what's your point? There'll always be poorly optimized titles, regardless of whether you play on consoles or PC. Nothing new about that. In the majority of multiplatform titles, you get a better gaming experience playing on PC than on consoles. There's no cure for developer incompetence, unfortunately. And most console titles still run at 30 FPS. 30 FPS is acceptable on consoles, but it has never been so for PC. Optimizing a game for 30 FPS is much easier than doing so for 60 FPS.

My point is that this is yet another example of a poorly optimized and broken console-to-PC port. Sure, it has great graphics, but the level of PC hardware it requires to match a $200 console is cringe-worthy. Just because a game is gorgeous looking does not give it a waiver from being labelled poorly optimized when you consider the context. My other point was that despite you consistently claiming that console hardware has little to no benefit over PC parts when it comes to extracting low-level performance, time and time again this has been proven wrong this generation. First, while an i3 + GTX750Ti could play most XB1/PS4 games, generally an i5 and a GTX950/960 are needed just to provide similar IQ/performance to the XB1/PS4. What makes Forza Horizon 3 such a startling stand-out example is the horrendous level of performance on a PC with an i7 + GTX970/R9 390 at 1080p, given the gigantic gulf in hardware advantage.

The other point you made, that optimizing a game for 30 fps is much easier, is unproven. First, at 1080p with 4xMSAA, neither the 970 nor the 390 can provide a locked 30 fps; the XBOne otoh manages to more or less do just that with a GPU 3X slower. Second, there is little doubt that once the XB Scorpio comes out, this game may even run at 4K, or at 1080p 60 FPS, something that today requires a $700-1200 1080/Titan XP.

During the Xbox 360 and PS3 era (which was perhaps the longest console cycle ever), most games could easily be maxed out (or near maxed) with midrange hardware. I remember I had my GTX 580 SLI setup for about 3 years during those days, which is very rare for me. I didn't see much use in upgrading, as the games were so easy to max out.

Do you realize that NV themselves rated GTX580 as 9X faster than the GPU inside PS3?

nvidiagraph3.jpg


Assuming 50-80% SLI scaling, your setup was anywhere from 9X (no SLI scaling) to ~16X faster than the PS3. There is nothing special about $1000 USD of 580 SLI destroying 90%+ of 2010-2011 console ports, considering the Xbox 360/PS3 came out in 2005/2006. I am not sure why you bring up GTX580 SLI being adequate for end-of-generation Xbox 360/PS3 games in the current context of Quantum Break or FH3. In contrast, Forza Horizon 3 came out about 3 years after the Xbox One did, and it's wiping the floor with a $700 780Ti. You are saying that's legit?
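
Spelling out the arithmetic behind that range (the 9X baseline is NV's own figure above; the 50-80% SLI scaling is the assumption):

# Rough arithmetic behind the "9X to ~16X faster than PS3" claim.
single_580_vs_ps3 = 9.0                      # NV's own GTX 580 vs. PS3 GPU figure
for sli_scaling in (0.0, 0.5, 0.8):          # no scaling, 50%, 80%
    total = single_580_vs_ps3 * (1.0 + sli_scaling)
    print(f"SLI scaling {sli_scaling:.0%}: ~{total:.1f}x PS3")
# -> 9.0x, 13.5x, 16.2x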

But when the PS4 and Xbox One hit the deck, suddenly PC gamers required more powerful hardware to get an above-console experience.

You are just stating the obvious without providing any of the details. You actually set yourself up by claiming that GTX580 SLI didn't need to be upgraded until the PS4/XB1 launched, but you forgot that a single GTX580 is much faster than the HD7790 in the Xbox One.

1920x1080 4xMSAA, GTX580 is 53% faster than HD7790.
https://www.computerbase.de/2013-03/amd-radeon-hd-7790-test/2/#abschnitt_leistung_mit_aaaf

Considering GTX970/390 cannot even do 1080p 4xMSAA 30 fps locked on the PC when paired with an i7 5820K @ 4.4Ghz, how do you think a GTX580 3GB would do? It would bomb.

But one could argue that is how it should be. This is the normal consequence of technological advancements in the gaming industry and PC hardware market.

Yes, it is how it should be, except once again you are missing the details. It wasn't unusual for the Xbox 360 or PS3 to have a similar level of graphics/performance to a 2005-2006 PC with a $599 7800GTX 256MB in it, because the GPUs inside those consoles were very similar in performance. Did you know that the GTX780 is almost 2.5X faster than the HD7790?

https://www.computerbase.de/2013-10...80x-test/6/#abschnitt_leistungsratings_spiele

According to TPU, in modern games the R9 390 (~ GTX970) at 1080p is 24-26% faster than the GTX780. That means the R9 390/970 are 247% x 1.24 = 306%, i.e. ~3.06X the performance of the HD7790 (~Xbox One's GPU).

Even if we assume the 1.75GHz 8-core Jaguar has identical IPC to the Bulldozer FX-8150, its performance would be roughly 4X lower than an i7 5820K's in a well-threaded game engine.

ac_proz.jpg


FX-8150: 48 fps; scaled to Jaguar clocks: 48 fps * (1.75GHz / 3.6GHz) => 23-24 fps for Xbox One's CPU
i7-3970X: 95 fps

In summary: the GTX970/390 is ~3X faster than Xbox One's GPU, and an i7 5820K at stock clocks would be ~4X faster than its CPU. However, per Digital Foundry's video, the R9 390/970 struggle and drop to the low-40 fps range at 1080p on an overclocked i7 5820K, even without 4xMSAA.
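
Here is the same back-of-the-envelope math as a quick script (the inputs are the ComputerBase/TPU figures and the AC benchmark numbers quoted above; the identical-IPC assumption for Jaguar is, as noted, generous):

# GPU side: GTX 780 ~2.47x the HD 7790 (ComputerBase); R9 390/970 ~24% faster than a GTX 780 (TPU).
gtx780_vs_hd7790 = 2.47
r9_390_vs_gtx780 = 1.24
print(f"R9 390/970 vs. Xbox One GPU: ~{gtx780_vs_hd7790 * r9_390_vs_gtx780:.2f}x")   # ~3.06x

# CPU side: FX-8150 @ 3.6GHz does 48 fps in the AC benchmark; scale to the 1.75GHz Jaguar
# (assuming identical IPC) and compare against the i7-3970X's 95 fps.
xbox_cpu_fps = 48.0 * (1.75 / 3.6)
print(f"Xbox One CPU estimate: ~{xbox_cpu_fps:.0f} fps")                             # ~23 fps
print(f"i7 advantage: ~{95.0 / xbox_cpu_fps:.1f}x")                                  # ~4x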

Take VRAM for instance. During the Xbox 360/PS3 days, one could get away with having a GPU with only 1GB of VRAM, with 2GB perhaps being optimal. But with the PS4 and Xbox One, you need perhaps a minimum of a 3GB framebuffer in most titles to use the same textures as the consoles, and depending on whether the developer uses higher grade textures for the PC version, you could require as much as 8GB for the framebuffer.

This is irrelevant because the R9 390 and RX 480 have 8GB of VRAM. You did not bother comparing the CPU and GPU performance differences; I did it for you above.

Is this an issue of optimization? No it isn't.

100% it is. Given the performance advantage a modern i5 6600K/6700K/i7 5820K (etc.) and R9 390/970 have over Xbox One, the game should run at least 2X faster. Since the game runs at 1080p 30 fps 4xMSAA on Xbox One, it should run at 1080p 60 fps 4xMSAA on PC hardware that's 3-4X faster if the port has been well-optimized.

It's just a consequence of technological advancement. That's what you don't seem to understand, RS.

I disagree. You yourself have criticized how weak and under-powered modern consoles are, and yet now you are resorting to the exact same thing I said the "PC Master Race" does -- you justify the CPU/GPU demands because "it's expected that we should upgrade every 2-3 years." It would be totally different if this particular game ran at 720p 0xMSAA with medium settings on Xbox One and had trouble hitting 1080p 4xMSAA on an R9 390/RX 480/GTX1060.

When it comes to bad game performance, not everything can be explained by poor optimization.

In this case, it can be: one thread pegging a modern i5/i7 to 90-100% CPU usage, and horrendous performance on GPUs that are 2.5-3X faster than the HD7790/Xbox One's GPU. The developer needs to work closely with AMD/NV to improve the game's performance via patches and drivers. But I suppose NV wouldn't mind if the 780Ti/GTX970/980 bombed against the GTX1060, and AMD wouldn't mind if the RX 480 were 40-50% faster than the R9 390 (!).

Same thing with Kepler. NVidia misjudged the technological trajectory of the gaming market, and as a result, Kepler lost out big time compared to GCN due to its much weaker compute abilities. And yet many here like to blame NVidia for ceasing to optimize their drivers for Kepler, as though that could have prevented anything.

Except for 2 things: (1) NV magically improved performance in Project CARS and The Witcher 3 on Kepler cards after Kepler owners complained -- so we have a proven history of NV's drivers adding huge performance gains post-launch; (2) Kepler's performance degraded dramatically against GCN during the same 1.5-2 year period in which GCN and Maxwell remained much closer. Both Kepler and Maxwell have static schedulers, and neither architecture was ever a compute monster. Neither of these architectures even has async compute, and I don't recall much proof that Maxwell is a superior compute architecture to Kepler. Right now a GTX780/780Ti trails the R9 290/290X by 20-30% in modern titles.

Of course this has nothing to do with what we are talking about.

Generally speaking though, games are a lot more complex and sophisticated than they used to be, and thus require more hardware to get an "above console" experience. That's perhaps the biggest reason why people turn to PC gaming, alongside customization and modding.

You are trying to make the argument that older GPUs such as the GTX780Ti/R9 290/290X/970 shouldn't play modern titles well at 1080p, since technological advancement means future games get more demanding/complex and these GPUs/architectures were never meant to play those future titles well. This argument would be all fine and dandy EXCEPT that a 1.75GHz 8-core Jaguar + HD7790 is miles behind GTX780/780Ti/970 GPUs in performance and architecturally no more advanced than Hawaii. So how do you explain such mediocre performance on the 970/R9 390? Are you saying the HD7790 is better suited to modern game engines? ;)

The good news, though, is that DX12 should definitely help to lengthen the hardware cycles, as developers will be able to squeeze more performance out of the hardware than before. But when the Xbox Scorpio hits the deck, there will probably be another upheaval :grinning:

Notice how Quantum Break, Forza Horizon 3 and Gears of War Ultimate all launched with performance issues? It's starting to look like a trend: Xbox One exclusives are poorly optimized when ported to the PC. Since the developer is already acknowledging stuttering and performance issues, I am surprised you are not acknowledging that FH3 has performance issues.
 
Last edited: