[bitsandchips]: Pascal to not have improved Async Compute over Maxwell


HurleyBird

Platinum Member
Apr 22, 2003
2,812
1,550
136
If this report is true (and that's a *big* if, seeing as the source isn't exactly iron clad, to put it mildly), then buying a Pascal-based card in the next few months is probably just as large a mistake as buying a GTX 670/680 was back in the day. The only difference is that now we have the foresight to say, "If Pascal has poor async compute, it will not age well compared to the competition." A similar analogue was not really present for Kepler.

On the other hand, I suspect that Polaris will have good support for conservative rasterization (ATI is historically good at future-proofing their products and supporting standards, although they've been slipping a bit under AMD's management), but if it doesn't (and especially if Nvidia has good support for async compute in addition to CR), the reverse may very well be true.
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
It may not be that stupid for Nvidia to bet on raw power. A more powerful GPU means better performance everywhere, even in DirectX 11 games, not only when the developer wants it.

nVidia isn't betting on raw power. It's their only option.

How could this happen though? They clearly stated that they were working on DX12 with MSFT ~6 years ago. They would have known what the hardware requirements for the API would be. I'm calling pure BS on this.

It's pretty apparent that the console hardware has driven DX12, and that it is very likely a derivative of Mantle just like Vulkan is, especially given the code similarities we've seen. This is why AMD appears better prepared. They wrote the play and cast the players, so to speak.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
You're a bit aggressive there, man; buzzkiller is just pointing out that CR could be what Nvidia uses to exploit and maintain some kind of win.

Think GameWorks libraries with only CR support, or slow CPU emulation.
Well, NV can't use it like they used to in similar situations. But NV has something a million times better anyway: brand awareness in spades.

There has been a consolidation in the game engine market for years. Mantle and DX12 will just reinforce that. The profit is only there if your game is supported by a stellar engine.
IMO that means less tacking-on of more-or-less paid features like in the past. Good for that. We don't need funky hair or hidden worlds. The games are optimized for the consoles. The resources are tied up in the new engines, which are more expensive to develop because of the thinner API layer and need to be reused on the PC market.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Better perf/mm² might allow them to price their GPUs better and thus drive sales, but that's an outcome of perf/mm². People still don't care about it, and other than a few people here, no one cares whether an IHV that gives them performance X at Y watts for Z dollars does it with a 200mm² or a 300mm² die.


Stop and think about it. Perf/mm² matters to customers whether they realize it or not.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
That's a shame if true :( It's 2.5 years now since Mantle and 3+ years since the consoles went GCN. NV has known what would happen since perhaps two years before that.

Of course Mantle hit harder than some are still willing to accept. But I presumed all along that Pascal would have true async capabilities.
It's a damn miss. But hey, it makes buying a next-gen GPU pretty simple. It's pretty idiotic to buy a card without it if you intend to play new DX12 games. There are no two ways about it. Get ready for the biggest compute smokescreen in history - it will beat the Maxwell technical marketing several times over.

No reason for you to think otherwise. Heck, we were (mis)led to believe Maxwell had it, never mind Pascal.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
It does, Tom Clancy's The Division uses conservative rasterization for HFTS ...



It's not done using DX12 but I'm pretty sure CR is used in Tom Clancy's The Division using an NVAPI extension ...

Yup:

nVidia Tom Clancy's The Division Graphics & Performance Guide said:
The Division today introduces the world to NVIDIA Hybrid Frustum Traced Shadows, henceforth referred to as "NVIDIA HFTS". This new, advanced shadow technique leverages hardware features of NVIDIA GeForce GTX 900 Series Maxwell graphics cards to create realistic geometrically-accurate hard shadows, that smoothly transition to soft shadows in real time. These shadows greatly improve upon those generated by existing techniques, delivering the highest-quality shadows seen to date in gaming, with near-perfect contact shadowing and vastly improved screen-wide shadowing.

To achieve this feat several technologies and techniques are utilized. The first and most important of these is Frustum Tracing, a form of Ray Tracing that runs far faster, yet is still highly accurate. This is a direct result of NVIDIA HFTS's Frustum Tracing utilizing Conservative Rasterization, a technology built into second generation GeForce GTX 900 Series, Maxwell-architecture GPUs.

From: http://www.geforce.com/whats-new/gu...guide#tom-clancys-the-division-shadow-quality

Like all sane gamers, I am 100% for new (hardware-accelerated) techniques that make graphics look better. And The Division looks fantastic. I hope AMD aggressively supports these features and nVidia aggressively supports AC. I do not understand people trying to poo-poo any of these new features in DX12...
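For reference, "support" here is something an engine can query directly: DX12 exposes conservative rasterization as a tiered capability (Tier 1 through Tier 3, or not supported at all). Below is a minimal C++ sketch of that check, assuming an already-created ID3D12Device; which tier a given Maxwell, GCN, or Intel part actually reports is exactly what's being argued over in this thread.

```cpp
// Sketch: ask a D3D12 device which conservative rasterization tier (if any) it reports.
// Assumes `device` points to an already-created ID3D12Device.
#include <d3d12.h>
#include <cstdio>

void ReportConservativeRasterTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
    {
        std::printf("CheckFeatureSupport failed\n");
        return;
    }

    switch (options.ConservativeRasterizationTier)
    {
    case D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED:
        std::printf("Conservative rasterization: not supported\n"); break;
    case D3D12_CONSERVATIVE_RASTERIZATION_TIER_1:
        std::printf("Conservative rasterization: tier 1\n"); break;
    case D3D12_CONSERVATIVE_RASTERIZATION_TIER_2:
        std::printf("Conservative rasterization: tier 2\n"); break;
    case D3D12_CONSERVATIVE_RASTERIZATION_TIER_3:
        std::printf("Conservative rasterization: tier 3\n"); break;
    }
}
```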
 
Last edited:

krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
If this report is true (and that's a *big* if, seeing as the source isn't exactly iron clad, to put it mildly), then buying a Pascal-based card in the next few months is probably just as large a mistake as buying a GTX 670/680 was back in the day. The only difference is that now we have the foresight to say, "If Pascal has poor async compute, it will not age well compared to the competition." A similar analogue was not really present for Kepler.

On the other hand, I suspect that Polaris will have good support for conservative rasterization (ATI is historically good at future-proofing their products and supporting standards, although they've been slipping a bit under AMD's management), but if it doesn't (and especially if Nvidia has good support for async compute in addition to CR), the reverse may very well be true.
I am pretty sure that if both Zlatan and Fottemberg say it is so, unfortunately it is. Zlatan gave the advice back in 2013 to buy the 290 for exactly this reason. Credit for that. 2.5 years ago!
 

Pinstripe

Member
Jun 17, 2014
197
12
81
nVidia isn't betting on raw power. It's their only option.

How could this happen though? They clearly stated that they were working on DX12 with MSFT ~6 years ago. They would have known what the hardware requirements for the API would be. I'm calling pure BS on this.

Async isn't a DX12 API requirement; it's a hardware-specific design that gives you a performance boost. Nvidia just opts for raw power to give you the same or a better boost. I don't see the problem if both vendors can reach high framerates via their own path and stay DX12 API conformant.
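To make "not an API requirement" concrete: at the API level, async compute is just submitting work to a second command queue of the compute type, and creating that queue succeeds on any DX12 GPU. Whether work on it actually overlaps with graphics or quietly gets serialized is up to the hardware and driver, which is the whole argument. A minimal C++ sketch, assuming an already-created ID3D12Device and the standard D3D12/WRL headers:

```cpp
// Sketch: create a dedicated compute queue alongside the graphics queue.
// The API accepts this on every DX12 GPU; whether work submitted to the compute
// queue truly executes concurrently with graphics is a hardware/driver decision.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

HRESULT CreateGraphicsAndComputeQueues(ID3D12Device* device,
                                       ComPtr<ID3D12CommandQueue>& graphicsQueue,
                                       ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics + compute + copy
    HRESULT hr = device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));
    if (FAILED(hr))
        return hr;

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
    return device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```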


It's pretty apparent that the console hardware has driven DX12, and that it is very likely a derivative of Mantle just like Vulkan is, especially given the code similarities we've seen. This is why AMD appears better prepared. They wrote the play and cast the players, so to speak.

I saw that movie too. The consoles have been out for over two years now and Nvidia still bests "GCN-optimized" console ports. But sure, it will be totally different this time around, right?
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Kepler didn't fall off due to a "driver hijack", but because its architecture is outdated for modern (PS4/XB1-tailored) game engines. Given that Pascal is mostly just a shrunk Maxwell with some extras, Maxwell should hold up pretty well for this console generation, provided people don't go crazy on the game settings.

How is Maxwell going to hold up well? It's bad at DX12. Hawaii will probably pass GM200.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Intel is irrelevant with the pathetically low primitive rate their iGPUs push out ...

If people think tessellation is bad on AMD, wait till they see how much more poorly it scales on Intel GPUs ...

Fixed-function units aren't exactly Intel's strength ...

He's just stating a fact, not making a marketing presentation.
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
Async isn't a DX12 API requirement; it's a hardware-specific design that gives you a performance boost. Nvidia just opts for raw power to give you the same or a better boost. I don't see the problem if both vendors can reach high framerates via their own path and stay DX12 API conformant.




I saw that movie too. The consoles have been out for over two years now and Nvidia still bests "GCN-optimized" console ports. But sure, it will be totally different this time around, right?

I bet you're one of those who only looks at how the 980 Ti is doing vs the competition when deciding who's winning? Perfectly logical, right?
 

Bryf50

Golden Member
Nov 11, 2006
1,429
51
91
Async isn't a DX12 API requirement; it's a hardware-specific design that gives you a performance boost. Nvidia just opts for raw power to give you the same or a better boost. I don't see the problem if both vendors can reach high framerates via their own path and stay DX12 API conformant.

I saw that movie too. The consoles have been out for over two years now and Nvidia still bests "GCN-optimized" console ports. But sure, it will be totally different this time around, right?

I'm not sure what you're getting at with this weird "raw power" thing. Raw POWAH + async compute is better than just raw power; therefore raw power is not an alternative to async compute.
 

Pinstripe

Member
Jun 17, 2014
197
12
81
I bet you're one of those who only looks at how the 980 Ti is doing vs the competition when deciding who's winning? Perfectly logical, right?

No, but I still see new game releases with benchmarks nowadays where a GTX 970 beats a Fury. It's not supposed to be like that, but it still happens. So much for "GCN and async will save AMD's broken [rear]".


Infraction issued for trolling and profanity.

-Rvenger
 
Last edited by a moderator:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
You're a bit aggressive there, man; buzzkiller is just pointing out that CR could be what Nvidia uses to exploit and maintain some kind of win.

Think GameWorks libraries with only CR support, or slow CPU emulation.

He simply stated that Intel had the best support for the feature, and that's when the aggressive responses started. Now we've got more people coming to nVidia's defense.
 

nurturedhate

Golden Member
Aug 27, 2011
1,767
773
136
No, but I still see new game releases with benchmarks nowadays where a GTX 970 beats a Fury. It's not supposed to be like that, but it still happens. So much for "GCN and async will save AMD's broken [rear]".

And we've seen cases where 390s beat 980 Tis. What's your point?
 
Last edited by a moderator:

Leadbox

Senior member
Oct 25, 2010
744
63
91
No, but I still see new game releases with benchmarks nowadays where a GTX 970 beats a Fury. It's not supposed to be like that, but it still happens. So much for "GCN and async will save AMD's broken [rear]".
How about the ones where a 390 is hanging with a 980 Ti and separating itself from the 970 by 30%? Do you see those?
 
Last edited by a moderator:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Async isn't a DX12 API requirement; it's a hardware-specific design that gives you a performance boost. Nvidia just opts for raw power to give you the same or a better boost. I don't see the problem if both vendors can reach high framerates via their own path and stay DX12 API conformant.




I saw that movie too. The consoles have been out for over two years now and Nvidia still bests "GCN-optimized" console ports. But sure, it will be totally different this time around, right?

Who cares if it's a requirement to claim support? When your game is stuttering and/or stalling because the hardware can't complete the operations in parallel as they were written, not having the capability will matter.

Up until now, games were running on a poorly optimized API, not an API designed to take advantage of the console optimizations.
 

Pinstripe

Member
Jun 17, 2014
197
12
81
And we've seen cases where 390s beat 980ti's. What's your point?

That the vast majority of games at launch do irrefutably run better on Nvidia? Witcher 3, Fallout 4, RoTTR, Gears, and many more. I thought the point of GCN and AMD's Mantle/DX12 initiative was to make console porting smoother and more reliable, and yet PC games still need countless patches and AMD driver hotfixes to rectify their shortcomings, while Nvidia provides optimizations on day one. Yes, it does matter to the masses of gamers. Not to the elitist nitpickers on hardware sites, apparently.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
No, but I still see new game releases with benchmarks nowadays where a GTX 970 beats a Fury. It's not supposed to be like that, but it still happens. So much for "GCN and async will save AMD's broken [rear]".

...and those are due to GCN console optimizations? Or are they due to nVidia-specific game code that AMD has no access to, and has to wait for the game to be released, or near release, before they can work on it?

Also, can you please link to these games? I'm not so sure they have stayed in that state.
 
Last edited by a moderator:

caswow

Senior member
Sep 18, 2013
525
136
116
And it will remain so if AMD's market share doesn't grow. No developer will dare to screw over the 80% of users on Nvidia.

yea that argument again. one day its "no one will fk with nvidia because of 80% marketshare" and the next day when benchmarks are out its amd biased devs D: