[bitsandchips]: Pascal to not have improved Async Compute over Maxwell


krumme

Diamond Member
Oct 9, 2009
5,956
1,596
136
I don't buy this "Pascal is a facelifted, shrunk Maxwell" until I see it. What is the source for that?
 

Krteq

Golden Member
May 22, 2015
1,009
729
136
nVidia itself

[Image: NVIDIA 2015 Pascal GPU compute performance slide]


Pascal = Maxwell + Mixed Precision + 3D Memory + NVLink
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
So we can expect results like the 280x where using async doesn't really give a performance boost, but it doesn't hurt either?

Isn't there some DirectX 12 feature Nvidia supports that GCN doesn't? Is there some way Nvidia can throw some money around to get developers to use that feature to even the playing field?

There is: Nvidia can use hardware-accelerated conservative rasterization to their advantage, presuming Polaris doesn't implement the same thing or further improve its geometry shaders in the pass-through case and in vertex data expansion too ...
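(For anyone wondering how an engine even finds out whether that feature is there: below is a minimal D3D12 sketch, purely illustrative and not taken from any shipping title, that asks the driver for the conservative rasterization tier and for rasterizer-ordered views, the two optional DX12 features usually brought up in this context.)

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device on the default adapter at the minimum feature level.
    // (Link against d3d12.lib.)
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    // D3D12_FEATURE_DATA_D3D12_OPTIONS reports the optional-feature tiers,
    // including conservative rasterization (CR) and rasterizer-ordered views (ROVs).
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options))))
    {
        std::printf("Conservative rasterization tier: %d\n",
                    static_cast<int>(options.ConservativeRasterizationTier));
        std::printf("ROVs supported: %s\n", options.ROVsSupported ? "yes" : "no");
    }
    return 0;
}
```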
 

thesmokingman

Platinum Member
May 6, 2010
2,302
231
106
There is: Nvidia can use hardware-accelerated conservative rasterization to their advantage, presuming Polaris doesn't implement the same thing or further improve its geometry shaders in the pass-through case and in vertex data expansion too ...


They only partially support it. Intel has better support for it than anyone.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
They only partially support it. Intel has better support for it than anyone.

Intel is irrelevant given the pathetically low primitive rate their iGPUs push out ...

If people think tessellation is bad on AMD, wait till they see how much worse it scales on Intel GPUs ...

Fixed-function units aren't exactly Intel's strength ...
 

thesmokingman

Platinum Member
May 6, 2010
2,302
231
106
Intel is irrelevant given the pathetically low primitive rate their iGPUs push out ...

If people think tessellation is bad on AMD, wait till they see how much worse it scales on Intel GPUs ...

Fixed-function units aren't exactly Intel's strength ...


Oh yea... and then? :\


Look at the spec, they barely support it.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Oh yea... and then? :\


Look at the spec, they barely support it.

The support is adequate IMO ...

Sure, Nvidia doesn't support underestimated conservative rasterization, post-snap degenerate triangles not being culled, or the 1/256 uncertainty region, but the basic idea is there, plus there are tons of applications out there that will work with Tier 1 support ...
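(To illustrate the "Tier 1 is enough for most uses" point: in D3D12, opting into conservative rasterization is just one field in the pipeline state's rasterizer description. The sketch below is illustrative, not from any particular engine, and assumes the rest of the PSO description is filled in by the caller.)

```cpp
#include <d3d12.h>

// Illustrative helper: opt a graphics pipeline state into conservative
// rasterization. This single switch is all Tier 1 asks of the application;
// the higher-tier extras mentioned above (post-snap degenerates not being
// culled, the 1/256 uncertainty region, underestimated CR via SV_InnerCoverage)
// sit on top of the same mechanism.
void EnableConservativeRaster(D3D12_GRAPHICS_PIPELINE_STATE_DESC& psoDesc)
{
    psoDesc.RasterizerState.ConservativeRaster =
        D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON;
}
```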
 

thesmokingman

Platinum Member
May 6, 2010
2,302
231
106
The support is adequate IMO ...

Sure, Nvidia doesn't support underestimated conservative rasterization, post-snap degenerate triangles not being culled, or the 1/256 uncertainty region, but the basic idea is there, plus there are tons of applications out there that will work with Tier 1 support ...


It would be better if you stopped acting like it's some endgame move. It's more of a political "hey, us too."
 

moonbogg

Lifer
Jan 8, 2011
10,731
3,440
136
People wondering if Maxwell will fall off a cliff like Kepler did? Didn't I see some benchmarks floating around with a 390x beating a 980ti? Pretty sure I saw that somewhere.
I wouldn't be surprised if Nvidia refused to use Async simply as a matter of pride because AMD did it first. They would rather run their entire company into the ground than admit AMD made a better move and admit that their backs are now against the wall, the room is burning RED all around them and the only way to escape is through that emergency door with a neon sign on it that says "ASYNC COMPUTE THIS WAY BISTCHE".
 

monstercameron

Diamond Member
Feb 12, 2013
3,818
1
0
Ugh, you keep mentioning it as some great advantage but the truth is you can't back that up. Why? Because it doesn't exist yet. CR has potential to be something or not.

You're a bit aggressive there, man; Buzzkiller is just pointing out that CR could be what Nvidia uses to exploit and maintain some kind of win.

Think GameWorks libraries with only CR support, or slow CPU emulation.
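(A rough sketch of the kind of branch such a library would contain; the function and enum names here are made up for illustration, and only the D3D12 feature query is real API.)

```cpp
#include <d3d12.h>

// Hypothetical technique selection inside a middleware-style effect: take the
// conservative-rasterization path when the hardware exposes it, otherwise fall
// back to a slower emulated path.
enum class EffectPath { HardwareCR, SlowFallback };

EffectPath PickEffectPath(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options))) &&
        options.ConservativeRasterizationTier !=
            D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED)
    {
        return EffectPath::HardwareCR;
    }
    return EffectPath::SlowFallback;
}
```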
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
Why do you have to be so overly defensive about Nvidia having a pro?

There is always DX12 mGPU support, which will allow a person to throw in a card from the other guys to take advantage of its additional benefits. I see no reason why AMD would sabotage it; history shows Nvidia doesn't really like the idea. It would seem that in today's gaming it's one of the biggest things DX12 will bring... as long as developers are on board, of course.
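(For reference, the "throw in a card from the other guys" part is possible because DX12's explicit multi-adapter model lets the application itself enumerate every adapter and create a device on each one, regardless of vendor. The sketch below is illustrative of that, not code from any shipping game.)

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <vector>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Enumerate every hardware adapter in the system and create an independent
// D3D12 device on each one that supports feature level 11.0. How work is
// split between the devices is then entirely up to the engine.
std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    std::vector<ComPtr<ID3D12Device>> devices;

    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```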
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
There is always DX12 mGPU support, which will allow a person to throw in a card from the other guys to take advantage of its additional benefits. I see no reason why AMD would sabotage it; history shows Nvidia doesn't really like the idea. It would seem that in today's gaming it's one of the biggest things DX12 will bring... as long as developers are on board, of course.

Do you really believe in broad mGPU support? I am starting to doubt there will be more than a handful of titles. You have to convince developers to spend time and money on something very few will ever use.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
Do you really believe in broad mGPU support? I am starting to doubt there will be more than a handful of titles. You have to convince developers to spend time and money on something very few will ever use.

In the perfect gaming world the answer would be yes.

Time will tell in the end no doubt.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
But where is that pro implemented?

AMD has Ashes to point to as an example of Async. Who has implemented what Nvidia only supports?

It's extremely, extremely early in the DX12 lifespan... we don't even have a single native DX12 game in full release yet. Just early access, whatever Hitman is doing with that episodic thing, and some DX9-11 games with a DX12 band-aid patch applied. Once AotS is officially out of early access, then we'll finally have one real, fully released native DX12 game. And even AotS has a foot in both worlds, since it can still do DX11. Just like how we didn't see the true potential and power of DX11 until we had DX11-only games (that made extensive and deeply embedded use of things like DirectCompute, Tessellation, etc.), we won't see the true power of DX12 until engines come out that only support DX12, which won't be for at least another year or longer.

Unless y'all are willing to go on record saying that nVidia won't push CR and ROV at all over DX12's lifespan, Buzzkiller has a good point.

Just because it's not here in the first 2 or 3 DX12 games doesn't mean it won't show up ever...
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
nVidia itself

[Image: NVIDIA 2015 Pascal GPU compute performance slide]


Pascal = Maxwell + Mixed Precision + 3D Memory + NVLink

This makes total sense considering NV hasn't had a new Tesla product in years. They NEED another pro offering, and this would match that desire. Very likely, async compute capabilities would be a distant priority.
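(For context on what the "async" side of this argument means at the API level: under D3D12 an application just creates a second command queue of type COMPUTE next to the normal graphics queue, and whether that work actually overlaps is up to the hardware and driver. The sketch below is purely illustrative and assumes a device already exists.)

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create a graphics (DIRECT) queue and a separate COMPUTE queue. "Async
// compute" at the API level simply means submitting command lists to both;
// concurrent execution is a hardware/driver decision, not a guarantee.
bool CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    if (FAILED(device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue))))
        return false;

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    return SUCCEEDED(device->CreateCommandQueue(&computeDesc,
                                                IID_PPV_ARGS(&computeQueue)));
}
```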
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
Ugh, you keep mentioning it as some great advantage but the truth is you can't back that up. Why? Because it doesn't exist yet. CR has potential to be something or not.

It does; Tom Clancy's The Division uses conservative rasterization for HFTS (Hybrid Frustum Traced Shadows) ...

But where is that pro implemented?

AMD has Ashes to point to as an example of Async. Who has implemented what Nvidia only supports?

It's not done through DX12, but I'm pretty sure CR is used in Tom Clancy's The Division via an NVAPI extension ...