AMD cancels the driver path for Primitive Shaders

Muhammed

Senior member
Jul 8, 2009
453
199
116

Krteq

Golden Member
May 22, 2015
1,005
713
136
Hmm, interesting. It seems that an implicit implementation in the driver was too difficult or counter-productive due to compiler complexity.

So we are in the same situation as at launch: Primitive Shaders have to be implemented explicitly by the developer.

Anyway, shouldn't that driver path have been just an alternative from the beginning?

//y33H@ also mentioned it
y33H@ said:
AMD's last statement was that the alternative driver path for Primitive Shaders has been dropped; instead it is to be done directly via the API by the developers ... which apparently nobody has done yet (in an already released game).
 

Despoiler

Golden Member
Nov 10, 2007
1,968
773
136
I can't remember who I debated with about where PS should live. Developers would absolutely be able to leverage the most from PS, because they know exactly what is happening inside their game. Driver magic would have been great, though, because the power of the architecture would at least in part be available across the board. Either way, developers were probably the right choice from the start for an evolutionary feature. My other thought is that Raja was clearly mismanaging RTG, at least for this feature. Originally PS was slated as a dev tool (Raja), then it was stated to be automatic driver magic (Raja), and now Su has clearly put it back to being a dev tool. Huge amounts of programming resources were wasted by Raja changing his mind. He must have grossly underestimated the complexity of the task; he said as much in a quote.
Also, to all of you gloom-and-doomers: y'all turned out to be wrong. There is still a feature that hasn't been leveraged, which is what we've been telling you all along.
 
  • Like
Reactions: Olikan

Muhammed

Senior member
Jul 8, 2009
453
199
116
Last edited:
  • Like
Reactions: ozzy702 and xpea

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Whether it ever gets any dev attention will depend mostly on whether the feature and a similar/same API are included in the PS4 Pro and Xbox One X chips.
 
  • Like
Reactions: KompuKare

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
Whether it ever gets any dev attention will depend mostly on whether the feature and a similar/same API are included in the PS4 Pro and Xbox One X chips.
I have always wondered whether the PS4 Pro "triangle sieve" is the same thing as primitive shaders.
 

Guru

Senior member
May 5, 2017
830
361
106
I can't remember who I debated with about where PS should live. Developers would absolutely be able to leverage the most from PS, because they know exactly what is happening inside their game. Driver magic would have been great, though, because the power of the architecture would at least in part be available across the board. Either way, developers were probably the right choice from the start for an evolutionary feature. My other thought is that Raja was clearly mismanaging RTG, at least for this feature. Originally PS was slated as a dev tool (Raja), then it was stated to be automatic driver magic (Raja), and now Su has clearly put it back to being a dev tool. Huge amounts of programming resources were wasted by Raja changing his mind. He must have grossly underestimated the complexity of the task; he said as much in a quote.
Also, to all of you gloom-and-doomers: y'all turned out to be wrong. There is still a feature that hasn't been leveraged, which is what we've been telling you all along.

The thing is, they don't know exactly what is happening. In a lot of cases they have to make an educated guess, so it's not an exact science, and that's why it's so hard.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
I have always wondered whether the PS4 Pro "triangle sieve" is the same thing as primitive shaders.

"What happens is you take your vertex shader, and you compile it twice, once as a compute shader, once as a vertex shader. The compute shader does a triangle sieve – it just does the position computations from the original vertex shader and sees if the triangle is backfaced, or the like."

The high-level concept seems exactly the same, at least.
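For anyone who hasn't seen the idea spelled out, here is a minimal sketch of that sieve in CUDA-style code. It is purely illustrative: the buffer names, matrix layout, and winding convention are my assumptions, not anything from the PS4 Pro or AMD. One thread per triangle runs just the position math and the same signed-area back-face test the rasterizer would do, and records a keep/discard flag.

```cuda
// Illustrative sketch only: assumed buffer layout and names, not PS4 Pro or engine code.
#include <cuda_runtime.h>

__constant__ float mvp[16];   // column-major model-view-projection matrix

// Transform an object-space position into clip space.
__device__ float4 toClip(float3 p)
{
    float4 c;
    c.x = mvp[0] * p.x + mvp[4] * p.y + mvp[8]  * p.z + mvp[12];
    c.y = mvp[1] * p.x + mvp[5] * p.y + mvp[9]  * p.z + mvp[13];
    c.z = mvp[2] * p.x + mvp[6] * p.y + mvp[10] * p.z + mvp[14];
    c.w = mvp[3] * p.x + mvp[7] * p.y + mvp[11] * p.z + mvp[15];
    return c;
}

// One thread per triangle: run only the position part of the "vertex shader"
// and flag back-facing triangles so a later pass can drop them.
__global__ void triangleSieve(const float3* positions,
                              const unsigned* indices,
                              unsigned numTriangles,
                              unsigned char* keep)
{
    unsigned t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= numTriangles) return;

    float4 a = toClip(positions[indices[3 * t + 0]]);
    float4 b = toClip(positions[indices[3 * t + 1]]);
    float4 c = toClip(positions[indices[3 * t + 2]]);

    // Perspective divide and signed area of the projected triangle
    // (near-plane clipping ignored for brevity).
    float ax = a.x / a.w, ay = a.y / a.w;
    float bx = b.x / b.w, by = b.y / b.w;
    float cx = c.x / c.w, cy = c.y / c.w;
    float area = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);

    // Assumes counter-clockwise front faces: back-facing triangles are sieved out.
    keep[t] = (area > 0.0f) ? 1 : 0;
}
```

A second pass (or a stream compaction over the flags) would then rebuild the index buffer with only the surviving triangles before the ordinary vertex shader runs.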
 
  • Like
Reactions: Olikan

Wall Street

Senior member
Mar 28, 2012
691
44
91
So my read on this is that while the marketing material said Vega would do 11 triangles per clock with only four geometry engines, they are now admitting it will only do 4 with the default DX11 and OpenGL code paths.

Techreport does a good chart called polygon throughput which shows how much AMD lags here. Unfortunately, they didn't include it in the Vega review.

https://techreport.com/review/31562/nvidia-geforce-gtx-1080-ti-graphics-card-reviewed/3
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
So my read on this is that while the marketing material said Vega would do 11 triangles per clock with only four geometry engines
It can process (raster plus discard) 11 (actually 17) tris per clock.
It can't rasterize that many.
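For a rough sense of scale (my own back-of-the-envelope numbers, assuming roughly a 1.5 GHz boost clock for Vega 64; the per-clock figures are the ones quoted in this thread):

$$
4\ \text{tris/clock} \times 1.5\ \text{GHz} \approx 6\ \text{Gtris/s rasterized},
\qquad
17\ \text{tris/clock} \times 1.5\ \text{GHz} \approx 25\ \text{Gtris/s processed (incl. discard)}
$$

So the headline primitive rate describes the cull/discard path, not what actually reaches the rasterizer.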
 

Despoiler

Golden Member
Nov 10, 2007
1,968
773
136
Nope, the argument was Vega gaining 30% or more out of new drivers when PS is enabled. Now it's not going to be enabled EVER. Most developers won't bother either; how much market share does Vega have compared to Pascal anyway?

Yeah, that was one argument. Still, you can't say that something we don't know is invalid; that is a logical fallacy. The counter is that someone stating something without some sort of hard evidence is making an unsupported claim. I think you are both probably overstepping in this case.

As far as dev use goes, Sebbi (Ubisoft) said quite specifically that primitive shaders are exactly where we need to evolve to. Guys like him, Dan Baker, id Software, and Johan will leverage tech like that. Sweeney probably won't until Nvidia does or it gets folded into an API. The engine guy at Unity may or may not. Everyone else will not, unless it's an AMD-sponsored game.
 
  • Like
Reactions: Muhammed

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
While it's very plausible that it's no longer planned, one random forum post about what he claims to have heard isn't exactly official.
 

Elixer

Lifer
May 7, 2002
10,371
762
126
Nope, the argument was Vega gaining 30% or more out of new drivers when PS is enabled. Now it's not going to be enabled EVER. Most developers won't bother either; how much market share does Vega have compared to Pascal anyway?
I bet you will see some games using it: those sponsored by AMD, so id games.

On-the-fly conversion was always going to be difficult, and I am betting that this feature required too many people to make it worth it for AMD in terms of $$$ per person devoted to making this happen for each game.
 

zlatan

Senior member
Mar 15, 2011
580
291
136
On what level is support necessary? Per game or per engine?
API level, or a services library at least. The implementation is not hard in the engine: a well-designed converter can do 90 percent of the job automatically, the last 10 percent is really easy, and the result is much better primitive discard on Vega. But personally I don't like the idea, because GPGPU culling is better. It's uglier and harder to implement in the engine, but it will work on every piece of hardware that can run compute shaders (pretty much everything nowadays). I think this approach might be faster than primitive shaders; with rapid packed math and async compute this is almost guaranteed. The main advantage of primitive shaders is the easier implementation, that's for sure. But GPGPU culling is just my own egoistic view, because it works on the consoles, so it can be a true cross-platform solution.
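Purely as an illustration of what that engine-side GPGPU culling pass can look like, here is a minimal CUDA-style compaction kernel, assuming the same illustrative per-triangle keep/discard flags as the sieve sketch earlier in the thread (none of these names come from a real engine):

```cuda
// Illustrative only: compact the surviving triangles into a new index buffer,
// so the fixed-function front end never sees the culled ones.
#include <cuda_runtime.h>

__global__ void compactIndices(const unsigned* indices,       // original index buffer
                               const unsigned char* keep,     // per-triangle flags from the culling pass
                               unsigned numTriangles,
                               unsigned* outIndices,          // compacted index buffer
                               unsigned* outTriangleCount)    // device counter, zeroed before launch
{
    unsigned t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= numTriangles || !keep[t]) return;

    // Reserve a slot for this surviving triangle and copy its three indices.
    unsigned slot = atomicAdd(outTriangleCount, 1u);
    outIndices[3 * slot + 0] = indices[3 * t + 0];
    outIndices[3 * slot + 1] = indices[3 * t + 1];
    outIndices[3 * slot + 2] = indices[3 * t + 2];
}
```

The resulting triangle count would typically feed an indirect draw on the graphics API side, which is why this approach works on any compute-capable GPU (consoles included) and not just Vega.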
 
Last edited:

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
API level, or a services library at least. The implementation is not hard in the engine: a well-designed converter can do 90 percent of the job automatically, the last 10 percent is really easy, and the result is much better primitive discard on Vega. But personally I don't like the idea, because GPGPU culling is better. It's uglier and harder to implement in the engine, but it will work on every piece of hardware that can run compute shaders (pretty much everything nowadays). I think this approach might be faster than primitive shaders; with rapid packed math and async compute this is almost guaranteed. The main advantage of primitive shaders is the easier implementation, that's for sure. But GPGPU culling is just my own egoistic view, because it works on the consoles, so it can be a true cross-platform solution.
Computerbase tested culling in Wolfenstein 2:
https://www.computerbase.de/2017-10/wolfenstein-2-new-colossus-benchmark/

In 4K the RX Vega 64 is still a sizable way from the 1080 Ti, and a good way from where we would expect the shaders to get us.
If that is similar to what primitive shaders can do, this Vega arch is more or less as inefficient as we know it, and will stay that way at best at Wolfenstein 2 level.
It doesn't really matter for RX Vega, as it is a minority product.
But one has to wonder why it then ends up in an upcoming Intel product and tens of millions of future APUs. Darn huge numbers, for something that even in Wolfenstein 2 performs more like a Polaris.
And if culling can do it, why then go all the way with the NGG path?
Other questions arise: was RX Vega delayed half a year to enable the possibility of this driver path?
 
Last edited:

dogen1

Senior member
Oct 14, 2014
739
40
91
Computerbase tested culling in Wolfenstein 2:
https://www.computerbase.de/2017-10/wolfenstein-2-new-colossus-benchmark/

In 4K the RX Vega 64 is still a sizable way from the 1080 Ti, and a good way from where we would expect the shaders to get us.
If that is similar to what primitive shaders can do, this Vega arch is more or less as inefficient as we know it, and will stay that way at best at Wolfenstein 2 level.
It doesn't really matter for RX Vega, as it is a minority product.
But one has to wonder why it then ends up in an upcoming Intel product and tens of millions of future APUs. Darn huge numbers, for something that even in Wolfenstein 2 performs more like a Polaris.
And if culling can do it, why then go all the way with the NGG path?
Other questions arise: was RX Vega delayed half a year to enable the possibility of this driver path?

What resolution did they test at? The higher the resolution, the smaller the benefit would be.
 

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
What resolution did they test at? The higher the resolution, the smaller the benefit would be.
All resolutions, and it is still stuck at 1440. But anyway, it's a GPU for 4K: tons of shaders, high frequency, and HBM2.
The GPU project is seriously mismanaged.
Take e.g. tiled rendering: AMD now claims the benefit is already there. Marketing BS; the effect is minuscule.
You can launch a GPU and bring the benefits three years later. That's fine. But then:
Make sure you don't kill your organization with a workload they can't handle. Typical developer thinking: it brings things forward, but it is dangerous.
Communicate what is needed, for consumers as well as developers. Communicate about uncertainties so they don't stand as unknown risks.
Now it seems like half a year has been wasted and the trustworthiness is low.
I have been a fan of Lisa for nearly 10 years and praised her years before it became the new black, but she must take a bit of responsibility here. A super-high-profile figure like Raja needs tight management and support, and some gaffer tape to shut his mouth.
 

Yotsugi

Golden Member
Oct 16, 2017
1,029
487
106
You have to explain that to me.
In tiled mode it needs to bin and chew through the geometry first (something AMD GPUs hate doing).
Any performance improvement will hit the same "four front ends forever" bottleneck of anything GCN.