Something big is brewing with Radeon Software

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
fixing cpu overhead, finally?
Biggest thing they could do.

I really wish people would stop repeating this as if it's fact. What people misinterpret as CPU/driver overhead is actually legacy API design versus modern GPU architecture. DX11, and every version of DirectX before it, is at its core single threaded. AMD's GCN GPUs are built for multi-threaded, highly parallel operation. Legacy APIs simply cannot feed GCN fast enough from a single main thread. Consider this: AMD's DX12 driver doesn't exhibit any "overhead" issues. Given that they know how to make that driver, do you really think it's not within their power to make a better DX11 driver? No, it's just legacy API vs GCN architecture. It's why they pursued Mantle. It's why we have DX12 and Vulkan. They needed new APIs to let GCN be used to its full potential.
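To make the single-thread bottleneck concrete, here's a toy C++ sketch (not real D3D code, all names are made up): in the DX11 model every draw is translated on one immediate-context thread, while in the DX12/Vulkan model each worker records its own command list and the lists are submitted together, so the same CPU work is spread across cores.

```cpp
#include <thread>
#include <vector>

// Toy model of command submission (not real D3D calls): a "draw" is just a
// unit of CPU work the driver must translate before the GPU ever sees it.
struct CommandList { int recorded = 0; };

// DX11-style: one immediate context, so every draw is recorded on a single
// thread no matter how parallel the GPU underneath is.
int record_single_threaded(int draws) {
    CommandList cl;
    for (int i = 0; i < draws; ++i) cl.recorded++;  // serial translation
    return cl.recorded;
}

// DX12/Vulkan-style: each worker records its own command list and the lists
// are submitted together, so recording cost is divided across cores.
int record_multi_threaded(int draws, int workers) {
    std::vector<CommandList> lists(workers);
    std::vector<std::thread> threads;
    for (int w = 0; w < workers; ++w) {
        threads.emplace_back([&, w] {
            for (int i = w; i < draws; i += workers) lists[w].recorded++;
        });
    }
    for (auto& t : threads) t.join();
    int total = 0;
    for (auto& l : lists) total += l.recorded;
    return total;  // same total work, but no single thread had to do it all
}
```

The point of the sketch is only that the total work is identical; what changes between the two models is whether one thread has to stay ahead of the GPU on its own.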
 
Last edited:

Dave2150

Senior member
Jan 20, 2015
639
178
116
I hope they unveil a Shadowplay equivalent. AMD has good video encoding hardware, which goes to waste on the majority of their GPUs. Shadowplay feels like it's been around for so long, and is such an obvious benefit to the NVIDIA playerbase.
 
  • Like
Reactions: Bacon1 and Ranulf

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
I really wish people would stop repeating this as if it's fact. What people misinterpret as CPU/driver overhead is actually legacy API design versus modern GPU architecture. DX11, and every version of DirectX before it, is at its core single threaded. AMD's GCN GPUs are built for multi-threaded, highly parallel operation. Legacy APIs simply cannot feed GCN fast enough from a single main thread. Consider this: AMD's DX12 driver doesn't exhibit any "overhead" issues. Given that they know how to make that driver, do you really think it's not within their power to make a better DX11 driver? No, it's just legacy API vs GCN architecture. It's why they pursued Mantle. It's why we have DX12 and Vulkan. They needed new APIs to let GCN be used to its full potential.
I'm sure they could spend the time and effort to refactor the base architecture of their DX11 drivers to make them more performant. I'm also sure it would cost immense amounts of time and money, for very little payback since DX11 is already on its way out.
 

Dave2150

Senior member
Jan 20, 2015
639
178
116
I'm sure they could spend the time and effort to refactor the base architecture of their DX11 drivers to make them more performant. I'm also sure it would cost immense amounts of time and money, for very little payback since DX11 is already on its way out.

DX11 will be the most popular and widely used API for many years to come.
 

Yakk

Golden Member
May 28, 2016
1,574
275
81
DX11 is still used, but base API development was EOL'd years ago. At this point they might as well just brute-force through it with better hardware. I'd rather not see AMD waste resources continuing to hack on DX11 when they need to keep developing next-generation APIs with Microsoft, Sony, and AAA developers, along with supporting those APIs for smaller developers.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
DX11 will be the most popular and widely used API for many years to come.
"Many" is a stretch, but it will certainly be the majority API for a while; just like every DX API transition before it, it will take time to move to the new one. The point is DX11 adoption is shrinking, not growing. It's on its way out, not on its way in.

It's very, very expensive to refactor a massive codebase like that. It makes far more sense to build a new codebase for a new API than to try to refactor a widely used, widely tested codebase just so the tail end of a single generation has one more small checkbox for fanboys to use in internet fights.

Basically, it's a waste of time and resources. If they had done this in 2012 - different story. But in Q4 2016, not a good idea.

Incremental increases in performance are all that's reasonable to expect, so I hope they continue providing small but consistent performance improvements, as they have done since GCN released.
 

Yakk

Golden Member
May 28, 2016
1,574
275
81
AMD could use the tech they acquired when they bought HiAlgo to optimize their low-level driver code instead of using code injection. Essentially baking it right into the drivers and not risking triggering anything.
 

flopper

Senior member
Dec 16, 2005
739
19
76
I really wish people would stop repeating this as if it's fact. What people misinterpret as CPU/driver overhead is actually legacy API design versus modern GPU architecture. DX11, and every version of DirectX before it, is at its core single threaded. AMD's GCN GPUs are built for multi-threaded, highly parallel operation. Legacy APIs simply cannot feed GCN fast enough from a single main thread. Consider this: AMD's DX12 driver doesn't exhibit any "overhead" issues. Given that they know how to make that driver, do you really think it's not within their power to make a better DX11 driver? No, it's just legacy API vs GCN architecture. It's why they pursued Mantle. It's why we have DX12 and Vulkan. They needed new APIs to let GCN be used to its full potential.

Well, your "fact" logic won't work here.
The 390 driver suddenly improved DX11 performance for the 290 along the way, because AMD found some overhead they could remove.
If the legacy-API explanation were true, that wouldn't have happened.
So I'm right, you're not.
I won the prize.
 

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
Well, your "fact" logic won't work here.
The 390 driver suddenly improved DX11 performance for the 290 along the way, because AMD found some overhead they could remove.
If the legacy-API explanation were true, that wouldn't have happened.
So I'm right, you're not.
I won the prize.

I saw that driver update and the forum threads spawned from it claiming some giant leap in performance. A 5% increase is normal driver optimization; Nvidia keeps optimizing too. Note that the 3DMark benchmark in question measures API overhead, which is exactly my point. They can't address the rest of the giant gap Nvidia leads by, because the API won't let them. At no point will there be a driver update that magically gives AMD a 20% performance boost in DX11 across the board. DX11 simply cannot deliver to GCN.
 

Dygaza

Member
Oct 16, 2015
176
34
101
I really wish people would stop repeating this as if it's fact. What people misinterpret as CPU/driver overhead is actually legacy API design versus modern GPU architecture. DX11, and every version of DirectX before it, is at its core single threaded. AMD's GCN GPUs are built for multi-threaded, highly parallel operation. Legacy APIs simply cannot feed GCN fast enough from a single main thread. Consider this: AMD's DX12 driver doesn't exhibit any "overhead" issues. Given that they know how to make that driver, do you really think it's not within their power to make a better DX11 driver? No, it's just legacy API vs GCN architecture. It's why they pursued Mantle. It's why we have DX12 and Vulkan. They needed new APIs to let GCN be used to its full potential.

Very well written post.

Also, I believe it's very naive to think that AMD wouldn't have already solved the overhead issue if it were fixable with drivers directly. They have a lot of talented engineers working for them. Surely not all of them can be incompetent.

Besides, people too easily claim overhead is the problem without even checking which thread is holding things back. Often in AMD's case it is the AMD driver thread that is stressing one core heavily, but just as often it's the game's main logic thread. From the user's performance perspective they look the same, but they are pretty easily told apart.
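Telling those two cases apart is exactly what per-thread profiling gives you. A minimal C++ sketch of the idea (the thread names and timings here are invented, and a real profiler collects the numbers for you): attribute busy time to each named thread, then blame whichever one is hottest.

```cpp
#include <algorithm>
#include <map>
#include <string>

// Toy model of per-thread profiling: a pegged core looks the same from the
// outside whether it's the driver's submission thread or the game's own
// logic thread, but per-thread busy time tells you which one to blame.
std::string bottleneck_thread(const std::map<std::string, double>& busy_ms) {
    // Pick the thread that accumulated the most busy time this frame.
    return std::max_element(busy_ms.begin(), busy_ms.end(),
                            [](const auto& a, const auto& b) {
                                return a.second < b.second;
                            })->first;
}
```

With invented numbers like `{{"driver-submit", 9.5}, {"game-logic", 14.2}, {"render", 6.1}}`, the sketch fingers `game-logic`, not the driver, which is Dygaza's point: "one core at 100%" alone doesn't prove driver overhead.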
 
  • Like
Reactions: DarthKyrie

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Also, I believe it's very naive to think that AMD wouldn't have already solved the overhead issue if it were fixable with drivers directly. They have a lot of talented engineers working for them. Surely not all of them can be incompetent.
This is the naive thing to think. Software development is very, very expensive, especially for something like this. It would be millions to tens of millions of dollars of man-hours you'd have to divert from other projects in order to refactor a massive, well-tested codebase. It could be technical debt, it could be a fundamental mismatch between architecture and API, it could be tons of things we don't have any idea about. AMD might not even know, because discovering the places to optimize is very difficult. Way easier said than done. The fact that AMD and nVidia keep delivering black-box performance optimizations for dozens of games every year is nothing short of incredible. It doesn't matter how competent your people are if they don't have enough time to work on each project.
 
Last edited:

Bacon1

Diamond Member
Feb 14, 2016
3,430
1,018
91
What people misinterpret as CPU/driver overhead is actually legacy API design vs modern GPU architecture

Yep, you can see that when some engines run amazingly well on both AMD and Nvidia, while other engines (mostly single-thread heavy) do much worse.

Unity vs Frostbite vs Unreal vs others, like the ones Square Enix uses.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
DX11 all the way back to the beginning are at their core single threaded. AMD's GCN GPUs are built for multi-thread, highly parallel operation. Legacy APIs simply cannot feed GCN fast enough using a single main thread.

If AMD needs this higher throughput from the driver so badly, then why have they not implemented it as their competitor has? Is there a specific reason or technical roadblock that prevents it?
 

Glo.

Diamond Member
Apr 25, 2015
5,705
4,549
136
If AMD needs this higher throughput from the driver so badly, then why have they not implemented it as their competitor has? Is there a specific reason or technical roadblock that prevents it?
The difference between static scheduling and dynamic scheduling. AMD does not rely on the CPU to schedule tasks for the GPU, and the fact that CPU-side scheduling was core to DX11's design was a problem for AMD GPUs.

AMD GPUs can manage scheduling themselves.
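The static-vs-dynamic distinction lives in GPU hardware and drivers, but a CPU-side analogy in C++ (purely illustrative, all names invented) captures the shape of it: under static scheduling the submitter carves the work up front and must stay ahead of every worker, while under dynamic scheduling the workers pull the next item themselves.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// "Static": work is partitioned up front by whoever submits it, so the
// submitter decides once who does what (the CPU-driven DX11 shape).
int run_static(int items, int workers) {
    std::atomic<int> done{0};
    std::vector<std::thread> pool;
    int per = items / workers;
    for (int w = 0; w < workers; ++w)
        pool.emplace_back([&, w] {
            int begin = w * per;
            int end = (w == workers - 1) ? items : begin + per;
            for (int i = begin; i < end; ++i) done++;  // fixed slice
        });
    for (auto& t : pool) t.join();
    return done;
}

// "Dynamic": workers grab the next item from a shared counter as they
// free up, so no central scheduler has to keep feeding them.
int run_dynamic(int items, int workers) {
    std::atomic<int> next{0}, done{0};
    std::vector<std::thread> pool;
    for (int w = 0; w < workers; ++w)
        pool.emplace_back([&] {
            while (next.fetch_add(1) < items) done++;  // self-serve queue
        });
    for (auto& t : pool) t.join();
    return done;
}
```

Both versions finish the same work; the difference is who makes the scheduling decisions, which is the rough analogue of "the CPU feeds the GPU" versus "the GPU manages itself."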
 

lifeblood

Senior member
Oct 17, 2001
999
88
91
If AMD could have just rewritten their drivers to work better with DX11, they would have. The fact that they spent so much time and money on Mantle shows that GCN was simply not optimal on DX11 and needed something else to let it shine.
 
  • Like
Reactions: DarthKyrie

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
If AMD could have just rewritten their drivers to better work with DX11 they would have. The fact they spent so much time and money on mantle shows that GCN was simply not optimal on DX11 and it needed something else to let it shine.
No. Software development is extremely expensive. It's not that easy.