Cerb
Elite Member
Not really (^^ bolding mine). You said:

"There are gpu draw calls, there are cpu draw calls, and there are cpu draw calls that affect gpu performance. Thus substituting the DX API with some low-level API (a la libGCM), or eliminating the API completely, will get more performance out of the same GPU. This is basically what I meant when I said that some cpu draw calls can affect gpu performance on PCs."

The first category there...isn't. A GPU draw call is a call made by the CPU; i.e., it is your third category, and that category is either a bit wrong or poorly described. The CPU would be best put to use doing other things during the time of the call, in which it has to switch contexts, and may or may not have to wait on the GPU to do something (I'd bet nV and AMD have optimizations that involve lying in the callbacks 🙂). DX10 reduced the need to go to the GPU as much and, as part of WDDM, moved a great deal of the graphics code into userspace, so the API calls get by with far fewer context switches, removing most of the badness from the API.
Bare metal is expensive. Too expensive, already, today. Being able to access it for occasional hand optimization is important, since console players won't get faster hardware next year. But it's not the way they are going to be programming their game, unless they intend to spend 10+ years producing it. The ideal is to have an API, including abstract VM IRs for streamy stuff like shaders, that can be translated to machine code that spends most of its time doing the work, rather than waiting on slow memory and buses, and/or call prologues and epilogues.