Richard Huddy of AMD says DX API & Consoles holding back PC gaming visuals.


exar333

Diamond Member
Feb 7, 2004
8,518
8
91
It would be awesome to go back to circa-early-1990s gaming, where you needed to check the box to see if your specific brand of card was supported. That would be way better than just getting a game and playing it.

/sarcasm
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
To be honest, I agree with this AMD guy; right now consoles are holding back PC game advancement. I mean, even the HD 6990 was CPU bottlenecked. It's so sad. I miss the 2007 era when Crysis launched; there was real excitement every time a new GPU launched, because we wanted to know if that GPU could finally play Crysis.

There have been lots of times when Nvidia cards were clearly superior to AMD's, so why go after one that absolutely dominated for AMD? You could make the argument that Nvidia still has a better feature set nowadays because they have physics, 3D and CUDA, while AMD only has Eyefinity (and just released a limited form of 3D). Also, by your stance, the 2900 XT was a lot better than the 8800 Ultra because the AMD card was DX10.1 compliant, so that argument goes both ways.

You are forgetting that right now AMD cards have higher performance in OpenGL than Nvidia, plus APP and physics. And you don't need to go to a multi-card configuration with AMD to taste a multi-card config.

And the 2900 XT didn't support DX 10.1; that was the HD 38xx.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
DX hurts Nvidia far more than it hurts AMD. If vendors completely made their own standards, then ATi wouldn't still be around. I know it was faster, but the 9700 Pro's feature set sucked compared to the GeForce FX's feature set.

You could argue that being faster was a feature too; you could enable AA and AF and still get playable performance with the 9700 Pro in games that came years later (Doom 3, HL2, Far Cry).

Also, AMD had features that Nvidia didn't support, like tessellation, 3Dc, etc. You can't say it hurts one vendor or the other, and saying that ATi wouldn't be around if it weren't for DX is a very odd thing to say.

Are you implying that if it weren't for DX the FX would've been better than R300?

Well, I could say that if it weren't for DX, then Cayman would be better than GF110, because the latter sucks at OpenGL.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
To be honest, I agree with this AMD guy; right now consoles are holding back PC game advancement. I mean, even the HD 6990 was CPU bottlenecked.

TBH, games have always been CPU limited on the fastest GPUs. With a few exceptions, games just don't use modern CPUs as well as they use GPUs.
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
To be honest, I agree with this AMD guy; right now consoles are holding back PC game advancement. I mean, even the HD 6990 was CPU bottlenecked.

Yes, that's true, but that's not what the AMD guy is saying. He is saying that APIs are holding back performance, which is about one of the most ignorant things I've ever read.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
I personally think that games are storage limited. SSDs becoming mainstream should lead to some interesting developments in texture streaming.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
Yes, that's true, but that's not what the AMD guy is saying. He is saying that APIs are holding back performance, which is about one of the most ignorant things I've ever read.

Yep, it's not the first time Huddy has said things that make no sense.
 

Outrage

Senior member
Oct 9, 1999
217
1
0
This is a quote from repi @ DICE commenting on Huddy over at Beyond3D:

repi @ Dice said:
I've been pushing for this for years in discussions with all the IHVs; to get lower and lower level control over the GPU resources, to get rid of the serial & intrinsic driver bottleneck, enable the GPU to set up work for itself, as well as tear down both the logical CPU/GPU latency barrier in WDDM and the physical PCI-E latency barrier, to enable true heterogeneous low-latency computing. This needs to be done through both proprietary and standard means over many years going forward.

I'm glad Huddy goes out and talks about it in public as well; he gets it! And it's about time that an IHV talks about this.

This is the inevitable, and not too far, future and it will be the true paradigm shift on the PC that will see entire new SW ecosystems being built up with tools, middleware, engines and games themselves differentiating in a way not possible at all now.

- Will benefit consumers with more interesting experiences & cheaper hardware (more performance/buck).

- Will benefit developers by empowering unique creative & technical visions and with higher performance (more of everything).

- Will benefit hardware vendors with being able to focus on good core hardware instead of differentiating through software as well as finally releasing them and us from the shackles of the Microsoft 3 year OS release schedule where new driver/SW/HW functionality "may" get in.

This is something I've been thinking about and discussing with all parties (& some fellow gamedevs) on different levels & aspects of over a long period of time, should really write together a more proper blog post going into details soon. This is just a quick half-rant reply (sorry)

The best graphics driver is no graphics driver.

Frostbite 3 maybe....
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
TBH, games have always been CPU limited on the fastest GPUs. Games just don't use modern CPUs as well as GPUs. Except for a few games.

CPU limited to what, though? 100 FPS?
I'd change the language around, because the CPU is hardly a limitation with the fastest GPUs. It's that the CPU (if it's a high-end CPU or an overclocked midrange CPU) was never really a limitation, and the GPU is now fast enough that it's no longer a limitation either.

Usually developers tune things so that by the time a high-end CPU becomes "the limitation", the user experience is already very close to perfect. By that I mean FPS is high enough that a human perceives no performance limit; the numbers can go higher, but it isn't really noticeable.

The exception would be 3D gaming, where you have to maintain roughly double the FPS to get the same perceived performance as 2D, because of the separate left and right frames.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
CPU limited to what, though? 100 FPS?
I'd change the language around, because the CPU is hardly a limitation with the fastest GPUs. It's that the CPU (if it's a high-end CPU or an overclocked midrange CPU) was never really a limitation, and the GPU is now fast enough that it's no longer a limitation either.

Usually developers tune things so that by the time a high-end CPU becomes "the limitation", the user experience is already very close to perfect. By that I mean FPS is high enough that a human perceives no performance limit; the numbers can go higher, but it isn't really noticeable.

The exception would be 3D gaming, where you have to maintain roughly double the FPS to get the same perceived performance as 2D, because of the separate left and right frames.

Well, the person I was responding to thinks we do need more than 150 fps in games, as he was alluding to the 6990 being CPU limited.

Well, I think if a game is running so fast on a GPU that the CPU becomes the limitation, then they aren't making good use of that GPU, are they?

Also, it isn't 100 fps. Take a look at StarCraft 2: it doesn't run any better on a quad core than on a dual core, and that game is definitely not running at 100 fps. Another game I had personal experience with was F1 2010. On a 5770 at 1920x1080 with 8xAA, I got an extra 5 fps when I overclocked my Phenom X4 from 2.8 GHz to 3.5 GHz. So do you think it's OK that a modern DX11 game is partly CPU limited when a modern quad-core CPU is paired with a sub-$100 graphics card?

Forgive me if my post makes little sense. I'm feeling a bit sleepy. :)
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
Thank you, that's why the claim seems bogus to me. Every bit of that available power can be put to use, but Huddy claims it's sitting there doing nothing. Huddy said it himself of the DX API that, when it comes to a performance impact on the PC, "it can vary from almost nothing at all to a huge overhead".

Then his genius self goes on to say, "On consoles, you can draw maybe 10,000 or 20,000 chunks of geometry in a frame, and you can do that at 30-60fps. On a PC, you can't typically draw more than 2-3,000 without getting into trouble with performance."

I'll take my backwards compatibility, Huddy. If you want hardware-specific programming, pay people off like Nvidia does.

Software-layer overhead is just an inherent part of PC development. You have good developers and you have bad ones. The good ones, like Valve, know what they are doing and get very consistent scaling across multiple graphics cards, even low-end ones. Yes, Source isn't the most advanced engine around, but it is goddamn efficient at pumping out what it does at very high FPS, even with the resolution and AA cranked up. It's amazing that something as low end as a GeForce 310M 256 MB (16 shaders) can manage 720p, mixed medium-high settings, trilinear AF and no AA in Left 4 Dead 2 without dropping much below 30 FPS.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,007
126
A console is a fixed platform that never changes; a PC isn’t like that. You can’t make the API go away. Then you’d have to program to specific hardware like in the DOS days. No modern game developer is going to do that given how complex games already are.

Also a modern operating system with pre-emptive multi-tasking and a HAL can’t allow programs to take control of the hardware like that.
 

alcoholbob

Diamond Member
May 24, 2005
6,390
470
126
This is nothing more than a press release by a video graphics company trying to sell more hardware.

What's holding back graphics is lazy animation departments using low-quality, pasty textures and/or terrible art design, and then trying to use HDR, dynamic lighting and blur to cover up the fact that the game looks like shit.
 

Martimus

Diamond Member
Apr 24, 2007
4,490
157
106
There have been lots of times when Nvidia cards were clearly superior to AMD's, so why go after one that absolutely dominated for AMD? You could make the argument that Nvidia still has a better feature set nowadays because they have physics, 3D and CUDA, while AMD only has Eyefinity (and just released a limited form of 3D). Also, by your stance, the 2900 XT was a lot better than the 8800 Ultra because the AMD card was DX10.1 compliant, so that argument goes both ways.

In features, the 2900 XT had a lot more than the 8800 GTX. But like the FX, its performance was nowhere near where it needed to be. Nvidia only recently caught up to the audio and video features that AMD has had since the 2900 XT, but they have had the more important thing, performance, for most of this time.
 
Mar 13, 2011
134
0
0
This is nothing more than a press release by a video graphics company trying to sell more hardware.

What's holding back graphics is lazy animation departments using low-quality, pasty textures and/or terrible art design, and then trying to use HDR, dynamic lighting and blur to cover up the fact that the game looks like shit.

o.0 Example, please...? Perhaps it's because I haven't played many games, but... does this really happen?
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
IMHO,

I'm just surprised to see AMD arguing how important innovation is and how standards can hold back innovation. I agree with it. A mix of proprietary tech and standards will do wonders for innovation and push the industry much further than waiting for open standards to do it on their own.
 

Throckmorton

Lifer
Aug 23, 2007
16,829
3
0
o.0 Example, please...? Perhaps it's because I haven't played many games, but... does this really happen?


Most new games these days don't look good. They have a lot of technically fancy effects, but poor artistic quality, ugly textures, clunky animations, etc.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Even though I agree with it, what bothers me is that IHVs will sing and dance to whatever tune is in their own best interest:

Richard Huddy singing the DirectX 11 tune and why we all should be excited:

http://blogs.amd.com/play/2009/06/02/why-we-should-get-excited-about-directx-11/


Now the dramatic switch to "standards are holding AMD back"?

Why?

For me, it's simple: it's about competitive leverage, and what does AMD have?

The future of Fusion!
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
This is a quote from repi @ DICE commenting on Huddy over at Beyond3D
Now, that I can agree with. However, that can be dealt with by major API and general SDK changes, and certainly should be dealt with that way, too. IMO, there's no good reason why DX11 can't be like DX9: give it a fairly long life, and rebuild it for the next round.

It would not be easy, but there's no reason that MS could not, with input from HW and SW folks, come up with a solution that appears low-level, still has to hit a video driver, and still abstracts away from the HW. They could stand to make an API, and a bytecode-based language, that simulates an abstract parallel processing machine, with set 3D scene functions on the side, and leave all the rest to the devs. Still have it not actually be machine access, and compile it all on application load, so no JIT/interpreter latency issues can come about, but get rid of the high-level abstractions involved in building a scene.
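As a rough illustration of the "compile it all on application load" idea, that abstract bytecode machine is not far from how shader bytecode already works: HLSL is compiled once, up front, into vendor-neutral bytecode that the driver translates to machine code later. The sketch below is my own, assuming the stock d3dcompiler API and a C++ host application; the function name and error handling are illustrative only.

Code:
#include <d3dcompiler.h>
#include <stdexcept>
#include <string>
#pragma comment(lib, "d3dcompiler.lib")

// Compile HLSL source to device-independent bytecode once, at load time.
// The driver turns this bytecode into real machine code when the shader
// object is created, so there is no JIT or interpreter in the frame loop.
ID3DBlob* CompileShaderAtLoad(const std::string& hlslSource,
                              const char* entryPoint,   // e.g. "PSMain"
                              const char* target)       // e.g. "ps_5_0"
{
    ID3DBlob* bytecode = nullptr;
    ID3DBlob* errors   = nullptr;
    HRESULT hr = D3DCompile(hlslSource.data(), hlslSource.size(),
                            nullptr,            // source name, for diagnostics only
                            nullptr, nullptr,   // no #defines, no #include handler
                            entryPoint, target,
                            D3DCOMPILE_OPTIMIZATION_LEVEL3, 0,
                            &bytecode, &errors);
    if (FAILED(hr))
    {
        std::string msg = errors ? static_cast<const char*>(errors->GetBufferPointer())
                                 : "unknown shader compile error";
        if (errors) errors->Release();
        throw std::runtime_error(msg);
    }
    if (errors) errors->Release();
    return bytecode;   // hand to ID3D11Device::Create*Shader at load time
}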

Again, it wouldn't be easy, but it certainly would be no more difficult than .NET. I remember the old days, and from a gamer standpoint, DirectX has been outstanding, and is 100% worth the risk of not being able to get theoretical hardware performance. Making something better would be acceptable. Machine-level access? Oh, Hell no.

P.S. If Roderic is correct in that link, one of the key knocks against the PC vs. the console, the geometry limitation, has been mostly removed with DX10, though it's still not as cheap and easy as if the dev were given the ability to give you a BSOD.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
GEESH, off-topic, but how many rigs do you need?

The i7 is my gaming rig at home. An X3350 is for my daughters to play Dora/Diego/etc., the Q6600 is at mom's house so she can check her email, and another X3350 is my office computer. All have one or more GPUs and run SETI most of the time. I just started passing computers down instead of selling them off a few years ago. I also have a couple of laptops at the house and about 20 C2D E7xxx-series rigs at work, though typically only a few of those run DC at any one time.

back OT:

To be honest, I agree with this AMD guy; right now consoles are holding back PC game advancement. I mean, even the HD 6990 was CPU bottlenecked. It's so sad. I miss the 2007 era when Crysis launched; there was real excitement every time a new GPU launched, because we wanted to know if that GPU could finally play Crysis.



You are forgetting that right now AMD cards have higher performance in OpenGL than Nvidia, plus APP and physics. And you don't need to go to a multi-card configuration with AMD to taste a multi-card config.

And the 2900 XT didn't support DX 10.1; that was the HD 38xx.

Huh? AMD is better than Nvidia in physics? Good to know; I guess we can stop talking about that now. I think you left something out of your multi-card configuration comment, because it made no sense at all.
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
CPU limited to what, though? 100 FPS?
I'd change the language around, because the CPU is hardly a limitation with the fastest GPUs. It's that the CPU (if it's a high-end CPU or an overclocked midrange CPU) was never really a limitation, and the GPU is now fast enough that it's no longer a limitation either.

Usually developers tune things so that by the time a high-end CPU becomes "the limitation", the user experience is already very close to perfect. By that I mean FPS is high enough that a human perceives no performance limit; the numbers can go higher, but it isn't really noticeable.

The exception would be 3D gaming, where you have to maintain roughly double the FPS to get the same perceived performance as 2D, because of the separate left and right frames.

With ultra-high-end GPU/CPU combos there will be literally no difference at all, because your monitor is limiting everything to 60 or 120 fps anyway.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1

I thought this article was interesting and did not see it posted here yet. Apologies if it has already been posted.

Excerpts:

'It's funny,' says AMD's worldwide developer relations manager of its GPU division, Richard Huddy. 'We often have at least ten times as much horsepower as an Xbox 360 or a PS3 in a high-end graphics card, yet it's very clear that the games don't look ten times as good. To a significant extent, that's because, one way or another, for good reasons and bad - mostly good, DirectX is getting in the way.' Huddy says that one of the most common requests he gets from game developers is: 'Make the API go away.'

'I certainly hear this in my conversations with games developers,' he says, 'and I guess it was actually the primary appeal of Larrabee to developers – not the hardware, which was hot and slow and unimpressive, but the software – being able to have total control over the machine, which is what the very best games developers want. By giving you access to the hardware at the very low level, you give games developers a chance to innovate, and that's going to put pressure on Microsoft – no doubt at all.'


and

'The funny thing about introducing shaders into games in 2002,' says Huddy, 'was that we expected that to create more visual variety in games, but actually people typically used shaders in the most obvious way. That means that they've used shaders to converge visually, and lots of games have the same kind of look and feel to them these days on the PC. If we drop the API, then people really can render everything they can imagine, not what they can see – and we'll probably see more visual innovation in that kind of situation.'

and

Now the PC software architecture – DirectX – has been kind of bent into shape to try to accommodate more and more of the batch calls in a sneaky kind of way. There are the multi-threaded display lists, which come up in DirectX 11 – that helps, but unsurprisingly it only gives you a factor of two at the very best, from what we've seen. And we also support instancing, which means that if you're going to draw a crate, you can actually draw ten crates just as fast as far as DirectX is concerned.
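To make the "multi-threaded display lists" point concrete, here is a minimal sketch of my own (not from the article) of DX11 deferred contexts: several threads record command lists in parallel, but playback still goes through the single immediate context, which is one reason the gain tops out quickly. It assumes an ID3D11Device and immediate context created elsewhere, with the usual state/draw calls issued where the comment indicates.

Code:
#include <d3d11.h>
#include <thread>
#include <vector>

// Record draw commands on a worker thread via a deferred context, then
// bake them into a command list ("display list") for later playback.
ID3D11CommandList* RecordChunk(ID3D11Device* device)
{
    ID3D11DeviceContext* deferredCtx = nullptr;
    device->CreateDeferredContext(0, &deferredCtx);

    // ... set pipeline state and issue Draw*/DrawIndexed* calls here,
    //     exactly as you would on the immediate context ...

    ID3D11CommandList* cmdList = nullptr;
    deferredCtx->FinishCommandList(FALSE, &cmdList);
    deferredCtx->Release();
    return cmdList;
}

void SubmitFrame(ID3D11Device* device, ID3D11DeviceContext* immediateCtx)
{
    // One worker per scene chunk; each records its own command list in parallel.
    std::vector<ID3D11CommandList*> lists(4);
    std::vector<std::thread> workers;
    for (size_t i = 0; i < lists.size(); ++i)
        workers.emplace_back([&, i] { lists[i] = RecordChunk(device); });
    for (auto& w : workers) w.join();

    // Playback is still serialised through the one immediate context,
    // so the CPU-side win is bounded, much as Huddy describes.
    for (ID3D11CommandList* list : lists)
    {
        immediateCtx->ExecuteCommandList(list, FALSE);
        list->Release();
    }
}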

But it's still very hard to throw tremendous variety into a PC game. If you want each of your draw calls to be a bit different, then you can't get over about 2-3,000 draw calls typically - and certainly a maximum amount of 5,000. Games developers definitely have a need for that. Console games often use 10-20,000 draw calls per frame, and that's an easier way to let the artist's vision shine through.'
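Huddy's crate example maps directly onto the difference between issuing one draw call per object and one instanced call for the whole batch; the per-call path is exactly where the 2-3,000-draw budget gets spent. The sketch below is my own illustration, assuming the crate's index/vertex buffers and a per-instance data buffer are already bound, with 'indexCount' being the crate mesh's index count. The catch, as the excerpt notes, is that instancing only helps when the objects are near-identical; once every draw needs to be a bit different, you are back to paying per-call overhead.

Code:
#include <d3d11.h>

// Naive path: one API call, and one helping of driver overhead, per crate.
void DrawCratesNaive(ID3D11DeviceContext* ctx, UINT indexCount, UINT crateCount)
{
    for (UINT i = 0; i < crateCount; ++i)
    {
        // ... update a per-object constant buffer with crate i's transform ...
        ctx->DrawIndexed(indexCount, 0, 0);              // crateCount draw calls
    }
}

// Instanced path: one API call draws every crate; per-crate transforms come
// from a per-instance vertex buffer that the vertex shader reads.
void DrawCratesInstanced(ID3D11DeviceContext* ctx, UINT indexCount, UINT crateCount)
{
    ctx->DrawIndexedInstanced(indexCount, crateCount, 0, 0, 0);   // one draw call
}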

and

Indeed, Crytek's R&D technical director, Michael Glueck, said 'yes, that would appeal to us,' when he heard about Huddy's claims. However, he also seemed surprised, pointing out that 'AMD is the biggest force driving the OpenCL standard; it even released drivers to run OpenCL on SSE2 and that's kind of a way to remove low-level hardware access for simple compute tasks.'

Glueck also points out that 'some years ago, all CPU performance critical tasks were done by the programmer, from low-level assembly optimisations to high-level thread and memory management. Now that the "compute world" merges CPUs and GPUs, you waste a lot of time, by using higher level API-layers such as OpenCL, CUDA or Direct Compute, to execute less smart algorithms on GPUs, because they still run faster than on CPUs.

'Having direct access to hardware would mean no drivers magically translating your byte code once again, and also having low-level memory management available, which you have to some degree with CUDA, and also your own thread scheduler would be really enjoyable. It definitely makes sense to have a standardised, vendor-independent API as an abstraction layer over the hardware, but we would also prefer this API to be really thin and allow more low-level access to the hardware. This will not only improve performance, but it will also allow better use of the available hardware features.'

Either way, Glueck agrees that APIs are likely to become less important to game developers in the future, especially as GPUs start to behave more like general purpose units than fixed function graphics hardware.

APIs are currently 'made to suit a special hardware design,' explains Glueck, 'but as more and more GPUs become general purpose, I doubt there will be more fixed function units added. Even nowadays, rasterisation and tessellation in software on OpenCL can work quite nicely. It won't match the speed of the fixed hardware, but just like alpha-test, perspective interpolation and dedicated vertex- and fragment-units; it will be done by GPGPU units at some point.'
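Glueck's point about doing rasterisation in software is easy to sketch: the core of a rasteriser is just a signed "edge function" evaluated per pixel. The CPU version below is my own minimal illustration (an OpenCL kernel would run the same arithmetic with one work-item per pixel); fixed-function hardware performs this same test, only massively parallel and far faster.

Code:
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Signed area test: which side of edge (a -> b) does point p lie on?
static float EdgeFunction(const Vec2& a, const Vec2& b, const Vec2& p)
{
    return (p.x - a.x) * (b.y - a.y) - (p.y - a.y) * (b.x - a.x);
}

// Write coverage for one triangle into an 8-bit framebuffer.
void RasteriseTriangle(const Vec2& v0, const Vec2& v1, const Vec2& v2,
                       std::vector<uint8_t>& framebuffer, int width, int height)
{
    // Only test pixels inside the triangle's bounding box.
    int minX = std::max(0,          (int)std::floor(std::min({v0.x, v1.x, v2.x})));
    int maxX = std::min(width - 1,  (int)std::ceil (std::max({v0.x, v1.x, v2.x})));
    int minY = std::max(0,          (int)std::floor(std::min({v0.y, v1.y, v2.y})));
    int maxY = std::min(height - 1, (int)std::ceil (std::max({v0.y, v1.y, v2.y})));

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x)
        {
            Vec2 p{ x + 0.5f, y + 0.5f };          // sample at the pixel centre
            float w0 = EdgeFunction(v1, v2, p);
            float w1 = EdgeFunction(v2, v0, p);
            float w2 = EdgeFunction(v0, v1, p);
            // Inside if all three edge tests agree in sign (either winding order).
            bool inside = (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                          (w0 <= 0 && w1 <= 0 && w2 <= 0);
            if (inside)
                framebuffer[(size_t)y * width + x] = 255;
        }
}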

Whether this actually happens remains to be seen, but Huddy certainly seems convinced that the idea of having no API at all is going to put pressure on Microsoft in the future. By getting low-level access to the hardware, games developers could potentially make many more draw calls, and push PCs way ahead of consoles in terms of graphics, which is where they should be, given the hardware we've got. However, this could also come at a cost in terms of stability and compatibility.