There have been lots of times when Nvidia cards were clearly superior to AMD's, so why go after one that was an absolute win for AMD? You could make the argument that Nvidia still has the better feature set nowadays because they have PhysX, 3D Vision, and CUDA, while AMD only has Eyefinity (and just released a limited form of 3D). Also, by your stance the 2900 XT was a lot better than the 8800 Ultra because the AMD card was DX10.1 compliant, so that argument goes both ways.
DX hurts Nvidia far more than it hurts AMD. If vendors had completely made their own standards, ATi wouldn't still be around. I know it was faster, but the 9700 Pro's feature set sucked compared to the GeForce FX's feature set.
To be honest, I agree with this AMD guy; right now consoles are holding back PC games' advancement. I mean, even the HD 6990 is CPU-bottlenecked.
Yes, that's true, but it's not what the AMD guy is saying. He is saying that APIs are holding back performance, which is about one of the most ignorant things I've ever read.
repi @ DICE said: I've been pushing for this for years in discussions with all the IHVs: to get lower and lower-level control over the GPU resources, to get rid of the serial & intrinsic driver bottleneck, to enable the GPU to set up work for itself, and to tear down both the logical CPU/GPU latency barrier in WDDM and the physical PCI-E latency barrier, to enable true heterogeneous low-latency computing. This needs to be done through both proprietary and standard means over many years going forward.
I'm glad Huddy goes out and talks about it in public as well; he gets it! And it's about time that an IHV talks about this.
This is the inevitable, and not too distant, future, and it will be the true paradigm shift on the PC: entire new software ecosystems will be built up, with tools, middleware, engines, and games themselves differentiating in ways not possible at all now.
- Will benefit consumers with more interesting experiences & cheaper hardware (more performance/buck).
- Will benefit developers by empowering unique creative & technical visions and with higher performance (more of everything).
- Will benefit hardware vendors by letting them focus on good core hardware instead of differentiating through software, as well as finally releasing them (and us) from the shackles of the Microsoft three-year OS release schedule, where new driver/SW/HW functionality "may" get in.
This is something I've been thinking about and discussing with all parties (& some fellow gamedevs) at different levels & from different aspects over a long period of time; I should really write up a proper blog post going into the details soon. This is just a quick half-rant reply (sorry).
The best graphics driver is no graphics driver.
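For anyone wondering what that "serial & intrinsic driver bottleneck" looks like in code: even D3D11's deferred contexts, the mechanism meant to spread command recording across threads, still funnel all execution through the single immediate context. A minimal sketch, assuming a Windows/D3D11 toolchain linking d3d11.lib; no real rendering work is recorded:

```cpp
// Minimal D3D11 deferred-context sketch: record on one context, replay on
// the immediate context. Assumes Windows + d3d11.lib; no swap chain needed.
#include <d3d11.h>
#include <cstdio>

int main() {
    ID3D11Device*        device       = nullptr;
    ID3D11DeviceContext* immediateCtx = nullptr;
    D3D_FEATURE_LEVEL    level;

    // Create a hardware device plus the single immediate context that all
    // GPU work is ultimately submitted through.
    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                   0, nullptr, 0, D3D11_SDK_VERSION,
                                   &device, &level, &immediateCtx);
    if (FAILED(hr)) { std::printf("no D3D11 device\n"); return 1; }

    // A deferred context records commands (typically on a worker thread)
    // without touching the GPU...
    ID3D11DeviceContext* deferredCtx = nullptr;
    ID3D11CommandList*   commandList = nullptr;
    if (SUCCEEDED(device->CreateDeferredContext(0, &deferredCtx))) {
        // ...state changes and draws would be recorded here...
        if (SUCCEEDED(deferredCtx->FinishCommandList(FALSE, &commandList))) {
            // ...but execution still goes through the one immediate context
            // and, past it, one serial driver-side queue: the bottleneck
            // repi is talking about.
            immediateCtx->ExecuteCommandList(commandList, TRUE);
            commandList->Release();
        }
        deferredCtx->Release();
    }

    immediateCtx->Release();
    device->Release();
    return 0;
}
```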
TBH, games have always been CPU-limited on the fastest GPUs. With a few exceptions, games just don't use modern CPUs as well as they use GPUs.
CPU limited to what though? 100 FPS?
I'd change the language around, because the CPU is hardly a limitation with the fastest GPUs. A high-end CPU (or an overclocked midrange one) was never really the limitation, and the GPU is now fast enough that it's no longer the limitation either.
Usually developers make it so that by the point at which a high-end CPU becomes "the limitation", the user experience is already very close to perfect. By that I mean the FPS is high enough that a human perceives no performance limit; the numbers can go higher, but it's not really noticeable.
The exception would be 3D gaming, where you have to maintain roughly double the FPS to get the same perceived performance as in 2D, because left and right frames are rendered separately.
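A quick back-of-the-envelope check of that stereo claim (the 60 FPS target below is just an illustrative number, not from the post):

```cpp
#include <cstdio>

int main() {
    // Stereo 3D renders a separate view per eye, so the GPU must produce
    // roughly twice as many frames as the rate the player perceives.
    const int perceivedFps  = 60; // illustrative target, not from the post
    const int viewsPerFrame = 2;  // left eye + right eye
    const int renderedViews = perceivedFps * viewsPerFrame;

    std::printf("To feel like %d FPS in stereo, render ~%d views/sec\n",
                perceivedFps, renderedViews);
    return 0;
}
```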
Thank you; that's why the claim is bogus to me. Every bit of that available power can be put to use, but Huddy claims it's sitting there doing nothing. Huddy himself said of the DX API that its performance impact on the PC "can vary from almost nothing at all to a huge overhead".
Then his genius self goes on to say: "On consoles, you can draw maybe 10,000 or 20,000 chunks of geometry in a frame, and you can do that at 30-60fps. On a PC, you can't typically draw more than 2-3,000 without getting into trouble with performance."
???? I'll take my backwards compatibility, Huddy. If you want hardware-specific programming, pay people off like Nvidia does.
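Huddy's 2-3,000 figure is about per-draw CPU overhead, which is also why engines batch and instance. A hedged sketch of the two submission styles, assuming an already-initialized D3D11 pipeline (shaders, buffers, and input layout bound; indexCount and kObjects are made-up illustrative values):

```cpp
#include <d3d11.h>

// Assumes a fully initialized pipeline (shaders, input layout, vertex/index
// buffers already bound). indexCount and kObjects are illustrative values.
static const UINT indexCount = 36;    // e.g. one small mesh
static const UINT kObjects   = 10000; // the "chunks of geometry" in question

// Naive path: one API call per object. Every call pays runtime/driver
// overhead on the CPU, which is where the PC draw-call budget goes.
void DrawNaive(ID3D11DeviceContext* ctx) {
    for (UINT i = 0; i < kObjects; ++i) {
        // per-object constant-buffer updates would also go here
        ctx->DrawIndexed(indexCount, 0, 0);
    }
}

// Batched path: one instanced call submits all copies at once, so the
// per-draw CPU overhead is paid once instead of kObjects times.
void DrawInstanced(ID3D11DeviceContext* ctx) {
    ctx->DrawIndexedInstanced(indexCount, kObjects, 0, 0, 0);
}
```

Instancing only helps when the objects share geometry, which is exactly why batching doesn't make the API-overhead complaint go away entirely.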
This is nothing more than a press release by a video graphics company trying to sell more hardware.
What's holding back graphics is lazy animation departments using low-quality, pasty textures and/or terrible art design, and then using HDR, dynamic lighting, and blur to cover up the fact that the game looks like shit.
o.0 Example, please...? Perhaps it's because I haven't played many games, but... does this actually happen?
Now, that I can agree with. However, that can be dealt with by major API and general SDK changes, and it certainly should be dealt with that way, too. IMO, there's no good reason why DX11 can't be like DX9: give it a fairly long life, and rebuild it for the next round.
This is a quote from repi @ DICE commenting on Huddy over at Beyond3D.
GEESH, off-topic, but how many rigs do you need?
To be honest, I agree with this AMD guy; right now consoles are holding back PC games' advancement. I mean, even the HD 6990 is CPU-bottlenecked. It's so sad. I miss the 2007 era when Crysis launched; there was real excitement every time a new GPU launched, because we wanted to know if that GPU could finally play Crysis.
You're forgetting that right now AMD cards have higher OpenGL performance than Nvidia, plus APP and physics? And with AMD you don't need to go to a multi-card configuration to get a taste of a multi-card setup.
Also, the 2900 XT didn't support DX10.1; that was the HD 38xx series.
Neither is on par with Intel, though.

Huh? AMD is better than Nvidia in physics?
