Keysplayr
Elite Member
Originally posted by: evolucion8
Originally posted by: Keysplayr
GPGPU performance.
Why isn't it optimized? Why can't they optimize it? Why won't they? If they were able to do it, they would, right? This argument is utter BS, because they have had waaaaay more than enough time to properly code for this arch. I think it's the best they could get out of it. I love how you guys are touting performance that will never materialize, because it's damn near impossible to code for ATI in its current arch. Which translates to: if you can't code for it, it's almost useless to try.
That's your opinion, which we all respect, but nVidia's and ATi's engineers are not lousy engineers; they make decisions based on R&D. GeForce FX was a very flawed architecture, and yet nVidia was able to optimize it so well that it could almost keep up with the R3x0 architecture in nVidia-optimized games. Considering that ATi has been working on its superscalar architecture for much longer, and that ATi has a much better background in software engineering thanks to its merger with AMD, there's no huge reason to spend so much time on GPGPU performance when most ATi cards sold are used for games.
It's a matter of execution and resource allocation with the driver development team. While nVidia's architecture currently has the upper hand in GPGPU, I don't see it as a key selling point, or as a must-have feature, since most applications today aren't completely parallel and will require general-purpose calculations, which will run like crap on the massively parallel GPUs of today. In the end, both the software engineer and the ATi driver engineer must work to take advantage of the architecture's optimizations. And yet Folding@Home is an old client which doesn't even use the data cache share found on the HD 4000 series; it was made only for the HD 3x00 and lower, and for the GeForce 8, which uses a completely different approach that will work great no matter what. nVidia is about predictable performance, since no optimization is necessary to get good performance out of it in GPGPU applications; ATi is about extracting and maximizing parallelism, which will require more work.
Well spoken, but after saying all that, it still doesn't change the end game.
This raises many questions. Questions that have been asked many times before.
For example:
"There is no huge reason for AMD to spend so much time in GPGPU performance"
Ask the universities, laboratories, the military, and corporations what they've spent on Nvidia GPUs for their computing (not gaming) needs. If that's not a reason to pursue GPGPU technology, I don't know what is.
"Is a matter of execution and resource allocation with the driver development team."
As many of us have suspected over the last few months, AMD may not even HAVE the resources to dedicate to GPGPU R&D. They are stretched very thin these days, and it's understandable that they need to focus on what can make them money right now.
It's a tough situation.
"Folding@Home is an old client which doesn't even use the Data cache share found on the HD 4000 series"
I'm no programmer, but is it really that difficult to code F@H to utilize the data cache share found on the HD 4000 series? These are my points. It must be TERRIBLY difficult to code for this architecture; otherwise we would be seeing many, many more 3rd-party applications by now, a year or so since R7xx launched. We wouldn't have to rely on AMD's resources, other than a good SDK for devs. Let the devs do all the work. But they are not, either because the SDK (Stream) is really crappy, or because the SDK is good but the hardware is just so wrong, or so awkward, for these types of GPGPU applications that devs don't even bother.
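For what it's worth, here is a rough, hypothetical sketch of the kind of restructuring that question implies, written in CUDA since that's the toolchain devs are actually using. The kernel name, the force model, and every parameter are made up for illustration, and CUDA's __shared__ memory is only roughly analogous to the HD 4000 series' on-chip data share:

#include <cuda_runtime.h>

#define TILE 128  // must match the block size used at launch

// Illustrative F@H-style force accumulation: each block stages a tile of
// particle positions in on-chip shared memory so every thread can interact
// with the whole tile without re-reading device memory.
__global__ void accumulate_forces(const float4 *pos, float4 *force, int n)
{
    __shared__ float4 tile[TILE];  // on-chip staging buffer

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float4 p = (i < n) ? pos[i] : make_float4(0.f, 0.f, 0.f, 0.f);
    float3 acc = make_float3(0.f, 0.f, 0.f);

    for (int base = 0; base < n; base += TILE) {
        // Each thread loads one particle of the current tile.
        if (base + threadIdx.x < n)
            tile[threadIdx.x] = pos[base + threadIdx.x];
        __syncthreads();

        // Interact with every valid particle in the tile from fast memory.
        for (int j = 0; j < TILE && base + j < n; ++j) {
            float3 d = make_float3(tile[j].x - p.x,
                                   tile[j].y - p.y,
                                   tile[j].z - p.z);
            float r2 = d.x * d.x + d.y * d.y + d.z * d.z + 1e-9f;
            float inv = rsqrtf(r2);
            float s = tile[j].w * inv * inv * inv;  // w: a made-up mass term
            acc.x += d.x * s;
            acc.y += d.y * s;
            acc.z += d.z * s;
        }
        __syncthreads();
    }

    if (i < n)
        force[i] = make_float4(acc.x, acc.y, acc.z, 0.f);
}

On nVidia hardware that pattern drops in naturally; the question being argued is whether doing the equivalent through ATi's Stream SDK is cheap or painful.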
"nVidia is about predictable performance since no optimization is necessary to get a good performance of it in GPGPU applications, ATi is about extracting and maximizing parallelism which will require more work."
Apparently this is correct. People are using CUDA. Devs are using it. We see what their labors accomplish: fruitful, worth the time. ATi is not about extracting and maximizing parallelism. As you said, they are about gaming. GPGPU is third fiddle to them, and it shows painfully in the form of next to no support from devs, or from AMD themselves.
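To make the contrast concrete, here is about the simplest CUDA program there is (my own toy example, not anything vendor-supplied). Plain scalar per-thread code like this keeps nVidia's scalar units busy with no hand-tuning, whereas ATi's 5-wide VLIW units only approach peak rate when roughly five independent operations per clock can be packed into each unit, by the compiler or by the programmer:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// One thread per element, one scalar multiply-add per thread. No vector
// packing, no tuning: this is the "predictable performance" programming model.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host data.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device copies.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);  // expect 4.0

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

That low entry barrier is exactly what the universities and labs are paying for.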