Originally posted by: evolucion8
With the latest Cat 9.6, I ran Sandra Lite 2009.5.15.97, and in GPGPU Processing I scored 3138.25 MPixels/s in Float Shader Performance and 1687.98 MPixels/s in Double Shader Performance, much faster than the HD 4850 in CrossFire, which suggests the latest driver update brings considerable GPGPU performance improvements. Against the reference scores, the HD 4850 in CrossFire was much faster than the GTX 295 in Double Shader Performance and slightly faster in Float Shader Performance, but my score was much higher than both.
In GPGPU Memory Bandwidth, my card's Internal Memory Bandwidth is 12 GB/s slower than the 9800GTX+, which suggests that Global Data Share comes in handy in the nVidia architecture, but in Data Transfer Bandwidth my card is almost twice as fast as the GTX 280. So depending on the scenario, bandwidth-hungry applications will run faster on nVidia hardware and mathematically-hungry applications will run faster on ATi hardware. I know it's a synthetic benchmark and a home-made evaluation with Sandra, but it can give you a hint of what to expect.
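To make that "bandwidth-hungry vs math-hungry" point a bit more concrete, here is a minimal roofline-style sketch. The peak numbers are my own rough assumptions for a stock GTX 280 (~933 SP GFLOPS counting the dual-issue MUL, ~142 GB/s) and an HD 4870 (~1200 SP GFLOPS, ~115 GB/s), and the kernel intensities are hypothetical, so take it as an illustration of the idea rather than a prediction for any real application.

```python
# Roofline-style upper bound: a kernel can't run faster than either the
# card's compute peak or (arithmetic intensity * memory bandwidth),
# whichever is lower. Peak figures below are rough assumptions, not
# measured values.

def attainable_gflops(flops_per_byte, peak_gflops, peak_gbps):
    """Upper bound on sustained GFLOPS for a kernel of given intensity."""
    return min(peak_gflops, flops_per_byte * peak_gbps)

cards = {
    "GTX 280 (assumed)": (933.0, 142.0),   # ~SP GFLOPS, ~GB/s
    "HD 4870 (assumed)": (1200.0, 115.0),
}

# A hypothetical bandwidth-hungry kernel (2 FLOPs/byte) and a
# hypothetical math-hungry kernel (20 FLOPs/byte).
for intensity in (2.0, 20.0):
    print(f"--- {intensity} FLOPs/byte ---")
    for name, (gflops, gbps) in cards.items():
        print(f"{name}: {attainable_gflops(intensity, gflops, gbps):.0f} GFLOPS max")
```

With those assumed peaks, the low-intensity kernel tops out higher on the GTX 280 and the high-intensity one tops out higher on the HD 4870, which is basically the pattern the Sandra numbers hint at.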
In Folding@Home, ATi is slower, but in MilkyWay@Home, ATi is faster.
http://www.gpugrid.net/forum_thread.php?id=705
DoctorNow wrote:
It's also possible now to run a new custom-made MilkyWay@Home-app on your GPU, but currently ONLY possible with an ATI-card and a 64-Bit Windows-system.
You can read more details in this thread.
Thought I'd just inform you, as it surely gets overlooked in the other thread. But be warned, currently it's really in pre-alpha stage. Buying a card just for that wouldn't be fun. But if you've already got a HD38x0 (64 shader units) or HD48x0 (160 units) you might want to check it out. The speed is ridiculous.
Paul D. Buck wrote:
If they get it out the door soon I might just get a couple of the lower-end ATI cards that can handle it, just for the meantime till they get the Nvidia version done.
The NV version is not going to happen anytime soon, as they use double precision exclusively. You may remember that NV included 30 double-precision units in GT200 along with the 240 single-precision shaders. Well, ATI's RV770 has 160 5-way VLIW units, and all of them can run 1 or 2 doubles each clock. That's such a massive advantage that it just plain wouldn't make sense to use NV cards here.
http://milkyway.cs.rpi.edu/mil...orum_thread.php?id=589
http://www.youtube.com/watch?v=nnsW-zB95Is
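To put the unit counts from that quote in perspective, here is a rough back-of-the-envelope peak double-precision estimate. The unit counts (30 DP units on GT200, 160 VLIW units on RV770) come from the quote above; the clock speeds (1296 MHz shader clock for a stock GTX 280, 750 MHz core for an HD 4870) and the one-FMA-per-unit-per-clock assumption are mine, so treat the result as ballpark only.

```python
# Back-of-the-envelope peak double-precision throughput.
# Unit counts are taken from the quote above; clocks and the
# one-FMA-per-unit-per-clock (2 FLOPs) assumption are mine.

def peak_dp_gflops(dp_units, clock_ghz, flops_per_clock=2):
    """Peak DP GFLOPS = units * FLOPs per clock * clock in GHz."""
    return dp_units * flops_per_clock * clock_ghz

gt200 = peak_dp_gflops(dp_units=30, clock_ghz=1.296)   # GTX 280, assumed 1296 MHz shader clock
rv770 = peak_dp_gflops(dp_units=160, clock_ghz=0.750)  # HD 4870, assumed 750 MHz core clock

print(f"GT200 peak DP: {gt200:.0f} GFLOPS")    # ~78
print(f"RV770 peak DP: {rv770:.0f} GFLOPS")    # ~240
print(f"RV770 / GT200: {rv770 / gt200:.1f}x")  # ~3x
```

Roughly a 3x gap on paper, which is why a double-precision-only project like MilkyWay@Home leans so heavily toward ATi cards at the moment.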
So it would mean that the hardware is only as good as how far the developer can push it: a capable developer can extract more power from an ATi card than from its nVidia counterpart, while nVidia is simply easier to program for and delivers more predictable performance.