New video card: Issues to consider other than raw performance and price

skaertus

Senior member
Mar 20, 2010
217
28
91
I am searching for a new graphics card and I am torn between NVIDIA and ATI. I've seen different benchmarks and, as tricky as benchmarks may be, they have given me a sense of which card is superior in terms of performance. Numbers are easy to compare.

But perhaps there is more to a video card than just numbers. I've read in some places that there are non-trivial differences between ATI and NVIDIA. For instance: ATI drivers are more buggy; NVIDIA's image quality is superior to ATI's (and vice versa); ATI usually embraces the latest technology while NVIDIA often rebrands its old video cards; ATI is geared toward gaming and NVIDIA toward general computing; ATI is slower at 2D graphics; one is faster in some kinds of applications than the other.

NVIDIA GeForce and ATI Radeon are really very different beasts, beginning with their architecture. Comparing raw numbers is a very simplistic approach and, at least to me, it doesn't seem to reveal all the truth behind these cards.

I am particularly interested in these non-trivial differences. I haven't owned a decent video card for quite a while now (some years, in fact), so I cannot really tell the differences between the latest ATI and NVIDIA series.

I am looking for a high-end video card for general purposes, not just gaming (but also some gaming). I intend to use two 1920x1080 monitors for quite a few demanding multimedia tasks. So, I would appreciate it if someone could tell me the main differences between these two brands, apart from the ones I can easily see in traditional benchmarks. Which one has better image quality? For which kinds of applications is NVIDIA or ATI better? Which will likely have better GPGPU support? Which offers better performance for dual monitors and high resolutions? And so on...
 

TemjinGold

Diamond Member
Dec 16, 2006
3,050
65
91
The short answer is that those differences in driver quality and image quality are not true today. They MAY have been true at one point, but that was a very long time ago. If you had an nVidia card in one machine and a similar-speed ATi card in another running side by side, you would be hard pressed to tell the difference. One thing that IS true, though, is that for the same level of performance, ATi cards these days are less power-hungry.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I am looking for a high-end video card for general purposes, not just gaming (but also some gaming). I intend to use two 1920x1080 monitors for quite a few demanding multimedia tasks. So, I would appreciate it if someone could tell me the main differences between these two brands, apart from the ones I can easily see in traditional benchmarks. Which one has better image quality? For which kinds of applications is NVIDIA or ATI better? Which will likely have better GPGPU support? Which offers better performance for dual monitors and high resolutions? And so on...

Nvidia hands down has better GPGPU support and is considerably ahead of ATI in application support. So, generally speaking, applications that are not games will usually run better on an equally priced Nvidia card vs. ATI. However, ATI hands down has the better gaming GPU at the moment. Nvidia has a new line of cards coming out in two weeks, so the performance crown may or may not shift.
 
Jan 24, 2009
125
0
0
Nvidia hands down has better GPGPU support and is considerably ahead of ATI in application support. So, generally speaking, applications that are not games will usually run better on an equally priced Nvidia card vs. ATI.

If the application supports it, that is. I think you'd be hard pressed to find a consumer-level application that you'd actually want to use that supports all that stuff.

I really haven't seen much discussion on applications that use GPGPU aside from folding@home and video encoding (I'm not contesting the fact that nv is better at these two things in GPGPU).

But, unless you're a folder or you're constantly encoding video, there isn't really ANYTHING there, like, at all, on either side.
 

Dark4ng3l

Diamond Member
Sep 17, 2000
5,061
1
0
Nvidia hands down has better GPGPU support and is considerably ahead of ATI in application support. So, generally speaking, applications that are not games will usually run better on an equally priced Nvidia card vs. ATI. However, ATI hands down has the better gaming GPU at the moment. Nvidia has a new line of cards coming out in two weeks, so the performance crown may or may not shift.


Unless he does distributed computing or really demanding professional applications (the kind of stuff he would usually want a Quadro or Tesla board for), he won't see any difference. What are these multimedia applications? It could be Photoshop and Windows Media Player for all we know.
 

Sylvanas

Diamond Member
Jan 20, 2004
3,752
0
0
I am searching for a new graphics card and I am in doubt between NVIDIA and ATI. I've seen different benchmarks and, as tricky as benchmarks may be, they have given me a clue of which card is superior to the other in terms of performance. Numbers are easy to compare.

But perhaps there is more to a video card than just numbers. I've read in some places that there are not so trivial differences between ATI and NVIDIA. For instance: ATI drivers are more buggy; the image quality of NVIDIA is superior to the image quality of ATI (and vice-versa); ATI usually embraces the latest technology while NVIDIA often rebrands its old video cards; ATI is designed towards gaming and NVIDIA is designed towards general computing; ATI is slower at 2D graphics; one is faster in some kinds of applications than the other.

NVIDIA GeForce and ATI Radeon are really very different beasts, beginning with their architecture. Comparing raw numbers is a very simplistic approach and, at least to me, it doesn't seem to reveal all the truth behind these cards.

I am particularly interested in these non-trivial differences. I haven't owned a decent video card for quite a while right now (some years, in fact), so I cannot really tell the differences of the latest ATI and NVIDIA series.

I am looking for a high-end video card for general purposes, not just gaming (but also some gaming). I intend to use two 1920x1080 monitors to do quite a few demanding multimedia tasks. So, I would appreciate if someone told me which are the main differences between these two cards, apart from the ones I can easily see on traditional benchmarks. On which one the image quality is better? For which kind of applications is NVIDIA or ATI better? Which will probably better support GPGPU? Which offers better performance for dual monitors and high resolutions? And so on...

If you google around and visit many uneducated forums you'll get uneducated opinions that are flat out wrong. Examples from your first paragraph being 'ATI drivers are buggy' or 'Nvidia focuses more on GPGPU than gaming'. Both of these are highly subjective. ATI and Nvidia both have solid drivers, and if you ask someone whose are better they will respond with the experiences they've had with either company; sometimes people have never tried the other vendor and just jump to the conclusion that 'they suck in this area'. Here is my subjective opinion: I have tried both vendors nearly every generation for six years, and in my experience I have had a better time with ATI's drivers than Nvidia's. Also, things like 2D 'quality' differences are non-existent these days, and any perceived difference is probably a placebo effect.

If you want to buy a videocard you must consider a few things:
1. What is your price range?
2. What will you use it for specifically?
3. If for gaming, what resolution+settings+games do you play?

So if you need it for 'demanding multimedia tasks', what are they exactly? What programs are you using, and do they offer GPU acceleration (e.g. Photoshop, video encoding)? Until you answer these questions we can't really help you.

All the best.
 

skaertus

Senior member
Mar 20, 2010
217
28
91
If you google around and visit many uneducated forums you'll get uneducated opinions that are flat out wrong. Examples from your first paragraph being 'ATI drivers are buggy' or 'Nvidia focuses more on GPGPU than gaming'. Both of these are highly subjective. ATI and Nvidia both have solid drivers, and if you ask someone whose are better they will respond with the experiences they've had with either company; sometimes people have never tried the other vendor and just jump to the conclusion that 'they suck in this area'. Here is my subjective opinion: I have tried both vendors nearly every generation for six years, and in my experience I have had a better time with ATI's drivers than Nvidia's. Also, things like 2D 'quality' differences are non-existent these days, and any perceived difference is probably a placebo effect.

If you want to buy a videocard you must consider a few things:
1. What is your price range?
2. What will you use it for specifically?
3. If for gaming, what resolution+settings+games do you play?

So if you need it for 'demanding multimedia tasks', what are they exactly? What programs are you using, and do they offer GPU acceleration (e.g. Photoshop, video encoding)? Until you answer these questions we can't really help you.

All the best.

Thank you. You are all being very helpful indeed. I've not purchased a decent video card for quite some time now. As such, I have to rely on the opinions of other people, and I had a very hard time finding unbiased opinions on the subject on the Internet. So, the information you are providing is just precious.

As for the questions you asked:

1. The price range may vary, but I do not consider myself a "gamer" and, in principle, I do not want the video card to be more expensive than the processor I use. I am soon buying a new processor (probably a Core i7-860 or a Core i7-930) and I expect the video card to be less expensive than the new processor. However, I am willing to spend more if it is necessary or if the cost/benefit justifies it.

2. General computing. I want to use a dual-monitor configuration, with each of the two monitors at a resolution of 1920x1080, and I intend to run everything with no slow-downs. I consider myself a power user and I make extensive use of multi-tasking. I intend to run virtual machines running Windows; use office applications; browse the Internet (with several instances of the web browser open simultaneously); do some photo editing (with Photoshop); do some video encoding; play full HD video without stuttering; and do occasional gaming (probably not Crysis, but I would like to run Street Fighter IV, Tales of Monkey Island and similar games and their follow-ups at maximum speed). And sometimes I would like to run several of these tasks at the same time without losing performance. I would like a video card that fully and easily supports these requirements and will keep up with my demands for at least two years or so. I've had terrible experiences with integrated graphics (which won't keep up even with PowerPoint at higher resolutions), so I want to make sure my next graphics card will fulfill my needs.

3. The main reason I am buying the video card is not to play games. However, I will sometimes play some not-too-demanding games (such as beat-'em-ups or adventure games, as I've mentioned - I have neither the time nor the patience for ultra-developed, immersive games). That said, I intend to play these games at 1920x1080 resolution and with the maximum level of detail.
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
For the things you want to do I would suggest at least 8 gigs of RAM and a Core i7 CPU.
As for a video card? A 5770 should be plenty and will run 2 monitors just fine for $150.
 

Dark4ng3l

Diamond Member
Sep 17, 2000
5,061
1
0
Yeah, as far as all the 2D stuff you want to do with the HD video and dual screens, etc., pretty much any modern video card can do that and more, no problem.

For example, even the extremely weak-by-comparison 5450 (like this one: http://www.newegg.ca/Product/Product...-340-_-Product ) can do everything you want it to do five times over (outside of gaming).

The games you listed are also relatively undemanding compared to the average modern game. A 5750 or 5770 would be fantastic for your needs (for example, http://www.newegg.com/Product/Produc...-535-_-Product would be plenty).
 

Sylvanas

Diamond Member
Jan 20, 2004
3,752
0
0
Thank you. You are all being very helpful indeed. I've not purchased a decent video card for quite some time now. As such, I have to rely on the opinions of other people, and I had a very hard time finding unbiased opinions on the subject on the Internet. So, the information you are providing is just precious.

As for the questions you asked:

1. The price range may vary, but I do not consider myself a "gamer" and, in principle, I do not want the video card to be more expensive than the processor I use. I am soon buying a new processor (probably a Core i7-860 or a Core i7-930) and I expect the video card to be less expensive than the new processor. However, I am willing to spend more if it is necessary or if the cost/benefit justifies it.

2. General computing. I want to use a dual-monitor configuration, with each of the two monitors at a resolution of 1920x1080, and I intend to run everything with no slow-downs. I consider myself a power user and I make extensive use of multi-tasking. I intend to run virtual machines running Windows; use office applications; browse the Internet (with several instances of the web browser open simultaneously); do some photo editing (with Photoshop); do some video encoding; play full HD video without stuttering; and do occasional gaming (probably not Crysis, but I would like to run Street Fighter IV, Tales of Monkey Island and similar games and their follow-ups at maximum speed). And sometimes I would like to run several of these tasks at the same time without losing performance. I would like a video card that fully and easily supports these requirements and will keep up with my demands for at least two years or so. I've had terrible experiences with integrated graphics (which won't keep up even with PowerPoint at higher resolutions), so I want to make sure my next graphics card will fulfill my needs.

3. The main reason I am buying the video card is not to play games. However, I will sometimes play some not-too-demanding games (such as beat-'em-ups or adventure games, as I've mentioned - I have neither the time nor the patience for ultra-developed, immersive games). That said, I intend to play these games at 1920x1080 resolution and with the maximum level of detail.

Glad to be of help. Taking into consideration the things you will be using your computer for and your budget, I'd agree that a 5750/5770 would be ideal for your usage.

For starters, it's relatively cheap, coming in at about $170 at Newegg with free shipping, which is considerably less than, say, a Core i7-860. Speaking of CPUs, I'd say you've made a good decision with the i7s; HyperThreading becomes very valuable when running multiple windows, and for VMs it's a must (seeing as you can schedule one to run on each core/thread).

You mentioned the use of dual monitors with lots of multitasking. Granted, a decent GPU does help in keeping things snappy, but perhaps the best thing you can do for yourself is purchase ample RAM for your system. This is the biggest bottleneck when running multiple VMs and additional applications; they eat through memory very fast. For such a setup, 8GB of DDR3 in a P55 mobo would suit your needs very well; 4GB is a bit limiting with all your multitasking. Photoshop, to my understanding (and as it seems on their website), makes use of both ATI and Nvidia GPUs for GPU acceleration. Video encoding can of course be run on the CPU (which makes good use of HT as well), but there are also a few encoders that run on the GPU. There was an Anandtech article a while back comparing Badaboom (Nvidia's video encoding application) and an app that uses ATI's GPUs (can't remember the name, sorry); on the whole, Badaboom was the better alternative, but really the quality was not on par with what is achieved on the CPU (although of course video encoding on the CPU takes a little longer).

As for dual monitors and gaming, you'll have no problems running the games you mentioned at max with a 5770, and dual monitors are not an issue at all with either ATI or Nvidia. The good thing about the 5xxx series and dual monitors is that it supports 'Eyefinity', which you may already know about, but which is definitely a huge plus for someone who makes extensive use of dual monitors. That about wraps it up :)

All the best
 

TemjinGold

Diamond Member
Dec 16, 2006
3,050
65
91
Yep, a 5750/5770 is plenty for what you want. Your usage does not come anywhere close to needing a "high-end video card."
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
dual-monitor configuration; with each one of the two monitors displaying a resolution of 1920x1080 = Nvidia or ATI

and I intend to run everything with no slow-downs. I consider myself a power user and I make extensive use of multi-tasking. I intend to use virtual machines running Windows = Core i7

use office applications = Core i7

browse the Internet (with several instances of the web browser opened simultaneously) = If going to use IE9, that is GPU accelerated and optimized for CUDA. Microsoft has a press release for this. So, Nvidia. If other browsers are used, then Nvidia or ATI

some photo editing (with Photoshop) = Nvidia if his version is optimized for CUDA, all others = Nvidia or ATI

some video encoding = Nvidia if encoding app used is optimized for CUDA, all others = Nvidia or ATI

full HD resolution video without stuttering = Nvidia or ATI

occasional gaming (probably not Crysis, but I would like to run Street Fighter IV, Tales of Monkey Island and similar games and their follow-ups to the maximum speed) = Nvidia or ATI

And sometimes I would like to run various of these tasks at the same time without losing performance = Core i7

I would like a video card that fully and easily supports these requirements and which will keep up with my demands for at least two years or so. = a higher-end Nvidia or ATI card if you want it to keep up for two years. Tough call, because although gaming isn't your primary thing, there may be a game or three in the next two years that interests you and could really make today's middle-of-the-road card crawl. Especially if you insist on 19x10 res.

I hope this helps. It's as detailed as you can get. From the looks of it, depending on whether your Photoshop app and encoding app are optimized for CUDA, and whether you plan on using GPU-accelerated IE9, either Nvidia or ATI would do. If either of them is optimized for CUDA, you would have better performance in those areas with an Nvidia card (8 series or later).

I think that about sums it up.
Good Luck.
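The workload-by-workload verdicts above boil down to one rule: only CUDA-optimized apps tip the GPU choice toward one vendor; everything else is either a wash between vendors or CPU-bound. A toy restatement as a lookup table (the workload labels are my own shorthand, not anything from the apps themselves):

```python
# Toy summary of the checklist above: map each of the OP's workloads to what
# actually constrains it. Labels are illustrative shorthand only.
verdicts = {
    "dual 1920x1080 monitors":            "Nvidia or ATI",
    "VMs / heavy multitasking":           "Core i7 (CPU-bound)",
    "office applications":                "Core i7 (CPU-bound)",
    "Photoshop, CUDA-optimized build":    "Nvidia (8 series or later)",
    "video encoding, CUDA-optimized app": "Nvidia (8 series or later)",
    "full-HD video playback":             "Nvidia or ATI",
    "light gaming at 1920x1080":          "Nvidia or ATI",
}

# Per the post, only the CUDA-optimized apps favor one GPU vendor.
gpu_vendor_sensitive = [w for w, v in verdicts.items() if "8 series" in v]
print(gpu_vendor_sensitive)
```

In other words, two of the seven workloads would benefit from Nvidia specifically; the rest leave the choice open.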
 

Lonyo

Lifer
Aug 10, 2002
21,939
6
81
Guys, he did mention Photoshop and encoding. What do you think, still a 5770?

Given that Adobe claims Photoshop acceleration will run on ATI cards (no idea about speed differences), and given that GPU 'encoding' is nowhere at the moment, I don't see why not...?

Anyone who wants to do serious video encoding wouldn't be using any of the GPU applications currently available, so it depends on what sort of encoding he will be doing. It would be interesting to see if anyone has compared $130 cards at video encoding, since the Hexus.net review I found using the Cyberlink app was an HD5870 vs. a GTX295 (http://www.hexus.net/content/item.php?item=20825&page=5).

But as they comment at the end, there is a lack of control over the process, similar to what has been said about things like Badaboom.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Given that Adobe claims Photoshop acceleration will run on ATI cards (no idea about speed differences), and given that GPU 'encoding' is nowhere at the moment, I don't see why not...?

Anyone who wants to do serious video encoding wouldn't be using any of the GPU applications currently available, so it depends on what sort of encoding he will be doing. It would be interesting to see if anyone has compared $130 cards at video encoding, since the Hexus.net review I found using the Cyberlink app was an HD5870 vs. a GTX295 (http://www.hexus.net/content/item.php?item=20825&page=5).

But as they comment at the end, there is a lack of control over the process, similar to what has been said about things like Badaboom.

I've edited my post slightly. ^

Also, from your link to Cyberlink MediaShow Espresso, I don't think that app is optimized for SLI, which is what a GTX295 uses for its two GTX275 cores. So most likely only one GPU on that GTX295 is used for encoding, which means the 5870's theoretical compute advantage isn't 50% over a GTX295. It has a technical advantage of 100% over a GTX275, for a real-world 21% gain over said GTX275 (1/2 of a GTX295).

We would be much better off finding a similar review showing a 5870 against a GTX275 using Cyberlink Media Show Expresso.

Anandtech Review Link:
http://www.anandtech.com/video/showdoc.aspx?i=3578
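To make the percentages above concrete, here is a quick sketch of the arithmetic. The encode times are hypothetical, chosen only to illustrate the point that a "5870 vs GTX295" result is effectively "5870 vs GTX275" when the encoder uses only one of the GTX295's two GPUs; they are not the review's actual timings.

```python
# Hypothetical encode times (seconds) illustrating the SLI-scaling point:
# if SLI doesn't scale, a GTX295 posts the same time as a single GTX275.

def speedup(slow_seconds: float, fast_seconds: float) -> float:
    """Relative speedup of the faster card over the slower one."""
    return slow_seconds / fast_seconds - 1.0

gtx295_time = 100.0   # only one of its two GPUs is actually used
gtx275_time = 100.0   # so a lone GTX275 posts the same time
hd5870_time = 82.6    # chosen to give roughly a 21% gain

print(f"HD5870 vs GTX295 (one GPU used): {speedup(gtx295_time, hd5870_time):.0%}")
print(f"HD5870 vs GTX275:                {speedup(gtx275_time, hd5870_time):.0%}")
```

Both comparisons come out identical, which is exactly why the HD5870-vs-GTX295 numbers overstate nothing and understate nothing relative to a GTX275: they are the same comparison in disguise.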
 

1h4x4s3x

Senior member
Mar 5, 2010
287
0
76
To clear a few things up.

CS4 does not natively support CUDA, ATI Stream, or OpenCL.
Natively, CS4 only supports "Shader Model 3.0 and OpenGL 2.0".
You can buy plugins, though; a CUDA plugin for acceleration costs afaik $500 (you'll need a Quadro card, though). I can't tell you whether it's any good. Some googling will probably do.

I don't know why ppl are so blatantly lying.


Then again, we can take a look into the future, as far as the crystal ball allows it.

CS5 is going to support CUDA natively via its Mercury engine. Support for OpenCL (ATI) will be added later on, but probably not from the beginning.

Correct me where I'm wrong.
 

Lonyo

Lifer
Aug 10, 2002
21,939
6
81
I've edited my post slightly. ^

Also, from your link to Cyberlink MediaShow Espresso, I don't think that app is optimized for SLI, which is what a GTX295 uses for its two GTX275 cores. So most likely only one GPU on that GTX295 is used for encoding, which means the 5870's theoretical compute advantage isn't 50% over a GTX295. It has a technical advantage of 100% over a GTX275, for a real-world 21% gain over said GTX275 (1/2 of a GTX295).

We would be much better off finding a similar review showing a 5870 against a GTX275 using Cyberlink Media Show Expresso.

Anandtech Review Link:
http://www.anandtech.com/video/showdoc.aspx?i=3578

They state in the review that there was no scaling from SLI. Not that it helps anyway when comparing an HD5770 to a G80/G92 based card :p
Better to find a review with at least one relevant card in it, rather than two not relevant cards one of which doesn't scale.

The take-home message, though, is that CPU encoding, while slower, is more widely supported; the image quality and processing from the programs that do support GPU acceleration are spotty at best, and they offer limited options for changing things and using customised settings. That is why CPU encoding, while slower, is superior, unless you are just going for fast and dirty and don't care too much.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
They state in the review that there was no scaling from SLI. Not that it helps anyway when comparing an HD5770 to a G80/G92 based card :p
Better to find a review with at least one relevant card in it, rather than two not relevant cards one of which doesn't scale.

The take-home message, though, is that CPU encoding, while slower, is more widely supported; the image quality and processing from the programs that do support GPU acceleration are spotty at best, and they offer limited options for changing things and using customised settings. That is why CPU encoding, while slower, is superior, unless you are just going for fast and dirty and don't care too much.

SLI not scaling was exactly my point. So in the review you linked to, they were essentially comparing a 5870 to a GTX275. And the Anandtech review I linked to clearly compares a 4890 to a GTX275, cards more in line with each other in every respect, which gives a clearer idea of respective performance.

About your second paragraph here.... A quote from the very article you linked to:
"The difference in image quality between the output from the CUDA-accelerated encode and the ATI Stream-accelerated encode is substantial. Not only does the CUDA-accelerated output maintain greater fine-detail quality in the video but it also displays much better contrast. Indeed, the NVIDIA output appears to maintain so much more detail than the CPU-encoded output that it leads us to believe the CUDA codepath is not using the same coding parameters - and may even be using a more compute-intensive encoding engine.

So Anandtech disagrees with you when you claim CPU encoding is superior. This may very well depend on the application being used, but since you linked to Media Show Espresso, I thought it was worth mentioning. Hey, your link. ::shrugs::
 

Lonyo

Lifer
Aug 10, 2002
21,939
6
81
SLI not scaling was exactly my point. So in the review you linked to, they were essentially comparing a 5870 to a GTX275. And the Anandtech review I linked to clearly compares a 4890 to a GTX275, cards more in line with each other in every respect, which gives a clearer idea of respective performance.

About your second paragraph here.... A quote from the very article you linked to:
"The difference in image quality between the output from the CUDA-accelerated encode and the ATI Stream-accelerated encode is substantial. Not only does the CUDA-accelerated output maintain greater fine-detail quality in the video but it also displays much better contrast. Indeed, the NVIDIA output appears to maintain so much more detail than the CPU-encoded output that it leads us to believe the CUDA codepath is not using the same coding parameters - and may even be using a more compute-intensive encoding engine.

So Anandtech disagrees with you when you claim CPU encoding is superior. This may very well depend on the application being used, but since you linked to Media Show Espresso, I thought it was worth mentioning. Hey, your link. ::shrugs::

Actually my link said this on the following page:

The difference in image quality between the output from the CUDA-accelerated encode and the ATI Stream-accelerated encode is substantial. Not only does the CUDA-accelerated output maintain greater fine-detail quality in the video but it also displays much better contrast. Indeed, the NVIDIA output appears to maintain so much more detail than the CPU-encoded output that it leads us to believe the CUDA codepath is not using the same coding parameters - and may even be using a more compute-intensive encoding engine.

It should be noted, however, that the final image quality of the GPU-accelerated output file varies hugely, dependent on the source footage fed into MediaShow Espresso. This was made blatantly obvious when we analysed the image quality of the output files created from a different HD feed. AMD came up trumps by offering a much cleaner (albeit slightly softer) output than the NVIDIA's.

Sometimes AMD-produced video is better, other times, as per the screenshots, NVIDIA's is better.

So yes, you can cherry pick the bit where they say CUDA/NV is better, but equally they point out that it varies hugely, and AMD is sometimes better.
The fact that it varies hugely means that the program is bad for people who really want to do things with a level of control.

the level of customisation on offer is extremely limited. For example, there is no adjustment of the audio output parameters other than the format used. In addition you can only choose a very limited selection of output resolutions, not helped by the lack of bit-rate or frame-rate adjustment. As such the output frame-rate on both NVIDIA and AMD systems is incorrect. It spits out 29fps whereas it should have been 25fps.

CyberLink’s MediaShow Espresso doesn’t allow enough flexibility to achieve the best compromise between file-size and image quality.

My point was that GPU encoding sucks for power users, not that A or B is better. You tried to use the article to spin NV as better than AMD when they state that it varies between the two, and the main point seems to be that GPU acceleration is currently more for quick-and-dirty transcoding, not quality-controlled output.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
Actually my link said this on the following page:



So yes, you can cherry pick the bit where they say CUDA/NV is better, but equally they point out that it varies hugely, and AMD is sometimes better.
The fact that it varies hugely means that the program is bad for people who really want to do things with a level of control.



My point was that GPU encoding sucks for power users, not that A or B is better. The fact that you tried to use the article to spin NV as being better than AMD when they state that it varies between the two, and the main point seems to be that GPU acceleration is currently more for quick and dirty transcoding and not quality controlled output.

Lonyo, the only one who is trying REALLY REALLY hard to spin here, and has been for the last week or so, is you.

You can plainly see that I had a disclaimer saying "depending on application", and the source I linked to, as well as yours, shows it in plain English for everyone to read for themselves. Everything I posted was a direct counter to your posts, with quotes provided from the very article you linked to.

I am also certain you knew SLI didn't scale, yet felt free to show 5870 vs. GTX295 results in MediaShow Espresso. This has been brought up before in this forum, and I'm sure you were part of the conversation. How's that for cherry picking?

Just go ahead and read that AT article I linked to in full. And while you're at it, go and read the article you linked to in full. I suggest the OP does the exact same thing.
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91
To clear a few things up.

CS4 does not natively support CUDA nor ATI stream nor OpenCL.
Natively, CS4 only supports "Shader Model 3.0 and OpenGL 2.0".
Though, you can buy plugins, a CUDA plugins for acceleration costs afaik $500 (you'll need a Quadro card though). I can't tell you whether it's any good. Some googling will probably do.

I don't know why ppl are so blatantly lying.


Then again, we can take a look into the future, as far as the crystal ball allows it.

CS5 is going to support CUDA natively via its mercury engine. Support for OpenCL (ATI) will be added later one but probably not from the beginning.

Correct me where I'm wrong.

"Nvidia’s cuda GPU acceleration pays off

We’ve seen some videos of how much Cuda with Nvidia graphics card can speed up certain effects in Photoshop CS4 and Premiere CS4. This is all a part of newly announced Creative Suite 4 and support for Nvidia’s Cuda.

Nvidia takes over acceleration on its GPU, and these programs can run tremendously, notably faster than when accelerated just by the CPU. In Premiere CS4, in one of the plugins, Nvidia claims up to seven times faster performance compared to raw CPU power.

We’ve seen videos of Photoshop CS4 with and without GPU and the performance of real time image rotation, instant zooming and panning are notably faster with GPU.

Brush resizing and brushstroke preview, high-dynamic tone mapping and colour conversion should run faster too.

On Adobe After Effects CS4 you can get effects such as depth of field, bilateral blur effects, turbulent noise such as flowing water or waving flags, and cartoon effects accelerated on GPU.

Adobe Premiere Pro CS4 can also accelerate Video but only with Quadro GPUs. Photoshop works with normal Nvidia GPUs. Adobe Premiere Pro CS4 can use the Quadro GPU to accelerate high-quality video effects such as motion, opacity, color, and image distortion. They will enable faster editing of multiple high-definition video streams and graphic overlays and provide a variety of video output choices for high-quality preview, including display port, component TV, or uncompressed 10 or 12-bit SDI.

If you use the Elemental RapiHD CUDA plug-in for Premiere Pro CS4, you can get up to a seven-times performance gain over the CPU. This plug-in can render Blu-ray-quality AVC/H.264 files in real time, scrub multiple streams of AVCHD and HDV video, and accelerate the workflow with fast high-quality scaling, cross-dissolves, and transition effects.

As Nvidia wants to keep making money on its Quadro graphics card series, the Premiere effects work only on these ultra-expensive professional cards, but it does seem to make a huge difference, at least for video. We are working to try this in our lab environment and verify these claims, but from the videos we’ve seen, the GPU-accelerated Photoshop CS4 and Premiere CS4 do run notably faster than the CPU-accelerated ones.

I guess this is what Intel was afraid of in the first place. A senior vice president of Creative Solutions at Adobe made a bold statement, saying: "A critical element of CS4 was to capture the enormous power of the GPU. The difference is astounding. Performance is important to creative professionals and with the NVIDIA GPU, they are assured to be able to interact with images and videos in a much faster, smoother, more engaging way."

Let me remind you that this works only on GeForce 8 and 9 series GPUs, and it might not run on Radeons, as ATI doesn’t support CUDA at this time. We will investigate this.

You can check some of the videos here, but Creative Suite 4, Photoshop CS4, and Premiere CS4, which most of you might care about, should be out in October. I guess Nvidia was right: the GPU can be used for "smart" things, not only gaming."
 

dguy6789

Diamond Member
Dec 9, 2002
8,558
3
76
Photoshop CS4 is fully accelerated by Radeons at this time. The article was mistaken (or out of date). They misinterpreted the fact that CUDA-capable Nvidia GPUs would accelerate Photoshop as meaning that Photoshop requires CUDA, or even uses CUDA, for those acceleration effects. There was a thread about this very topic a year or so ago. There are CUDA plugins and such, but there's nothing in the stock installation of Photoshop that can't be accelerated by Radeons.
 
Last edited:

Dark4ng3l

Diamond Member
Sep 17, 2000
5,061
1
0
Just keep in mind when reading what Keysplayr says that he is a ''Member of Nvidia Focus Group''; hence his somewhat comical, over-the-top defense of their products. Personally, I choose to ignore everything that Focus Group fanatics say.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
browse the Internet (with several instances of the web browser opened simultaneously) = If he's going to use IE9, that is GPU accelerated and optimized for CUDA. Microsoft has a press release for this. So, Nvidia. If other browsers are used, then Nvidia or ATI
Well, I'm hard pressed to imagine that a Core i7 could have problems rendering a webpage, so I don't think that's important for such a powerful desktop environment.

some photo editing (with Photoshop) = Nvidia if his version is optimized for CUDA, all others = Nvidia or ATI
There's no CUDA-optimized version of Photoshop available at the moment, though Adobe said they wanted to support it for CS5. On the other hand, they also said they'd want to use DX11, and they said the same about CUDA support in CS4. Still, I'd say if it's important to him and he'd buy the newest and greatest Photoshop (too expensive for me, but I'm just a hobbyist), it couldn't hurt.
But if he's using CS4, then you get exactly the same effects with a Radeon as with a GeForce card - they only need Shader Model 3.0, OpenGL 2.0, and 512 MB of VRAM - even a Radeon X1900 or a GeForce 6800 Ultra fulfills those requirements.
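For illustration, here's a minimal Python sketch of the requirement check being described - CS4's minimums of Shader Model 3.0, OpenGL 2.0, and 512 MB of VRAM. The function name and the example card figures are my own, not anything Adobe ships; a real check would query the driver for these values.

```python
# Sketch: does a card meet Photoshop CS4's GPU minimums?
# (Shader Model 3.0, OpenGL 2.0, 512 MB VRAM, as discussed above.)
# Capability values here are supplied by hand for illustration.

def meets_cs4_requirements(shader_model: float,
                           opengl_version: float,
                           vram_mb: int) -> bool:
    """Return True if the reported capabilities satisfy CS4's minimums."""
    return (shader_model >= 3.0
            and opengl_version >= 2.0
            and vram_mb >= 512)

# An aging SM 3.0 card with 512 MB still qualifies...
print(meets_cs4_requirements(3.0, 2.0, 512))   # True
# ...while an otherwise capable 256 MB card does not.
print(meets_cs4_requirements(3.0, 2.1, 256))   # False
```

The point of the sketch is just that the bar is low: almost any mid-2000s card from either vendor clears it, which is why the CS4 effects run on Radeons and GeForces alike.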


some video encoding = Nvidia if encoding app used is optimized for CUDA, all others = Nvidia or ATI
Good point. I'm not sure if there are still quality problems, but if not, then a GPU would be much, much faster than even a Core i7 could ever be.
 
Last edited: