Nvidia GPUs soon a fading memory?


GaiaHunter

Diamond Member
Jul 13, 2008
Try to avoid the overly elitist "the average consumer is so damn stupid, they can't even tell what a GPU is" attitude, please. I have seen many companies brought down by taking this approach, and then they find out that people DO know how to use Google and can comprehend basic things.

Sure my grandma doesn't know what a video card is, but my grandma doesn't play video games.

Go look around gaming forums and see how many people have IGPs and very low-end GPUs and are frustrated with their performance.

And I know grandmas that play video games. Yours is just not cool enough. :p
 

cbn

Lifer
Mar 27, 2009
Go look around gaming forums and see how many people have IGPs and very low-end GPUs and are frustrated with their performance.

And I know grandmas that play video games. Yours is just not cool enough. :p

After my HD4870 heatsink failed, I picked up an HD4350 (same power as an IGP). Quite honestly, I have been surprised at how well it plays TF2 @ 12x10 resolution.
 

GaiaHunter

Diamond Member
Jul 13, 2008
After my HD4870 heatsink failed, I picked up an HD4350 (same power as an IGP). Quite honestly, I have been surprised at how well it plays TF2 @ 12x10 resolution.

Yeah, but pick an Intel IGP and see how good it is.

Although TF2 has quite low requirements, and that 4350 has twice as many SPs as ATi's current best IGP.
 

taltamir

Lifer
Mar 21, 2004
And did you buy one in the Athlon X2 vs Pentium D era?
Back then I said only an idiot would buy an Intel CPU... of course, now I am wiser and would say "only a person ignorant of this specific market would buy Intel". There were a good few years where there was no reason whatsoever to buy Intel...

Nowadays, it's very hard to justify an AMD purchase. Power costs are a big deal, btw...
 

cbn

Lifer
Mar 27, 2009
What about AMD Fusion in laptops?

Assuming no other changes to the system memory (i.e., worst-case scenario), how much could an under-volted, 480-stream-core Llano be bottlenecked by the notebook's 128-bit DDR3?
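As a rough back-of-envelope (the memory speeds are assumptions for illustration: DDR3-1333 for the notebook, and a desktop HD 4870 thrown in purely for comparison):

```
// Peak theoretical bandwidth = bus width in bytes x effective transfer rate.
// Illustrative figures only, not measured Llano numbers.
#include <cstdio>

int main() {
    double notebook_ddr3 = (128.0 / 8.0) * 1333e6; // 128-bit DDR3-1333, shared with the CPU
    double hd4870_gddr5  = (256.0 / 8.0) * 3600e6; // 256-bit GDDR5 @ 3.6 GT/s, GPU-exclusive
    printf("Notebook DDR3-1333: %.1f GB/s\n", notebook_ddr3 / 1e9); // ~21.3 GB/s
    printf("HD 4870 GDDR5:      %.1f GB/s\n", hd4870_gddr5 / 1e9);  // ~115.2 GB/s
    return 0;
}
```

Even before CPU contention, that's roughly a 5x bandwidth gap, so 480 stream cores on a shared 128-bit interface could plausibly end up starved.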
 

Scali

Banned
Dec 3, 2004
And did you buy one in the Athlon X2 vs Pentium D era?

No actually... I bought an Athlon XP slightly before, and skipped the Athlon64/Pentium4 era altogether.

So now that they are doing something, you complain?

No, they are NOT doing anything. How many applications can you run on an AMD GPU? 0 (let's not count the AMD-sponsored Folding@Home client as an 'application').
For nVidia there are various GPU-accelerated video encoders, there are various Adobe products, there's PhysX, and soon we will also have Cuda-accelerated virus scanners.
Actual applications that people can actually use, that will actually improve their productivity or experience.
AMD has none of those.

Whenever people are given more resources they will tap into them and use them.

People don't do more with their IGPs because they can't do more. Give people more from their "IGPs" and they will do more.

The problem is that the gap is too large.
Intel IGPs can already do everything non-gaming related. You can run your desktop with full Aero Glass and all that. You can play HD video...
Gaming is the ONLY thing they can't do, but that requires a BIG performance leap. More than what Llano will give.
 

cbn

Lifer
Mar 27, 2009
Actual applications that people can actually use, that will actually improve their productivity or experience.
AMD has none of those.

Why are these productivity applications not happening for AMD?

Is AMD doing anything at the moment to encourage this kind of application development?
 

Scali

Banned
Dec 3, 2004
Why are these productivity applications not happening for AMD?

Because they all use Cuda. Since AMD had no alternative, they find themselves without support from devs right now.

Is AMD doing anything at the moment to encourage this kind of application development?

No. They've been blowing a lot of hot air about OpenCL in the media... but in reality nVidia has had official OpenCL support in their end-user WHQL drivers since November last year. AMD still doesn't offer OpenCL to end-users, and is not planning to do so any time soon.
I've asked AMD devrel time and time again. Eventually I got a BS answer about how they don't want the driver download to be too big, and they want to wait until major OpenCL applications are released.
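For context, this is roughly all an application can do to find an OpenCL runtime (a minimal sketch using only the standard OpenCL host API; error handling trimmed). If a vendor doesn't ship the runtime with its end-user driver, the query simply reports zero platforms and the developer has nothing to target:

```
#include <cstdio>
#include <CL/cl.h>

int main() {
    // Ask the OpenCL loader how many vendor runtimes (platforms) are installed.
    cl_uint numPlatforms = 0;
    cl_int err = clGetPlatformIDs(0, NULL, &numPlatforms);
    if (err != CL_SUCCESS || numPlatforms == 0) {
        printf("No OpenCL runtime installed -- nothing to run on.\n");
        return 1;
    }

    cl_platform_id platforms[8];
    cl_uint count = numPlatforms < 8 ? numPlatforms : 8;
    clGetPlatformIDs(count, platforms, NULL);
    for (cl_uint i = 0; i < count; ++i) {
        char name[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        printf("Platform %u: %s\n", i, name);
    }
    return 0;
}
```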
 

GaiaHunter

Diamond Member
Jul 13, 2008
No, they are NOT doing anything. How many applications can you run on an AMD GPU? 0 (let's not count the AMD-sponsored Folding@Home client as an 'application').
For nVidia there are various GPU-accelerated video encoders, there are various Adobe products, there's PhysX, and soon we will also have Cuda-accelerated virus scanners.
Actual applications that people can actually use, that will actually improve their productivity or experience.
AMD has none of those.

0. Really?

Really, really 0 applications on ATi Stream?

http://www.cyberlink.com/products/powerdirector/overview_en_US.html and version 7 too

Look, another one: http://www.cyberlink.com/products/mediashow/overview_en_US.html .

Look, look

http://www.arcsoft.com/estore/software_title.asp?ProductCode=SIMHD#submenu this one too.

And another one http://www.roxio.com/enu/products/creator/suite/video.html .

And look, you can say those run on CUDA too, so there's no need to keep citing only Adobe from now on.

Generally, my definition of 0 is none, not "a very few".

Are there many applications using Stream? Not really. But is CUDA a must-have right now? Not really.

But there's no need to reduce AMD's ATi Stream to the realm of useless, non-programmable crap that nobody uses just because you don't know about these applications.

And there are scientific projects using Stream, http://www.neurala.com/neurala_amd_press_release_4_8_2008.html as an example.
 

Scali

Banned
Dec 3, 2004

Yeah, all using a video processing library written by AMD; they haven't developed it themselves (which is the whole point).
By the way, the image quality is not as good as Cuda's:
http://www.pcper.com/article.php?aid=745&type=expert&pid=7

Stream is just not a good alternative to Cuda.
 

GaiaHunter

Diamond Member
Jul 13, 2008
Yeah, all using a video processing library written by AMD; they haven't developed it themselves (which is the whole point).


And I thought AMD devrel was bad; after all, ATi's tools are so good that devs simply use them without developing them themselves.

Of course, in the case of PhysX, where devs simply use the effects library, that is a good thing.

Truth is, GPGPU is in such an infant state that any current advantages are insignificant, and in a few years things can be completely different.

By the way, the image quality is not as good as Cuda's:
http://www.pcper.com/article.php?aid=745&type=expert&pid=7

Stream is just not a good alternative to Cuda.


Final Thoughts, Performance Rankings, Conclusion

Final Thoughts

After reviewing all the benchmark data as well as the image quality screenshots, both GPGPU technologies had their pros and cons that could affect a consumer's decision to purchase hardware and software that utilizes ATI Stream and/or CUDA. While Stream's transcoding times were slightly better than CUDA's in most of our performance tests, CUDA seemed to produce a higher-quality image that evened things out a bit. Stream also seemed to be more efficient in using less of the CPU's resources for transcoding while also producing fast transcoding times. However, these transcoding times might be lower because it is outputting lower-quality video files, as our subjective image quality tests suggest.

Another interesting item to note is the GPU usage scores we recorded for our Radeon 4770. None of the GPU scores we received went over 23 percent, which indicates there's still a lot of stream processing power available for programmers to take advantage of. Maybe we'll also see some enhancements to future drivers from ATI and NVidia to use the extra GPU muscle too. Unfortunately, we weren't able to record any of our 9800GTX+'s GPU scores, but we did see higher CPU usage numbers that indicate NVidia isn't as concerned with multi-taskers who might like to use their computer for other tasks while they are transcoding video.

Cyberlink's PowerDirector 7 is a full-featured video editing and transcoding application that supports both ATI Stream and CUDA. PowerDirector's only "flaw", if you want to call it that, is that it maxes out the CPU during transcoding and doesn't leave any room for multi-tasking. The other program we got a chance to play with was another title from Cyberlink called MediaShow Expresso. A lot of talk has been buzzing around this particular app and for good reason. The transcoding times we recorded using Expresso were extremely quick. The UI had a Loiloscope feel to it and was intuitive from the moment I opened the program. Choosing different preset profiles was a snap and consumers should have an easy time adapting to Expresso's two-step transcoding process.

Lastly, we were pretty impressed with the simplicity of ATI's Avivo HD and its benchmark results against Handbrake. The overall transcoding times were exceptional and Avivo kept the CPU usage down for those of us who like to multi-task. The interface was extremely simple to use, but lacked some advanced features we've become accustomed to seeing in video transcoders like video effects and transitions options and better video customization options. ATI confirmed to us that Avivo HD does not support iPod, PSP, VC1, H.264, and MKV video formats at this time. However, it does support MPEG-1, VCD, MPEG-2, DVD/VOB, DVR-MS, DivX (as long as the codec is installed), and WMV formats, which is more than adequate for most users.


Performance rankings

To recap the goals of our review today, we wanted to rank how each GPGPU technology fared in meeting the intent of our testing parameters for this article. A couple of our parameters were specific to testing against a CPU-based transcoder.

Parameter 1: Evaluate CPU usage and determine how much of the computing load is being handled by the CPU with ATI Stream/CUDA enabled and disabled

Winner: ATI Stream. During our evaluation, we noticed considerable differences in CPU usage between transcoding with ATI Stream and CUDA. CUDA's average CPU usage was in the 80s, while Stream was closer to the high 60s. The extra CPU usage didn't really help CUDA in producing faster transcoding times either. So, the winner would have to be ATI Stream because it used fewer resources and produced faster transcoding times. It also left enough resources for users to do additional tasks during transcoding.

Parameter 2: What performance differences will consumers notice between using ATI Stream or CUDA?

Winner: ATI Stream. The performance differences between these two GPGPU technologies were a bit mixed because Stream used less CPU power and had better transcoding times, but it seemed to produce lower-quality videos. If we strictly viewed just the "performance" portion of our review, ATI Stream would win because of its benchmark results during performance testing. We'll give a slight edge to ATI Stream in this portion of our ranking.

Parameter 3: Subjectively evaluate the image quality of outputted video that was transcoded with ATI Stream and CUDA

Winner: NVidia CUDA. CUDA seemed to produce a higher-quality image in two out of the three video clips we captured screenshots from. ATI Stream's outputted video was a little bit softer in a few parts of the test videos and CUDA's screenshots were brighter, clearer, and showed a little more detail overall. So, we'll give CUDA the image quality crown.



Conclusion

We'd like to thank Cyberlink and AMD (ATI) for providing their respective transcoding software for our review today. GPGPU technology is really still in its infancy and GPU acceleration for video transcoding is just the beginning. I'm sure both AMD (ATI) and NVidia have their sights set on using the GPU for more general tasks and are working with programmers to move toward utilizing stream computing for other types of applications. The benefits of GPU acceleration are undeniable, especially in the video transcoding department. The differences between transcoding with the GPU and CPU in tandem as opposed to using the CPU alone suggest that GPU acceleration plays a large role in outputting video at faster rates. I'm sure we'll see a lot more from the GPGPU realm that consumers and enthusiasts will benefit from, not only in basic tasks but also in more compute-intensive programs.


Ryan's Thoughts: Let me offer a second opinion on these results. Everything that Steve has written in this piece is correct in terms of speed, performance and CPU utilization. A lot of testing went into this piece and I think he did a very good job of summarizing his thoughts. There is another opinion on this debate though, one that I follow more closely than he does. The truth of the matter is that for video transcoding and encoding, there are two key points: speed and quality. We want the GPU-based applications to increase the SPEED of our video work but we also have expectations of image quality.

An individual user may in fact want different benefits at different times as well: if I am in a rush to catch a flight, I might want my movie to encode incredibly fast, regardless of quality, so I don't miss the plane. Or I might have planned ahead that night and decided I want a better quality encode, but still have a time crunch.

CPU utilization is an important factor as well for multi-tasking. NVIDIA's implementation is obviously using some extra CPU cycles to improve quality in a way that AMD's implementations are not. Again - two different perspectives on what you want to do with your system.

What I am trying to get at is that for me - I would favor the higher image quality results of the NVIDIA CUDA-based implementations of these GPGPU apps in just about 99% of circumstances. If you are considering UPCONVERTING your content to HD quality, for example, what is the point of getting it done "faster" if it isn't done in the best quality possible? If you are one of those users archiving your DVD content locally then you would also likely desire the better image quality of the NVIDIA CUDA software as opposed to the AMD Stream software.

In the end though, Steve is correct: GPU computing is here in a pretty big way but still has further to go before it is really everything for everyone.

And let's just add that image quality differences were only noticeable(?) when zoomed to 200%. That is exactly what all sane people do when watching a movie.

Our zoomed-in screenshots show a few minute details in the image quality consumers can expect from video output by ATI Stream- or CUDA-enabled applications. The ATI screenshot looks a bit softer than its CUDA counterpart, which can be seen in the model's face and hair. The background behind the girl in NVidia's version looks to be a bit brighter too. Everything else looks pretty identical and both had similar file sizes, which is also a consideration when transcoding video for different devices like the PSP or iPhone.

Again we see some small differences that can barely be seen by the naked eye between these two zoomed-in screenshots. There is some softness to the image quality in both screenshots, but for a video that's optimized to be viewed on an iPod it should look a lot better at lower resolutions. There are some jagged lines in the buildings in the background, but that typically happens when you compress a video file to save on file space. In this particular comparison, I don't see many distinguishable differences that consumers should be aware of.

For our final image quality test, we used NVidia's "The Plush Life" to see if we could pinpoint all the small differences in detail, clarity, lighting, and sharpness between the screenshots above. At first glance, the CUDA screenshot on the left looks a bit sharper than the ATI capture. Another item that looks a bit sharper is the seat in the bottom-left corner of each screenshot. The seat looks clearer, and you can see the detail in it, as opposed to the ATI version, which looks a bit blurry.


That being said, I think our CUDA screenshots slightly edged out ATI Stream in the image quality department. Most of the differences we discussed were barely noticeable, but the last screenshots really allowed us to see those minuscule details in the character's face and car seat.

"I think... slightly" - solid proof that ATi stream sucks!

Ah! The definitive proof is the detail differences in NVidia's own "The Plush Life" test clip.

Lastly of course ATi will never improve! The image quality you see is the image quality you will see forever!
 

Scali

Banned
Dec 3, 2004
And I thought AMD devrel was bad; after all, ATi's tools are so good that devs simply use them without developing them themselves.

It's pretty much the only component that AMD offers, and it doesn't have the same quality and robustness as nVidia's video component.
nVidia also offers tons of other Cuda libraries, and things like Matlab integration etc.

Of course, in the case of PhysX, where devs simply use the effects library, that is a good thing.

They *could* develop their own, if they wanted. Eg Bullet Physics had a Cuda module.
But physics is middleware for most developers.
I'm not saying that using a prefab library is a bad thing...
But with Stream that is the ONLY thing that people are doing.
With Cuda, there are various examples of independent developers building their own applications with Cuda.

Truth is, GPGPU is in such an infant state that any current advantages are insignificant, and in a few years things can be completely different.

On AMD's side yes.
nVidia has a full C++ suite available, very mature. Any C++ developer can get into Cuda in no time.
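To illustrate what "getting into Cuda" looks like for a C++ developer, here is a minimal vector-add sketch (standard CUDA runtime API, compiled with nvcc; not production code):

```
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Each thread adds one element -- ordinary C++ plus a kernel launch syntax.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, &ha[0], bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, &hb[0], bytes, cudaMemcpyHostToDevice);

    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);  // 4096 blocks of 256 threads
    cudaMemcpy(&hc[0], dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);                     // expect 3.0
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

The host side is plain C++; the only new concepts are the __global__ kernel and the <<<blocks, threads>>> launch.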

And let's just add that image quality differences were only noticeable(?) when zoomed to 200%. That is exactly what all sane people do when watching a movie.

Indeed. If I were making a movie, I wouldn't settle for lower quality, let alone the encoding bugs that various reviewers encountered.

Lastly of course ATi will never improve! The image quality you see is the image quality you will see forever!

Nope, they've had image quality issues since the first Avivo emerged on the Radeon X1000 series. Their software department is just horribly underpowered, underqualified, and inexperienced.
 

GaiaHunter

Diamond Member
Jul 13, 2008

Scali

Banned
Dec 3, 2004
And still we, regular consumers, have barely any use for it, and most of those consumer applications have a Stream variant...

You mean only video apps have a Stream variation.
Adobe doesn't, PhysX doesn't.

And of course AMD will just rely on OpenCL and DirectCompute: http://www.xbitlabs.com/news/other/..._Development_Tools_for_Fusion_Processors.html .

Those are nowhere near as user-friendly and powerful as the latest Cuda.
 

GaiaHunter

Diamond Member
Jul 13, 2008
You mean only video apps have a Stream variation.
Adobe doesn't, PhysX doesn't.

Exactly what I'm saying: it is mature and user-friendly, and that is all CUDA has to show for it.

And btw, Adobe requires a Quadro GPU. Really consumer-friendly...
 

Keysplayr

Elite Member
Jan 16, 2003
But weren't we talking about consumer applications?

So now we are back to scientific applications?

Everyone is a consumer, GH. From the casual gamer to a team of scientists utilizing the power of dozens/hundreds/thousands of supercomputers, they are all consumers. For people like us (primarily gamers, video editors, folders, etc.) CUDA/Stream may matter to some and not to others.
So while you and Scali argue about this 'til doomsday, with posts dripping with sarcasm, maybe you can just agree to disagree. You both have points, but neither will concede even a nanometer, which is probably the goal here: to get the other to concede your own points. Which apparently isn't going to happen. Pointless to continue, IMHO.

/2cents
 

A_Dying_Wren

Member
Apr 30, 2010
Everyone is a consumer, GH. From the casual gamer to a team of scientists utilizing the power of dozens/hundreds/thousands of supercomputers, they are all consumers. For people like us (primarily gamers, video editors, folders, etc.) CUDA/Stream may matter to some and not to others.
So while you and Scali argue about this 'til doomsday, with posts dripping with sarcasm, maybe you can just agree to disagree. You both have points, but neither will concede even a nanometer, which is probably the goal here: to get the other to concede your own points. Which apparently isn't going to happen. Pointless to continue, IMHO.

/2cents

There are plenty of consumer applications in there as well... and professional stuff, medical stuff, etc.

Glad to see we are taking well-given advice. :p

On topic though, does actively pursuing GPGPU necessarily come at any expense whatsoever to graphics performance, in terms of silicon that sits wasted when gaming or a needlessly overcomplicated architecture?
 

Scali

Banned
Dec 3, 2004
Glad to see we are taking well-given advice. :p

That post wasn't there when I responded.

On topic though, does actively pursuing GPGPU necessarily come at any expense whatsoever to graphics performance, in terms of silicon that sits wasted when gaming or a needlessly overcomplicated architecture?

Depends on how you look at it.
The introduction of programmable shaders also made cards more complex without any direct performance gain (GeForce3 was barely faster than a GeForce2 Ultra in DX7 apps).
However, once you started using the programmable shaders in games, you got better image quality and better performance, because fewer render passes were required for the same effects.

The unified shader architecture in the GeForce 8-series actually allowed for more performance on existing code because it could do better load balancing.

Right now, GPGPU goes above and beyond the needs of DX11... but future games may use GPGPU to accelerate rendering in some way, physics being the most obvious example.
Moving to completely different rendering techniques ('software' rendering) could also be an option. So it could pay off just fine. Both nVidia and Intel seem to agree on that.
 

GaiaHunter

Diamond Member
Jul 13, 2008
Everyone is a consumer, GH. From the casual gamer to a team of scientists utilizing the power of dozens/hundreds/thousands of supercomputers, they are all consumers. For people like us (primarily gamers, video editors, folders, etc.) CUDA/Stream may matter to some and not to others.
So while you and Scali argue about this 'til doomsday, with posts dripping with sarcasm, maybe you can just agree to disagree. You both have points, but neither will concede even a nanometer, which is probably the goal here: to get the other to concede your own points. Which apparently isn't going to happen. Pointless to continue, IMHO.

/2cents

Since you quoted me, I guess that is because you agree with Scali that ATi Stream has no potential and AMD might as well close its doors?

Because I believe GPGPU has potential and future, regardless of being AMD or NVIDIA.

Scali doesn't believe AMD has a future.

So there is a difference here.

I also believe that GPGPU still has a long way to go, though.

I don't even remember why I posted in this thread in the first place, because threads like this are simply flame bait, but it was probably because something was said about Llano/Fusion that contradicted the latest information/rumours. I was, however, careful enough to declare that I didn't believe NVIDIA was going down.

I guess in this case you are posting as a member and not as a mod.

So let me remind you that sarcasm is something you use yourself in your own posts.

Maybe you have a preference for CUDA. I have no preference regarding CUDA or ATi Stream. I use neither.

But what Scali has been doing here is making absurd absolute statements to push his belief that AMD is dead.

It wasn't me making absurd statements like "How many applications can you run on an AMD GPU? 0 (let's not count the AMD-sponsored Folding@Home client as an 'application').
For nVidia there are various GPU-accelerated video encoders, there are various Adobe products, there's PhysX, and soon we will also have Cuda-accelerated virus scanners.
Actual applications that people can actually use, that will actually improve their productivity or experience.".

Scali made this argument, which is easily contradicted, and since then he has been dancing around it with qualifiers and opinions.

Did I, in this thread, misinform or come up with obviously false information?

I did indeed do some speculation on Llano GPU performance, based on what was written in one Tom's Hardware article. And I said I was speculating.

I'll even say more: yes, NVIDIA has spent more resources on CUDA, and CUDA is used in more instances. But ATI Stream is also used; not as widely as CUDA, far from it, but more than Scali would have us believe ( http://developer.amd.com/samples/streamshowcase/Pages/default.aspx ).

And additionally, for regular consumers who aren't programming any scientific project, we just reached the conclusion that we have Adobe (requires a Quadro, although some people have had varying degrees of success with hacks) and PhysX.

So I would appreciate it if you didn't jump in here, try to lump everything together, and say it's all the same.
 

taltamir

Lifer
Mar 21, 2004
Because I believe GPGPU has potential and future, regardless of being AMD or NVIDIA.

Scali doesn't believe AMD has a future.

So there is a difference here.

I also believe that GPGPU still has a long way to go, though.
AMD's GPGPU has the furthest to go though, as it is currently far behind the competition.
 

GaiaHunter

Diamond Member
Jul 13, 2008
AMD's GPGPU has the furthest to go though, as it is currently far behind the competition.

I agree.

But do you think AMD is out of the race? Do you think that nothing at all, "ZERO" applications and science projects/supercomputers, uses ATi Stream?

Do you think CUDA is the main reason people buy NVIDIA GPUs?

How many science projects out there use NVIDIA GPUs, AMD GPUs, Intel CPUs, AMD CPUs and others?

Do you believe that CUDA is, atm, by far the main HPC mover?
 