New era of GPUs. AnandTech needs to do this.


akugami

Diamond Member
Feb 14, 2005
5,948
2,271
136
For $150? In all honesty, Wreckage gave him a much better answer. He linked test results for numerous encoding programs that fall well within his budget and, depending on what he is doing, can offer a substantial benefit (even a 260 throttles an i7 in the more demanding tests).

Badaboom seems to have fallen off the face of the earth, relatively speaking. There's no pricing that I can find, and it's barely mentioned on Elemental's web site. It's still a little finicky about which formats it will accept.

LoiLoScope seems a lot more flexible than Badaboom but is even more finicky about which formats it will accept.

Nero is actually interesting, but like the previous two, it's merely a transcoder. It's probably the most polished and usable of the bunch for pure transcoding purposes.

PowerDirector is the most complete in terms of what I'd personally need, but its GPU-accelerated performance is decent rather than head and shoulders above a CPU. An overclocked CPU would likely provide about the same boost in performance.

Basically, from what I see of these lower-cost CUDA apps, most are merely transcoders. PowerDirector could be a good buy for the price, but I don't know if a gain of roughly 20% (plus time savings on the actual editing) would shorten your time by much. It's usually the longer projects that I queue overnight, and this doesn't look like enough of a boost to change that: a project that normally took 3 hours to encode might take a little over 2, but it's still going to be a substantial wait.
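To put that rough 20% figure in numbers, here is a trivial sketch of the arithmetic (the function name is just for illustration):

```python
# Wall-clock time of an encode if throughput improves by a given percentage.
def encode_time(hours, speedup_pct):
    return hours / (1 + speedup_pct / 100)

# A 3-hour overnight job with a ~20% GPU boost still takes 2.5 hours,
# which is why the gain alone doesn't change the overnight-queue habit.
print(encode_time(3.0, 20))  # 2.5
```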

I've been waiting for a decent GPU-accelerated solution for quite a while now. I do think it'll be an nVidia solution, given their market penetration with CUDA and the fact that they'll still work with DirectCompute, but it likely won't happen until a GeForce 500 series.
 

Cattykit

Senior member
Nov 3, 2009
521
0
0
Madcatalas
“we all know at this moment what card is better at what you mentioned”
Not really. People only know that the most expensive card will probably work better than cheaper ones.
Due to the lack of proper benchmarks and reviews, we can only make that sort of simple generalization, and that is why I'm asking for a professional benchmark.

“Since you are so interested in the subject you raise, have you compared the different architectures of AMD and Nvidia? Have you read any such articles? I'd really like to read a bit on this too, you see.”

Currently, in terms of video editing, AMD offers nothing. As I no longer game, I wouldn't even trade my crappy GT 240 for a Radeon 5970. Things might change if OpenCL becomes an industry standard, but it's all about CUDA right now.

“I do understand that it may be the lack of such articles that prods you to ask in the first place, but I would think there were some numbers out there showing the different strengths of Cypress vs. Fermi, or the 4k series vs. the G200 series.”

Again, PP CS5 and a few other video editing programs have only just started using GPU (CUDA-only) solutions.


Biostud
“I think it would be better to test the software that uses CUDA and do an article about that instead of mixing it into every video card review. Most users are interested in gaming performance and not so much in "pro" performance, and personally I would rather have one article focused only on gaming and one only on "pro".”

Though most users here and on computer hardware sites are interested in gaming performance, there are tons of users interested in other areas. That is what I was talking about in the first place. For gamers, you can go to all those hardware sites for decent reviews to keep you updated. For video editors, there are none, and I think AT can shine in this area if it acts fast.

(vDSLR is gaining a lot of attention and popularity among the masses; it is becoming a standard feature in DSLR cameras. The problem is that it uses high-bitrate H.264, which chokes even top-of-the-line CPUs. GPU acceleration via CUDA is currently the only solution that makes a night-and-day difference in this area. Think about all the people buying vDSLR cameras: the vDSLR market and its user base are not something that can be ignored. In fact, I wouldn't be surprised if they outnumber those who buy video cards for gaming.)


Keysplayr

"What could ATI fans possibly say about a CUDA-only review? Or should I say, what objections do you think they would have if AT conducted a CUDA review? Bias? Paid by Nvidia to do it? It's a given that some people would react that way, but should they be paid any attention at all? We could always ask AT to conduct a Stream-only review as well, and create a new GPGPU news section. Would any ATI fans care to comment on how you would receive a CUDA-only review or a Stream-only review? And how would you react if only one was done first and not the other?"

If they must, they should blame Adobe, since Adobe decided to go with a CUDA-only solution. nVidia has been pioneering this area from the beginning, while ATI's focus was on the gaming world. What can AT do for ATI fans when PP CS5 only uses CUDA and more makers of video editing programs and plug-ins are following the same route? I think ATI fans should understand this.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
When the Internet first arrived, it was full of useful information, and only universities had access to it. Back then, forums were filled with priceless information, and people only made posts that had value; otherwise whatever they posted would be ignored.

Nowadays forums are more or less covered in garbage, since everyone can access them. Good posts are ignored or buried because trolls can't really argue with them. Rumors and troll posts are always at the top of the page, and that is where people want to look and contribute.

AnandTech could make its forums more informational simply by categorizing them further: instead of just Video Cards and Graphics, it could create ten subcategories under it. The question, however, is whether that would attract more people.

First, too much categorization makes information hard to find. Second, most subforums would be more or less dead, since there would be nothing to argue about. Third, the general forum would be worse off, filling with pointless arguments until people leave, killing the entire forum.

The only thing that should be done is to have moderators control the mood of the forum, which AnandTech has been doing. If a thread has no value, it gets locked and sinks to the bottom. However, mods have to be very careful not to overkill threads, or people will leave. It isn't simple.

Back to CUDA and CS5. If you have been following the threads and posts here, most people know Nvidia is better at GPGPU. The counterarguments are: a) ATI is as good, if not better, in terms of raw power; b) Nvidia should not have made something like that proprietary, and should let everyone take advantage of it; c) ATI has Stream, its own version of CUDA; d) big programs will eventually take advantage of the GPU once their developers decide to program for it themselves.

No matter which side you pick, you can't deny the value embedded in those threads. However, it is the reader's job to verify what is legit and what isn't. CUDA/CS5 threads have come up several times, but they quickly sink to the bottom. There is nothing AnandTech can do about that, and it really shouldn't try to change it. If AnandTech believes there is value here, it can write an article about it, which it has. Forums are NOT controlled by the moderators but by us, the posters. Don't expect someone else to fix things; if you want something fixed, fix it yourself. Lots of us are trying.
 

ModestGamer

Banned
Jun 30, 2010
1,140
0
0
Really? Exception handling, cache hierarchy, cache amount, available registers, thread scheduling. Those are five off the top of my head that aren't comparable.

Well, yes, the core and instruction set have all of those features at the BIOS level; the issue is utilization. They are functions that are available. I was able to assembly-code a quick program to crunch PII in a few hours, and it was very fast at doing so. They lack some of the logic the CUDA cores have, but that could be addressed on the CPU/OS side.

Reading all the RV700s documentation would certainly give you that impression, so I can understand where it comes from :)

Actually, I read the white papers and the BIOS instruction set, i.e. the instructions you send to the GPU. Nvidia and AMD don't differ much there, which isn't surprising given DirectX compatibility.

You really think nV used all those extra transistors on nothing? They came with a steep cost, but the benefits are very real for those that would use them(and that is what this thread is about).

This thread is about using the GPU to accelerate a lot of computing tasks. My argument is that AMD has the horsepower; it's just not being used. Nvidia has advantages in a few specific ways, but they don't amount to much, namely the way CUDA cores handle scheduling, which makes things easier to implement.

Move past the driver-level implementation of features and get down to the machine-language level, and it's obvious AMD has a plan to bring these features into the products it offers. The issue could be that they aren't courting enough companies, or that Nvidia has locked up a lot of customers in a profitable licensing scheme.

When it really comes down to it, the OS is the major drawback. If the OS handled the GPU as an accelerator rather than a rendering device or shared resource, and the API were designed to make features easier to implement, we wouldn't be having this discussion, because the OS would assign tasks to the CPU and GPU optimally.

It's really an OS failing, to a large degree.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81

Well, try CyberLink MediaEspresso; it's a nice program that takes advantage of both nVidia's CUDA and AMD's Stream, and there's a benchmark around comparing the two in terms of performance.

http://www.legitreviews.com/article/978/2/

It's old, but it can give you a glimpse; the more robust GPGPU features of current AMD and nVidia architectures will definitely give you even greater benefits. The first test shows some nice performance gains from both GPU vendors; the second test has some issues with the HD 4770 and the latest hotfix drivers, though that should have been ironed out by now.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Well, yes, the core and instruction set have all of those features at the BIOS level.
Ahm, all the features he named can't be implemented in SW (or only at a large performance hit), so I've got no idea where you got that idea. Whatever white papers you read (and whatever a "BIOS ISA" is; I'd also be surprised if ATI disclosed theirs), they surely didn't claim that.

The whole Nvidia architecture is just much more mature (to add a few more: recursion, C++ support, divergent threads, not being restricted to SIMD), and if you really want to talk about the SW side of things, it doesn't look much better there. Though that's "easy" to fix if ATI wanted to.
 

Cattykit

Senior member
Nov 3, 2009
521
0
0
When the Internet first arrived ... If AnandTech believes there is value here, it can write an article about it, which it has. Forums are NOT controlled by the moderators but by us, the posters. Don't expect someone else to fix things; if you want something fixed, fix it yourself. Lots of us are trying.

I'm not sure why you're talking about what you're talking about. What's all this about the history of the Internet, forum control, and fixing things? Where did "if you want it to be fixed, fix it yourself" come from? :confused:

Well, try CyberLink MediaEspresso; it's a nice program that takes advantage of both nVidia's CUDA and AMD's Stream, and there's a benchmark around comparing the two in terms of performance.

http://www.legitreviews.com/article/978/2/

It's old, but it can give you a glimpse; the more robust GPGPU features of current AMD and nVidia architectures will definitely give you even greater benefits. The first test shows some nice performance gains from both GPU vendors; the second test has some issues with the HD 4770 and the latest hotfix drivers, though that should have been ironed out by now.

That's not a video editing program (NLE) but a transcoding tool.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
I'm not sure why you're talking about what you're talking about. What's all this about the history of the Internet, forum control, and fixing things? Where did "if you want it to be fixed, fix it yourself" come from? :confused:

...As a member who has been here since the birth of AnandTech, I think it's about time AnandTech did what it has always done: bring out something new and do what others haven't.
...
Still confused?
 

zokudu

Diamond Member
Nov 11, 2009
4,364
1
81
Doesn't CUDA only work within PP CS5 on the GTX 285 and GTX 470? Wouldn't that mean your GT 240 is not CUDA-enabled within CS5? Or are you using a hack to get it to work?
 

Cattykit

Senior member
Nov 3, 2009
521
0
0
Still confused?

Yes. You see, I was talking about AT, not ATF. I was suggesting that AT bring out an article and do what others haven't done. This has nothing to do with making changes to or fixing ATF. However, you all of a sudden started talking about forum issues and telling me I should fix them myself. :rolleyes:

Doesn't CUDA only work within PP CS5 on the GTX 285 and GTX 470? Wouldn't that mean your GT 240 is not CUDA-enabled within CS5? Or are you using a hack to get it to work?

I'm using a hack to make it work, and the hack here is as simple as adding "GeForce GT 240" to a text file.
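For anyone curious, the widely circulated CS5 whitelist tweak looks roughly like this (a sketch only: the file name and its location vary by install, so treat the path as an assumption):

```python
# Premiere Pro CS5 reads a plain-text whitelist of approved CUDA cards;
# appending your card's exact marketing name enables GPU acceleration on it.
# The path below is an assumption; the real file sits in the Premiere Pro
# CS5 install directory (e.g. under Program Files on Windows).
cards_file = "cuda_supported_cards.txt"

with open(cards_file, "a") as f:
    f.write("GeForce GT 240\n")

# Show the resulting whitelist.
with open(cards_file) as f:
    print(f.read())
```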
 

Cattykit

Senior member
Nov 3, 2009
521
0
0
I agree. That does limit the appeal.

True, there isn't a huge number of 'amateurs' willing to drop that much money. Still, there are huge numbers of 'people' who have already dropped that much money on PP CS5, and the market is growing, in step with the ever-growing DSLR market. vDSLR has been gaining a lot of attention not only from videographers but also from average people.

What should be considered is that this is the very beginning of GPU video editing, and PP CS5 has just opened the gate of possibilities. Soon, other cheaper solutions will join in, and now that the new era of practical GPU computing has begun, I think it's about time to see where we currently stand.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
The market is growing in step with the ever-growing DSLR market. vDSLR has been gaining a lot of attention not only from videographers but also from average people.

Sounds interesting to me. I'd like to hear more. Does anyone else agree?
 

ModestGamer

Banned
Jun 30, 2010
1,140
0
0
Ahm, all the features he named can't be implemented in SW (or only at a large performance hit), so I've got no idea where you got that idea. Whatever white papers you read (and whatever a "BIOS ISA" is; I'd also be surprised if ATI disclosed theirs), they surely didn't claim that.

The whole Nvidia architecture is just much more mature (to add a few more: recursion, C++ support, divergent threads, not being restricted to SIMD), and if you really want to talk about the SW side of things, it doesn't look much better there. Though that's "easy" to fix if ATI wanted to.


90% of what CUDA does is done at the driver level, and that is why there is generally a fairly large CPU performance hit too; benchmarks ferret this out.

The BIOS ISA is the BIOS on the card, and the ISA is, in layman's terms, the set of instructions the card receives over the PCIe bus.

Yes, AMD has these in their white papers. I will link them later tonight.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
90% of what CUDA does is done at the driver level, and that is why there is generally a fairly large CPU performance hit too; benchmarks ferret this out.
What? Let's see what examples he used:
structured exception handling: needs HW support, and OoO architectures don't make it any easier
cache hierarchy, cache amount, available registers: those are attributes of the HW itself
thread scheduling/divergent threads: solely HW
MIMD: nc

The same goes for recursion: without HW support it just won't work reasonably well (if at all). That's also why, although NV has already implemented it in the newest CUDA SDK, it's only for Fermi cards.
Sure, the driver also has to work, but there are lots of features that need HW support, and plenty that can be optimized for GPGPU at the HW level.
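To make the "divergent threads" point concrete: on lockstep SIMD hardware, every lane in a warp shares one program counter, so an if/else costs both paths. Here is a toy Python model of that behavior (purely illustrative, not either vendor's actual mechanism):

```python
# Toy model of lockstep SIMD execution: an if/else executes BOTH paths,
# masking off the lanes that didn't take each branch. Divergence therefore
# costs the sum of both paths' lengths, which is why efficient handling of
# it has to live in hardware.
def simd_if_else(values, cond, then_fn, else_fn):
    out = list(values)
    mask = [cond(v) for v in values]
    steps = 0
    # Pass 1: the "then" path runs for every lane; only active lanes commit.
    for i, v in enumerate(values):
        steps += 1  # inactive lanes still burn the cycle
        if mask[i]:
            out[i] = then_fn(v)
    # Pass 2: the "else" path runs with the mask inverted.
    for i, v in enumerate(values):
        steps += 1
        if not mask[i]:
            out[i] = else_fn(v)
    return out, steps

# 8 lanes, half diverging: 16 steps of work instead of 8.
result, steps = simd_if_else(range(8), lambda v: v % 2 == 0,
                             lambda v: v * 10, lambda v: v + 1)
print(result, steps)  # [0, 2, 20, 4, 40, 6, 60, 8] 16
```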


And I still don't see what the Basic Input/Output System has to do with an instruction set architecture. The BIOS is stored in EEPROM on the motherboard and is used to bootstrap the OS, initialize peripherals, handle a bit of power management, etc. At least, that's the terminology I'm used to. Why would a GPU, any more than a CPU, need a BIOS? The host has to use the right interface to communicate with it, but that's about it.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0
...Even hardware gurus on various hardware forums seem to only care about gaming performance. This needs to be changed.

Yes. You see, I was talking about AT, not ATF. I was suggesting that AT bring out an article and do what others haven't done. This has nothing to do with making changes to or fixing ATF. However, you all of a sudden started talking about forum issues and telling me I should fix them myself. :rolleyes:

Maybe I missed what you said, and all you wanted to say is "AnandTech should have made an article about CUDA." As far as I know, AnandTech has made several articles about GPGPU, just not specifically about CUDA. The latest one, I believe, is the one about Flash 10.1, which clearly showed how much the GPU can help.
 

ModestGamer

Banned
Jun 30, 2010
1,140
0
0
What? Let's see what examples he used:
structured exception handling: needs HW support, and OoO architectures don't make it any easier
cache hierarchy, cache amount, available registers: those are attributes of the HW itself
thread scheduling/divergent threads: solely HW
MIMD: nc

The same goes for recursion: without HW support it just won't work reasonably well (if at all). That's also why, although NV has already implemented it in the newest CUDA SDK, it's only for Fermi cards.
Sure, the driver also has to work, but there are lots of features that need HW support, and plenty that can be optimized for GPGPU at the HW level.


And I still don't see what the Basic Input/Output System has to do with an instruction set architecture. The BIOS is stored in EEPROM on the motherboard and is used to bootstrap the OS, initialize peripherals, handle a bit of power management, etc. At least, that's the terminology I'm used to. Why would a GPU, any more than a CPU, need a BIOS? The host has to use the right interface to communicate with it, but that's about it.

http://developer.amd.com/gpu/ATIStr...een-Family_ISA_Instructions_and_Microcode.pdf

http://developer.amd.com/gpu/ATIStreamSDK/assets/R600-R700-Evergreen_Assembly_Language_Format.pdf

http://developer.amd.com/gpu/ATIStr...ermediate_Language_(IL)_Specification_v2d.pdf


Read these and get back to me on what you think you know. It sounds like you've read a lot of marketing hoopla.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Maybe I missed what you said, and all you wanted to say is "AnandTech should have made an article about CUDA."

Back in 1996 the PC gaming community was getting used to the wonders of point-sampled 3D graphics running on software engines, pushing upwards of 20 fps at 320x240 (and point-filtered!). A couple of things happened that year: a young upstart developer released a patch for one of his games to add acceleration for a new piece of hardware. Carmack, GLQuake, Voodoo. That was a major "holy crap" moment in gaming history for anyone who remembers it.

For video editors, they are having the same sort of experience now with CS5/Premiere.

What you are talking about would be AT writing a generic article about 3D technology; that would be nothing new and has been done many times before. The seismic shift when we moved to something entirely different (four times the resolution and effectively sixteen times the pixel sampling, while doubling the framerate) had a major impact and changed the direction of PC gaming. Prior to GLQuake/Voodoo 1, PCs weren't visually better than the consoles in any realistic sense; normally they tended to be rather inferior. Obviously, those who aren't PC gamers didn't care much unless they were true technology enthusiasts. The situation right now is much the same: those who are serious about video editing are having a major revelation, and those who are seriously interested in technology would probably also like to see this topic covered, since it is, in relative terms, a "mass market" application (compared to "supercomputing" tasks).


We allow cussing in P&N and OT, not in the tech forums.

Moderator Idontcare
 

Keysplayr

Elite Member
Jan 16, 2003
21,209
50
91

And do you claim to have read the G80 through GF100 white papers? You do know that the actual architecture since G80 is what CUDA is: CUDA is not software, nor is it a programming language; CUDA is the term used to describe the architecture.
If you have read the white papers, I find it a little hard to believe that the architecture has little or nothing over AMD/ATI when it comes to GPGPU features. Actually, I find that pretty much impossible. If anything, and if Stream is any indication, AMD/ATI are the ones trying to achieve performance strictly with software to get around the hardware.
I think I would love to see these types of articles here on AT. We are the readers of AnandTech, and I think AnandTech would listen to us and provide what we would love to read. It cannot hurt to ask.
 

ModestGamer

Banned
Jun 30, 2010
1,140
0
0
And do you claim to have read the G80 through GF100 white papers? You do know that the actual architecture since G80 is what CUDA is: CUDA is not software, nor is it a programming language; CUDA is the term used to describe the architecture.
If you have read the white papers, I find it a little hard to believe that the architecture has little or nothing over AMD/ATI when it comes to GPGPU features. Actually, I find that pretty much impossible. If anything, and if Stream is any indication, AMD/ATI are the ones trying to achieve performance strictly with software to get around the hardware.

I see you have a Camaro in your avatar. Does a Ford 302 have significant advantages over an LS1? The answer is no.

I am reading through the Nvidia white papers, what little they are releasing, and I'm not seeing any hugely significant differences.
 

lopri

Elite Member
Jul 27, 2002
13,212
597
126
When the Internet first arrived, it was full of useful information, and only universities had access to it. Back then, forums were filled with priceless information, and people only made posts that had value; otherwise whatever they posted would be ignored.

Nowadays forums are more or less covered in garbage, since everyone can access them. Good posts are ignored or buried because trolls can't really argue with them. Rumors and troll posts are always at the top of the page, and that is where people want to look and contribute.

AnandTech could make its forums more informational simply by categorizing them further: instead of just Video Cards and Graphics, it could create ten subcategories under it. The question, however, is whether that would attract more people.

First, too much categorization makes information hard to find. Second, most subforums would be more or less dead, since there would be nothing to argue about. Third, the general forum would be worse off, filling with pointless arguments until people leave, killing the entire forum.

The only thing that should be done is to have moderators control the mood of the forum, which AnandTech has been doing. If a thread has no value, it gets locked and sinks to the bottom. However, mods have to be very careful not to overkill threads, or people will leave. It isn't simple.

Back to CUDA and CS5. If you have been following the threads and posts here, most people know Nvidia is better at GPGPU. The counterarguments are: a) ATI is as good, if not better, in terms of raw power; b) Nvidia should not have made something like that proprietary, and should let everyone take advantage of it; c) ATI has Stream, its own version of CUDA; d) big programs will eventually take advantage of the GPU once their developers decide to program for it themselves.

No matter which side you pick, you can't deny the value embedded in those threads. However, it is the reader's job to verify what is legit and what isn't. CUDA/CS5 threads have come up several times, but they quickly sink to the bottom. There is nothing AnandTech can do about that, and it really shouldn't try to change it. If AnandTech believes there is value here, it can write an article about it, which it has. Forums are NOT controlled by the moderators but by us, the posters. Don't expect someone else to fix things; if you want something fixed, fix it yourself. Lots of us are trying.

Had to quote you because I thought it was a very thoughtful (and thought-provoking) comment. Thank you.
 

Cattykit

Senior member
Nov 3, 2009
521
0
0
Back in 1996 the PC gaming community was getting used to the wonders of point-sampled 3D graphics running on software engines, pushing upwards of 20 fps at 320x240 (and point-filtered!). A couple of things happened that year: a young upstart developer released a patch for one of his games to add acceleration for a new piece of hardware. Carmack, GLQuake, Voodoo. That was a major "holy crap" moment in gaming history for anyone who remembers it.

For video editors, they are having the same sort of experience now with CS5/Premiere.

What you are talking about would be AT writing a generic article about 3D technology; that would be nothing new and has been done many times before. The seismic shift when we moved to something entirely different (four times the resolution and effectively sixteen times the pixel sampling, while doubling the framerate) had a major impact and changed the direction of PC gaming. Prior to GLQuake/Voodoo 1, PCs weren't visually better than the consoles in any realistic sense; normally they tended to be rather inferior. Obviously, those who aren't PC gamers didn't care much unless they were true technology enthusiasts. The situation right now is much the same: those who are serious about video editing are having a major revelation, and those who are seriously interested in technology would probably also like to see this topic covered, since it is, in relative terms, a "mass market" application (compared to "supercomputing" tasks).

Thank you. That's an excellent example and I couldn't have said it better myself.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Read these and get back to me on what you think you know. It sounds like you've read a lot of marketing hoopla.
Fun fact: "BIOS" doesn't appear even once in ANY of those documents.

And now, would you please tell us HOW you implement structured exception handling (on an OoO architecture, no less) in software? Or divergent threads? Oh, and I'd love to see how you get MIMD working on a SIMD architecture. Since I've obviously read only marketing hoopla, there must be some amazing solutions to those problems out there. Great, I can't wait!
 

ModestGamer

Banned
Jun 30, 2010
1,140
0
0
Fun fact: "BIOS" doesn't appear even once in ANY of those documents.

And now, would you please tell us HOW you implement structured exception handling (on an OoO architecture, no less) in software? Or divergent threads? Oh, and I'd love to see how you get MIMD working on a SIMD architecture. Since I've obviously read only marketing hoopla, there must be some amazing solutions to those problems out there. Great, I can't wait!


Don't get me started on your ignorance. Go back to C++.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Don't get me started on your ignorance. Go back to C++.
Sigh. So you acknowledge that we can't emulate some rather important things in SW? Hey, we're getting somewhere. Great.

Oh, and I'm still waiting for your link to the rather interesting definition of "BIOS" that you're using; your other links didn't help there.