How is this card for HD video editing?

rk4adtch

Member
Oct 18, 2010
59
0
0
Hi all.

I had asked a question about which video card to buy for HD video editing (no gaming), and received a couple of good suggestions (9800GT and GTX 460). However, I just saw this PNY GT 430; would it be sufficient? Also, any experience with PNY rebates? I'm trying to keep the cost under $75 total. I understand that it's not enough to just compare GDDR3 vs GDDR5, clock speed is also important, so any feedback on that will also be useful.

I'm planning to use PowerDirector 9 Ultra 64, which says it can use CUDA and ATI Stream (scroll down to see the list of supported cards)

My build:
AMD Phenom II X2 555 BE (unlocked to 4 cores)
8GB RAM
Win 7 64bit
Asus M4A88T-V Evo with Radeon HD4250 on-board graphics
Antec 500W EarthWatts power supply

thanks
 

LiuKangBakinPie

Diamond Member
Jan 31, 2011
3,903
0
0
For Photoshop CS5? GTX 460, 470, or 480. The 460 is the best bang for the buck. Don't waste money on the GT 430; it's junk.
 

rk4adtch

Member
Oct 18, 2010
59
0
0
Hi, thanks for the quick replies.

I'm only planning to use Photoshop Elements 9 or similar for photo editing. My main goal is capturing and editing 5-6 years of miniDV videos, and HD videos from a digital camcorder in the future (currently AVCHD from a Sony HX5V).

I googled '9800GT vs GT430' and everyone agrees that the 9800GT, even though older, is better, so I will drop the GT 430.

I looked at the GTX 460; it's very nice, but too expensive. Plus some of them are very big!

My current monitor is 19" (not widescreen) and supports a max of 1280x1024. I may upgrade to maybe a 22" widescreen at some point, but nothing fancier, since I don't have space for anything bigger.

Actually, I think I should ask better questions, but I have too many now :)

1. For HD video editing, does the number of stream processors matter (i.e., will more be better)?

2. Does the amount of memory matter? I.e., will 1GB GDDR3 be better than 512MB GDDR3, assuming all other aspects are similar?

3. Does GDDR5 vs DDR3 matter? I.e., is 512MB GDDR5 better than 1GB DDR3, assuming core clock speed and shader speed are similar?

4. How does core clock speed interact with memory speed? I.e., can a higher core clock speed make up for a slower 'effective memory speed'?

5. Does the memory interface (128-bit vs 256-bit, etc.) make a difference for my usage? I.e., will video capture/editing/export be affected by it?

6. Finally, I think there is no need to consider features like CrossFire, OpenGL 4, or DirectX 11. I hope this is correct.

I know that for $75-80 (after rebate) I will not get an excellent gaming-type card, but I don't need that.

thanks!
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
If it will take advantage of the compute features in a meaningful way, a GTS 450 or 460 should be in the running.

1. Only within the same series/family, at the same speed. For instance, 200 cores vs. 100 cores on a 400-series GeForce at the same speed should give about double the potential performance. Same with AMD's. However, 1000 on AMD vs. 300 on nVidia... not so comparable.

2. Yes. No idea how much you would actually need, though.

3. Maybe. GDDR5 is generally much faster than DDR3; DDR3 is used because it costs less. OTOH, if whatever you are doing is not limited much by memory performance, it won't matter. All things considered, if you can afford even near $100 for a card, count out the DDR3 ones. With $80 or so maxing out your budget, though...

4. It varies. Typically, for a given model of modern card that isn't a pure penny-pincher (like the DDR3 models), the two will be balanced well enough that the RAM won't bottleneck the GPU. OTOH, some older cards (like the GeForce 9-series) simply don't have support for newer GDDR5 RAM built in.

5. Yes, but consider it like #1 and #4. A 128-bit card won't have a GPU with enough processing power to actually use the extra memory bandwidth offered by more channels (64 bits per channel, FYI); see the quick calc after this list.

6. Pretty much, but newer is generally better, when available.
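
To put rough numbers on #3-#5: peak memory bandwidth is just bus width times effective transfer rate, and within a family, compute throughput scales roughly with cores times clock (#1). A back-of-the-envelope sketch in Python, with specs quoted from memory, so double-check whatever card you actually buy:

def bandwidth_gb_s(bus_bits, effective_mts):
    # bytes per transfer * millions of transfers per second -> GB/s
    return bus_bits / 8 * effective_mts / 1000

cards = {
    # name: (bus width in bits, effective memory rate in MT/s)
    "GT 430 DDR3": (128, 1800),
    "9800GT GDDR3": (256, 1800),
    "HD 5670 GDDR5": (128, 4000),
}
for name, (bus, mts) in cards.items():
    print(name, round(bandwidth_gb_s(bus, mts), 1), "GB/s")
# -> 28.8, 57.6, and 64.0 GB/s respectively

That's part of why I keep steering you away from the DDR3 models: on the same 128-bit bus, the GDDR5 card has better than double the bandwidth.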

Finally, no, $80 will not get a great gaming card. It's not that it shouldn't be able to; it's just that performance per dollar keeps increasing up until around $300. It's just a market timing thing.

For your needs, a 1GB Radeon 5670 or GT 440 might do the job, or a used 9800GT. I wouldn't trust the 430/440, myself--they're just so crippled, while you have to get down below the Radeon 5670 to start seeing super duper crippling (the GTS 450 is about the threshold on the nVidia side of things).

However, it would be handy to know how well they can all accelerate the tasks you will use. Especially with cheaper cards, there could be 2-5x differences in their performance. That said, I *can* find some results for version 8, which show very good results (100%+ potential) for the Radeon 5670; whether a GT 440 or 430 would be slower or faster with 9, it bodes well for the 5670 and makes it look like it could be worth the cost. Based on that, I'd get a 1GB GDDR5-equipped HD 5670 with a big fan and a decent MIR (Sapphire or Gigabyte, for instance).

To throw an extra wrench into the confusion: if a 9800GT would work, here's one that could fit after an MIR, and could be faster than a 5670 (for your use, it's hard to say):
http://www.newegg.com/Product/Produc...82E16814150513
It will be loud under any kind of load, though. Oh, yeah, and the GTS 250 is a 9800GT with a shiny new sticker and a negligible speed increase.


So...here's my final bit of advice:
http://www.newegg.com/Product/Produc...82E16814500187
http://www.newegg.com/Product/Produc...82E16814127527

If you can spring for that much, and do the MIR, you will have a card suitable for any compute tasks you want to throw at it, without any doubt. Here's one example of typical performance differences in compute code between the 430 and 450:
http://www.anandtech.com/show/3973/nvidias-geforce-gt-430/16
Even in cases where a Radeon HD might perform better, it will perform well enough (and it is common that the reverse is not true, which is one reason nVidia keeps getting recommended for this sort of use).
 

LiuKangBakinPie

Diamond Member
Jan 31, 2011
3,903
0
0
I looked at the GTX 460; it's very nice, but too expensive. Plus some of them are very big!

Did you look at the GTX 460 768MB version?
 

LiuKangBakinPie

Diamond Member
Jan 31, 2011
3,903
0
0
Based on that, I'd get a 1GB GDDR5-equipped HD 5670 with a big fan and a decent MIR (Sapphire or Gigabyte, for instance).

Hardware acceleration with an ATI card?
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Hardware acceleration with an ATI card?
Cyberlink claims to support it, and it appears to offer real benefits in a previous version. Without actual comparison benchmarks, though, which to go with is a bit iffy, with $100-150 GeForces having the benefit of being known good for a variety of parallel compute uses. OTOH, we don't even know that the resulting IQ is approximately the same... aargh!
 

rk4adtch

Member
Oct 18, 2010
59
0
0
Hi Cerb, thanks for the quick and detailed reply. I think a 5670 is looking like a good fit for me. Which one would you suggest from the following:

5670 1GB GDDR5 comparisons


The Gigabyte looks pretty good, but it only has 2 reviews.

I would like the card to be quiet and not run very hot.

Thanks!
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I would get this one, or the Gigabyte:
http://www.newegg.com/Product/Produc...82E16814102917

The main difference is that the Sapphire has a DisplayPort and a couple of output adapters, so if you want to hook it up to a newer TV, or when/if you get a new monitor, what ports you have to use would be less of an issue, and you wouldn't have to track down adapters. That may or may not be worth $5, and I would not consider either of the two better or worse in other ways.

The Asus, OTOH, costs more, for no apparent benefit.

The HIS is kind of interesting, but in a pointless way. An external-exhaust HSF, a CrossFire bridge, and external power would be useful for running two (or more) of them in CrossFire for gaming, especially if overclocking... but that money would be better spent on a faster single card. A 5670 does not use enough power to worry about a single one exhausting into the case.
 

rk4adtch

Member
Oct 18, 2010
59
0
0
Hi 1h4x4s3x, thanks for the link, it's a very interesting read. But it means I have to switch from AMD to an Intel CPU, right?
 

tweakboy

Diamond Member
Jan 3, 2010
9,517
2
81
www.hammiestudios.com
Well, PNY is one of the worst vendors. Also, don't get a 430; get the 460 768MB edition for around 150 dollars. The GPU will be faster, thus faster rendering. thx
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Hi 1h4x4s3x, thanks for the link, it's a very interesting read. But it means I have to switch from AMD to an Intel CPU, right?
For Quick Sync, yes. They also found Arcsoft's image quality to vary (which, IMO, means that it is severely flawed). I can't find any info one way or the other about Cyberlink's (TMPGEnc is the only one I know of that is known to have identical IQ, and it only supports CUDA).
 

rk4adtch

Member
Oct 18, 2010
59
0
0
So if I move to Intel, it's about $100 for a mobo + about $250 for an Intel i5 with "Intel HD Graphics 3000" (which has the Quick Sync ability). Maybe it's better to buy a better card, like a 5770?
 

rk4adtch

Member
Oct 18, 2010
59
0
0
Hi Cerb,

I just realized that the TMPGEnc everyone is referring to is a paid product... I was thinking of the free product from a few years ago (which was quite fast, but not so user friendly). I will try out the trial of the commercial product this weekend to see how good it is, and whether to use that instead of PD9 Ultra 64. Does anyone have feedback comparing the two?

In my other thread, 'Idontcare' had said he (she?) was using TMPGEnc (with the CUDA support of a GTX 460) for 'filtering' while capturing; I assume this means removing noise etc. from the captured video? I notice that when I tried capturing with PD9U64 using my current on-board graphics, the captured AVI looks very grainy. Do the filters help to avoid this?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
In my other thread, 'Idontcare' had said he (she?) was using TMPGEnc (with the CUDA support of a GTX 460) for 'filtering' while capturing; I assume this means removing noise etc. from the captured video? I notice that when I tried capturing with PD9U64 using my current on-board graphics, the captured AVI looks very grainy. Do the filters help to avoid this?

Hi, I'm a "he" :) and yes, one of the filters you would use for reducing "grain" is the temporal video noise filter, which is CUDA-accelerated in TMPGEnc.

Grain is one of the artifacts you really have to deal with and negate if you hope to compress your files (at a low bit-rate) without degrading the IQ.

I used to use free transcoders; then, when I bought TMPGEnc on a friend's recommendation (he had it), I was simply overwhelmed by the difference in IQ at any given bit-rate. Worth every penny for me.
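
If you're wondering what "temporal" means here: the core idea is averaging each frame with its neighbors in time, so static detail survives while random per-frame grain cancels out. Here's a toy Python sketch of just that idea; the real filters (TMPGEnc's included) are motion-compensated and much smarter, so take it as illustration only:

import numpy as np

def temporal_denoise(frames, radius=1):
    # frames: sequence of HxWx3 uint8 frames
    # average each frame with up to `radius` neighbors on each side;
    # noise that changes every frame averages toward zero
    frames = np.asarray(frames, dtype=np.float32)
    out = np.empty_like(frames)
    for i in range(len(frames)):
        lo, hi = max(0, i - radius), min(len(frames), i + radius + 1)
        out[i] = frames[lo:hi].mean(axis=0)
    return out.astype(np.uint8)

The catch is that anything moving gets averaged too, which naive temporal filtering turns into smearing; the commercial filters estimate motion first, so they only average pixels that actually belong to the same object.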
 

rk4adtch

Member
Oct 18, 2010
59
0
0
Thanks, Idontcare, for the explanation!

To understand better: you use TMPGEnc to capture the video (with the appropriate filter), then edit using a different program?

And have you had a chance to look at the "TMPGEnc Video Mastering Works 5" software? It says it supports x264; I assume that means software encoding.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
I use my camcorder/phone to capture video.

TMPGEnc is used to then transcode the video, either to a lower bit-rate in the same format or to an entirely new file format.

Take your home videos (AVI or MP4) and convert them to DVD MPEG. Or take your DVDs and transcode them to a lower bit-rate so you can make single-DVD compilations.

I like to take my kids' DVDs and put them on a single DVD disc, converting the bit-rate from, say, 7500 kb/s to 3000 kb/s.
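
The arithmetic behind that is simple: file size is bit-rate times running time. A quick sanity check in Python, ignoring audio and container overhead and assuming ~90-minute movies:

def size_gb(video_kbps, minutes):
    # kilobits/s -> bytes/s, times seconds, to (decimal) GB
    return video_kbps * 1000 / 8 * minutes * 60 / 1e9

print(size_gb(7500, 90))  # ~5.1 GB: won't even fit one movie on a 4.7 GB DVD-5
print(size_gb(3000, 90))  # ~2.0 GB: two movies per disc, with room to spare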

Likewise, if I am going to upload home videos to YouTube, I like to transcode them to remove the grain that comes from the cheap CCD sensors in today's video recorders.
 

1h4x4s3x

Senior member
Mar 5, 2010
287
0
76
Hi 1h4x4s3x, thanks for the link, it's a very interesting read. But it means I have to switch from AMD to an Intel CPU, right?
Depends where your priorities lie.
The CPU provides the best image quality, followed by Quick Sync and AMD's APP, while CUDA trails behind. Speed-wise, it is the other way around, except that Quick Sync is the fastest.

Speaking of TMPGEnc: as far as I understand, CUDA is only used for a few filters and everything else is done on the CPU, hence the CPU-like IQ and speed.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Depends where your priorities lie.
The CPU provides the best image quality, followed by Quick Sync and AMD's APP, while CUDA trails behind. Speed-wise, it is the other way around, except that Quick Sync is the fastest.
That's going to vary, with Quick Sync being a possible exception. IQ differences between the CPU doing the work, an nVidia GPU doing the work, and an AMD GPU doing the work mean that they are doing it wrong (that is, with any given set of inputs, the output should be identical, or so close as to make the differences irrelevant) for the sake of getting higher speed from the GPUs. Is Cyberlink doing this? Who knows?

Speaking of TMPGEnc: as far as I understand, CUDA is only used for a few filters and everything else is done on the CPU, hence the CPU-like IQ and speed.
TMPGEnc appears to guarantee the same result either way, which explains the more modest speed boost. Also, Cyberlink's own blurb does not show the GPUs being used during editing. If that is true, then adding one would provide very little benefit, as editing is where they could clearly provide the best time savings (an encode job, by contrast, can be left running while you sleep, or while you're off at work, or whatever else).
 

1h4x4s3x

Senior member
Mar 5, 2010
287
0
76
That's going to vary, with Quick Sync being a possible exception. IQ differences between the CPU doing the work, an nVidia GPU doing the work, and an AMD GPU doing the work mean that they are doing it wrong (that is, with any given set of inputs, the output should be identical, or so close as to make the differences irrelevant) for the sake of getting higher speed from the GPUs. Is Cyberlink doing this? Who knows?
Well, ultimately, CPUs are faster, and in order to hide that unwanted fact, GPU vendors or software programmers alter the IQ. Intel is basically countering that with Quick Sync.
That's my understanding, at least. :awe:

TMPGEnc appears to guarantee the same result either way, which explains the more modest speed boost. Also, Cyberlink's own blurb does not show the GPUs being used during editing. If that is true, then adding one would provide very little benefit, as editing is where they could clearly provide the best time savings (an encode job, by contrast, can be left running while you sleep, or while you're off at work, or whatever else).
Doesn't the modest speed boost and good IQ stem from the fact that, unlike Arcsoft's Media Converter, TMPGEnc does almost everything on the CPU, as I said above? Their forum is badly written, so I can't direct-link: Click
 

rk4adtch

Member
Oct 18, 2010
59
0
0
Hi Idontcare,

I downloaded and tried out the "TMPGEnc Video Mastering Works 5" trial yesterday, but I don't think I'm able to use it properly. Is the filtering applied during capture, or after it? I couldn't do it during capture. Which version of TMPGEnc are you using?

After capture, I loaded up the AVI file, then tried to apply the 'video denoise' filter; I hope that is the correct one. However, with that, the video became kind of mushy looking, plus the audio was completely lost in the final output. Of course, this was a clip that was shot indoors, not very bright.

I notice that both PD9 and Premiere Elements 9 now have some video cleaning features (noise, sharpness, other video effects); have you ever used any of those?