
Sony Vegas GPU accelerated rendering

PrincessFrosty

Platinum Member
I'm doing video editing with Sony Vegas, rendering out to MP4, and trying to decrease render times by using hardware acceleration. I was under the impression that GPUs ought to be able to render a lot faster?

I have a GTX 980 and CUDA is enabled, but I'm getting really long render times, a bit slower than real time, at 1920x1080 and 30fps. I've followed all the guides to enable CUDA, but it doesn't seem to make any difference compared to CPU only.

Looking at GPU usage, it's fluctuating up and down between about 3% and 5%, which is tiny, while the CPU is pegged at 100% on all 8 threads (it's an i7 2600K @ 4.9GHz).

Is this just a case of rendering not being something you can parallelize well? In which case is this low GPU usage expected, or should it be higher? I find it hard to believe that Vegas can only use a few percent of the GPU.
 
Is this just a case of rendering not being something you can parallelize well? In which case is this low GPU usage expected, or should it be higher? I find it hard to believe that Vegas can only use a few percent of the GPU.

Rendering is a textbook case of parallelization, but GPU cores are far less accurate than CPU cores, so maybe (just maybe) Vegas is not using the GPU for the actual rendering but only for secondary stuff, so you still end up with a quality transcode, thus the small utilization.
Maybe you could try a lower quality setting, or try a different program.

Also read this; you might have to find a codec with CUDA support.
https://forums.creativecow.net/thread/24/984273
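To illustrate the parallel-friendly part: per-pixel effects are an "embarrassingly parallel" map, because each output pixel depends only on its own input pixel. A toy Python sketch (an illustration only, not Vegas's actual pipeline) of a brightness adjustment:

```python
# Toy "frame": a flat list of 8-bit luma values standing in for pixels.
frame = [(i * 7) % 256 for i in range(64)]

def brighten(pixels, amount=30):
    # Each output value depends only on the matching input value, so every
    # pixel could be handed to a different GPU core with no coordination.
    return [min(255, p + amount) for p in pixels]

out = brighten(frame)
```

Encoding is a different story: as discussed further down the thread, the entropy-coding stage carries state from symbol to symbol, which is why the encoder often stays on the CPU even when effects run on the GPU.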
 
I have a GTX 980 and CUDA is enabled, but I'm getting really long render times, a bit slower than real time, at 1920x1080 and 30fps. I've followed all the guides to enable CUDA, but it doesn't seem to make any difference compared to CPU only.

NV isn't good for this particular application.

[attached benchmark chart: 75492.png]


You would probably be better off with a 5820K @ 4.5GHz and an AMD graphics card for this program. Even a $120 used HD 7970 beats a 980 Ti in this program.
 
Vegas uses OpenCL, and AMD is FAR superior in OpenCL applications. But it sounds like you are not actually using the GPU, so do verify your settings using the link provided above.
 
NV isn't good for this particular application.


You would probably be better off with a 5820K @ 4.5GHz and an AMD graphics card for this program. Even a $120 used HD 7970 beats a 980 Ti in this program.

There's really not much difference there in the latest cards, though.
 
There's really not much difference there in the latest cards, though.

In % terms of hours of work on a weekly basis over a year? A $120 7970 beats the $470 980 by 21%. My statement stands: if rendering speed is the priority for this program, a 6-8 core Intel i7 + AMD GPU is the way to go. Alternatively, the OP can re-assess his future GPU upgrade path in 2016-2017 based on the performance of future cards in Sony Vegas Pro and their compatibility/performance with Oculus Rift. Right now, everything we hear online is that NV also sucks for VR as its latency is way too high.
 
In % terms of hours of work on a weekly basis over a year? A $120 7970 beats the $470 980 by 21%. My statement stands: if rendering speed is the priority for this program, a 6-8 core Intel i7 + AMD GPU is the way to go. Alternatively, the OP can re-assess his future GPU upgrade path in 2016-2017 based on the performance of future cards in Sony Vegas Pro and their compatibility/performance with Oculus Rift. Right now, everything we hear online is that NV also sucks for VR as its latency is way too high.

I didn't know the OP was running a video rendering business...

For home use, there's really no significant difference, right? It's a few seconds among the latest cards.
 
Are you sure you're not encoding-bound? Vegas can use the GPU to accelerate the rendering process; however, most of the encoders make little to no use of the GPU.
 
I didn't know the OP was running a video rendering business...

For home use, there's really no significant difference, right? It's a few seconds among the latest cards.

It's 20%-30% faster. Since when is that not significant? Just because that test runs only 20+ seconds doesn't mean that most real world tasks will.
 
Vegas is a great editor but a bad choice for rendering. GPU acceleration only applies to graphical effects, transitions, etc.; the rest is done on the CPU.

I'll assume you just want to make a video of gameplay, so if you're a masochist you probably want to check out a frameserving option like DebugMode FrameServer to fake the output to an .avi that can then be encoded with x264 in VirtualDub or something else (HandBrake doesn't work).

Be forewarned that the deeper you go into this stuff, the more ridiculous it gets, but it kills Vegas and Premiere for speed and quality.
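As a rough sketch of that encode step (filenames hypothetical; this uses ffmpeg's libx264 rather than VirtualDub, and assumes the frameserved output is readable as a plain .avi):

```shell
# Encode the frameserved AVI with x264 via ffmpeg. CRF ~18 is visually
# near-transparent for gameplay; slower presets trade time for compression.
ffmpeg -i frameserved.avi -c:v libx264 -preset slow -crf 18 -c:a aac out.mp4
```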
 
LOL at the salesmen trying to sell OP a new card...

What version of Sony Vegas are you using, and what format is your video source?
 
LOL at the salesmen trying to sell OP a new card...

What version of Sony Vegas are you using, and what format is your video source?

Yeah, I'm not looking at buying a new card; I'm just trying to determine whether that's an expected render time or not. The fact that it's only using a few percent of the GPU could be a bug.

I'm doing two encodes: first from FRAPS uncompressed AVIs (around 250GB) down to a ~10-15GB file using either MainConcept AVC/AAC or Sony AVC/MVC, saved to an MP4. This is with Sony Vegas 13.

After that I'll normally chop/edit the 10-15GB file into smaller 15-minute files, which can take about 40 minutes to re-render to final vids.

Sony AVC/MVC has GPU options if available, but its check-GPU button says "no GPU available". MainConcept AVC/AAC lists both OpenCL and CUDA as options, and its check-GPU button reports "CUDA detected". I also grabbed the CUDA installer from Nvidia's site and ran it to ensure it's the latest and up to date.

The most telling thing is the GPU usage. For a straightforward GPGPU task like this I'd expect it to shoot to damn near 99% and stay there, and to render something like 10x faster than the CPU. The CPU usage during GPU rendering is also really high, damn near maxed, so maybe some work is done on the CPU and it's simply bottlenecked. Still, a 2600K @ 4.9GHz is really fast.
 
Just curious, why not use ShadowPlay instead of FRAPS and get an easy to use MP4 to start with?

FWIW, I see the same results with Sony Movie Studio: same render times with GPU acceleration enabled and barely any GPU use. I gave up on it since some renders took longer. I decided to let my 4790K do its thing.
 
It's 20%-30% faster. Since when is that not significant? Just because that test runs only 20+ seconds doesn't mean that most real world tasks will.

It's apparently not using the GPU, so there's probably still little difference there for the home user, even at 25% faster.

It's using the CPU.
 
Huh?

GPU 3-5%
CPU totally pegged.

Depending on the codec being used.

I can get Vegas to 45-50% usage on either of my GPUs in Vegas 13.

Note my system has both a Radeon and an NV card, so I can use either one in Vegas.

[attached screenshot: o7vvcl.png]


If you choose Automatic, it will use both CPU and GPU.

[attached screenshot: 293ejc2.png]


[attached screenshot: 10qxu0z.png]
 
So I ended up playing around with the settings in Movie Studio 13 (which look identical to what is posted above). I used a test clip of some 1080p video captures, rendered in the best quality. With CPU+GPU CUDA and my 980 Ti, I saw 15-20% GPU usage. Render time was 8:15. I went back and did it again with CPU only, all other settings the same. GPU usage was at 1% the whole time and the render finished in 7:30.

So, 45 seconds faster without. Sounds broken in Sony's implementation. Files were exactly the same when finished. I even MD5 hashed and they are identical.
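For reference, a byte-identical check like that can be done with a short Python script (the render filenames below are placeholders, not the actual outputs from this test):

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so multi-gigabyte renders
    # don't have to fit in memory at once.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical render outputs; identical hashes => identical files.
# print(md5sum("render_cpu.mp4") == md5sum("render_cuda.mp4"))
```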
 
So I ended up playing around with the settings in Movie Studio 13 (which look identical to what is posted above). I used a test clip of some 1080p video captures, rendered in the best quality. With CPU+GPU CUDA and my 980 Ti, I saw 15-20% GPU usage. Render time was 8:15. I went back and did it again with CPU only, all other settings the same. GPU usage was at 1% the whole time and the render finished in 7:30.

So, 45 seconds faster without. Sounds broken in Sony's implementation. Files were exactly the same when finished. I even MD5 hashed and they are identical.

Are you able to set it to GPU only then test?
 
So I ended up playing around with the settings in Movie Studio 13 (which look identical to what is posted above). I used a test clip of some 1080p video captures, rendered in the best quality. With CPU+GPU CUDA and my 980 Ti, I saw 15-20% GPU usage. Render time was 8:15. I went back and did it again with CPU only, all other settings the same. GPU usage was at 1% the whole time and the render finished in 7:30.

So, 45 seconds faster without. Sounds broken in Sony's implementation. Files were exactly the same when finished. I even MD5 hashed and they are identical.
There's nothing "broken" about Sony's implementation. You are straight-up encoding bound, and a GPU is not going to help you here.

GPUs are pretty awful on the whole for H.264 encoding; it's a fairly linear task that doesn't mesh well with the thousands-of-threads model that GPUs use. Context-adaptive binary arithmetic coding (CABAC) in particular really hammers GPUs. As a result, the only time GPUs can do well at H.264 encoding is when they use a fixed-function block such as VCE/NVENC/QuickSync.

Sony's implementation works correctly. Using a GPU in a programmatic fashion (i.e. without a fixed-function encoder) is in fact slower than a modern CPU. You either need to use QuickSync or invest in a different CPU, as the GPU is only useful for rendering, not encoding.
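The serial nature is easy to see in a toy model: an adaptive coder updates its probability model after every symbol, so step i can't begin until step i-1 has finished. This sketch (a stand-in for the idea, not real CABAC) just totals the ideal code length under such a model:

```python
import math

def adaptive_encode_cost(bits):
    # Laplace-smoothed counts of 0s and 1s seen so far: the model state.
    counts = [1, 1]
    total_cost = 0.0
    for b in bits:
        p = counts[b] / (counts[0] + counts[1])  # probability under the *current* model
        total_cost += -math.log2(p)              # ideal code length for this bit
        counts[b] += 1                           # state update carried into the next step
    return total_cost
```

Because `counts` is threaded through the loop, the iterations can't be spread across thousands of GPU threads the way independent pixels can; fixed-function blocks like NVENC exist precisely to handle this serial stage in dedicated hardware.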
 