Originally posted by: Keysplayr
It would seem it is assumed that GPU encoding is only faster because it produces lower-quality video than a CPU? By not rendering all pixels? I'd like to see what basis this idea rests on. Are all CPU-encoded videos perfect? Or do they themselves require tweaking of various options in the application?
What is good for one is also good for the other. And besides, the CPU cycles spared by using the GPU in some of those apps are astounding. A few still utilize a lot of CPU, but I suppose those would be the apps we would stay away from.
Originally posted by: Forumpanda
I don't consider CPU usage a metric to measure GPU encoding applications by. I want my application to use all system resources to encode as fast as possible, so if it can use the CPU to improve encoding speed then I am all for it.
Originally posted by: thilan29
Originally posted by: Keysplayr
It would seem it is assumed that GPU encoding is only faster because it produces lower-quality video than a CPU? By not rendering all pixels? I'd like to see what basis this idea rests on. Are all CPU-encoded videos perfect? Or do they themselves require tweaking of various options in the application?
What is good for one is also good for the other. And besides, the CPU cycles spared by using the GPU in some of those apps are astounding. A few still utilize a lot of CPU, but I suppose those would be the apps we would stay away from.
There are usually lots of things you can tweak with CPU and GPU encoding alike... I usually leave everything at default in something like Ripbot264 if I'm encoding something for the PS3, and I don't get any artifacts when starting from a decent-quality source.
Originally posted by: Keysplayr
Originally posted by: Forumpanda
I don't consider CPU usage a metric to measure GPU encoding applications by. I want my application to use all system resources to encode as fast as possible, so if it can use the CPU to improve encoding speed then I am all for it.
I would. I use my PC for multiple tasks at the same time. So, I would consider CPU usage as a metric. Some click "encode" and go out to dinner, or see a movie. Others do not, and wish to engage in other tasks. So, if you want those extra CPU cycles, they are available. If you don't, you're not losing out when encoding is so fast anyway.
Originally posted by: Keysplayr
Really? How long have you guys been together?
Seriously though, there will be tens of thousands just like you. Just as there will be tens of thousands who aren't.
Originally posted by: Forumpanda
But multithreading is a very well-implemented part of any OS; it is not hard to let all other applications take priority over the encoding and not feel it running at all, regardless of how much 'CPU time' it actually utilizes.
Especially with basically everyone running at least a dual-core CPU, if it's purely a choice between fast and using the CPU, or slower and not, then I don't see any reason to make it slower.
I really can't see how you can have a different stance; it is trivial to let programs take CPU priority over encoding.
The much bigger system clog (for application response time) when it comes to encoding is disk access, which neither approach does much for, but flash drives eventually will.
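For what it's worth, the priority point really is a one-liner. Here's a minimal sketch (plain C, Win32) of an encoder process dropping itself to idle priority so every other application automatically gets the CPU first; run_encode() is a hypothetical stand-in for the actual encoding work, not any real program's API:

```c
/* Minimal sketch: run an encode at idle priority on Windows so any
 * other application automatically gets CPU time first.
 * run_encode() is a hypothetical stand-in for the real encoder loop. */
#include <windows.h>
#include <stdio.h>

static void run_encode(void)
{
    /* ...actual encoding work would go here... */
}

int main(void)
{
    /* Idle-priority threads are only scheduled when no
     * normal-priority thread wants the CPU. */
    if (!SetPriorityClass(GetCurrentProcess(), IDLE_PRIORITY_CLASS)) {
        fprintf(stderr, "SetPriorityClass failed (%lu)\n", GetLastError());
        return 1;
    }
    run_encode();
    return 0;
}
```

You can get the same effect from outside any program with "start /low" at the command line, so "it uses lots of CPU" and "it gets in my way" don't have to be the same complaint.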
Originally posted by: Hacp
Originally posted by: Keysplayr
Really? How long have you guys been together?
Seriously though, there will be tens of thousands just like you. Just as there will be tens of thousands who aren't.
So I'm guessing GPGPU, or at least Nvidia's implementation of it, isn't ready for the masses.
Originally posted by: Keysplayr
Originally posted by: Forumpanda
But multithreading is a very well-implemented part of any OS; it is not hard to let all other applications take priority over the encoding and not feel it running at all, regardless of how much 'CPU time' it actually utilizes.
Especially with basically everyone running at least a dual-core CPU, if it's purely a choice between fast and using the CPU, or slower and not, then I don't see any reason to make it slower.
I really can't see how you can have a different stance; it is trivial to let programs take CPU priority over encoding.
The much bigger system clog (for application response time) when it comes to encoding is disk access, which neither approach does much for, but flash drives eventually will.
You can't see how I have a different stance? Hookay, here goes.
Since the dawn of multicore processors, how many people have enjoyed much smoother multitasking? Since the dawn of GPU-assisted video playback (Avivo/PureVideo), how many people have enjoyed lower CPU utilization when the workload was offloaded to the GPU, freeing up CPU time for other tasks? How many people watch a movie and browse the web at the same time? Plenty, as indicated by numerous threads on this very subject. And now, finally, we have REAL TIME video encoding apps that reduce CPU utilization to almost NIL. What is not to like?
So you can see why I cannot understand your stance on this. For the past few years, people have been complaining that with Avivo and PureVideo, CPU cycles were still too high. Everyone wanted to see less and less CPU usage for these everyday tasks. And now, they are getting it in droves.
And if it didn't matter to anyone, why show the charts in this review comparing CPU utilization with and without offloading to the GPU? It obviously matters to someone, because they made the graphs. And suggesting that people have more than one computer at their disposal is totally unrealistic. The average user has just one computer, not a farm to pick and choose from when one rig is busy. Only us folks in these types of forums usually have multiple rigs.
Originally posted by: wolf2009
Flawed. What about the output quality of the video, CPU vs. GPU?
Originally posted by: TheRyuu
Originally posted by: wolf2009
Flawed. What about the output quality of the video, CPU vs. GPU?
CPU encoding (with x264) is superior, even with relatively fast settings, to any GPU encoding available right now.
Most GPU encoders that are out pretty much suck.
But we have to keep in mind that GPUs were not designed to encode video. Video encoding is integer math, and GPUs were designed for floats, which is why your CPU is going to be your main friend in video encoding for a while still.
We'll just have to wait and see how GPU encoding evolves.
I suppose one of the better approaches would be to offload parts of the encoder to the GPU (e.g. motion estimation), thereby freeing up time on the CPU to do more of the other things.
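To make the motion-estimation idea concrete: the bulk of it is integer sum-of-absolute-differences (SAD) comparisons over many independent candidate positions, which is exactly the kind of work that parallelizes well. Below is a rough plain-C sketch of a full search; the 16x16 block size, the +/-8 search window, and every name in it are illustrative assumptions, not taken from x264 or any shipping encoder:

```c
/* Sketch of full-search block-matching motion estimation.  For one
 * 16x16 block of the current frame, find the offset within +/-8
 * pixels in the reference frame with the lowest sum of absolute
 * differences (SAD).  Sizes and names are illustrative only. */
#include <stdio.h>
#include <stdlib.h>

#define BLOCK 16
#define RANGE  8

/* SAD of one 16x16 block: pure integer math, no floats anywhere. */
static int sad16(const unsigned char *cur, const unsigned char *ref, int stride)
{
    int sum = 0;
    for (int y = 0; y < BLOCK; y++)
        for (int x = 0; x < BLOCK; x++)
            sum += abs(cur[y * stride + x] - ref[y * stride + x]);
    return sum;
}

/* Try every candidate offset; each block's search is independent of
 * every other block's, which is what makes it GPU-friendly.
 * Caller must keep the search window inside the frame. */
static int search_block(const unsigned char *cur, const unsigned char *ref,
                        int stride, int *mvx, int *mvy)
{
    int best = sad16(cur, ref, stride);
    *mvx = 0;
    *mvy = 0;
    for (int dy = -RANGE; dy <= RANGE; dy++)
        for (int dx = -RANGE; dx <= RANGE; dx++) {
            int s = sad16(cur, ref + dy * stride + dx, stride);
            if (s < best) { best = s; *mvx = dx; *mvy = dy; }
        }
    return best;
}

int main(void)
{
    enum { W = 64, H = 64 };
    static unsigned char cur[W * H], ref[W * H];

    /* Synthetic frames: the reference is the current frame shifted
     * three pixels left, so the true motion vector is (-3, 0). */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            cur[y * W + x] = (unsigned char)((x * 7 + y * 13) & 0xFF);
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            ref[y * W + x] = cur[y * W + (x + 3) % W];

    int mvx, mvy;
    int off = (H / 2) * W + (W / 2);   /* a block well inside the frame */
    int best = search_block(cur + off, ref + off, W, &mvx, &mvy);
    printf("best SAD %d at motion vector (%d, %d)\n", best, mvx, mvy);
    return 0;
}
```

Each of those SAD evaluations is independent integer work, so in principle a GPU can grind through thousands of candidate blocks at once while the CPU keeps the serial parts (entropy coding, rate control); that split is roughly what offloading motion estimation would mean.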
Do you want to micro-manage your PC? Live in task manager to set priorities and affinities?
How about encoding video while watching a movie? Ripping MP3s from your CDs? Running MATLAB, running compilers, etc. etc.
If none of these things used much CPU, why then did so many users want the cycles offloaded to the GPU?
Originally posted by: WelshBloke
Do you want to micro-manage your PC? Live in task manager to set priorities and affinities?
Why would you need to? Plenty of apps set their priority low if wanted.
How about encoding video while watching a movie? Ripping MP3s from your CDs? Running MATLAB, running compilers, etc. etc.
If you're doing all that concurrently then you would, surely, want your computer to use all its resources.
If none of these things used much CPU, why then did so many users want the cycles offloaded to the GPU?
Do users want that? Reviewers do, because it's an easy way to see if the GPU is doing any work. I'd think that users wouldn't care what was doing the work as long as it was done quickly and with quality.
Originally posted by: Keysplayr
Originally posted by: WelshBloke
Do you want to micro-manage your PC? Live in task manager to set priorities and affinities?
Why would you need to? Plenty of apps set their priority low if wanted.
How about encoding video while watching a movie? Ripping MP3s from your CDs? Running MATLAB, running compilers, etc. etc.
If you're doing all that concurrently then you would, surely, want your computer to use all its resources.
If none of these things used much CPU, why then did so many users want the cycles offloaded to the GPU?
Do users want that? Reviewers do, because it's an easy way to see if the GPU is doing any work. I'd think that users wouldn't care what was doing the work as long as it was done quickly and with quality.
If wanted. Key word.
Exactly.
They surely USED to. And there are several hundred threads going back years on these forums to prove it. You know this, and I know this. Why even ask if that's what they want? You've been on these forums long enough to know that there were many, many discussions on how well the GPU removes a workload from the CPU and how important that was for people who multitask.
Have all of you forgotten? Selective memories? LOL. :thumbsup:
Originally posted by: WelshBloke
Originally posted by: Keysplayr
Originally posted by: WelshBloke
Do you want to micro-manage your PC? Live in task manager to set priorities and affinities?
Why would you need to? Plenty of apps set their priority low if wanted.
How about encoding video while watching a movie? Ripping MP3s from your CDs? Running MATLAB, running compilers, etc. etc.
If you're doing all that concurrently then you would, surely, want your computer to use all its resources.
If none of these things used much CPU, why then did so many users want the cycles offloaded to the GPU?
Do users want that? Reviewers do, because it's an easy way to see if the GPU is doing any work. I'd think that users wouldn't care what was doing the work as long as it was done quickly and with quality.
If wanted. Key word.
Exactly.
They surely USED to. And there are several hundred threads going back years on these forums to prove it. You know this, and I know this. Why even ask if that's what they want? You've been on these forums long enough to know that there were many, many discussions on how well the GPU removes a workload from the CPU and how important that was for people who multitask.
Have all of you forgotten? Selective memories? LOL. :thumbsup:
But most of those threads were the same pissing contests about 'my GPU can beat up your GPU'. It's easy to pick a 'winner' when all you are looking for is the lowest CPU utilization.
What possible advantage is there in leaving a large proportion of your computing power unused when doing intensive tasks?