Sandy Bridge vs. NVidia: Video transcoding

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Honestly, casual users won't care.
For 'hardcore' users, it's not about speed, it's about quality.
If you get a speed increase with crap quality or minimal options, they won't care.
Casual users probably don't think that hard about exactly how fast; 'faster' is nice enough.


By this I mean that people doing it 'properly' are going to buy a many-core CPU to encode as fast as possible in something they can tweak, set up custom settings in, etc.
Average people aren't going to go out and buy Sandy Bridge just for the encoding bit, and average people still typically aren't going to buy a GeForce for GPGPU (although it's a minor factor in the purchasing decision for some). For the vast majority of the market it simply won't really matter.
 
Last edited:

happy medium

Lifer
Jun 8, 2003
14,387
480
126

EarthwormJim

Diamond Member
Oct 15, 2003
3,239
0
76
The GTX260 is a 2-year-old card at this point. The fact that it is almost keeping up with Intel's fastest and newest processor says something. There are too many different variables to really say anything, though.

Badaboom is not very efficient either; this is really not a good comparison.

I'd like to see Adobe numbers, though, with their new Mercury engine for comparison.
 
Last edited:

LTG

Member
Jun 4, 2007
48
0
0
The GTX260 is a 2-year-old card at this point. The fact that it is almost keeping up with Intel's fastest and newest processor says something. There are too many different variables to really say anything, though.

I'd like to see Adobe numbers, though, with their new Mercury engine for comparison.

Actually, Jim, it's comparing to the relatively new Fermi GTX 460 GPU.

Even if it comes out a tie, the power required to encode will likely decrease dramatically, given that Sandy Bridge is only a 95W part and does not need any GPU to encode at this speed.

regards-
 

EarthwormJim

Diamond Member
Oct 15, 2003
3,239
0
76
Actually, Jim, it's comparing to the relatively new Fermi GTX 460 GPU.

Even if it comes out a tie, the power required to encode will likely decrease dramatically, given that Sandy Bridge is only a 95W part and does not need any GPU to encode at this speed.

regards-

Are the graphs mislabeled? They say GTX260.
 

Schmide

Diamond Member
Mar 7, 2002
5,696
941
126
Why the hell did they use different transcode lengths? As far as I can see there are statistical flaws in using 9 seconds for Sandy Bridge and 100+ seconds for the rest. If they're using a DV codec, at about 3.6 MB/sec, you could almost attribute the difference to disk seek time.

Edit: Not to mention that in 9 seconds you're most likely only seeing 1-2 B-frames and a ton of I-frames.
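
To put rough numbers on that objection (a back-of-the-envelope sketch; the 5x-realtime encode speed and the 1-second fixed overhead are illustrative assumptions, not measurements):

Code:
#include <stdio.h>

int main(void) {
    /* DV streams at roughly 3.6 MB per second of footage. */
    const double dv_mb_per_sec = 3.6;
    /* Hypothetical fixed per-job cost: disk seeks, codec init, etc. */
    const double overhead_sec = 1.0;
    /* Assume the encoder itself runs at the same 5x realtime in both runs. */
    const double encode_speed = 5.0;

    const double clips[] = { 9.0, 100.0 };  /* SB demo vs. the other tests */
    for (int i = 0; i < 2; i++) {
        double measured = clips[i] / encode_speed + overhead_sec;
        printf("%5.0fs clip (%5.1f MB): measured %5.2fs -> apparent %.2fx realtime\n",
               clips[i], clips[i] * dv_mb_per_sec, measured, clips[i] / measured);
    }
    /* 9s clip: apparent ~3.2x; 100s clip: apparent ~4.8x. Same encoder,
       very different-looking results -- and a 9s clip also spans only a
       couple of GOPs, so the I/B-frame mix is unrepresentative too. */
    return 0;
}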
 
Last edited:

acx

Senior member
Jan 26, 2001
364
0
71
The graphs are a combination of results from tests/demos done at different times. If you read the entire article, the Core i7 and GTX260 results are from a test done at OverclockersClub in 2009, and the Sandy Bridge result is from Anand's stopwatch-timed result of an Intel demo at IDF. How the two results are comparable, or how they extrapolate to Fermi-class GPUs... I don't know.
 

EarthwormJim

Diamond Member
Oct 15, 2003
3,239
0
76
The graphs are a combination of results from tests/demos done at different times. If you read the entire article, the Core i7 and GTX260 results are from a test done at OverclockersClub in 2009, and the Sandy Bridge result is from Anand's stopwatch-timed result of an Intel demo at IDF. How the two results are comparable, or how they extrapolate to Fermi-class GPUs... I don't know.

So as happy_medium summed up, this article is pointless to read.
 

brybir

Senior member
Jun 18, 2009
241
0
0
So as happy_medium summed up, this article is pointless to read.

I found them quite interesting. Sure, the tests are not exact comparisons, but with a bit of abstraction you can start to see the direction Intel is heading and what its capabilities are.

From Anand's articles today we can see that Intel is:

1. Continuing to develop into the HPC and GPGPU area with variations on its Larrabee "experiment", i.e. its 32-core x86 part used for ray tracing.

2. Leveraging its tick/tock strategy to introduce new logic in areas where there is sufficient consumer demand, i.e. AVX instructions and the video transcode logic. My guess is that the CPUs Intel makes are headed toward a very hybrid nature, where we buy CPUs for the price and performance of what we use them most for. It would be cool to have the option of an Intel Core i12 with 8 general-purpose Int/FPU cores, 2 GPGPU cores, and 2 specialized cores for video transcoding and maybe security or whatever else, and then, if I were a gamer, to get a 12-core gaming monster.

3. Likely to continue the convergence of CPUs and GPUs in the professional and consumer markets, as the solutions to those problems are approached in different ways by different companies.

It would be interesting to see how Nvidia and AMD respond if Intel makes a big push in 2010 or 2011 into HPC computing with its own hardware and development tools. My guess is that Intel will develop amazing software tools for its HPC hardware, perhaps with or in partnership with MS, that will see fast acceptance in the professional markets.

I feel like we are entering another period of rapid change and innovation compared to the past few years... very exciting stuff.
 
Last edited:

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Actually, Jim, it's comparing to the relatively new Fermi GTX 460 GPU.

No, it's comparing to the old 260. From the OC article the numbers are taken from:

Badaboom and the GTX260 are only 18% faster than i7's eight threads combined with Handbrake when transcoding to a small resolution, but things quickly change when the resolution ramps up. Simply transcoding to the same 1080p resolution took the processor nearly three times longer than the video card's 216 smaller cores. This means the processor took over 18 minutes to transcode a 10 minute movie, while the GTX260 took less than 7 minutes.

http://www.overclockersclub.com/reviews/badaboom_1_1_1/3.htm

Looks like right now Intel's upcoming part, in a best-case scenario, may be able to run with a couple-year-old video card for transcoding using dedicated logic. Certainly a big improvement, but nothing that I see keeping nV's engineers up at night.
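
For concreteness, here is how the quoted figures reduce to multiples of realtime (a quick sketch; the 10-minute/18-minute/7-minute numbers come straight from the OC paragraph above):

Code:
#include <stdio.h>

/* Speed as a multiple of realtime: source length / encode time. */
static double x_realtime(double source_min, double encode_min) {
    return source_min / encode_min;
}

int main(void) {
    const double movie_min = 10.0;  /* the OC article's 10-minute movie */
    printf("i7 + Handbrake   : %.2fx realtime\n", x_realtime(movie_min, 18.0));
    printf("GTX260 + Badaboom: %.2fx realtime\n", x_realtime(movie_min, 7.0));
    /* ~0.56x vs ~1.43x realtime. The Sandy Bridge IDF demo transcoded a
       different, much shorter clip, so it can't be put on this scale --
       which is the whole objection to the combined graph. */
    return 0;
}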

1. Continuing to develop into the HPC and GPGPU area with variations on its Larrabee "experiment", i.e. its 32-core x86 part used for ray tracing.

The GTS450 that launched today runs Wolfenstein at 16x10 with 4x AA ~20% faster than four 32-core servers running it at 12x7 with no AA. Considering most people are saying the GTS450 is priced too high at $130 for its performance level, quad servers may be *a little* too pricey for their performance ;)
 

Aristotelian

Golden Member
Jan 30, 2010
1,246
11
76
Actually, Jim, it's comparing to the relatively new Fermi GTX 460 GPU.

Even if it comes out a tie, the power required to encode will likely decrease dramatically, given that Sandy Bridge is only a 95W part and does not need any GPU to encode at this speed.

regards-

Your text: "The encoders used were Handbrake (x264) for Core i7, Badaboom for NVidia’s 460 GPU."

From your benchmark link:

Testing Setup:

Video Card: Nvidia Geforce GTX 260.

I remain interested in seeing whether or not Sandy Bridge might have an effect on discrete video card sales.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Honestly, casual users won't care.
For 'hardcore' users, it's not about speed, it's about quality.
If you get a speed increase with crap quality or minimal options, they won't care.
Casual users probably don't think that hard about exactly how fast; 'faster' is nice enough.


By this I mean that people doing it 'properly' are going to buy a many-core CPU to encode as fast as possible in something they can tweak, set up custom settings in, etc.
Average people aren't going to go out and buy Sandy Bridge just for the encoding bit, and average people still typically aren't going to buy a GeForce for GPGPU (although it's a minor factor in the purchasing decision for some). For the vast majority of the market it simply won't really matter.

Basically that.
There is simply no market at all for the current implementation of Nvidia GPGPU encoding. Hardcore encoders refuse to sacrifice one iota of quality for speed (and Badaboom produces subpar-quality videos, so...), while casual encoders simply don't care, period. It is a market-less product...
Now, if they managed to accelerate a top-quality encoder like x264 without sacrificing quality, that would have a pretty big impact on the hardcore/professional markets.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
No, it's comparing to the old 260. From the OC article the numbers are taken from:



http://www.overclockersclub.com/reviews/badaboom_1_1_1/3.htm

Looks like right now Intel's upcoming part, in a best-case scenario, may be able to run with a couple-year-old video card for transcoding using dedicated logic. Certainly a big improvement, but nothing that I see keeping nV's engineers up at night.



The GTS450 that launched today runs Wolfenstein at 16x10 with 4x AA ~20% faster than four 32-core servers running it at 12x7 with no AA. Considering most people are saying the GTS450 is priced too high at $130 for its performance level, quad servers may be *a little* too pricey for their performance ;)

Talk about spin. Look, high in the sky, what is that? Super NV? Nah, that's not it. Super Intel? No, that's not it. It's Super FUD.

You left out that the 4 servers were running 4 32-core Knights Ferry processors doing cloud compute in real time. The game was running on a separate notebook, I believe, and it was done with ray tracing.
A link to NV doing the same right now would work for me. So if that part of your little speech is pure BS, so must be the rest.

Personal attacks and insults are not acceptable.

Moderator Idontcare
 
Last edited by a moderator:

extra

Golden Member
Dec 18, 1999
1,947
7
81
I scraped together the numbers from today's Sandy Bridge demo and Badaboom and tried to compare apples to apples as much as possible at this point:

http://lee.hdgreetings.com/2010/09/intel-cpu-vs-nvidia-gpu-video-transcoding.html

Is this bad for NVidia?

No, it's irrelevant to Nvidia...we don't have enough info to come to any conclusion at all at this point.

The link is completely invalid due to the extremely small sample time. However, if we take it as gold, then:

1. SB running some sort of optimized binary, producing who the hell knows what kind of quality video (maybe really good, maybe bad), is a nice bit, but not a ton, faster than an existing i7 920.

2. An older-model mid-range Nvidia card (a 260, NOT a 460) is a bit slower than SB and a bit faster than an existing i7 920. However, that is using Badaboom, which produces really bad-looking footage, so who cares how fast it is.

BTW, just going by a fun numbers game: using DVDFab and CUDA-accelerated video transcoding (on a file pre-copied from a regular DVD to an SSD, for laughs, because the optical drive is too slow to keep the card working very hard, encoding to 720x480 H.264), my OC'd GTX 470 transcodes *over five times faster* than that SB sample. Yeah. Woohoo, go Fermi, and all that--except no, not really... because:

The video looks... well... bad. (Just like video encoded with Badaboom does.) So I don't use it, and I don't think anyone else really does either.

So if Intel's optimized transcoding engine is used like a GPU and produces crappy-looking footage, *golf claps for Intel*.

However, if it's general-purpose enough that the Handbrake folks can use it, and it can get a 25% or more transcoding improvement over an i7, then sweet, job well done.
 

extra

Golden Member
Dec 18, 1999
1,947
7
81
The GTS450 that launched today runs Wolfenstein at 16x10 with 4x AA ~20% faster than four 32-core servers running it at 12x7 with no AA. Considering most people are saying the GTS450 is priced too high at $130 for its performance level, quad servers may be *a little* too pricey for their performance ;)

The problem with Intel's ray tracing demos isn't necessarily so much the speed; it's that they all look really bad, lol. The graphics might have been impressive 10 years ago. Even Avatar and such were done with polygons... The Wolfenstein demo looked alright, but it was more of the same, sadly--the demo has nice-looking refraction/reflections and the rest looks really horrible. And there is really horrible aliasing in some of the scenes (bad enough to be noticed even on YouTube). :( This would not look acceptable at all in a real game.

What they need to demo is, say, a high-end Intel CPU with a high-end AMD or Nvidia GPU, set up so that the GPU renders most of the scene really nicely and the CPU then does some reflection/refraction/global-illumination work as needed---combine the two.
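
Structurally, something like this per frame (a minimal sketch; the function names are hypothetical placeholders rather than any real API -- in practice the raster pass would be D3D/OpenGL on the GPU and the trace pass would run on the CPU cores):

Code:
#include <stdio.h>

/* Hypothetical placeholders for the two halves of a hybrid renderer. */
static void raster_pass_gpu(void) {
    /* GPU rasterizes the bulk of the scene at full speed */
}
static void trace_extras_cpu(void) {
    /* CPU ray-traces reflection/refraction/GI samples,
       only for the pixels that actually need them */
}
static void composite(void) {
    /* blend the traced samples over the rasterized image */
}

int main(void) {
    for (int frame = 0; frame < 3; frame++) {
        raster_pass_gpu();   /* fast path: most pixels never touch the CPU */
        trace_extras_cpu();  /* expensive path, kept to a small pixel subset */
        composite();
        printf("frame %d composited\n", frame);
    }
    return 0;
}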
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
You left out that the 4 servers were running 4 32-core Knights Ferry processors doing cloud compute in real time.

I think anyone reading these forums can figure out that anything running on 4 different servers isn't running on a desktop PC natively. Perhaps I made a stretch and overestimated, but I really don't think so.

The game was running on a separate notebook, I believe, and it was done with ray tracing.

They could choose to render using whatever method they saw fit. Using 4 servers, they got smoked by a $130 graphics card, badly.

A link to NV doing the same right now would work for me.

Making an old game run terribly and look awful? I may be able to find footage of an FX5200 running the game; that may be comparable.

However, that is using Badaboom, which produces really bad-looking footage, so who cares how fast it is.

I have the latest versions of Handbrake and Badaboom; where do you see major quality differences? I get a slightly smaller file with Handbrake for portable devices. For high-res files it seems to depend on the scene: sometimes Badaboom comes out better, sometimes Handbrake (both mainly revolving around softness issues), with comparable file sizes.
 

Seero

Golden Member
Nov 4, 2009
1,456
0
0

brybir

Senior member
Jun 18, 2009
241
0
0
/snip
They could choose to render using whatever method they saw fit. Using 4 servers, they got smoked by a $130 graphics card, badly.

Making an old game run terribly and look awful? I may be able to find footage of an FX5200 running the game; that may be comparable.

/snip

Your statement above is disingenuous at best. They chose to demonstrate ray tracing. Real-time ray tracing in useful applications is a major hurdle and something many companies are eager to show off, for various reasons. Your statement that they got "smoked" by a $130 graphics card is simply false, at least unless you show that a $130 graphics card is capable of better ray tracing performance. That was his question, and that would be the comparison.

If you want to get into the debate over whether ray tracing is necessary, superior, etc. to rasterization-based methods, fine, but that is an entirely different discussion. Nemesis' point was that he wanted to see what it would take on comparable Nvidia hardware to accomplish the same task (i.e. ray tracing that particular game).
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Basically that.
There is simply no market at all for the current implementation of Nvidia GPGPU encoding. Hardcore encoders refuse to sacrifice one iota of quality for speed (and Badaboom produces subpar-quality videos, so...), while casual encoders simply don't care, period. It is a market-less product...
Now, if they managed to accelerate a top-quality encoder like x264 without sacrificing quality, that would have a pretty big impact on the hardcore/professional markets.

+1.

Intel has a lot of influence with encoding, and adding this capability to existing encoders (as taltamir suggests above) would be HUGE. If I get an SB and most major encoding software gets updated to take advantage of the transcoding capabilities, that would be a huge boon for everyone. It is almost like adding a brand-new instruction set to a processor family: it works differently (of course), but it adds a new built-in feature to the family. This has huge implications.
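
For a sense of how software picks up a new capability at runtime, here is a minimal detect-and-dispatch sketch using GCC/Clang builtins (the encode functions are hypothetical placeholders; dedicated transcode hardware would in practice be exposed through a driver API rather than an instruction-set check, so this only illustrates the pattern):

Code:
#include <stdio.h>

/* Hypothetical encode paths; encoders like x264 dispatch per-CPU
   in exactly this detect-then-branch fashion. */
static void encode_generic(void) { puts("encoding with generic C path"); }
static void encode_avx(void)     { puts("encoding with AVX path"); }

int main(void) {
    __builtin_cpu_init();  /* GCC/Clang: populate the CPU feature flags */
    if (__builtin_cpu_supports("avx"))
        encode_avx();      /* Sandy Bridge and later land here */
    else
        encode_generic();
    return 0;
}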
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Your statement above is disingenuous at best. They chose to demonstrate ray tracing. Real-time ray tracing in useful applications is a major hurdle and something many companies are eager to show off, for various reasons. Your statement that they got "smoked" by a $130 graphics card is simply false, at least unless you show that a $130 graphics card is capable of better ray tracing performance. That was his question, and that would be the comparison.

If you want to get into the debate over whether ray tracing is necessary, superior, etc. to rasterization-based methods, fine, but that is an entirely different discussion. Nemesis' point was that he wanted to see what it would take on comparable Nvidia hardware to accomplish the same task (i.e. ray tracing that particular game).


That's a big 10-4.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
Honestly, casual users won't care.
For 'hardcore' users, it's not about speed, it's about quality.
If you get a speed increase with crap quality or minimal options, they won't care.
Casual users probably don't think that hard about exactly how fast; 'faster' is nice enough.

When I hear about CPU+IGP (they call it CPU+GPU) on the same die, I always get the picture of two midgets (AMD's and Intel's CPU+IGP) having a slugfest in the ring while MMA fighters (real GPUs) look on and laugh...
 

rolodomo

Senior member
Mar 19, 2004
269
9
81
Basically that.
There is simply no market at all for the current implementation of Nvidia GPGPU encoding. Hardcore encoders refuse to sacrifice one iota of quality for speed (and Badaboom produces subpar-quality videos, so...), while casual encoders simply don't care, period. It is a market-less product...
Now, if they managed to accelerate a top-quality encoder like x264 without sacrificing quality, that would have a pretty big impact on the hardcore/professional markets.

Come on, "no market at all"? At the outset, doesn't that seem a bit unlikely? Adobe Premiere CS5, based on the Mercury (CUDA) engine, is making waves right now with power users of video editing software, which includes encoding.

Here's an interesting thread, which also discusses more casual software using GPGPU (and not just Badaboom): http://forums.anandtech.com/showthread.php?t=2103704