Hulk
Diamond Member
Here's Doom9's conclusion on reference vs. MMX encoding:
Conclusions
By now you might be wondering: What is this all about? Well... it's really useful to illustrate my point.
I've watched these clips a couple of times and I failed to see any noticeable difference. Of course, I was watching the clips without sound and with all the lights in my room turned off to get the full movie feeling. I also upscaled the movies to full screen (1152x864) in order to spot errors more easily. From previous tests and my bitrate tips you might already know that when it comes to quality I tolerate no compromise. But here, really, I tried hard to spot a difference and didn't succeed.
As you can see from the screenshots, the difference between the MMX iDCT and the IEEE-1180 reference iDCT is not noticeable. Neither in the 1500 kbit/s clips the screenshots were taken from, nor at lower or higher bitrates.
On Tom's Hardware Guide it was written:
"However, this tends to produce a lot of artifacts in the final MPEG-4 video because all the pixel values of the decoded frames are approximations. Thus when a second DCT transform is applied to convert it to MPEG-4 it tends to approximate again and produce really horrible artifacts in some cases.
Using the IEEE decode eliminates most of these artifacts and produces an output that rivals most DVDs when set to about 20% of the original bit rate (1.5mbps for a 7.5mbps DVD like Matrix). "
That's exactly what I did: I encoded Matrix at 1500 kbit/s. And yet: THERE IS NO DIFFERENCE. Sorry, Toby, but you're wrong. My eyesight is still very good, and you can believe me that I spot encoding errors where others won't notice any flaws. But in this case the IEEE reference algorithm provided no noticeable benefit, except a more than three times longer encoding time.
Or the short version: You can sleep well again in the knowledge that you won't have to reencode all your DVDs and that you won't have to buy a 1.2GHz Athlon just to be able to encode DivX at a lousy 6fps.
What the article on Tom's Hardware Guide illustrates is only this: AMD has a much better FPU than Intel. Maybe that will have an impact on MPEG-2 encoding, but it sure does not have one on MPEG-4 encoding. I think the reference-quality iDCT is simply taken from the reference implementation by the MPEG Software Simulation Group. Usually these implementations are good in terms of quality, but they suck pretty badly when it comes to speed. If a software DVD player used the reference iDCT you'd be watching slideshows for the next 5 years... all useful iDCT algorithms have been at least MMX optimized, though that doesn't necessarily mean that the one used in FlaskMpeg is the best one; there certainly are better ones.
Let's also consider this: in the initial article about DivX there were a couple of nasty errors on Tom's part. Blight - www.inmatrix.com - pointed out most of these. Among them was the suggestion to use nearest-neighbor resizing, which results in really bad encoding. That has been amended since, but they still write that for small file sizes you should use that kind of filtering. But believe me, you don't want to use it; it looks terrible. And I don't think that any IEEE reference algorithm can fix what this kind of filtering destroyed.
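For what it's worth, the double-approximation effect the Tom's Hardware quote describes is easy to sketch numerically. The following is a minimal NumPy illustration under my own assumptions (an 8x8 orthonormal DCT with naive integer rounding standing in for real quantization; this is not FlaskMpeg's or any codec's actual code): each DCT -> quantize -> iDCT pass introduces a bounded rounding error, and a second pass (the MPEG-2 to MPEG-4 re-encode) stacks on top of the first. Whether that extra error is actually visible is exactly what Doom9's test disputes.

```python
import numpy as np

N = 8  # standard MPEG block size

def dct_matrix(n=N):
    # Orthonormal DCT-II basis matrix: rows are frequencies, columns samples.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

C = dct_matrix()

def roundtrip(block):
    # Forward 2-D DCT, integer-round the coefficients (a crude stand-in
    # for quantization), inverse-transform, and round back to pixel values.
    coeffs = np.round(C @ block @ C.T)
    return np.round(C.T @ coeffs @ C)

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (N, N)).astype(float)

once = roundtrip(block)          # first encode/decode generation
twice = roundtrip(once)          # re-encode of the decoded frame

print("max pixel error after one pass: ", np.abs(once - block).max())
print("max pixel error after two passes:", np.abs(twice - block).max())
```

The point of the sketch is only that each generation's error is bounded and small per pass; Doom9's argument is that at these magnitudes the difference between a good fast iDCT and the reference one disappears below what the eye can see.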
Here's the link to the site:
Text