dvd compared to film


Peter

Elite Member
Oct 15, 1999
9,640
1
0
Lossless compression, or decompression and re-encoding, are not data processing in the sense of computing. That's just re-storage of the same data using a different encoding.

Decoding MPEG and re-encoding at a different resolution, and then doing it backwards, will NOT have you end up with the same data. That's because MPEG is an algorithmic, inherently always lossy procedure.

For perfect AD conversion, you don't need "very high" bandwidth, you need infinite bandwidth AND infinite resolution. In the real world, you're converting a continuous signal into discrete points of quantized information.
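A minimal Python sketch of both points, with arbitrary example values (nothing here is from the thread): a lossless re-encoding round-trips bit-for-bit, while quantization always discards something.

```python
import zlib
import math

# Lossless re-encoding: compress and decompress, and the original bytes come back bit-for-bit.
original = b"the same data stored under a different encoding" * 100
assert zlib.decompress(zlib.compress(original)) == original

# Quantization: sampling a continuous value at finite resolution discards information.
def quantize(x, bits):
    levels = 2 ** bits
    return round((x + 1.0) / 2.0 * (levels - 1)) / (levels - 1) * 2.0 - 1.0

sample = math.sin(1.234)              # stand-in for a "continuous" signal value
for bits in (8, 16, 24):
    q = quantize(sample, bits)
    print(f"{bits:2d}-bit quantization error: {abs(sample - q):.2e}")
# The error shrinks as resolution grows but never reaches zero with finite bits.
```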
 

brotherkane

Junior Member
Dec 4, 2004
15
0
0
Imagine if it were possible. It would take FOREVER to rip or burn a film-quality DVD. lol

But you wouldn't notice the difference between an HD DVD disc and a mythological film-quality DVD unless you have a projector and an enormous wall to project it onto.
 

silverpig

Lifer
Jul 29, 2001
27,703
12
81
Originally posted by: brotherkane
Imagine if it were possible. It would take FOREVER to rip or burn a film-quality DVD. lol

But you wouldn't notice the difference between an HD DVD disc and a mythological film-quality DVD unless you have a projector and an enormous wall to project it onto.

Film has a graininess to it, you know. Film is NOT perfect, and digital will surpass it. You can already do better than 35mm camera film with digital still shots.
 

Tab

Lifer
Sep 15, 2002
12,145
0
76
Originally posted by: LethalWolfe
Originally posted by: Tabb
So, why did we stop using video cassettes then? That is film, right?

We stopped using VHS because it's a horribly bad format. And video cassettes are video, not film.


Lethal

It's not? What is it exactly? What's the black ribbon tape stuff then?
 

LethalWolfe

Diamond Member
Apr 14, 2001
3,679
0
0
Originally posted by: Tabb
Originally posted by: LethalWolfe
Originally posted by: Tabb
So, why did we stop using video cassettes then? That is film, right?

We stopped using VHS because it's a horribly bad format. And video cassettes are video, not film.


Lethal

It's not? What is it exactly? What's the black ribbon tape stuff then?

Magnetic tape. Video tape works, basically, the same way audio tapes (cassette tapes) do.


Lethal
 

itachi

Senior member
Aug 17, 2004
390
0
0
digital will never surpass analog. as close as we may get to perfection.. we'll always be a step shy from it. like it has already been said.. the amount of space required to contain an uncompressed film from a perfect translation would be near infinity. if every point in an image was an equivalent distance from another point except for 1 point that was 1 pm off.. our eyes sure as hell wouldn't be able to make out the difference, but a camera would.. and representing that infinitesimal variation would require a resolution with an effective area of 5.6E17 using 35mm film. films are recorded at 24 fps.. with a sampling resolution of 24-bits, the amount of space required to record 1 second would be 40.32 exabytes (4.032E19).

however, the only thing that really matters is what our eyes can and can't see. if you watched a real film and the dvd equivalent on a 20" screen from 10 feet away.. you wouldn't be able to tell the difference.
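As a rough sanity check on that arithmetic, a quick Python sketch (the 5.6E17 sample count per frame is the hypothetical figure from the paragraph above, not a real film spec):

```python
# Back-of-envelope storage estimate for the hypothetical "perfect" frame above.
points_per_frame = 5.6e17   # hypothetical sample count per frame (from the post)
bits_per_point   = 24       # 24-bit color
fps              = 24       # film frame rate

bytes_per_frame  = points_per_frame * bits_per_point / 8
bytes_per_second = bytes_per_frame * fps
print(f"{bytes_per_second:.3e} bytes/s (~{bytes_per_second / 1e18:.2f} exabytes per second)")
# -> 4.032e+19 bytes/s, i.e. about 40.32 exabytes per second, matching the post.

# For comparison, uncompressed 1080p at 24-bit color and 24 fps:
hd_bytes_per_second = 1920 * 1080 * 3 * 24
print(f"1080p uncompressed: {hd_bytes_per_second / 1e6:.0f} MB/s")
```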
 

silverpig

Lifer
Jul 29, 2001
27,703
12
81
Originally posted by: itachi
digital will never surpass analog. as close as we may get to perfection.. we'll always be a step shy from it. like it has already been said.. the amount of space required to contain an uncompressed film from a perfect translation would be near infinity. if every point in an image was an equivalent distance from another point except for 1 point that was 1 pm off.. our eyes sure as hell wouldn't be able to make out the difference, but a camera would.. and representing that infinitesimal variation would require a resolution with an effective area of 5.6E17 using 35mm film. films are recorded at 24 fps.. with a sampling resolution of 24-bits, the amount of space required to record 1 second would be 40.32 exabytes (4.032E19).

however, the only thing that really matters is what our eyes can and can't see. if you watched a real film and the dvd equivalent on a 20" screen from 10 feet away.. you wouldn't be able to tell the difference.

That's WELL off. Film contains grains which are very much larger than the atoms themselves. Furthermore, the capturing ability of film is horrible compared to that of CCDs (maybe 5% detective quantum efficiency for film as opposed to 90% under ideal CCD conditions... but 60% easily). Film is also very non-linear in its capturing response, whereas CCDs are very linear up to 80% of their full well potential.

The silver-halide grains in film are .5-3 microns in size. A good CCD will have square pixels 10 microns on a side... Digital can, partially has, and will easily in the future pass analog recording methods because of the nature of the recording.
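A back-of-envelope comparison of grain count vs. pixel count, assuming a roughly 22 mm x 16 mm 35mm motion-picture frame (that frame size is my assumption; the grain and pixel sizes are the ones quoted above):

```python
# Rough comparison of film grain sites vs. CCD pixels per frame.
frame_w_um, frame_h_um = 22_000, 16_000   # ~22 mm x 16 mm 35mm motion-picture frame (assumed)
grain_um = 1.0                            # silver-halide grain, within the 0.5-3 micron range quoted
pixel_um = 10.0                           # CCD pixel pitch quoted above

grains = (frame_w_um / grain_um) * (frame_h_um / grain_um)
pixels = (frame_w_um / pixel_um) * (frame_h_um / pixel_um)
print(f"~{grains:.1e} grain sites vs ~{pixels:.1e} CCD pixels per frame")
# Grains aren't clean binary pixels and clump randomly, so a raw site count
# overstates film's usable resolution; quantum efficiency and linearity matter too.
```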
 

DrPizza

Administrator Elite Member Goat Whisperer
Mar 5, 2001
49,601
167
111
www.slatebrookfarm.com
Originally posted by: itachi
digital will never surpass analog. as close as we may get to perfection.. we'll always be a step shy from it. like it has already been said.. the amount of space required to contain an uncompressed film from a perfect translation would be near infinity. if every point in an image was an equivalent distance from another point except for 1 point that was 1 pm off.. our eyes sure as hell wouldn't be able to make out the difference, but a camera would.. and representing that infinitesimal variation would require a resolution with an effective area of 5.6E17 using 35mm film. films are recorded at 24 fps.. with a sampling resolution of 24-bits, the amount of space required to record 1 second would be 40.32 exabytes (4.032E19).

however, the only thing that really matters is what our eyes can and can't see. if you watched a real film and the dvd equivalent on a 20" screen from 10 feet away.. you wouldn't be able to tell the difference.

That's not quite the only thing that matters... besides the lens in the eye, we also have the limitation of the lens in whichever camera is used. No camera lens will ever be perfect, and therein lies the limitation for both analog and digital. Neither will overcome it, regardless of improvements in CCDs or film. Digital is already at the stage where differences in lens quality contribute to differences in image quality: there are 5-megapixel cameras with inferior optics whose images aren't as accurate as those of some 3-megapixel cameras with high-quality lenses.
 

Tab

Lifer
Sep 15, 2002
12,145
0
76
Originally posted by: LethalWolfe
Originally posted by: Tabb
Originally posted by: LethalWolfe
Originally posted by: Tabb
So, why did we stop using video cassettes then? That is film, right?

We stopped using VHS because it's a horribly bad format. And video cassettes are video, not film.


Lethal

It's not? What is it exactly? What's the black ribbon tape stuff then?

Magnetic tape. Video tape works, basically, the same way audio tapes (cassette tapes) do.


Lethal

So, the magnetic tape has 1010101010s on it and the VCR figures out from there what to put on the screen?
 

silverpig

Lifer
Jul 29, 2001
27,703
12
81
Originally posted by: DrPizza
Originally posted by: itachi
digital will never surpass analog. as close as we may get to perfection.. we'll always be a step shy from it. like it has already been said.. the amount of space required to contain an uncompressed film from a perfect translation would be near infinity. if every point in an image was an equivalent distance from another point except for 1 point that was 1 pm off.. our eyes sure as hell wouldn't be able to make out the difference, but a camera would.. and representing that infinitesimal variation would require a resolution with an effective area of 5.6E17 using 35mm film. films are recorded at 24 fps.. with a sampling resolution of 24-bits, the amount of space required to record 1 second would be 40.32 exabytes (4.032E19).

however, the only thing that really matters is what our eyes can and can't see. if you watched a real film and the dvd equivalent on a 20" screen from 10 feet away.. you wouldn't be able to tell the difference.

That's not quite the only thing that matters... besides the lens in the eye, we also have the limitation of the lens in whichever camera is used. No camera lens will ever be perfect, and therein lies the limitation for both analog and digital. Neither will overcome it, regardless of improvements in CCDs or film. Digital is already at the stage where differences in lens quality contribute to differences in image quality: there are 5-megapixel cameras with inferior optics whose images aren't as accurate as those of some 3-megapixel cameras with high-quality lenses.

But you can actually get optics of high enough quality that any distortions are completely negligible. There are a lot of devices where the glass/mirrors are not the limiting factor. In fact, a lot of very high-quality devices are diffraction limited, so you'll see an Airy profile around point sources on whatever medium you use to record your image.
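To put a number on "diffraction limited": the Airy disk diameter at the image plane is roughly 2.44 x wavelength x f-number. A quick sketch, with arbitrary example f-numbers:

```python
# Diffraction-limited spot size (Airy disk diameter) at the image plane.
wavelength_um = 0.55          # green light, ~550 nm
for f_number in (1.4, 2.8, 5.6, 11):
    airy_um = 2.44 * wavelength_um * f_number
    print(f"f/{f_number}: Airy disk ~{airy_um:.1f} microns")
# At f/2.8 the spot is ~3.8 microns: already bigger than a fine film grain
# (0.5-3 microns) but smaller than a 10-micron CCD pixel, so with good glass
# the recording medium, not the optics, is often the limiting factor.
```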
 

LethalWolfe

Diamond Member
Apr 14, 2001
3,679
0
0
Originally posted by: Tabb
Originally posted by: LethalWolfe
Originally posted by: Tabb
Originally posted by: LethalWolfe
Originally posted by: Tabb
So, why did we stop using video cassettes then? That is film, right?

We stopped using VHS because it's a horribly bad format. And video cassettes are video, not film.


Lethal

It's not? What is it exactly? What's the black ribbon tape stuff then?

Magnetic tape. Video tape works, basically, the same way audio tapes (cassette tapes) do.


Lethal

So, the magnetic tape has 1010101010s on it and the VCR figures out from there what to put on the screen?

For digital formats like MiniDV you are correct. But for analog formats, like VHS, there are no ones and zeros. To make this very quick and dirty: when you record onto analog tape (like VHS), the image being recorded is turned into an electronic signal and, by way of a magnet, that signal is "impressed" onto the videotape. When the videotape is played back, another magnet reads that impression, which is turned into an electrical signal, which is then turned into a visible image on your TV. In this case the difference between analog and digital is that a digital format (like MiniDV) records/reads 1s and 0s instead of an analog "impression."



Something everyone needs to keep in mind is that, at least in this thread, the digital/analog debate has left reality and gone well into the realm of the theoretical. Assuming you had a "perfect" analog recording device and a "perfect" digital recording device, the analog device would probably record a better image/sound, but the difference would probably be so insignificant as to be nonexistent. Of course, the problem is we'll never create a recording device and medium that are anywhere close to perfect.

The biggest problem facing HD in Hollywood isn't quality, it's aesthetics. HD just doesn't look like film, and when people go see a movie they are used to seeing film. Knowingly or not, they associate all the pros and cons of film with the movie-going experience. So now you have video cameras that shoot 24p and have gamma curves closer to that of film, plus a host of post-production tips and tricks designed to make video look like film. Not because of inferior quality, but because "untreated" video does not look like film.


Lethal
 

halfadder

Golden Member
Dec 5, 2004
1,190
0
0
Many films today are edited and have effects added digitally. Most are shot on film, and the negatives are scanned in at 2K or 4K resolution and manipulated on a computer. Some movies, such as Star Wars Episode II, are simply shot in HDCAM (1080/24p). But even a 4K scan doesn't compare to the original film when it comes to quality.

There are other factors to consider, though. Raw images, regardless of whether they're film or digital, need enhancement. Almost every movie has had significant color correction to make each scene look better and to make back-to-back scenes match up. After all, it takes weeks to months to film a 90-minute movie. When it's all said and done, a 4K movie looks pretty nice, and the final 2K digital cinema release is still good. It may not be as sharp as a first-run film print struck from the 4K master... but after a few days of playing and handling by minimum-wage teenage multiplex employees, the dust and scratches will really degrade the quality.

Digital is the future for projection... but film will probably be the preferred input method for a long time. The move to pure digital probably won't happen until movies move from the current 24 FPS to 48 or 60 FPS. That would be A LOT of film for the cameras!
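To put the "A LOT of film" remark in numbers, a rough sketch assuming standard 4-perf 35mm stock at 16 frames per foot and an example two-hour runtime:

```python
# How much camera negative a feature would consume at different frame rates.
frames_per_foot = 16          # standard 4-perf 35mm
runtime_min = 120             # example feature length

for fps in (24, 48, 60):
    feet = fps * 60 * runtime_min / frames_per_foot
    print(f"{fps} fps: ~{feet:,.0f} feet of film ({feet * 0.3048:,.0f} m)")
# 24 fps already burns ~10,800 feet for two hours of finished picture; 48 or
# 60 fps doubles or more than doubles that before you even shoot multiple takes.
```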
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Peter
Lossless compression, or decompression and re-encoding, are not data processing in the sense of computing. That's just re-storage of the same data using a different encoding.

OK, well, now it sounds like you're just defining "data processing" as "a computation that causes data loss". That's a bit circular if you ask me. :p

Decoding MPEG and re-encoding at a different resolution, and then doing it backwards, will NOT have you end up with the same data. That's because MPEG is an algorithmic, inherently always lossy procedure.

Perhaps MPEG was a bad example; the kind of filtering it does *is* lossy (and so you wouldn't get the exact same data back out). However, if you're careful with how you do your up- and down-sampling, you can manage it so that you don't lose any information; it's just a matter of establishing a 1-to-1 mapping between the pixels of the lower-resolution and higher-resolution frames. As a trivial example, scaling by point sampling, while ugly, is completely reversible.
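A tiny sketch of that point-sampling claim, using a made-up one-dimensional row of pixels: replicate each pixel by an integer factor, then take every Nth sample, and the original comes back exactly.

```python
# Nearest-neighbour (point-sample) upscale by an integer factor, then decimate back.
def upscale(pixels, factor):
    return [p for p in pixels for _ in range(factor)]

def downscale(pixels, factor):
    return pixels[::factor]

original = [10, 200, 37, 90, 150]
big = upscale(original, 3)            # [10, 10, 10, 200, 200, 200, ...]
assert downscale(big, 3) == original  # exact round trip: nothing was lost
```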

For perfect AD conversion, you don't need "very high" bandwidth, you need infinite bandwidth AND infinite resolution. In the real world, you're converting a continuous signal into discrete points of quantized information.

The Nyquist theorem? For fully capturing visual information, however, you would theoretically need a sample rate of more than twice the highest frequency of visible light, which is a tad impractical.

I suppose you can argue that continuous analog signals in the 'real world' have a potentially infinite bandwidth. However, there are limits to the frequencies of light and sound that we are capable of perceiving, and to the effective resolution (and reaction speed) of our eyes and optic nerves. While you may not be able to perfectly reproduce a 'natural' signal down to the most infinitesimal of details, you can certainly reproduce it to the point where it is indistinguishable to human senses.
 

RampantAndroid

Diamond Member
Jun 27, 2004
6,591
3
81
DVD is much better than film... more durable, and the quality differences YOU WON'T NOTICE!!! Seriously, for one, there is signal degradation in analog that you don't get in digital (component setup). True, it can only be equal to the original, but the quality difference is minute... and moot.
 

0roo0roo

No Lifer
Sep 21, 2002
64,795
84
91
but as for digital not being good enough, ever? COMPLETE CRAP. you know the lord of the rings? great looking movie, right? it was shot on film, dumped into a COMPUTER, graded to change the lighting and hues, edited, and then ultimately dumped back out onto film. but it still looks amazing.

well the thing is they scan and render at resolutions far, far beyond even HDTV, so it's irrelevant for current "digital" displays.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Originally posted by: Matthias99
OK, well, now it sounds like you're just defining "data processing" as "a computation that causes data loss". That's a bit circular if you ask me. :p

Not really. Merely converting the same dataset into a different encoding isn't exactly computing anything.

Decoding MPEG and re-encoding at a different resolution, and then doing it backwards, will NOT have you end up with the same data. That's because MPEG is an algorithmic, inherently always lossy procedure.

Perhaps MPEG was a bad example; the kind of filtering it does *is* lossy (and so you wouldn't get the exact same data back out). However, if you're careful with how you do your up- and down-sampling, you can manage it so that you don't lose any information; it's just a matter of establishing a 1-to-1 mapping between the pixels of the lower-resolution and higher-resolution frames. As a trivial example, scaling by point sampling, while ugly, is completely reversible.

Yes. That again is because then you're merely re-encoding, not computing. You're drawing the same image bigger, using bigger pixels.

Data loss inevitably occurs when you actually compute stuff.

In your example of point sampling, even once you start interpolating for the upscaling, you're still not losing the original pixels. While this is satisfying, you might miss the subtle fact that you're losing something anyway, namely the information about which pixels were the original ones, because you've just added interpolated ones in between. You're not adding new information, because the interpolated pixels are the results of computation performed on their original neighbours.
Downscale that back, e.g. by an averaging algorithm, and you're messing it up further, because you'll then be averaging original pixels with their interpolated offspring. That boils down to a complete (!) loss of the original dataset in two steps.
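A toy version of that two-step loss, using made-up numbers: upscale a row with linear interpolation, downscale it again by averaging neighbours, and the original values are gone.

```python
# Upscale x2 with linear interpolation, then downscale by averaging adjacent pairs.
def upscale_linear(pixels):
    out = []
    for i, p in enumerate(pixels):
        nxt = pixels[i + 1] if i + 1 < len(pixels) else p
        out += [p, (p + nxt) / 2]      # keep the original, add an interpolated neighbour
    return out

def downscale_average(pixels):
    return [(pixels[i] + pixels[i + 1]) / 2 for i in range(0, len(pixels) - 1, 2)]

original   = [10, 200, 37, 90, 150]
round_trip = downscale_average(upscale_linear(original))
print(original)    # [10, 200, 37, 90, 150]
print(round_trip)  # [57.5, 159.25, 50.25, 105.0, 150.0]; the original values are gone
```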
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Peter
Originally posted by: Matthias99
OK, well, now it sounds like you're just defining "data processing" as "a computation that causes data loss". That's a bit circular if you ask me. :p

Not really. Merely converting the same dataset into a different encoding isn't exactly computing anything.

Look. I understand what you're getting at, but this is just a goofy definition of 'computing'. First off, it excludes any function or algorithm that has a 1-to-1 mapping (you claim these are just 're-encoding the same data', which I suppose is true, but it ignores the work done in the evaluation of the function). Second, you are under no obligation to lose the original information/input when evaluating a function or running an algorithm. You said "2+2 = 4" causes a loss of information -- well, this is true only if you throw away the inputs (2 and 2) and only keep the output (4).

I just don't see what you're doing other than defining 'computation' as 'a function or algorithm with a many-to-one input->output mapping'.
 

winterlude

Senior member
Jun 6, 2001
225
0
0
All things being equal, 35mm film is far higher quality than DVD or HDTV. The human eye, which is a lousy piece of optical equipment, is far greater than either of them.
HDTV has been in Japan since the 80's. They're already planning to move on to higher resolutions:

SHDTV (3840 x 2160)
UHDTV (7680 x 4320)

Digital will pass 35mm. It's inevitable. It won't be an extension of CMOS and other such technology. That sort of thinking is like thinking a better weapon than a board with a nail in it is a bigger board with a bigger nail. Sure it is, but I'd bet on a guy with a gun.
The Foveon sensor, for example, is the type of emerging technology that will make a difference, since it is digital imaging that acts like film.

35mm is not that great. It only seems that way, just the way snowy black-and-white TV seemed great in the '50s. It's human nature to get excited about new tech, and also human nature to compare new tech to old tech. Any TV was better than no TV. HD is better than the 200-250 lines of interlaced feed we get from TV today (DTV is 400 lines, same as most DVDs). If you've got an interlaced set, or a progressive one for that matter, move right up close to it and you'll notice, when you're 2 inches away, how bad the image is. This may be a moot point since no normal person will watch TV or DVDs this way, but the point is to keep things in perspective. When we look at something from two or three inches away in the "real" world (the back of your hand, for example), we just see things more clearly, in higher resolution.

70mm, and then IMAX resolution (10 times the size of 35mm), would be the next milestones for digital once it eventually exceeds 35mm. Those IMAX producers need it badly. Hauling that equipment around is no walk in the park. It'll be some years of waiting for them, though.
Does resolution get better than IMAX? Of course it does. It's just that IMAX was limited by logistics and cost. One of the most enduring appeals of end-to-end digital is the convenience. Think of digital cameras. Film is better, particularly where professional analogue cameras are concerned, but digital is far more convenient.

Also, not all DVDs are created equal. Pop a few of them in your DVD-ROM drive and look at the size of the files in the VIDEO_TS folder. Granted, a lot of it is audio (Dolby 5.1, 2.0, other languages, director's commentary, all embedded in the VOB files), but some 100-minute movies fit on a 4.35 GB disc, while others barely fit on a 9 GB disc. The difference? Compression. Provided that the original 35mm was in focus, clean, etc., the DVD-9 will look much better. One way to examine your DVD's resolution is with the zoom: zoom right in while playing and you'll see how blocky DVDs are.
Although HDTV images are cleaner, technology has a long way to go before catching up with biology: 20/20 vision.
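To see how much compression that size difference implies, a quick average-bitrate estimate using the rough figures above (100 minutes on a 4.35 GB vs. a 9 GB disc):

```python
# Average total bitrate implied by fitting a 100-minute title on each disc size.
runtime_s = 100 * 60
for label, gigabytes in (("DVD-5 (~4.35 GB)", 4.35), ("DVD-9 (~9 GB)", 9.0)):
    bits = gigabytes * 1e9 * 8
    print(f"{label}: ~{bits / runtime_s / 1e6:.1f} Mbit/s average")
# Roughly 5.8 vs 12 Mbit/s for everything on the disc (video plus audio tracks),
# so the DVD-9 copy can spend about twice as many bits per second on the picture.
```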
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Look. I understand what you're getting at, but this is just a goofy definition of 'computing'. First off, it excludes any function or algorithm that has a 1-to-1 mapping (you claim these are just 're-encoding the same data', which I suppose is true, but it ignores the work done in the evaluation of the function). Second, you are under no obligation to lose the original information/input when evaluating a function or running an algorithm. You said "2+2 = 4" causes a loss of information -- well, this is true only if you throw away the inputs (2 and 2) and only keep the output (4).

I just don't see what you're doing other than defining 'computation' as 'a function or algorithm with a many-to-one input->output mapping'.

I see I lost you there. Originally, we weren't trying to find a definition for "computation"; we were discussing data conversion vs. data processing. Of course, data conversion, aka re-encoding, also requires computation. That's why we've been using computers for re-encoding at least since the British cracked Enigma ;)

The reason behind data processing (like 2+2=4) is exactly to not hand out the original data (screen wobbles, crossfades to actual topic). When you're selling a DVD or transmitting a JPG to someone else, you're giving them the processed results. If you passed the original data along, you could leave out the processed data, because it would just be adding redundancy. If you say "4" to someone, they can't deduce whether you did 2+2 or took the 15th root of 1073741824. Loss of information occurs.
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Peter
I see I lost you there. Originally, we weren't trying to find a definition for "computation"; we were discussing data conversion vs. data processing. Of course, data conversion, aka re-encoding, also requires computation. That's why we've been using computers for re-encoding at least since the British cracked Enigma ;)

OK. This clarifies your position a bit. :p

The reason behind data processing (like 2+2=4) is exactly to not hand out the original data (screen wobbles, crossfades to actual topic). When you're selling a DVD or transmitting a JPG to someone else, you're giving them the processed results. If you passed the original data along, you could leave out the processed data, because it would just be adding redundancy. If you say "4" to someone, they can't deduce whether you did 2+2 or took the 15th root of 1073741824. Loss of information occurs.

Frankly, I still think it's just semantics. While it's true that if I tell someone the answer is "4", they can't tell what the question was (in this case, 2+2), this is not implicit in the definition of how you process the data. There's nothing that stops me from giving them the left-hand side of the equation as well -- this does make the answer 'redundant', but in a lot of cases, getting the output takes a huge amount of resources compared to sending both the input and output. And sometimes the output is meaningless without knowing at least something about the input as well.

I guess I just don't "get it". The information loss is only occurring because you're throwing some of the information away, not as a direct result of the computation.
 

itachi

Senior member
Aug 17, 2004
390
0
0
Originally posted by: silverpig
That's WELL off. Film contains grains which are very much larger than the atoms themselves. Furthermore, the capturing ability of film is horrible compared to that of CCDs (maybe 5% detective quantum efficiency for film as opposed to 90% under ideal CCD conditions... but 60% easily). Film is also very non-linear in its capturing response, whereas CCDs are very linear up to 80% of their full well potential.
mm.. shouldn't you be comparing the quality of the image between CCDs and 35/70mm film used in cinematography?
The silver-halide grains in film are .5-3 microns in size. A good CCD will have square pixels 10 microns on a side... Digital can, partially has, and will easily in the future pass analog recording methods because of the nature of the recording.
i'll agree that digital is far more practical, its nature makes it superior in the long run, and in the future the difference will become negligible (actually, i don't even know what i'm comparing it to right now.. there aren't even any digital projectors for a theater-scale venue). but i don't believe that digital will ever surpass the potential of analog, given their nature.
but there are probably a lot of things wrong with the way i'm thinking.. and i'm sure somebody will point it out. got finals now tho.. so i won't be responding for a while.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Originally posted by: Matthias99
I guess I just don't "get it". The information loss is only occurring because you're throwing some of the information away, not as a direct result of the computation.

You're almost there ... the last thing you need to "get" is the fact that you're processing the data exactly because the original data aren't going to go to the receiving end of the entire procedure. Either because the receiver doesn't want to know, or because transmitting/transporting the original data doesn't make sense or is impossible, for whatever reason, or because the receiver doesn't have the capabilities of processing them into the desired result.

You're doing 2+2 and say "that'll be four dollars please" because your listener is too drunk to sum up their bill; videos are being MPEG-compressed to fit onto DVD media; audio is MP3ed to fit a portable flash storage; etc. etc. The receiver of the result doesn't have the original data ... and of course, the reverse case makes just as much sense: If you're going to transmit the original data anyway, then there's no point in processing them beforehand.
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Peter
Originally posted by: Matthias99
I guess I just don't "get it". The information loss is only occurring because you're throwing some of the information away, not as a direct result of the computation.

You're almost there ... the last thing you need to "get" is the fact that you're processing the data exactly because the original data aren't going to go to the receiving end of the entire procedure. Either because the receiver doesn't want to know, or because transmitting/transporting the original data doesn't make sense or is impossible, for whatever reason, or because the receiver doesn't have the capabilities of processing them into the desired result.

You're doing 2+2 and say "that'll be four dollars please" because your listener is too drunk to sum up their bill; videos are being MPEG-compressed to fit onto DVD media; audio is MP3ed to fit a portable flash storage; etc. etc. The receiver of the result doesn't have the original data ... and of course, the reverse case makes just as much sense: If you're going to transmit the original data anyway, then there's no point in processing them beforehand.

That's why you're throwing away the inputs in these particular cases. It has nothing to do with how you do the processing, or whether or not you are able to provide the inputs. Again, just a matter of perspective and semantics.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Again, if you're going to hand out the original data in retrievable form, then there's no point in handing out data processed from these originals along with them. Hence, you either hand out processed data or the originals, never both. Except when you're trying to demonstrate data processing ;)

Your call: Name me a case where you're handing out original data identifiable as such (!) along with data processed from these originals. Must be a case where this makes sense (explain why).
 

silverpig

Lifer
Jul 29, 2001
27,703
12
81
Originally posted by: itachi
Originally posted by: silverpig
That's WELL off. Film contains grains which are very much larger than the atoms themselves. Furthermore, the capturing ability of film is horrible compared to that of CCDs (maybe 5% detective quantum efficiency for film as opposed to 90% under ideal CCD conditions... but 60% easily). Film is also very non-linear in its capturing response, whereas CCDs are very linear up to 80% of their full well potential.
mm.. shouldn't you be comparing the quality of the image between CCDs and 35/70mm film used in cinematography?
The silver-halide grains in film are .5-3 microns in size. A good CCD will have square pixels 10 microns on a side... Digital can, partially has, and will easily in the future pass analog recording methods because of the nature of the recording.
i'll agree that digital is far more practical, its nature makes it superior in the long run, and in the future the difference will become negligible (actually, i don't even know what i'm comparing it to right now.. there aren't even any digital projectors for a theater-scale venue). but i don't believe that digital will ever surpass the potential of analog, given their nature.
but there are probably a lot of things wrong with the way i'm thinking.. and i'm sure somebody will point it out. got finals now tho.. so i won't be responding for a while.

Sure it will. Analog is NOT a perfect copy. Film is grainy, and doesn't copy information perfectly. There is a loss of information and an associated quality limit to film, a limit which can easily be surpassed by digital.

Draw the most accurate circle you can with a rounded off crayon. Your movement is analog, but your circle will be far from perfect.

Get a computer to display a circle on a computer screen. Which circle is better?