Then again, there are some things that digital can't do, period, like 5-hour night exposures.
You just don't know how. I found out something interesting by playing with my cheap webcam.
Digital cameras have way better sensitivity than people give them credit for. Even when an image looks basically black, there's a surprising amount of detail in it. The apparent "noise" in the image actually carries signal once you take a bunch of samples: at light levels that low, the chance of a given pixel reading "on" versus "off" depends on how much light is hitting it, and that difference becomes statistically significant over many exposures.
Say I have a camera with 24-bit color. It's a really dark night, and a small segment of the picture looks like this (assume these are values out of 16,777,215 -- yeah, EXTREMELY dark):
0 1 0 1 1 0
Taken again, the picture looks like this:
0 0 1 1 0 0
Taken again and again, you build up a set of random samples from the pictures:
0 1 0 1 1 1
1 0 1 1 1 0
0 0 0 1 0 1
etc.
Well, when you average them together, you recover a lot more information:
(1/5) (2/5) (2/5) (5/5) (3/5) (2/5)
And as you increase the number of samples, you increase the accuracy of the estimate, assuming all your pixels have roughly the same sensitivity. You also get enough differentiation between pixels to start forming grayscales good enough for intelligible pictures.
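Here's a rough sketch of that idea in Python (my choice of language; numpy and the treat-each-dark-pixel-as-a-coin-flip model are just assumptions for illustration). It simulates a very dim scene, takes a pile of nearly-black exposures, and averages them back into a grayscale image:

import numpy as np

# Hypothetical "true" scene: per-pixel probability that a pixel reads "on"
# in a single dark exposure. In a real camera this comes from the light level;
# here it's just a made-up left-to-right gradient for illustration.
height, width = 60, 80
true_scene = np.tile(np.linspace(0.0, 0.2, width), (height, 1))

n_exposures = 500
accumulator = np.zeros((height, width))

for _ in range(n_exposures):
    # Each exposure: every pixel comes out 1 or 0 at random, with probability
    # equal to its true brightness -- individually it looks like pure noise.
    frame = (np.random.random((height, width)) < true_scene).astype(float)
    accumulator += frame

# Fraction of exposures in which each pixel was "on" -- the recovered grayscale.
average = accumulator / n_exposures

print("single frame mean value: ", frame.mean())
print("averaged, left edge:     ", average[:, 0].mean())
print("averaged, right edge:    ", average[:, -1].mean())

The error in each averaged pixel shrinks roughly as one over the square root of the number of exposures, which is why piling on more shots keeps sharpening the picture.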
I tried this with my cheap webcam. It worked great. Once you get the focus dialed in, you can take extremely accurate shots of areas that look completely black. Each individual shot shows nothing visible to the naked eye, but together they form a pretty decent picture.
Now.... can somebody write a program (based on TWAIN drivers or image file input) that will automatically average a series of pictures?
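For the image-file-input half of that, here's a minimal sketch (Python with numpy and Pillow, both my own assumptions; the TWAIN capture side is left out). It averages the frames pixel by pixel, then stretches the still-dark result so it's actually visible:

import sys
import numpy as np
from PIL import Image

# Hypothetical usage: python average.py frame001.png frame002.png ... out.png
*inputs, output = sys.argv[1:]

total = None
for path in inputs:
    frame = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    total = frame if total is None else total + frame

average = total / len(inputs)

# Stretch the very dark average to the full 0-255 range before saving.
span = max(average.max() - average.min(), 1e-9)
stretched = (average - average.min()) / span * 255

Image.fromarray(stretched.astype(np.uint8)).save(output)
print(f"averaged {len(inputs)} frames into {output}")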