Human eye "resolution"

Mrpilot007

Senior member
Jan 5, 2003
I found a very interesting and educational read. It explains that the fundamental limit of optical resolution is determined by the wavelength of light used to illuminate the object. We cannot see objects or detail smaller than a light wavelength. Human vision spans from 720 nanometers (28.3 microinches) in the red wavelengths of light to 400 nanometers (15.7 microinches) in the blue-violet wavelengths.

Image Resolution Limits
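A rough feel for that wavelength limit: the Rayleigh criterion gives the smallest angle a circular aperture can resolve. A minimal sketch, where the 550 nm wavelength and 3 mm daylight pupil are assumed textbook values, not figures from the linked article:

```python
import math

# Rayleigh criterion: theta = 1.22 * wavelength / aperture diameter.
# Assumed values: 550 nm (mid-visible green), 3 mm daylight pupil.
wavelength = 550e-9  # metres
pupil = 3e-3         # metres

theta = 1.22 * wavelength / pupil    # radians
arcsec = math.degrees(theta) * 3600  # convert to arcseconds

print(f"Diffraction limit: about {arcsec:.0f} arcseconds")
```

That lands near the ~1 arcminute acuity commonly quoted for the eye, so the optics and the receptor spacing turn out to be roughly matched.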
 

Calin

Diamond Member
Apr 9, 2001
The resolution would be defined by the number of vision cells (the cone cells and the rod cells).
Now, unlike a photo camera, the human eye has no zoom worth speaking of (there might be some "digital" zoom when looking at details of an image, but that might just be a better image-analysis algorithm, not image magnification).
And one more thing completely different from video and photo cameras: the human eye has cone cells to receive colors (they are conceptually the same as the cells in a CCD), and rod cells that are sensitive to light only (no colors). This explains why there are no colors in low light.
Also, the distribution and area density of the cells differ across the eye, with even a completely blind spot: the "color" cells are denser in the center of the retina, and the rods denser on its periphery.
If I remember correctly, the rod cells are more sensitive to movement, so this might be the reason peripheral vision is more sensitive to movement (this might also explain why a fluorescent light can seem stable in central vision, looking directly at it, but flickers when seen out of the corner of the eye).

Calin

And the resolution? I think some 100 Mpixels is the upper limit (but I might be wrong), with a lower limit of some 20+ MPixels
 

Peter

Elite Member
Oct 15, 1999
We can count pixel receivers on the retina, yes, but we can't say X dpi because we can focus on things near and far. The common figure would be a minimum angle between two dots that will still be seen as two. This resolution is indeed highest around the focus point.

Note that humans vary quite a lot in that figure - from above-average hi-res people like me (I can sit back two meters from a 19" monitor running 1400x1050 and still read 8pt perfectly fine) all the way down to a (rare) vision impairment where you see everything in focus but at incredibly low resolution. I know one such person - the guy isn't nearsighted at all and nonetheless has to crawl right up to monitors and newspapers to read.
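That minimum angle can be turned into a dot spacing at a given viewing distance. A small sketch, assuming the commonly quoted ~1 arcminute acuity figure (an assumption, not a value measured in this thread):

```python
import math

def min_separation_mm(distance_m, acuity_arcmin=1.0):
    """Smallest dot separation (mm) resolvable at distance_m,
    given an angular acuity in arcminutes (small-angle approximation)."""
    theta = math.radians(acuity_arcmin / 60.0)
    return distance_m * theta * 1000.0

# At the 2 m viewing distance from the monitor example:
print(f"{min_separation_mm(2.0):.2f} mm")
```

That works out to roughly 0.58 mm at 2 m, while a 19" monitor at 1400x1050 has a pixel pitch of around 0.27 mm, which is why reading 8pt type from that distance counts as above-average eyesight.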
 

unipidity

Member
Mar 15, 2004
Is that a function of the rod/cone density, or of diffraction through the pupil? I mean, I don't *see* diffracted images, but they must be there at some distance.
 

Witling

Golden Member
Jul 30, 2003
There is an additional factor governing visual acuity: the number of receptors per nerve connection. In humans, each optic nerve fibre is connected to several receptors. Raptors, on the other hand, have a one-to-one receptor-to-fibre ratio.
 

dkozloski

Diamond Member
Oct 9, 1999
Another interesting factor is that, in a manner similar to listening to a rapidly interrupted conversation, the human eye can "fill in the blanks". This is why real-time observation is sometimes superior to photographs for detecting objects on the ground from aircraft. While staring at a distant object, the brain stacks the images of the object: the actual image adds up, but the distortions do not. Much the same way a single frame of movie film can be quite blurry, but a clip of several frames can look clear.
 

unipidity

Member
Mar 15, 2004
Yeah. Supposedly, the time spent observing an oscilloscope trace should be factored into error calculations, i.e. the brain does a superposition of multiple images to get a better average.
 

CycloWizard

Lifer
Sep 10, 2001
Since the brain, which actually pieces together the information received from the eye, is not a digital device, it's not really appropriate to think in terms of 'resolution.' As others have suggested, it could be dependent on the density of the rods and cones in the fovea (the spot on the retina where the lens focuses incoming images). However, the brain actually performs much more complex treatments of this incoming data to create an analog picture of everything you see. The exact method by which this is accomplished is unknown, but it is analogous to discrete Fourier transforms of the data. So, I guess the more appropriate question would be 'at what resolution do you no longer achieve a noticeably improved picture?' This is a very subjective question, but I'll submit that we have not yet reached this limit.

As for the time between refreshes of the image by the brain, as some of you have mentioned, there is no real, definitive answer. A good rule of thumb is 20 Hz, but this is a first approximation at best. In reality, the time required for you to change focus from near to far or far to near (called accommodation) is a function of age. Your ability to change focus also decreases with age, making it pretty much impossible to quantify a focus time. The combination of these two things is called 'presbyopia' and is why every person eventually needs reading glasses (if previously hyperopic (farsighted)) or bifocals (if previously myopic (nearsighted)). We're currently working on the cure. :p
 

irwincur

Golden Member
Jul 8, 2002
You could measure it in LPI, but I suspect you would need a really good printer to reach the limits of human vision. I doubt, however, that it is technically better than a photo negative, which contains on the order of approximately 90 megapixels' worth of information.
 

Gannon

Senior member
Jul 29, 2004
I'd say the resolution would have to be pretty high. In terms of pixel resolution, I don't think we can ever truly know exactly, but we can estimate. The problem you have with light and color, and with the other vision post-processing going on besides simple light detection, is that a lot of stuff will blend into one another. I think the "dot pitch" is more important than the actual "resolution".

Humans have a pretty good sense of distance and depth perception. Don't forget we can look out over a massive horizon; staring at a picture of a horizon in very high resolution is not going to give the same impression as standing right there in real time. Also, we process color continuously. Digital systems in computers have problems with artifacting from quantization errors; I imagine the human visual system has similar errors, but they are filtered out. Our eyes don't just possess light receptors, they also possess adaptive, light-sensitive motion detectors, plus filters so you don't see the blood vessels in your eyes. It's pretty amazing. I went to the eye doctor and he squirted a liquid in my eye, and I could see the outline of blood vessels in my eyes that I could never see without that special chemical highlighting them. I had no idea my brain was removing these large vessel artifacts from my vision to make it perfectly clear.

The brain does a lot of processing on vision that we do not yet understand, but suffice it to say it's pretty amazing. The removal of blood-vessel artifacts from my regular vision, without my ever knowing my brain was doing it, was a pretty big shock to me. How does the brain know exactly which artifacts exist, and how to remove just those artifacts from the information stream going into the brain?
 

Mark R

Diamond Member
Oct 9, 1999
It's perhaps possible to give a rough idea of resolution at the centre of vision - corresponding to the anatomical structure known as the foveola.

It is this area that is responsible for the highest resolution images. At the foveola cone cells are densely packed - almost perfectly hexagonally close packed. The diameter of the cone receptors in this area is about 3µm and therefore each subtends approx 30 seconds of arc. Also in this area there is less low-level signal processing by the retina - there is almost a perfect 1-to-1 connection from receptor to nerve fibre leading to the brain.

What does this resolution mean? It means that if you look at the full moon, its image will cover an area of the retina approximately 60 receptors in diameter.
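Those two figures can be cross-checked in a few lines. A sketch, taking the eye's posterior nodal distance as roughly 17 mm (an assumed schematic-eye value; the exact subtense depends on which focal length you pick, which is why this lands a little off the quoted numbers):

```python
import math

cone_diameter = 3e-6    # metres, figure from the post above
nodal_distance = 17e-3  # metres, assumed schematic-eye value

# Angle subtended in object space by one cone receptor:
subtense = math.degrees(cone_diameter / nodal_distance) * 3600
print(f"One cone subtends about {subtense:.0f} arcseconds")

# The full moon subtends about half a degree:
moon_arcsec = 0.5 * 3600
print(f"Moon image: about {moon_arcsec / subtense:.0f} cones across")
```

With these assumptions the sketch gives about 36 arcseconds per cone and roughly 49 cones across the moon's image, in the same ballpark as the 30-arcsecond and 60-receptor figures quoted.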

As has been mentioned before, such precision is maintained only at the very centre of vision. As you move to the periphery, the receptors become larger, less tightly spaced, and more heavily interconnected. There is additional degradation because all the retina support structures (blood vessels, nerve connections) lie in front of the receptors, but are able to skirt around the outside of the very central part, leaving it a clear view. [*]

It's also worth noting that resolutions are not equal for different colours. Blue cones are relatively rare on the retina (accounting for only about 1-2% of cone receptors). There would, in fact, be little point in spacing them closer because the cornea and lens have significant chromatic aberration, which causes blue light to be poorly focussed onto the retina.

The other amazing thing about the eye is its dynamic range - which is simply extraordinary. A typical modern digicam sensor can distinguish about 3-4 log levels of intensity - and correct exposure via shutter and aperture adjustments is essential. The retina, by contrast, has over 10 log levels of dynamic range (although some time is required for it to fully adjust to an overall intensity change). So not only can it provide high-detail, contrasty images even at high noon on Miami Beach or when skiing, but when fully dark-adapted it has single-photon detection capability.
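"Log levels" here are powers of ten of intensity, so the gap is easy to put in contrast-ratio terms (the 3.5 below is just the midpoint of the 3-4 range quoted above, not a measured value):

```python
camera_log_levels = 3.5  # midpoint of the 3-4 figure above
eye_log_levels = 10.0    # retina figure quoted above

# Powers of ten of intensity -> contrast ratio:
print(f"Camera: about {10**camera_log_levels:,.0f}:1")
print(f"Eye:    about {10**eye_log_levels:,.0f}:1")
```

That is roughly 3,000:1 for the sensor against 10,000,000,000:1 for the retina, which is why the retina can get away without an exposure mechanism as strict as shutter-plus-aperture.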

[*] - you can see the main blood vessels in your eye with the following technique. Take a piece of Al foil and punch a pinhole in the centre. Stare at a brightly illuminated featureless area (e.g. a wall or ceiling) and place the pinhole directly in front of your eye, so that you are looking through it. Now, very gently move the foil around.
 
Jun 15, 2005
Out of all the human-bio mumbo-jumbo (which I'm too lazy to read) I'm not sure if anyone came up with the answer... are you (BionicSniper) wondering what MP the human eye sees at? I'm 95% sure I heard somewhere reliable it was 11-12 MP (nowhere near the suggested 100 that whats-his-name said - no offence!)... not a bad res, but I know Canon makes 16 MP pro digital cams, so yeah, we're 'behind' hehe... hope that helps!
 

Mark R

Diamond Member
Oct 9, 1999
I'm 95% sure I heard somewhere reliable it was 11-12 MP

The problem is that the quality changes from area to area.

10 Million is probably a reasonable enough estimate for the total number of signals received from each eye, but the detail seen at the centre of vision is nearly 100x as high as that seen closer to the edge. If the resolution at the centre of vision was maintained across the whole visual field the total resolution would be about 160 Mpx.
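That extrapolation can be reproduced as a back-of-envelope sketch: carry foveal sampling across the whole visual field. The 0.5 arcminute per pixel and the 120 x 100 degree monocular field below are my assumptions, not stated inputs from this thread:

```python
ARCMIN_PER_PIXEL = 0.5               # assumed foveal sampling density
FIELD_H_DEG, FIELD_V_DEG = 120, 100  # assumed monocular visual field

h_px = FIELD_H_DEG * 60 / ARCMIN_PER_PIXEL  # 14400 pixels wide
v_px = FIELD_V_DEG * 60 / ARCMIN_PER_PIXEL  # 12000 pixels tall
print(f"{h_px * v_px / 1e6:.0f} Mpx")
```

With these inputs the sketch gives about 173 Mpx, the same ballpark as the 160 Mpx figure quoted.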



 

CycloWizard

Lifer
Sep 10, 2001
Originally posted by: Mark R
The problem is that the quality changes from area to area.

10 Million is probably a reasonable enough estimate for the total number of signals received from each eye, but the detail seen at the centre of vision is nearly 100x as high as that seen closer to the edge. If the resolution at the centre of vision was maintained across the whole visual field the total resolution would be about 160 Mpx.
Looking at the eye as an optical system, it's possible that one might need a high-resolution peripheral image. The image will essentially be downsampled by the eye's optics, due to aberrations and so on, before it reaches the retina. Thus, if you feed it a low resolution image, it could become very garbled by the time it makes it all the way to the retina.

The problem with trying to assign a finite resolution in terms of pixels (as many here are trying to do) is that when you consider a number of megapixels, that really just describes a resolution in a plane with a well-defined surface area. However, this isn't how the eye perceives things. The number of pixels you would need for a 'perfect' full-field image for the eye would be well beyond the capabilities of any existing digital cameras.
 

Bona Fide

Banned
Jun 21, 2005
I don't think you can just measure the resolution of an eye as you would with a camera. There are too many variables. The biggest one is, of course, the person's vision. Assuming they had 20/20 vision would eliminate one variable. But for people without 20/20 vision, it's the rough equivalent of fitting a 400x300 picture on your desktop when your screen resolution is 1600x1200: you'll be able to make out shapes, colors and relative depth, but everything will be extremely blurry.

Also, the factor of photoreceptors comes in. Each person genetically has a different number of receptors, just like none of us have the exact same height, weight, hair color, and eye color. So each person's resolution depends on how much their brain can translate at once. Granted, the difference is probably no more than 5% across the board among all able-sighted people, but it still presents another uncontrollable variable.

Then you have to factor in the eye's focusing power. As previously mentioned, modern optics technology pales in comparison to the focal power of the human eye. Take out your digital camera and look out your window at the farthest object you can clearly see. Now, bring the camera quickly down to point at the windowsill or wall. It'll take a few seconds for the lens to adjust and re-attain focus. The human eye does this in nanoseconds, if that. That factors into the DPI, also as previously mentioned.

Just a little something I'd add...[to make myself feel smart :p]
 

CycloWizard

Lifer
Sep 10, 2001
Originally posted by: Bona Fide
Then you have to factor in the eye's focusing power. As previously mentioned, modern optics technology pales in comparison to the focal power of the human eye. Take out your digital camera and look out your window at the farthest object you can clearly see. Now, bring the camera quickly down to point at the windowsill or wall. It'll take a few seconds for the lens to adjust and re-attain focus. The human eye does this in nanoseconds, if that. That factors into the DPI, also as previously mentioned.
Well, the eye doesn't accommodate (change focus) THAT fast. For starters, it takes about 75 ms for the brain to tell the eyes to move (latency) - this is relatively independent of age. Once they start to move, the age of the person becomes the controlling factor. When you're young, it might take 100 ms (hypothetically - I'm still working on pig eyes when determining these numbers, so take them with a grain of salt :p). When you get older (say, 40), not only does the magnitude of accommodation decrease, but it can take up to a full second to refocus. This is clinically how the loss of accommodation (presbyopia) is typically discovered by the patient. By the time you're 65+, you have pretty much lost the ability to accommodate altogether.
 

mdchesne

Banned
Feb 27, 2005
So can we get a relative xxx X xxxx resolution of each eye by experimentation? Or is it all based on light waves? Note: we do have a blind spot, but since our vision overlaps this spot, would the resolution be xXx or xXx minus the blind-spot res?
 

bobsmith1492

Diamond Member
Feb 21, 2004
Well, my mom will look at a picture from a 5 megapixel camera displayed on a 1024x768 screen and say, "That looks fake." (i.e. computer-generated) It usually isn't too hard to tell the difference between a completely analog signal (records, tape, film) and digital (digital photos, music.)
 

ox1111

Junior Member
Sep 12, 2005
My guess is pretty low, in megs or dpi - maybe 0.25 megs or 300 dpi. A 5-megapixel camera can take and reproduce a clear and detailed photo up to a fairly large size, whereas the human eye cannot. If you look at sight as a picture and take a quick snapshot, it is extremely poor. The focused part of vision is extremely small - it might amaze you. Just like a camera, it changes with distance, but look at a random long word on your computer screen and see how big a word you can see and make out as a picture without reading it, meaning without scanning the word from left to right. You will notice you scan the larger words. From this distance your focus is only about 5/8 of an inch. Vision in that 5/8 is extremely good, but if you hold your focus on something steady you will see that the rest of the field of vision is poor, so on average vision is poor. Your brain processes little clear bits of sight very fast to make a big clear picture, like a TV, except instead of one dot at a time it is a block - say 1 square inch at 2 megs. So 0.25 megs and 300 dpi as an average for sight, meaning the whole field of vision, is very, very generous. 2 megs for the focused area is even more so.
 

yosuke188

Platinum Member
Apr 19, 2005
How could we say x pixels when this world is not made up of pixels? Am I missing something?