What resolution does the human eye see in?

DannyBoy

Diamond Member
Nov 27, 2002
8,820
2
81
www.danj.me
I read somewhere that there's no point in increasing video resolution past 1080p (1920x1080) because the human eye won't notice the difference and that future technological advancements for displays are focusing on colour reproduction / depth because of this.

Is this true?

-D
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
DPI is what is important. Number of pixels and resolution are pretty meaningless without knowing anything about the size of the display. 1080p at 13" and 1080p at 106" are two entirely different things.

If you play back 1080p in a cinema you can clearly observe it isn't "high enough".
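To put numbers on that, a rough Python sketch of pixels per degree of visual angle (the screen sizes and viewing distances are just example assumptions):

```python
import math

def pixels_per_degree(h_pixels, diag_inches, distance_inches, aspect=(16, 9)):
    """Angular pixel density of a display for a viewer at a given distance."""
    aw, ah = aspect
    width_in = diag_inches * aw / math.hypot(aw, ah)                 # physical screen width
    angle_deg = 2 * math.degrees(math.atan(width_in / (2 * distance_inches)))
    return h_pixels / angle_deg                                      # pixels per degree of view

# 1080p on a 13" laptop at 24" vs. a 106" screen viewed from 10 feet:
print(pixels_per_degree(1920, 13, 24))    # ~72 px/deg
print(pixels_per_degree(1920, 106, 120))  # ~46 px/deg, well below ~60, the one-arcminute rule of thumb
```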
 
Last edited:

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
The human retina has about 100,000,000 photoreceptors (the optic nerve itself carries only around a million fibers). Each can be thought of more or less as a pixel of a digital camera, though some only transmit grayscale intensity information (rods), while others transmit color/intensity information (cones). There are many more rods than cones, roughly by a factor of 20. The brain then does a lot of fancy image processing to further improve the apparent resolution of what your eye sees: this occupies a surprisingly large share of your brain, so it's pretty serious and not necessarily well understood (at least by me).

So, the simple answer is that you'd need 10^8 pixels covering your entire visual field to correspond to the 10^8 receptors. You can use simple geometry to compute what fraction of your visual field is occupied by the display, then calculate the number of receptors that might be in that vicinity. Unfortunately, this will be inaccurate because receptor density is much higher in the center of your vision and less dense peripherally. The receptors are also distributed more laterally than vertically, which is why your eye likes widescreen. Does that answer your question?
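For what it's worth, here is that geometry as a crude Python sketch, treating the receptors as uniformly spread (which, as said, they aren't); the field-of-view figures and the example screen are assumptions:

```python
import math

TOTAL_RECEPTORS = 1e8            # ~10^8 rods + cones, as above
FIELD_DEG = (200, 135)           # rough horizontal x vertical visual field

def receptors_behind_screen(diag_in, dist_in, aspect=(16, 9)):
    """Share of the visual field the screen covers, times the total receptor count."""
    aw, ah = aspect
    w = diag_in * aw / math.hypot(aw, ah)
    h = diag_in * ah / math.hypot(aw, ah)
    ang_w = 2 * math.degrees(math.atan(w / (2 * dist_in)))
    ang_h = 2 * math.degrees(math.atan(h / (2 * dist_in)))
    return ang_w * ang_h / (FIELD_DEG[0] * FIELD_DEG[1]) * TOTAL_RECEPTORS

print(receptors_behind_screen(50, 96))   # a 50" screen at 8 ft: ~1-2 million receptors
```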
 

0roo0roo

No Lifer
Sep 21, 2002
64,862
84
91
Plus, you don't stare straight ahead. Your eyes constantly and unconsciously flit the high-resolution center of focus from place to place while you watch something. That's why scientists can study people with eye tracking while making them watch stuff.
 

silverpig

Lifer
Jul 29, 2001
27,709
11
81
It's not necessarily about resolution though. Even though your eyes can't detect individual pixels, there is still information transmitted by them that you can pick up on.
 
Nov 20, 2009
10,046
2,573
136
This is interesting in that a) the human eye is more sensitive to vertical resolution than horizontal, and b) some metal-dye manufacturers have researched the naked eye's ability to discern dyed cracks down to about 1100 line pairs.

A line pair is a line of information adjacent to a line of non-information; the blank line is what allows two neighboring information lines to be identified separately. Also, keep in mind (no pun) that the density of rods and cones is far from uniform across the retina.

Have a look at this. You can see that 'resolution' isn't a monolithic concept, but there is pixel resolution and then color resolution. Is it any wonder that the current ATSC specification is running near the 1100-line pairs of the human eye? :biggrin:

Now, the better question might be: at what resolution, both pixel and color, does the human mind stop being able to tell virtual from reality?
 

Fallen Kell

Diamond Member
Oct 9, 1999
6,039
431
126
Also don't forget that the eye's "resolution" is in degrees of arc. The retina is a curved surface, and so is the eye. This means the DPI that is discernible changes depending on how close or far away the object is from the eye (the angle separating two points gets smaller the farther the object is from the eye, and larger when it is closer).

So if you are trying to determine the optimal screen resolution that will produce a picture indistinguishable from real life, the resolution chosen will be limited by the distance from the eye.

The next issue with making a realistic display has to do with focal length. Until we develop a 3D display which allows the eye to change its focus on individual objects in the Z plane, we will still easily detect that the display is a "display" and not reality.
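Picking up the distance point: assuming the usual one-arcminute figure for the eye's angular resolution (and an arbitrary 50" screen at 4 feet as the example), here is a rough Python sketch of the pixel grid needed so that each pixel sits at or below that limit:

```python
import math

def pixels_needed(width_in, height_in, dist_in, arcmin_per_pixel=1.0):
    """Pixel counts so that each pixel subtends no more than the given angle."""
    px_angle = math.radians(arcmin_per_pixel / 60)        # one arcminute in radians
    ang_w = 2 * math.atan(width_in / (2 * dist_in))       # angle subtended by the screen width
    ang_h = 2 * math.atan(height_in / (2 * dist_in))
    return math.ceil(ang_w / px_angle), math.ceil(ang_h / px_angle)

# A 50" 16:9 screen (about 43.6" x 24.5") viewed from 4 feet:
print(pixels_needed(43.6, 24.5, 48))   # roughly (2900, 1700): more than 1080p
```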
 
Last edited:

DannyBoy

Diamond Member
Nov 27, 2002
8,820
2
81
www.danj.me
The human retina has about 100,000,000 photoreceptors (the optic nerve itself carries only around a million fibers). Each can be thought of more or less as a pixel of a digital camera, though some only transmit grayscale intensity information (rods), while others transmit color/intensity information (cones). There are many more rods than cones, roughly by a factor of 20. The brain then does a lot of fancy image processing to further improve the apparent resolution of what your eye sees: this occupies a surprisingly large share of your brain, so it's pretty serious and not necessarily well understood (at least by me).

So, the simple answer is that you'd need 10^8 pixels covering your entire visual field to correspond to the 10^8 receptors. You can use simple geometry to compute what fraction of your visual field is occupied by the display, then calculate the number of receptors that might be in that vicinity. Unfortunately, this will be inaccurate because receptor density is much higher in the center of your vision and less dense peripherally. The receptors are also distributed more laterally than vertically, which is why your eye likes widescreen. Does that answer your question?

So in theory, a photograph or television of any resolution and DPI would, viewed from a far enough distance, eventually saturate the available viewing capacity of the human eye?
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
So in theory, a photograph or television of any resolution and DPI would, viewed from a far enough distance, eventually saturate the available viewing capacity of the human eye?
Yes. Even an old SDTV will be visually "perfect" if viewed from far enough away. It's just that that distance is very, very far away.
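To hang a number on that, using the common one-arcminute rule of thumb (which only says when individual scan lines blur together, not when the picture is truly "perfect"); the 27" 4:3 set is an assumed example:

```python
import math

ONE_ARCMIN = math.radians(1 / 60)     # typical figure for 20/20 acuity

def blend_distance_feet(screen_height_in, visible_lines):
    """Distance beyond which a single scan line subtends less than one arcminute."""
    line_height = screen_height_in / visible_lines
    return line_height / math.tan(ONE_ARCMIN) / 12

# A 27" 4:3 SDTV (picture height ~16.2") with ~480 visible lines:
print(blend_distance_feet(16.2, 480))   # ~10 feet before the line structure disappears
```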
 
Nov 20, 2009
10,046
2,573
136
Also don't forget that the eye's "resolution" is in degrees of an arc. The retina is a curved surface, and so is the eye. This means the DPI that is discernible changes depending on how close or far away the object is to the eye (because the angle of differentiation between the points gets smaller the farther the object(s) is from the eye, and larger when the object(s) is closer).

So if you are looking for trying to determine the optimal screen resolution that will produce a picture indistinguishable from real life, the resolution chosen will be limited by the distance from the eye.

The next issue with making a realistic display resolution has to do with focal length. Until we develop a 3D display which allows the eye to change its focus on individual objects in the Z plain, we will still easily detect that the display is a "display" and not reality.

I read somewhere that the angle is about one arc minute, which is 1/60 of a degree. Knowing this angle, the distance between the lens and the retina, and the density of cones (or rods) within the retina's region of highest density, this works out to approximately 2200 lines.

But, because telling detail apart means distinguishing a line of information from an adjacent line of non-information, the lines are counted in pairs; hence the term line pairs. In the human eye's case, that comes to approximately 1100 line pairs.

Of course, one must recognize that the human eye is more sensitive to vertical resolution (e.g. lines and line pairs) than to horizontal resolution (e.g. bars). This is why early HD content was often captured at 1440x1080 and then stretched to 1920x1080.

Of course, the above is only useful if a) you have 20/20 vision (corrected or natural), and b) you are looking at something of similar resolution within its viewing domain.
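As a sanity check, here is the same arithmetic in Python; the one-arcminute-per-line figure is the usual 20/20 number, and the ~37 degree field is purely an assumption chosen to show how a figure near 2200 lines / 1100 line pairs can fall out of it:

```python
ARCMIN_PER_LINE = 1.0     # one resolvable line per arcminute (20/20 rule of thumb)
FIELD_DEG = 37            # assumed vertical field actively engaged with the image

lines = FIELD_DEG * 60 / ARCMIN_PER_LINE   # 60 arcminutes per degree
line_pairs = lines / 2

print(lines, line_pairs)  # 2220.0 1110.0, in the ballpark of the figures above
```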
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
Of course, the above is only useful if a) you have 20/20 vision (corrected or natural), and b) you are looking at something of similar resolution within its viewing domain.
This is a good point that I forgot to mention. In any person's eye, the real limit of resolution is generally set by the optics rather than the retina.
 

lyssword

Diamond Member
Dec 15, 2005
5,761
25
91
If I sit 2-3 feet away from a 20" monitor, I can easily see artifacts in 1080p video.
 

fleabag

Banned
Oct 1, 2007
2,450
1
0
If I sit 2-3 feet away from a 20" monitor, I can easily see artifacts in 1080p video.
Those are compression artifacts... That has absolutely nothing to do with the resolution and everything to do with how the video itself is encoded. That's why Comcast HD looks like garbage but OTA HD looks amazingly good...
 

Pandamonium

Golden Member
Aug 19, 2001
1,628
0
76
This reply is far from scientific or technical. I was wondering this very question years ago, and I stopped after I read a source that seemed somewhat trustworthy stating that 4000dpi was close to the upper limit for the average person's eye. I don't know where to begin looking it up again, but that's what I found. I'll take that as the truth until I take neurology and can back it up with textbook/literature refs.
 
May 11, 2008
19,560
1,195
126
This is interesting in that a) the human eye is more sensitive to vertical resolution than horizontal, and b) some metal-dye manufacturers have researched the naked eye's ability to discern dyed cracks down to about 1100 line pairs.

A line pair is a line of information adjacent to a line of non-information; the blank line is what allows two neighboring information lines to be identified separately. Also, keep in mind (no pun) that the density of rods and cones is far from uniform across the retina.

Have a look at this. You can see that 'resolution' isn't a monolithic concept, but there is pixel resolution and then color resolution. Is it any wonder that the current ATSC specification is running near the 1100-line pairs of the human eye? :biggrin:

Now, the better question might be: at what resolution, both pixel and color, does the human mind stop being able to tell virtual from reality?

When looking at the picture...
Isn't it amazing that the light has to travel through various cell layers before it reaches the actual cells that register it?

According to some people, if we had the eyes of an octopus we would have spectacular vision.

http://pandasthumb.org/archives/2006/11/denton-vs-squid.html


Now consider the eye of squids, cuttlefish and octopi. Their retinas are “rightway round”, that is the photoreceptors face the light, and the wiring and the blood vessels facing the back (1). Squid and octopi have no blind spot; they can also have high visual acuity. The octopus also has a fovea-equivalent structure, which it makes by packing more (or longer) photoreceptors into a given area (1). Because it doesn’t have to create a hole in the supporting tissue it can have arbitrarily large “fovea”, and greater visual acuity. Cuttlefish have better visual acuity than cats (2) and because of their “rightway round” retinas; this level of acuity covers nearly the entire retina (1,2) unlike vertebrates where it is confined to the small spot of the fovea.

The vertebrate retina is a prime example of historically quirky “design”. The vertebrate retina is backwards because the development of the retina was first elaborated in rather small chordates, where issues of acuity and blind spots were non-existent; all subsequent vertebrates got stuck with this “design”. Vertebrates do very well with the limitations of the design of the eye, but it is clear that this is no system a competent designer would make. Naturally, this annoys the proponents of an Intelligent Designer, and they have been looking for ways to put a better spin on the kludged design of the vertebrate eye.

On a side note, in my opinion the denser packing of cones at the eye's focal point also has an automatic advantage: this distribution of cells provides a kind of hardware compression of the information. What you focus on gets the highest information density, while the surroundings are seen at a lower resolution.

I wonder if a CMOS or CCD sensor has ever been built with a layout similar to that of the human eye. I think some "simple" addition and/or averaging logic would be sufficient to process the surrounding view, thereby reducing the amount of information to store, while at the same time using the averaged signal, together with some local memory holding previous results, as a means to detect movement.
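Something like this toy Python sketch (the frame size, the 8x8 blocks and the threshold are all arbitrary assumptions, just to show the principle):

```python
import numpy as np

def foveate(frame, center, fovea=64, block=8):
    """Keep a small central window at full resolution; block-average the rest."""
    h, w = frame.shape
    # Peripheral view: average each block x block tile down to one value.
    periphery = frame[:h // block * block, :w // block * block]
    periphery = periphery.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    # Foveal view: an unmodified crop around the point of attention.
    cy, cx = center
    fovea_patch = frame[cy - fovea // 2: cy + fovea // 2,
                        cx - fovea // 2: cx + fovea // 2].copy()
    return fovea_patch, periphery

def motion_cue(prev_periphery, periphery, threshold=10.0):
    """Flag peripheral blocks whose averaged value changed noticeably."""
    return np.abs(periphery - prev_periphery) > threshold

# Example with random 'frames':
rng = np.random.default_rng(0)
f0 = rng.integers(0, 256, (480, 640)).astype(float)
f1 = f0.copy()
f1[100:140, 300:340] += 80                      # something in the periphery brightened
_, p0 = foveate(f0, center=(240, 320))
_, p1 = foveate(f1, center=(240, 320))
print(motion_cue(p0, p1).sum(), "peripheral blocks changed")
```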

I wonder how the vision of birds works. I hear some eagles and hawks have amazing eyesight.
 
Last edited:
May 11, 2008
19,560
1,195
126
The mantis shrimp seems to have the best eyes. It may even be able to detect the polarization of light.

http://en.wikipedia.org/wiki/Mantis_shrimp#The_eyes

From Wikipedia:

The midband region of the mantis shrimps eye is made up of six rows of specialized ommatidia. Four rows carry 16 differing sorts of photoreceptor pigments, 12 for color sensitivity, others for color filtering. The mantis shrimp has such good eyes it can perceive both polarized light, and hyperspectral colour vision [10]. Their eyes (both mounted on mobile stalks and constantly moving about independently of each other) are similarly variably coloured, and are considered to be the most complex eyes in the animal kingdom.[11][12] They permit both serial and parallel analysis of visual stimuli.

Each compound eye is made up of up to 10,000 separate ommatidia of the apposition type. Each eye consists of two flattened hemispheres separated by six parallel rows of highly specialised ommatidia, collectively called the midband, which divides the eye into three regions. This is a design which makes it possible for mantis shrimp to see objects with three different parts of the same eye. In other words, each individual eye possesses trinocular vision and depth perception. The upper and lower hemispheres are used primarily for recognition of forms and motion, not colour vision, like the eyes of many other crustaceans.


Rows 1-4 of the midband are specialised for colour vision, from ultra-violet to infra-red. The optical elements in these rows have eight different classes of visual pigments and the rhabdom is divided into three different pigmented layers (tiers), each adapted for different wavelengths. The three tiers in rows 2 and 3 are separated by colour filters (intrarhabdomal filters) that can be divided into four distinct classes, two classes in each row. It is organised like a sandwich; a tier, a colour filter of one class, a tier again, a colour filter of another class, and then a last tier. Rows 5-6 are segregated into different tiers too, but have only one class of visual pigment (a ninth class) and are specialised for polarisation vision. They can detect different planes of polarised light. A tenth class of visual pigment is found in the dorsal and ventral hemispheres of the eye.

The midband only covers a small area of about 5°–10° of the visual field at any given instant, but like in most crustaceans, the eyes are mounted on stalks. In mantis shrimps the movement of the stalked eye is unusually free, and can be driven in all possible axes, up to at least 70°, of movement by eight individual eyecup muscles divided into six functional groups. By using these muscles to scan the surroundings with the midband, they can add information about forms, shapes and landscape which cannot be detected by the upper and lower hemisphere of the eye. They can also track moving objects using large, rapid eye movements where the two eyes move independently. By combining different techniques, including saccadic movements, the midband can cover a very wide range of the visual field.

Some species have at least 16 different photoreceptor types, which are divided into four classes (their spectral sensitivity is further tuned by colour filters in the retinas), 12 of them for colour analysis in the different wavelengths (including four which are sensitive to ultraviolet light) and four of them for analysing polarised light. By comparison, humans have only four visual pigments, three dedicated to see colour. The visual information leaving the retina seems to be processed into numerous parallel data streams leading into the central nervous system, greatly reducing the analytical requirements at higher levels.

At least two species have been reported to be able to detect circular polarized light[13][14] and in some cases their biological quarter-wave plates perform more uniformly over the entire visual spectrum than any current man-made polarizing optics [15][16]. The species Gonodactylus smithii is the first - and only - organism known to simultaneously detect the four linear, and two circular, polarization components required for Stokes parameters, which yield a full description of polarization. It is thus believed to have optimal polarization vision

Another link :

http://www.blueboard.com/mantis/bio/vision.htm


It is interesting; for example, through polarization you can tell whether light comes directly from the sun or from a reflective surface.
 
Last edited:

tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
I wonder if our brains could handle it if our eyes could pick up that much information (genetically engineered eyes?), or would our brains have to be altered too? Hmmm.
 
May 11, 2008
19,560
1,195
126
I wonder if our brains could handle it if our eyes could pick up that much information (genetically engineered eyes?), or would our brains have to be altered too? Hmmm.

I am wondering about that too. When it comes to robotic limbs and artificial prostheses, the brain, at least in our primate cousins, seems to be adaptive to an almost scary degree.

The monkeys only needed a few days of practice...

http://www.nytimes.com/2008/05/29/science/29brain.html?_r=1
http://www.youtube.com/watch?v=TK1WBA9Xl3c

And I read somewhere long ago that our brains can adapt to new forms of motor control quickly. Scientists glued some electrodes onto the skin of a subject's head (EEG or ECG, I am not sure; I would have to look it up).
The scientists then let a computer analyze the electric fields given off and used that information to control some robotics.
After some practice the brain had learned that a certain thought, like thinking of apple juice, could trigger movement from the robot. After some more practice, just thinking about moving the robot was enough. The brain had learned that the neuron patterns for apple juice and robotic movement were the same.

To come back to the eyes :

http://babylon.acad.cai.cam.ac.uk/people/rhsc/oculo.html

I would really not be surprised if our brains could adapt to new sensors automatically.

A good example is tactile processing for blind people. A blind person received a device built up of a grid of spikes resting on his skin. Each spike pushes out with a tiny force, and that force depends on the light intensity received by a camera: more light means more force. The camera's pixels are laid out in a grid too.

Here is some info on a system using the tongue.

http://www.4to40.com/health/index.asp?id=339


A bionic eye. Only sixteen pixels but enough to see movement, light intensity and shapes.

http://www.guardian.co.uk/science/2007/feb/17/medicineandhealth.uknews

I think the brain can actually use it.

In my opinion:
Some psychiatric diseases may arise from the brain's desire for sensory input.
Maybe in some of these people the brain lowers its threshold for information too much, creating phantom data.

I read something about a method of inducing hallucinations.
When you lie in a bath of water at the same temperature as your skin, with no sound, no light (totally dark) and no smell, then within minutes you will start to feel things, see things, hear things and even smell things that are not there. Your brain is turning up the amplifiers and lowering the thresholds of its neurons, because it needs sensory input to function.

If no data >> seek data :)

I do think you would have to use some overlay principle, like a rattlesnake uses its infrared pit organs: it maps the infrared data over its visual data.

http://en.wikipedia.org/wiki/Infrared_sensing_in_snakes

It seems the brain just wants data and has some central input stage, like a main station where all the data is gathered. If that is the case, then the brain could use any form of sensor as long as the data is delivered in the right format.

Visual thinkers, for example, would have it easy. Their brains already process data faster than they can speak or hear it; speech is just too slow.
"A picture is worth a thousand words" is a commonly used phrase, and it is very true for visual thinkers. A lot of people do not think in words or language but in images, and just translate that into words.
I think the next renaissance in humanity will occur when we can connect to each other in a way similar to the internet.

Robot controlled by cultivated rat neurons.

http://www.youtube.com/watch?v=1-0eZytv6Qk&feature=related
 
Last edited:

0roo0roo

No Lifer
Sep 21, 2002
64,862
84
91
I read somewhere that there's no point in increasing video resolution past 1080p (1920x1080) because the human eye won't notice the difference and that future technological advancements for displays are focusing on colour reproduction / depth because of this.

Is this true?

-D

Nope, else you wouldn't be able to tell whether a print was cr@p or from a sweet laser printer or whatever.
LCD resolution density is sh*t compared to human vision.
Sure, the density of receptors is concentrated mostly at the center, but the way human eyes work is to unconsciously flit from area of interest to area of interest; keeping the entire screen in focus at once is not important, what matters is that the area in focus has detail.

Future development focusing on color only is because HD resolution is the most popular, and you don't get any bonus for putting out a 2048p screen :p Plus, increasing the resolution is far harder... more dead pixel defects possible.
 
Last edited:

Cogman

Lifer
Sep 19, 2000
10,277
125
106
Before we start talking about higher resolutions, I would rather talk about better compression methods. Many companies are really using some crap media compression algorithms, which leads to poor-looking quality even at HD. A better compression system would result in cleaner pictures at roughly the same bandwidth.

I've been watching DirecTV recently, and while the compression artifacts aren't terrible, they are noticeable. Once we can move those into the realm of not noticeable, then I'll be willing to discuss moving up to higher resolutions.
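A crude way to compare sources is average bits per pixel per frame; the bitrates below are ballpark assumptions, not measured figures:

```python
def bits_per_pixel(bitrate_mbps, width, height, fps):
    """Average bits available per pixel per frame at a given stream bitrate."""
    return bitrate_mbps * 1_000_000 / (width * height * fps)

# Assumed ballpark figures: Blu-ray vs. a heavily multiplexed satellite/cable HD channel.
print(bits_per_pixel(30, 1920, 1080, 30))   # ~0.48 bits/pixel
print(bits_per_pixel(8, 1920, 1080, 30))    # ~0.13 bits/pixel
```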
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
I wonder how much of the human ability to tell the difference between images comes from the resolution of the image, and how much is the brain's interpretation of it.

I could take a gigapixel image and change one pixel and people would not detect it, but change that one pixel over time and in different areas and people can see it. So I think it also has a lot to do with the brain, not just the 'sensor'.
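A tiny Python illustration of that point: a one-pixel change is invisible when the two frames sit side by side, but trivial to find once you difference them over time (a small random image stands in for the gigapixel one):

```python
import numpy as np

rng = np.random.default_rng(1)
frame_a = rng.integers(0, 256, (1000, 1000), dtype=np.uint8)

frame_b = frame_a.copy()
frame_b[417, 233] ^= 0xFF        # flip a single pixel

# Side by side the images look identical to a viewer, but a temporal
# difference pinpoints the change immediately.
diff = frame_a.astype(int) - frame_b.astype(int)
changed = np.argwhere(diff != 0)
print(changed)                   # [[417 233]]
```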