What resolution does the human eye see in?

May 11, 2008
I wonder how much of the human ability to tell the difference between images comes down to the resolution of the image, and how much is the brain's interpretation of it.

I could take a gigapixel image and change one pixel, and people would not detect it; but change that one pixel over time and in different areas, and people can see it. So I think it also has a lot to do with the brain, not just the 'sensor'.

I think the brain uses a very strong form of compression based on repetition. If I saw a person for the first time, I would remember that person's face along with the clothes, their texture and colour, and the immediate surroundings, as long as all of it can be reduced to geometric shapes the brain can encode and compress. But I would not remember the wrinkles on a sleeve unless those wrinkles had some geometric form, and the same goes for texture: forms like circles, triangles, boxes and so on. So part of the processing of my eye and part of my brain do nothing but exactly that: pull the scene apart into geometric shapes and count them. A strong form of colour compression is used as well; I have a strong suspicion that the amount of colour information stored increases when geometric shapes are present.

Now, if I meet that same person over and over again in different clothes, my brain has to unlearn the association between that person and the clothes worn the first time. From then on it will, at close range, scan the details of that person's face, since that seems to be the only information about the person that stays fairly constant. This reminds me of the way neurons strengthen a connection when a signal or stimulus is present over and over again, better known as conditioning.
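The "compression based on repetition" idea can be loosely illustrated with run-length encoding, which collapses repeats into (value, count) pairs, much as the post imagines storing "shape + count" instead of raw detail. This is purely an analogy, not a model of the visual system:

```python
from itertools import groupby

def rle(seq):
    """Collapse runs of identical elements into (value, count) pairs."""
    return [(value, len(list(group))) for value, group in groupby(seq)]

print(rle("aaabbbbcc"))  # [('a', 3), ('b', 4), ('c', 2)]
```

The more regular (repetitive) the input, the shorter the encoding, which is the intuition behind remembering "a circle" rather than every wrinkle.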

Sidenote:
IMO, in a more general way, this also explains something else: when we do something over and over again, it dulls on us. That is because of the tendency of neurons to keep strengthening the connection for a given stimulus until nothing is left. This also explains, to me, why we need emotions.
Emotions cause fluctuations, keeping us fresh. The natural fluctuations between being happy and being sad keep our brains fresh. But as always, too much is a burden, and so is too little.

I recently found a PDF about pattern recognition in insects and octopuses. Although it was very superficial and did not go into detail, I think I understand what is happening.

http://web.mit.edu/9.670/www/lecture2_insectsFeb82001.pdf
Pages 7 and 8 are interesting. They are about coding shapes into a pulse-width signal; at least, that is what I am reminded of when I see it. Neurons also seem to favour a layout in a regular pattern, and they give off signals that look like a pulse-code-modulated signal.
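The pulse-width idea can be sketched in a few lines: a stronger stimulus produces a longer ON pulse within a fixed window. This is only a loose analogy of what those lecture slides show; real neurons emit spike trains, not literal PWM:

```python
# Toy "pulse-width" encoding of a scalar stimulus: intensity in [0, 1]
# maps to the fraction of a fixed window that the signal is ON.
def pwm_encode(intensity: float, window: int = 10) -> str:
    on = round(max(0.0, min(1.0, intensity)) * window)
    return "1" * on + "0" * (window - on)

print(pwm_encode(0.3))  # 1110000000
```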

http://en.wikipedia.org/wiki/Retinotopy

In reply to your post: I agree.
It is based on a threshold built up from inclusive-or information: it is based on a geometric shape, or on the intensity of the colour difference, or on the intensity on a black-and-white scale. I am sure I forgot something, though. Now, this is all the automatic process. When our brain decodes the information back into our inner virtual reality, and we start to focus on what we find interesting in that inner VR, we start to look with great detail at the corresponding object in the physical world. This can also enhance the level of detail we store.

Now, to come back to that filter in our inner VR that decides for us what we find interesting: I am willing to bet that this is what psychology calls character. The basis of who you are and what defines you.

And of course I am not taking into account the information from our other senses, which has to be stored in the same way as well.

This is all my opinion, of course.
 

Tech_savy

Member
Nov 18, 2009
We can differentiate four lines drawn within a length of 1 millimeter. That means we can see objects of 1/4 mm, which is 250 micrometers. However, below 200 micrometers it becomes hard to distinguish one object from another, so the resolution of the human eye is considered to be about 200 micrometers.
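As a quick sanity check on that figure (assuming a standard 25 cm near-viewing distance, which the post does not state), the visual angle subtended by a 200-micrometer feature can be computed directly:

```python
import math

def angular_size_arcmin(feature_m: float, distance_m: float) -> float:
    """Visual angle (in arcminutes) subtended by a feature at a viewing distance."""
    return math.degrees(math.atan2(feature_m, distance_m)) * 60

# 200 micrometers viewed from 25 cm works out to roughly 2.75 arcminutes
print(round(angular_size_arcmin(200e-6, 0.25), 2))
```

For comparison, 20/20 acuity is usually quoted as about one arcminute, so 200 micrometers at reading distance is a few times coarser than the absolute limit.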
 
May 11, 2008
This has nothing to do with the eyes directly, but I nevertheless find it an enormous achievement and thought it fits in this thread:

In their study, the researchers tested the technology on a 26-year-old male who had a brain stem stroke at age 16. The brain stem stroke caused a lesion between the volunteer’s motor neurons that carry out actions and the rest of the brain; while his consciousness and cognitive abilities are intact, he is paralyzed except for slow vertical movement of the eyes. The rare condition is called locked-in syndrome.

Five years ago, when the volunteer was 21 years old, the scientists implanted an electrode near the boundary between the speech-related premotor and primary motor cortex (specifically, the left ventral premotor cortex). Neurites began growing into the electrode and, in three or four months, the neurites produced signaling patterns on the electrode wires that have been maintained indefinitely.

Three years after implantation, the researchers began testing the brain-machine interface for real-time synthetic speech production. The system is “telemetric” - it requires no wires or connectors passing through the skin, eliminating the risk of infection. Instead, the electrode amplifies and converts neural signals into frequency modulated (FM) radio signals. These signals are wirelessly transmitted across the scalp to two coils, which are attached to the volunteer’s head using a water-soluble paste. The coils act as receiving antenna for the RF signals. The implanted electrode is powered by an induction power supply via a power coil, which is also attached to the head.

The signals are then routed to an electrophysiological recording system that digitizes and sorts them. The sorted spikes, which contain the relevant data, are sent to a neural decoder that runs on a desktop computer. The neural decoder’s output becomes the input to a speech synthesizer, also running on the computer. Finally, the speech synthesizer generates synthetic speech (in the current study, only three vowel sounds were tested). The entire process takes an average of 50 milliseconds.

As the scientists explained, there are no previous electrophysiological studies of neuronal firing in speech motor areas. In order to develop an accurate neural coding scheme, they had to rely on an established neurocomputational model of speech motor control. According to this model, neurons in the left ventral premotor cortex represent intended speech sounds in terms of “formant frequency trajectories.”

In an intact brain, these frequency trajectories are sent to the primary motor cortex where they are transformed into motor commands to the speech articulators. However, in the current study, the researchers had to interpret these frequency trajectories in order to translate them into speech. To do this, the scientists developed a two-dimensional formant frequency space, in which different vowel sounds can be plotted based on two formant frequencies (whose values are represented on the x and y axes).
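That two-dimensional formant space lends itself to a toy sketch: classify a decoded (F1, F2) pair as the nearest known vowel. The formant values below are rough textbook averages for adult male speech, not figures from the study, and the study's actual decoder is far more sophisticated:

```python
# Hypothetical vowel prototypes as (F1, F2) pairs in Hz.
VOWELS = {
    "i": (270, 2290),  # as in "beet"
    "a": (730, 1090),  # as in "father"
    "u": (300, 870),   # as in "boot"
}

def nearest_vowel(f1: float, f2: float) -> str:
    """Return the vowel whose prototype is closest in the 2-D formant plane."""
    return min(VOWELS, key=lambda v: (VOWELS[v][0] - f1) ** 2 + (VOWELS[v][1] - f2) ** 2)

print(nearest_vowel(700, 1100))  # a
```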

“The study supported our hypothesis (based on the DIVA model, our neural network model of speech) that the premotor cortex represents intended speech as an ‘auditory trajectory,’ that is, as a set of key frequencies (formant frequencies) that vary with time in the acoustic signal we hear as speech,” Guenther said. “In other words, we could predict the intended sound directly from neural activity in the premotor cortex, rather than try to predict the positions of all the speech articulators individually and then try to reconstruct the intended sound (a much more difficult problem given the small number of neurons from which we recorded). This result provides our first insight into how neurons in the brain represent speech, something that has not been investigated before since there is no animal model for speech.”

To confirm that the neurons in the implanted area were able to carry speech information in the form of formant frequency trajectories, the researchers asked the volunteer to attempt to speak in synchrony with a vowel sequence that was presented auditorily. In later experiments, the volunteer received real-time auditory feedback from the speech synthesizer. During 25 sessions over a five-month period, the volunteer significantly improved the thought-to-speech accuracy. His average hit rate increased from 45% to 70% across sessions, reaching a high of 89% in the last session.

http://www.physorg.com/news180620740.html
 
May 11, 2008
How about an eye that is augmented to see magnetic fields ...

Was running through some bookmarks and remembered this...
Birds seem to be able to see the magnetic field as bright and dark spots...

I thought it would be a nice addition.^_^

How could a radical pair reaction lead to a magnetic compass sense? Suppose that the products of a radical pair reaction in the retina of a bird could in some way affect the sensitivity of light receptors in the eye, so that modulation of the reaction products by a magnetic field would lead to modulation of the bird's visual sense, producing brighter or darker regions in the bird's field of view. (The last supposition must be understood to be speculative; the particular way in which the radical pair mechanism interfaces with the bird's perception is not well understood.) When the bird moves its head, changing the angle between its head and the earth's magnetic field, the pattern of dark spots would move across its field of vision and it could use that pattern to orient itself with respect to the magnetic field. This idea is explored in detail by Ritz et al (see below). Interestingly, studies have shown that migratory birds exhibit a head-scanning behavior when using the magnetic field to orient that would be consistent with such a picture. Such a vision-based radical-pair-based model would explain several of the unique characteristics of the avian compass, e.g., that it is light-dependent, inclination-only, and linked with the eye of the bird. It is also consistent with experiments involving the effects of low-intensity radio frequency radiation on bird orientation, as suggested by Canfield et al.


http://www.ks.uiuc.edu/Research/cryptochrome/
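As a toy illustration of that proposed mechanism, one can imagine receptor sensitivity varying with the angle between the receptor and the magnetic field, so the bird's visual field brightens and darkens as it scans its head. The cos² dependence below is an illustrative assumption, not the actual singlet-yield curve from the radical-pair model:

```python
import math

def relative_brightness(angle_deg: float) -> float:
    """Assumed modulation of receptor sensitivity with field angle (cos^2)."""
    return math.cos(math.radians(angle_deg)) ** 2

# Brightness peaks when the receptor is aligned with the field
# and fades as the angle approaches 90 degrees.
for angle in (0, 45, 90):
    print(angle, round(relative_brightness(angle), 2))
```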


And I found something new about butterflies, which seem to use this magnetic sensing as well:

One of the most exciting aspects of the work was showing that each of the two forms of butterfly Cry have the molecular capability to sense magnetic fields. Reppert's group is now developing behavioral assays to show that monarchs can actually use geomagnetic fields during their spectacular fall migration. "We believe we are on the trail of an important directional cue for migrating monarchs," states Reppert, "in addition to their well-defined use of a sun compass."

Reppert, who is also the Higgins Family Professor of Neuroscience at UMMS, has been a pioneering force in the effort to understand monarch butterfly navigation and migration. Earlier this year, he and colleagues demonstrated that a key mechanism of the sun compass that helps steer the butterflies to their ultimate destination resides not in the insects' brains, as previously thought, but in their antennae, a surprising discovery that provided an entirely new perspective of the antenna's role in insect migration.

http://www.physorg.com/news183643173.html
 
May 11, 2008
Does this mean that in the future it will be possible to control another person's body once you have implanted an RF receiver between their brain and limbs?


I don't think so. What it should mean, in my opinion, is that until scientists solve the mystery of why neural pathways are so heavily inhibited from regrowing, this would be a good intermediate step to give paralyzed people the ability to move around, or at least express themselves. A good example is a newborn child or toddler: it wants to express itself but cannot do so in a way that we understand directly, hence the crying and the wild behaviour. I can imagine that it is very frustrating for a paralyzed person when other people misinterpret them and think they are mentally impaired.