How soon after a sine wave reaches the eardrum can the brain determine what its frequency is?

Howard

Lifer
Oct 14, 1999
47,989
10
81
It certainly can't be instantaneous... or can it? I imagine that the brain needs to "follow" the amplitude for a little bit before being able to find its pitch.

Would it be half a period?
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
Frequency is encoded by location within the ear: certain cells detect certain frequencies, so there is no latency cost from analyzing the waveform itself. I'm not too familiar with the exact latency from stimulus to action potential, but it's very low (probably on the order of 10 ms if I had to guess).
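For a sense of how frequency maps to place along the cochlea, here is a minimal sketch using the Greenwood place-frequency function; the constants below are the commonly cited fit for the human cochlea, and the function is an empirical approximation, not an exact law:

```python
import math

def greenwood_frequency(x):
    """Approximate characteristic frequency (Hz) at relative position x
    along the human basilar membrane (x = 0 at the apex, 1 at the base),
    using Greenwood's fitted map: f = A * (10**(a*x) - k)."""
    A, a, k = 165.4, 2.1, 0.88  # published fit for the human cochlea
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f} -> {greenwood_frequency(x):8.1f} Hz")
```

The endpoints come out near 20 Hz and 20 kHz, matching the usual quoted range of human hearing.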
 

Biftheunderstudy

Senior member
Aug 15, 2006
375
1
81
So your ears work like a spectrum analyzer with separate filters for each frequency? That's awesome, who needs a fast Fourier transform anyway...
Anyhow, it still takes on the order of a few milliseconds to get to a part of the brain that can actually do something with it. Signal transfer through axons is quite slow, compared to electronic circuits that is.
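The "spectrum analyzer without an FFT" idea can be sketched as a bank of resonators, each standing in for a group of hair cells tuned to one frequency: drive the whole bank with a tone and simply see which channel responds hardest. The channel frequencies and resonator sharpness (r) here are illustrative choices, not physiology:

```python
import math

def resonator_energy(signal, f0, fs, r=0.995):
    """Drive a two-pole resonator tuned to f0 and return its output energy."""
    w0 = 2 * math.pi * f0 / fs
    a1, a2 = 2 * r * math.cos(w0), -r * r
    y1 = y2 = 0.0
    energy = 0.0
    for x in signal:
        y = x + a1 * y1 + a2 * y2
        energy += y * y
        y2, y1 = y1, y
    return energy

fs = 16000
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(1600)]  # 100 ms, 1 kHz
channels = [250, 500, 1000, 2000, 4000]  # center frequencies of the "hair cell" bank
energies = {f: resonator_energy(tone, f, fs) for f in channels}
best = max(energies, key=energies.get)
print(best)  # the 1000 Hz channel dominates
```

No frequency analysis is ever computed: identifying the loudest channel is the whole readout, which is the point of the place-coding argument.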
 

Paperdoc

Platinum Member
Aug 17, 2006
2,307
278
126
Last I learned about this, the hearing system does not do anything like computed spectral analysis. The ear contains a large array of sensor "hairs" of varying lengths in the inner ear, so each has its own natural resonant frequency. Basically, the brain simply needs to know which hair is vibrating and hence sending a signal on its neuron.

Now, I strongly suspect that there are not 20,000 different sensor hairs and separate neurons in each ear. I don't know exactly how the structure is, but it may be based on a more sophisticated system of fewer sensors / neuron connections and an assessment by the brain of the relative amplitudes of vibrations in several sensor hairs of similar resonant frequencies. But that's just my speculation. Anyone know the detailed real truth?

But for your original question about the response time from presentation of the sound sine wave until the brain can identify the frequency, I'm sure the others are right. The time is probably determined by the speed of signal propagation along the neurons from ear to brain, whatever that is. Processing time within the brain over short signalling distances is, I would expect, less important.

In fact, there may be two questions here. One is: how much of a sine wave is required for the brain to recognize the frequency: a fraction of one period, or several periods, or a fixed time irrespective of frequency? And the other is, how much time does it take for the recognition process to be completed?
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
Originally posted by: Paperdoc
Last I learned about this, the hearing system does not do anything like spectral analysis. The ear contains a large array of sensor "hairs" in the inner ear of varying lengths. So each has its own natural resonant frequency. Basically, the brain simply needs to know which one hair is vibrating and hence sending out a signal on its neuron.

Now, I strongly suspect that there are not 20,000 different sensor hairs and separate neurons in each ear. I don't know exactly how the structure is, but it may be based on a more sophisticated system of fewer sensors / neuron connections and an assessment by the brain of the relative amplitudes of vibrations in several sensor hairs of similar resonant frequencies. But that's just my speculation. Anyone know the detailed real truth?
I'm not really sure about the details. I have a bunch of papers on the subject from a neurophysiology lab I helped teach a couple years ago, but I never read them since I was teaching a different experiment and I was pretty busy that year. :p I'll have to see if I can track them down.
But for your original question about the response time from presentation of the sound sine wave until the brain can identify the frequency, I'm sure the others are right. The time is probably determined by the speed of signal propagation along the neurons from ear to brain, whatever that is. Processing time within the brain over short signalling distances is, I would expect, less important.

In fact, there may be two questions here. One is: how much of a sine wave is required for the brain to recognize the frequency: a fraction of one period, or several periods, or a fixed time irrespective of frequency? And the other is, how much time does it take for the recognition process to be completed?
I think there is little doubt that the time required to recognize the signal is negligible relative to other steps in the process, except at low frequencies (i.e. below 500 Hz, where a sine wave has a period of 2 ms or more). That said, a material with a given resonant frequency begins to oscillate the moment the signal arrives - it doesn't wait to make sure a whole sine wave is present. :p I'd really have to go read up on the details to make sure I know what I'm talking about before saying much else, though. The only similar system I'm very familiar with is phototransduction, but that works by a completely different mechanism, and its main source of latency is chemical reactions that I don't think the ear has to worry about.
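One wrinkle worth adding: a resonator does start moving immediately, but a sharply tuned one takes many cycles to ring up to full amplitude, so frequency selectivity trades off against response time. A rough numerical sketch (the pole radius r is an arbitrary stand-in for tuning sharpness, not a physiological parameter):

```python
import math

def risetime_samples(f0, fs, r, n=8000, level=0.9):
    """Drive a two-pole resonator at its resonant frequency and return the
    sample index at which the output first reaches `level` times its final
    peak. Narrower filters (r closer to 1) take longer to ring up."""
    w0 = 2 * math.pi * f0 / fs
    a1, a2 = 2 * r * math.cos(w0), -r * r
    y1 = y2 = 0.0
    out = []
    for i in range(n):
        y = math.sin(w0 * i) + a1 * y1 + a2 * y2
        out.append(abs(y))
        y2, y1 = y1, y
    peak = max(out)
    return next(i for i, v in enumerate(out) if v >= level * peak)

fs = 16000
narrow = risetime_samples(1000, fs, r=0.999)  # sharply tuned channel
broad = risetime_samples(1000, fs, r=0.98)    # broadly tuned channel
print(narrow / fs, broad / fs)  # the narrow filter needs much more time
```

So even a purely mechanical detector has a built-in latency that grows with how finely it discriminates frequency, which is the same time-bandwidth tradeoff Mark R raises below in mathematical terms.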
 

KIAman

Diamond Member
Mar 7, 2001
3,342
23
81
Human hearing is not optimized for single frequency detection. The detection time is roughly the sum of several stages: the sound reaching the eardrum, the middle ear amplifying the signal through hydraulic multiplication, the inner ear picking out the frequency via the specific hairs that vibrate, the auditory nerve carrying those signals to the brain, and finally the brain interpreting the frequency. Not only that, the brain doesn't "know" the frequency. People with perfect pitch rely on a memory of the frequency with an associated label, but even then the frequency itself is unknown - only the memory of it.

This is a very slow process compared to a digital system; I'd guess it is on the order of several ms. Now, when it comes to loud sounds like gunshots causing a fast response - hands over the ears, crouching - that is the startle reflex, which is handled in the brainstem rather than by conscious processing, precisely to save time.

What human hearing excels at is signal processing, filtering and compression, which trump any digital system we are capable of making.
 

Mark R

Diamond Member
Oct 9, 1999
8,513
14
81
In the context of a pure tone, the frequency is determined mechanically by the ear. The basilar membrane essentially plays the role of the diaphragm in a microphone, but it has a tapered shape, and variable stiffness, so that the resonant frequency changes according to position. As a result, the membrane itself performs a mechanical Fourier transform of the acoustic signal that enters the cochlea.

Positioned on the basilar membrane, are 'hair cells' that detect vibration. Because of the mechanical nature of the membrane, a specific frequency sound will cause significant vibration only in one specific region, causing only those cells to be activated. The auditory nerve provides a massively parallel connection from the thousands of hair cells to the brain - there is a limited amount of signal processing before the signal from the cells reaches the auditory nerve - e.g. combining signals for a few adjacent cells onto a single nerve fiber.

So, in the case of a pure tone, the pitch has already been determined before the signal reaches the brain, because of the specific tuning of individual nerve fibers to specific frequencies. All the brain has to do is see which fibers are active - in much the same way as you can tell whether something has touched you on your thumb or little finger, because the different regions are connected to the brain via different nerve fibers. There is a little more subtlety here, in that the tuning isn't perfect, and the cells/membrane have a finite, albeit quite narrow bandwidth, so a pure tone will cause activation of a bunch of adjacent nerve fibers - so the brain would determine the pitch as being at the center of the bundle.
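The "center of the bundle" readout can be sketched as a toy population decoder: overlapping tuning curves respond to a tone, and the pitch estimate is the response-weighted average of the channel centers. The tuning-curve shape, width, and channel spacing below are illustrative assumptions, not measured physiology:

```python
import math

# Hypothetical channel best frequencies (Hz) and a Gaussian tuning curve.
centers = [400, 800, 1200, 1600, 2000]
bandwidth = 400  # assumed tuning width (Hz), chosen so curves overlap

def channel_response(f_stim, f_center):
    return math.exp(-((f_stim - f_center) / bandwidth) ** 2)

def estimate_frequency(f_stim):
    """Decode pitch as the response-weighted centroid of channel centers."""
    responses = [channel_response(f_stim, c) for c in centers]
    total = sum(responses)
    return sum(r * c for r, c in zip(responses, centers)) / total

print(round(estimate_frequency(1000)))  # close to 1000, interpolated
                                        # between the 800 and 1200 Hz channels
```

Note there is no channel centered at 1000 Hz at all; the estimate comes entirely from the relative activation of neighboring channels, which is also roughly Paperdoc's "relative amplitudes of similar sensors" speculation made concrete.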

There is a more fundamental problem when talking about determination of pitch, or frequency, and latency; uncertainty. The precision to which a frequency can be measured is determined by the duration of the signal (or of the observation). If you suddenly switch on a sine wave, then when the sine wave begins, there will be a discontinuity in the amplitude waveform which will contain all frequencies. The more gradually the signal is switched on (or off), the smaller the discontinuity, and the narrower the band of frequencies that it contains. This is a fundamental mathematical problem, not a biological one. Measuring the latency of pitch determination is therefore very difficult, as you must gradually switch on the sound, otherwise, you spray a whole bunch of frequencies into the ear, and you cannot be sure that what you are measuring is genuine. It's been a very long time since I did my research into pitch processing in the ear, but I seem to recall that we used 50 ms attack/decay ramps precisely to avoid this 'spectral splatter' problem.
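The spectral-splatter point is easy to demonstrate numerically: gate a tone on and off abruptly inside an analysis window and a noticeable fraction of its energy lands away from the tone frequency, while raised-cosine attack/decay ramps keep it concentrated. A small sketch using a naive DFT (slow but dependency-free; the window sizes and ramp lengths are arbitrary demo values):

```python
import math

def spectrum(x):
    """Magnitude spectrum via a naive DFT (fine for a short demo signal)."""
    N = len(x)
    return [abs(sum(x[n] * complex(math.cos(2 * math.pi * k * n / N),
                                   -math.sin(2 * math.pi * k * n / N))
                    for n in range(N)))
            for k in range(N // 2)]

N, fs, f0 = 512, 8000.0, 1000.0
k0 = int(f0 * N / fs)  # tone bin = 1000 * 512 / 8000 = 64

def gated_tone(ramp):
    """A 1 kHz tone occupying the middle half of the window, switched on
    and off either abruptly (ramp=0) or with raised-cosine ramps."""
    x = [0.0] * N
    start, stop = N // 4, 3 * N // 4
    for n in range(start, stop):
        env = 1.0
        if ramp:
            if n < start + ramp:
                env = 0.5 - 0.5 * math.cos(math.pi * (n - start) / ramp)
            elif n >= stop - ramp:
                env = 0.5 - 0.5 * math.cos(math.pi * (stop - 1 - n) / ramp)
        x[n] = env * math.sin(2 * math.pi * f0 * n / fs)
    return x

def out_of_band_fraction(x, width=8):
    """Fraction of spectral energy further than `width` bins from the tone."""
    mags = spectrum(x)
    total = sum(m * m for m in mags)
    near = sum(m * m for k, m in enumerate(mags) if abs(k - k0) <= width)
    return (total - near) / total

abrupt = out_of_band_fraction(gated_tone(ramp=0))
ramped = out_of_band_fraction(gated_tone(ramp=64))
print(abrupt, ramped)  # the abrupt onset spreads far more energy off-frequency
```

This is the same reason the experiments described above have to ramp stimuli: with an abrupt onset you cannot tell whether the ear responded to the tone or to the onset transient.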

What is much more interesting is how the brain determines pitch for a sound without a defined frequency.... e.g. A signal containing frequencies 1000 Hz, 1500 Hz and 2000 Hz is heard with a pitch of 500 Hz, even though there is no 500 Hz component. That's a much more complicated problem, and I'm not sure that it is one that has been fully solved.
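The missing-fundamental example can be reproduced with a simple periodicity (autocorrelation) analysis: the 1000/1500/2000 Hz complex repeats every 2 ms even though it contains no 500 Hz component. This is one candidate explanation for residue pitch, not necessarily what the auditory system actually does:

```python
import math

fs = 16000
t = [n / fs for n in range(800)]  # 50 ms of signal
# Complex tone with components at 1000, 1500, 2000 Hz -- no 500 Hz energy.
x = [sum(math.sin(2 * math.pi * f * ti) for f in (1000.0, 1500.0, 2000.0))
     for ti in t]

def autocorr(x, lag):
    return sum(a * b for a, b in zip(x, x[lag:]))

# Search candidate periods between 0.4 ms and 2.5 ms (2500 Hz down to 400 Hz).
lags = range(int(fs / 2500), int(fs / 400) + 1)
best = max(lags, key=lambda lag: autocorr(x, lag))
print(fs / best)  # ~ 500 Hz: the waveform's repetition rate is the
                  # missing fundamental
```

The strongest self-similarity lands at the 2 ms lag, matching the 500 Hz pitch listeners report.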
 

Biftheunderstudy

Senior member
Aug 15, 2006
375
1
81
That problem most likely has something to do with heterodyne beats: when two frequencies are overlaid, there is a beat at a rate equal to the difference between the frequencies. Interestingly, this is the technique used to calibrate lasers against an ultrastable absolute-frequency reference laser. A more mundane example is piano tuning.
That being said, I would still pin it on the brain processing things in a strange way.
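The beat arithmetic follows from the sum-to-product identity sin A + sin B = 2 sin((A+B)/2) cos((A-B)/2): two close tones are exactly a carrier at the average frequency modulated by a slow envelope. A quick numerical check:

```python
import math

f1, f2, fs = 440.0, 444.0, 8000.0   # two close tones; beat rate = |f1 - f2| = 4 Hz
for n in range(0, 8000, 37):        # spot-check one second of samples
    t = n / fs
    direct = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    # Sum-to-product form: a slow 2 Hz cosine envelope modulating a 442 Hz
    # carrier. Loudness peaks twice per envelope cycle, so you hear 4 beats/s.
    envelope = 2 * math.cos(math.pi * (f1 - f2) * t)
    carrier = math.sin(math.pi * (f1 + f2) * t)
    assert abs(direct - envelope * carrier) < 1e-9
print("identity holds: beats at |f1 - f2| =", abs(f1 - f2), "Hz")
```

Note the missing-fundamental percept above is not quite the same thing, though: there is no physical 500 Hz envelope component in the ear's output either, which is why it points at central processing.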
 

alpha88

Senior member
Dec 29, 2000
877
0
76
Also interesting: the brain sends commands back to the ear to sensitize or dampen specific frequencies - not through muscles moving the hairs, but via efferent nerve fibers that adjust the gain of the outer hair cells. This lets the brain filter sound at the level of the sensor.
 

theMan

Diamond Member
Mar 17, 2005
4,386
0
0
This is the way I learned it. At low frequencies, the hair cells fire action potentials in step with the waveform itself, so the spike timing directly carries the frequency (phase locking). At middle frequencies, no single neuron can fire fast enough, but different neurons fire on different cycles and their combined volleys reconstruct the higher rate. At high frequencies, only the specific high-frequency cells are able to vibrate, and their location alone tells the brain the frequency.
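The low-frequency case (spike timing following the waveform, i.e. phase locking) can be sketched with a toy "fire a spike at each upward zero crossing" rule; the firing rule is an illustration, not a model of real neurons:

```python
import math

fs, f0 = 48000, 200.0   # low-frequency tone; period = 5 ms
x = [math.sin(2 * math.pi * f0 * n / fs) for n in range(4800)]  # 100 ms

# Toy phase locking: emit a "spike" at each positive-going zero crossing.
spikes = [n for n in range(1, len(x)) if x[n - 1] < 0 <= x[n]]
intervals = [(b - a) / fs for a, b in zip(spikes, spikes[1:])]
mean_interval = sum(intervals) / len(intervals)
print(1 / mean_interval)  # ~ 200 Hz recovered from spike timing alone
```

The inter-spike interval equals the stimulus period, so the downstream reader never needs a place code at these frequencies; above roughly 1 kHz real neurons can no longer keep up cycle-for-cycle, which is where the volley and place mechanisms take over.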
 