Posted by izhak 3 days ago
I know of vocoders in military hardware that encode voices to resemble something simpler for compression (a low-tone male voice): smaller packets that take less bandwidth. The ear must have co-evolved with our vocal cords and mouth to occupy the available frequencies for transmission and reception, for optimal communication.
The parallels with waveforms don't end there. Waveforms are also optimized for different terrains (urban, jungle).
Are languages organic waveforms optimized to ethnicity and terrain?
Cool article indeed.
The poor man's conversion of finite to equivalent infinite time is to assume an infinite signal in which the initial finite one is repeated infinitely into the past and the future.
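A quick sketch of that periodic-extension idea (names and signal here are illustrative, not from any particular source): the DFT implicitly treats a finite signal as one period of an infinitely repeating one, so evaluating the inverse-DFT synthesis sum outside the original window just reproduces the signal periodically.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)   # a finite signal, N = 8 samples
X = np.fft.fft(x)
N = len(x)

def synth(n):
    # inverse-DFT synthesis sum evaluated at an arbitrary integer n,
    # including indices outside the original window [0, N)
    k = np.arange(N)
    return np.real(np.sum(X * np.exp(2j * np.pi * k * n / N)) / N)

# n = 10 lies outside [0, 8); synthesis gives x[10 mod 8] = x[2],
# i.e. the "repeated into the future" copy of the signal
assert np.isclose(synth(10), x[2])
assert np.isclose(synth(-5), x[3])   # and repeated into the past
```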
I found this quite interesting, as I have noticed that I can detect voices in high-noise environments, e.g. HF radio, where noise is almost constant if you don't use a digital mode.
Neuroanatomy, Auditory Pathway
https://www.ncbi.nlm.nih.gov/books/NBK532311/
Cochlear nerve and central auditory pathways
https://www.britannica.com/science/ear/Cochlear-nerve-and-ce...
Molecular Aspects of the Development and Function of Auditory Neurons
Neural signaling by action potential is also a representation of intensity by frequency.
The cochlea is where you can begin to talk about a bio-FT phenomenon.
However, the format "changes" along the signal path whenever a synapse occurs.
Are you perhaps experiencing some high frequency hearing loss?
In the middle range (say, A2 through A6) neither of these issues apply, so it is - by far - the easiest to tune.
Which is why we can hear individual instruments in a mix.
And this ability to separate sources can be trained. Just as pitch perception can be trained, with varying results from increased acuity up to full perfect pitch.
A component near the bottom of all that is range-based perception of consonance and dissonance, based on the relationships between beat frequencies and fundamentals.
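As a small illustration of the beat-frequency piece of that (frequencies chosen arbitrarily): two tones close in pitch sum to a carrier at the average frequency whose amplitude envelope oscillates at the difference frequency, which is what you hear as beating.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
f1, f2 = 440.0, 444.0   # two close tones -> 4 Hz beating

mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# sum-to-product identity: sin(a) + sin(b) = 2 cos((a-b)/2) sin((a+b)/2),
# so the envelope oscillates at the difference frequency |f1 - f2|
envelope = 2 * np.abs(np.cos(2 * np.pi * (f1 - f2) / 2 * t))
carrier = np.abs(np.sin(2 * np.pi * (f1 + f2) / 2 * t))
assert np.allclose(np.abs(mix), envelope * carrier, atol=1e-9)
```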
Instead of a vanilla Fourier transform, frequencies are divided into multiple critical bands (q.v.) with different properties and effects.
What's interesting is that the critical bands seem to be dynamic, so they can be tuned to some extent depending on what's being heard.
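A rough sketch of the (static) critical-band idea, for comparison with a vanilla FFT: group the bins into Bark-scale bands, which are narrow at low frequencies and wide at high ones. This uses Traunmüller's Hz-to-Bark approximation; the one-Bark-per-band layout and band count are simplifications, and real critical bands are (as noted above) dynamic.

```python
import numpy as np

def hz_to_bark(f):
    # Traunmüller's approximation of the Bark scale
    return 26.81 * f / (1960.0 + f) - 0.53

def bark_band_energies(signal, fs, n_bands=24):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    bark = hz_to_bark(freqs)
    energies = np.zeros(n_bands)
    for b in range(n_bands):
        # lump all FFT bins within one Bark into a single band
        mask = (bark >= b) & (bark < b + 1)
        energies[b] = spectrum[mask].sum()
    return energies

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440.0 * t)   # A4, around 4.4 Bark
e = bark_band_energies(tone, fs)
assert e.argmax() == int(hz_to_bark(440.0))   # energy lands in band 4
```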
Most audio theory has a vanilla EE take on all of this, with concepts like SNR, dynamic range, and frequency resolution.
But the experience of audio is hugely more complex. The brain-ear system is an intelligent system which actively classifies, models, and predicts sounds, speech, and music as they're being heard, at various perceptual levels, all in real time.
That's a side note, the rest of what you wrote was very informative!