Why do adults lose hearing at high frequencies?

Children and teens can hear high pitched sounds that adults can't hear anymore. Why do adults lose high-frequency hearing?


Hearing declines with age and, typically, high frequencies are affected first. Age-related hearing loss, or presbyacusis (Kujawa & Liberman, 2006), is progressive: it starts at the highest frequencies, and as a person ages, progressively lower frequencies become affected. The reduction in hearing sensitivity is caused by the loss of hair cells, the sensory cells in the inner ear that convert acoustic vibrations into electrical signals. These signals are transmitted to the brain via the auditory nerve.

Because of the tonotopic organization of the inner ear (the cochlea), one can conclude that presbyacusis develops from the basal to the apical region. The following schematic of the cochlea from Encyclopædia Britannica, Inc. nicely shows the cochlear tonotopy (i.e., high frequencies in the base, low frequencies in the apex):
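
The tonotopic map itself can be made concrete. Here is a minimal sketch using the Greenwood place-frequency function, a standard model that is my addition rather than part of the original answer, showing how the characteristic frequency rises from apex to base:

```python
def greenwood_frequency(x: float) -> float:
    """Greenwood (1990) place-frequency map for the human cochlea.

    x is the normalized distance from the apex (0.0) to the base (1.0);
    returns the characteristic frequency in Hz at that position.
    Constants are the commonly cited human fit: A = 165.4, a = 2.1, k = 0.88.
    """
    return 165.4 * (10 ** (2.1 * x) - 0.88)

print(greenwood_frequency(0.0))  # ~20 Hz    at the apex (low frequencies)
print(greenwood_frequency(0.5))  # ~1.7 kHz  midway
print(greenwood_frequency(1.0))  # ~20.7 kHz at the base (high frequencies)
```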

As to your question of why the basal region is more sensitive to aging: that is a very good and interesting question that has not been answered, as far as I am aware. What is known, however, is that the basal regions of the cochlea are also more sensitive to the damaging effects of ototoxic antibiotics such as kanamycin (Stronks et al., 2011) and other aminoglycosides, and of some chemotherapeutics such as cisplatin (Schacht et al., 2012). In low doses these compounds specifically target the basal regions over the apical ones (at high doses the whole cochlea is affected).

Perhaps the specific anatomy and circulation cause the basal region to be affected more by these chemicals. For example, blood flow may be enhanced basally, whereby environmental and food-based chemicals may enter the basal region more readily (these are educated guesses; I can dig up references of authors posing these possibilities if necessary). This scenario may also contribute to presbyacusis, since it is quite imaginable that years of exposure to various substances causes accumulating damage specifically to the basal parts of the cochlea.

In addition, since high frequencies are coded in the basal region, and given that sound enters the cochlea at the base (see picture below), all sounds, including damaging loud low-frequency stimuli, first travel through the base on their way to their point of processing.

Because sound is analyzed as a travelling wave, the wave stops at the point where the sound's frequency matches the cochlear characteristic frequency; there it quickly dies out due to cochlear filtering (Mather, 2006); see the first picture above. In effect, the more apical parts of the cochlea are exposed to (damaging) sound stimuli less often, while the base of the cochlea is exposed to all sounds entering the inner ear.

Hence, chemical exposure and acoustic wear and tear may be more prominent in the cochlear base, which can explain why presbyacusis starts at the very high frequencies and progresses to low frequencies (upward in the cochlea) from there.

References
- Encyclopædia Britannica: The analysis of sound frequencies by the basilar membrane
- Kujawa & Liberman, J Neurosci 2006; 26:2115-23
- Mather, Foundations of Perception, 2006
- Schacht et al., Anat Rec (Hoboken) 2012; 295:1837-50
- Stronks et al., Hear Res 2011; 272:95-107


Sound travels in waves and is measured in frequency and amplitude.

Amplitude is the measurement of how forceful a wave is. It is measured in decibels (dB): the louder the sound, the higher the decibel number. Normal conversation measures about 65 dB.

  • Exposure to sound over 85 dB (busy Thousand Oaks traffic) can cause damage within 8 hours.
  • Exposure to sound over 100 dB (a motorcycle) can cause damage within 15 minutes.
  • Exposure to sound over 120 dB (a chainsaw) can cause damage almost immediately (the sketch below shows why).
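
These damage times follow a simple doubling rule. A minimal sketch, assuming the NIOSH occupational criterion of 85 dB for 8 hours with a 3 dB exchange rate (an assumption on my part; the article does not name its source), reproduces the figures above:

```python
def safe_exposure_hours(level_db: float,
                        criterion_db: float = 85.0,
                        criterion_hours: float = 8.0,
                        exchange_rate_db: float = 3.0) -> float:
    """Permissible exposure time at a given sound level.

    Every +3 dB doubles the sound energy, so the allowed time is halved
    (NIOSH criterion: 85 dB for 8 hours, 3 dB exchange rate).
    """
    return criterion_hours / 2 ** ((level_db - criterion_db) / exchange_rate_db)

print(safe_exposure_hours(85))          # 8.0 hours    (busy traffic)
print(safe_exposure_hours(100) * 60)    # 15.0 minutes (a motorcycle)
print(safe_exposure_hours(120) * 3600)  # ~9 seconds   (a chainsaw)
```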

Frequency is the measurement of the number of sound vibrations in one second. Measured in hertz (Hz), a healthy ear can hear a wide range of frequencies, from very low (20 Hz) to very high (20,000 Hz).


How sound is measured

The loudness of sound is primarily measured in units called decibels (dB). For example, here are decibel levels for some common sounds:

  • Breathing: 10 dB
  • Normal conversation: 40-60 dB
  • Lawnmower: 90 dB
  • Rock concert: 120 dB
  • Gunshot: 140 dB

Prolonged exposure to sounds louder than 85 dB can damage your hearing; sound at 120 dB is uncomfortable, and 140 dB is the threshold of pain. Hearing damage caused by loud sound is known as noise-induced hearing loss.
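
Because the decibel scale is logarithmic, the levels listed above span an enormous range of physical intensities. A small sketch (my own illustration, not from the article) makes the ratios concrete:

```python
def intensity_ratio(level_a_db: float, level_b_db: float) -> float:
    """Ratio of physical sound intensities between two levels in dB.

    Every +10 dB corresponds to a tenfold increase in intensity.
    """
    return 10 ** ((level_a_db - level_b_db) / 10)

# A 140 dB gunshot carries 10**13 times the intensity of 10 dB breathing:
print(f"{intensity_ratio(140, 10):.0e}")  # 1e+13
```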

The other way sound is measured is frequency, or pitch, measured in hertz (Hz). When hearing ability is tested, the range from 250 Hz to 8,000 Hz is measured because it encompasses the speech frequencies, the most important range for communication.

When measured together, decibels and hertz describe the degree of hearing loss you have in each ear.


Diagnosing high-frequency hearing loss

Diagnosis of high-frequency hearing loss is made after a hearing test in a sound-treated booth at a hearing clinic. A hearing instrument specialist or audiologist usually will conduct the test. The results are plotted on an audiogram. If a person has high-frequency hearing loss, the audiogram will show a slope to the right, indicating a person has trouble hearing frequencies between 2,000 and 8,000 Hz.

A person may have mild, moderate, moderately severe, severe or profound hearing loss. (See degrees of hearing loss to learn how hearing loss severity is measured.) In the example below, the person has moderately severe high-frequency hearing loss that is slightly worse in the right ear.


How Hearing Declines With Age

When hearing decline begins depends partly on genetic factors and partly on long-term noise exposure.

Just as it becomes more difficult to see clearly at a close distance or to have perfect recall of names as you get older, certain hearing changes become noticeable as the decades pass.

“If you live long enough, you’re going to suffer some hearing loss — it’s part of the normal aging process,” says Sean McMenomey, M.D., a professor of otolaryngology, head and neck surgery and neurological surgery at the New York University Langone Medical Center. One study, published in the March 1, 2017, issue of JAMA Otolaryngology-Head & Neck Surgery, found that while hearing loss is declining slightly among adults between the ages of 20 and 69, age is the biggest risk factor for hearing impairment — 39 percent of adults ages 60 to 69 have trouble hearing speech clearly.

Age-related hearing loss, called presbycusis, occurs gradually, usually in both ears. When the decline begins depends partly on genetic factors and partly on long-term noise exposure, McMenomey says. “Noise exposure that you had as a kid is the gift that keeps on giving — it cannot be reversed.”

But a decline in hearing also can develop from age-related changes in the inner ear or changes in the nerve pathways from the ear to the brain, according to the National Institute on Deafness and Other Communication Disorders. Certain medical conditions such as high blood pressure, thyroid problems and diabetes can exacerbate the decline, along with medications such as chemotherapy drugs and some antibiotics.

Can you hear this?

When a decline in hearing does happen, the first thing to go is the ability to clearly hear high-pitched sounds such as women’s and children’s voices, especially in situations where there’s considerable background noise. You also may have trouble picking out consonant-heavy words: Consonants such as s, t, k, p and f are softer and higher pitched — as is the th sound — so they can be more difficult to distinguish than vowels, says Todd Ricketts, a professor of hearing and speech sciences at the Vanderbilt University Medical Center in Nashville, Tenn. As a result, it may sound like people are mumbling. It also may be difficult to hear a high-pitched doorbell or the clothes dryer buzzing, he adds.

The inability to hear high-frequency sounds will then worsen as it “works its way into the lower frequency,” Ricketts says. Then you may have trouble understanding what people are saying even in quieter environments because you’ve lost clarity in your hearing. “In more extreme cases, tonal quality gets worse and music may sound flatter,” Ricketts adds. “Hearing loss is a progressive disorder — the prevalence and degree goes up over time, as people get older.”

Testing

The U.S. Preventive Services Task Force doesn’t recommend routine hearing screening for adults ages 50 and older because, it says, “the current evidence is insufficient to assess the balance of benefits and harms of screening for hearing loss.” But the American Speech-Language-Hearing Association has a different opinion, calling for adults to be screened at least every decade through age 50, then every three years after that. If you haven’t kept up with that protocol, it’s best to get a baseline hearing test in your 50s, especially if you have ear-related symptoms such as tinnitus (ringing in the ears) or vertigo, McMenomey says. If it turns out that you have hearing loss, it’s often recommended that you have annual screenings.

“People aren’t particularly good at self-diagnosing hearing loss — that’s why screening tests are important,” Ricketts says. People whose hearing tests show that they are in the normal range commonly think they’re having trouble, he says, and those whose hearing has declined often think it is normal. A study involving 19,642 Korean adults ages 20 and older, published in the August 8, 2017, issue of PLoS One, found that older adults and those with tinnitus are among those who have the highest rates of overestimating or underestimating their hearing abilities.

If you notice signs of decline or feel you are struggling to hear clearly, a hearing test can help identify whether you have a problem. And Ricketts notes that the technology behind hearing assistance has come a long way. “There are interventions you can try, from hearing aids to hearing assisted technologies that are targeted to specific situations such as using the phone or TV.”


The Benefits of Nonlinear Frequency Compression for a Wide Range of Hearing Losses

Nonlinear frequency compression is a proven technique for improving the ability of people with hearing impairment to detect and recognize high-frequency sounds. As difficulty perceiving such sounds is one of the most common characteristics of hearing loss, the practical success of frequency compression is a significant advance in the field of hearing instruments. SoundRecover, a Phonak proprietary algorithm implementing nonlinear frequency compression, is now available in a wide range of Phonak hearing instruments. Extensive trials have demonstrated the benefits of using SoundRecover for many adults and children with severe to profound hearing impairment. Similar benefits may also be obtained by users of SoundRecover who have less severe losses.

A wide range of frequency-lowering schemes have been developed and tested experimentally. At present, several kinds of hearing aids from a number of manufacturers provide various types of frequency lowering (McDermott, Dorkos, Dean, & Ching, 1999; Simpson, Hersbach, & McDermott, 2005; Kuk et al., 2006; Glista et al., 2009; Kuk, Keenan, Korhonen, & Lau, 2009). However, these schemes are not all the same. The SoundRecover scheme introduced in Phonak Naida Ultra Power hearing instruments is not only technically distinct, but also one of the few schemes for which perceptual benefits have been evident in well-controlled, independent trials.

The rationale for developing these schemes is based on the observation that most people with hearing impairment have poorer perception of sounds at high frequencies. In many cases, high-frequency hearing sensitivity is so deficient that conventional amplification cannot make those sounds comfortably audible. Even when audibility can be achieved, it is still often difficult for people with moderately-severe or worse hearing loss to discriminate high-frequency sounds (Hogan & Turner, 1998). This is an important problem, because many complex sounds, including several phonemes that are often used in speech, contain significant high-frequency components. Furthermore, children learning spoken language experience particular difficulty when attempting to produce phonemes that they cannot hear adequately.

For some researchers in hearing and speech science, frequency lowering has long been one potential solution to this problem. Although numerous schemes have been devised and evaluated experimentally, the outcomes have been mixed. With some schemes, the recognition of certain speech phonemes improved, but at the expense of poorer identification of other phonemes. A few existing schemes have been shown to improve high-frequency sound perception, but the quality of the processed signal is marred by artifacts including clicks, other noises, and unintended changes in pitch. As a result, frequency-lowering schemes were generally not widely accepted until recently, when the advent of sophisticated digital signal processing capabilities in hearing aids enabled innovative schemes that avoid many of the technical problems encountered previously.

One scheme, introduced several years ago by Widex, is known as the Audibility Extender (Kuk, Keenan, Korhonen, & Lau, 2009). It implements linear frequency transposition that functions in the following way. Initially, two frequency regions are defined, designated respectively the source octave and the target octave. Sound signals within the source octave are analyzed continuously in real time, a dominant peak is selected, and the frequency of that peak is determined. All signal components in the source octave are then shifted down in frequency by a constant amount. The size of the shift is such that the selected peak is lowered in frequency by typically one octave (although different transposition parameters can be chosen during fitting of the aid to each user). Because the size of the shift applied to all signals present in the source octave is defined as a fixed number of Hertz, other components near the peak may be lowered by an amount that is not exactly one octave. For example, if the source octave contained a dominant peak at 4 kHz, that peak would be lowered by one octave to 2 kHz. As the size of this shift is 2000 Hz, all signal components in the source octave will be lowered simultaneously by the same amount. This means, for instance, that an input signal component with a frequency of 3.5 kHz will be lowered to 1.5 kHz, which is slightly more than a one-octave shift. It can be inferred that the application of this linear frequency-shifting process to each frequency in the source octave may result in a target bandwidth that is wider than one octave. To compensate for this effect, the output signals from the lowering process are filtered to limit the bandwidth to one octave. The transposed signals in the target octave are mixed with any signals already present in that frequency region. Amplification and other signal-processing functions are applied subsequently as usual. Once enabled during fitting, the transposition processing is active all the time.
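
A minimal sketch of the frequency mapping just described (my own illustration; the real Audibility Extender includes adaptive peak detection and output filtering that are omitted here):

```python
def linear_transpose(freq_hz: float,
                     peak_hz: float,
                     source_low_hz: float,
                     source_high_hz: float) -> float:
    """Linear frequency transposition of one signal component.

    Every component inside the source octave is shifted down by a fixed
    number of hertz, chosen so the dominant peak drops exactly one octave;
    other components therefore move by slightly more or less than an octave.
    """
    shift_hz = peak_hz / 2  # places the peak one octave lower
    if source_low_hz <= freq_hz <= source_high_hz:
        return freq_hz - shift_hz
    return freq_hz  # components outside the source octave are untouched

# Example from the text: source octave 2-4 kHz, dominant peak at 4 kHz.
print(linear_transpose(4000, 4000, 2000, 4000))  # 2000.0 Hz (exactly one octave)
print(linear_transpose(3500, 4000, 2000, 4000))  # 1500.0 Hz (slightly more)
```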

In contrast, the nonlinear frequency-compression scheme introduced by Phonak, known as SoundRecover, does not involve any mixing of frequency-shifted signals with other signals already present at lower frequencies (Simpson, Hersbach, & McDermott, 2005; Glista et al., 2009). Furthermore, the processing does not depend on detecting specific features of incoming sounds, such as the dominant peak frequency in the source octave of the linear transposition scheme. Instead, all frequencies above a so-called cut-off frequency are lowered by a progressively increasing amount. The growth in the amount of shifting across frequency is determined by a second parameter, the frequency-compression ratio. For example, if the cut-off parameter is set to 2 kHz, and the ratio is 2:1, each octave range of input frequencies above 2 kHz will be compressed into a half-octave range. Thus an input frequency range of 2-4 kHz, which is one octave wide, will become 2-2.8 kHz, or half an octave wide. All frequencies below the cut-off are unchanged by the processing. Although there is no overlap of shifted frequencies with any lower frequencies that may be present at the same time, frequency compression reduces the overall bandwidth of sounds at the output of the hearing instrument in comparison with the bandwidth at the input. When SoundRecover is enabled during fitting of the hearing instrument, values for the cut-off frequency and compression ratio parameters are preselected automatically based on the user's audiogram. As with linear frequency transposition, the SoundRecover non-linear frequency compression scheme operates all the time after initial activation.
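
The mapping described here can be written compactly as compression on a logarithmic (octave) frequency axis, which is consistent with the 2-4 kHz becoming 2-2.8 kHz example. The sketch below is an illustration under that assumption, not Phonak's proprietary implementation:

```python
import math

def compress_frequency(freq_hz: float,
                       cutoff_hz: float = 2000.0,
                       ratio: float = 2.0) -> float:
    """Nonlinear frequency compression on a log-frequency axis.

    Frequencies at or below the cut-off pass through unchanged; each octave
    above the cut-off is squeezed into 1/ratio of an octave.
    """
    if freq_hz <= cutoff_hz:
        return freq_hz
    octaves_above = math.log2(freq_hz / cutoff_hz)
    return cutoff_hz * 2 ** (octaves_above / ratio)

# Example from the text: cut-off 2 kHz, compression ratio 2:1.
print(compress_frequency(1000))  # 1000.0 Hz: below the cut-off, unchanged
print(compress_frequency(4000))  # ~2828 Hz: the 2-4 kHz octave ends near 2.8 kHz
print(compress_frequency(8000))  # ~4000 Hz: two octaves above become one
```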

Although the benefits of some modern frequency-lowering schemes have been demonstrated experimentally, a belief persists that such schemes are appropriate and effective only for people with profound hearing loss in the high frequencies. However, recent research and technological developments have indicated that frequency lowering can be beneficial even for people with relatively good hearing sensitivity across most of the normally audible frequency range (Boretzki & Kegel, 2009). To what extent a perceptual benefit can be obtained depends both on the technical function of the frequency-lowering scheme, and on the way the variable parameters of the scheme are fitted to the individual hearing instrument user. These factors are discussed briefly here.

The importance of perceiving high frequencies

Speech and environmental sounds often contain, or are dominated by, high-frequency components. Being able to detect and identify such sounds is at least as important for people with hearing impairment as it is for people with normal hearing.

Figure 1 shows a spectrogram of forest sounds, including birds singing. In the spectrogram, time increases from left to right, and frequency from bottom to top. Lighter colors represent sound signals of higher intensity. In the birdsongs, the most intense acoustic components are at frequencies above approximately 2 kHz. Most of the sounds are concentrated in the range from 2 kHz to 5 kHz, although there are some components at even higher frequencies. Such sounds can be difficult for people with high-frequency hearing impairment to detect and discriminate.

Figure 1. A spectrogram of forest sounds including birds singing. Lighter colors represent sound signals of higher intensity.

Figure 2 is a spectrogram illustrating two speech sounds that are dominated by intense high-frequency components. The sequence of phonemes, which was produced by a female speaker, is /a-s-a-sh-a/. Note that the vertical frequency axis in this figure extends to 16 kHz. Although vowels have the most energy below approximately 4 kHz, the fricative consonants /s/ and /sh/ contain components that cover a wide range of generally higher frequencies. The /s/ sound has a broad peak from about 7 kHz to over 12 kHz, whereas the /sh/ sound covers a somewhat lower frequency range (approximately 3 kHz to 8 kHz).

Figure 2. A spectrogram of the nonsense utterance /a-s-a-sh-a/. The two consonants /s/ and /sh/ are dominated by intense high-frequency components.

Stelmachowicz, Pittman, Hoover, and Lewis (2002) showed that the phonemes /s/ and /z/ have the most spectral energy above 3 kHz for both male and female speakers. In fact, they found that the broad energy peak for the female speaker had a lower edge at approximately 5 kHz, and an upper edge above the highest frequency available in their analysis (8 kHz). These measurements illustrate that frequency regions significantly above 5 kHz often contain high levels of acoustic signals, especially when the speaker is female. Other studies have reported similar observations for child speakers. This information about the spectral characteristics of speech clearly reinforces the need for hearing instruments to deliver high frequencies to people with hearing impairment in order to maximize their understanding of speech.

How do conventional hearing aids process high frequencies?

The importance of high-frequency audibility for adequate speech understanding has been understood since the early days of amplification. In general, both the design and the fitting of hearing aids result in the provision of more gain at high frequencies. However, there are several potential practical limitations to the benefit available from high-frequency amplification, including: feedback in the hearing aid; hearing sensitivity too poor for amplification to be usable; discomfort sometimes experienced from amplified high-frequency sounds; and the presence of dead regions in the cochlea.

The last point refers to a condition in which large numbers of hair cells in certain regions of the cochlea are absent or non-functional (Moore, 2001). Dead regions are more common in the basal parts of the cochlea, where high frequencies are detected and converted into neural activity, than in apical cochlear locations. This condition is often associated with severe or profound loss of hearing sensitivity, but may occur at frequencies where hearing thresholds would suggest that hearing aid amplification would be appropriate. Individuals with extensive high-frequency dead regions may have abnormal perception of sounds containing high frequencies, even when those sounds are made audible. This results in a poorer ability to identify such sounds than the audiogram would suggest.

Application of frequency compression to less-severe hearing loss

If the deterioration of hearing sensitivity in the high frequencies is not extreme, conventional amplification should be able to make sounds audible. However, although audibility is a necessary condition for sound recognition, it is not a sufficient condition. Many people with sensorineural hearing impairment cannot easily discriminate high-frequency sounds, even when they are fully audible. Therefore, the fundamental principle of frequency compression, that lowering significant frequency components will make them easier to perceive accurately, is applicable to a broad range of audiogram configurations, not just those showing minimal sensitivity at high frequencies. Furthermore, the preservation of sound quality achieved by Phonak's SoundRecover algorithm suggests that many users of hearing instruments who have relatively good high-frequency hearing would readily accept and benefit from frequency compression (Glista, et al., 2009). In fact, studies are confirming that such hearing-instrument users often find SoundRecover to be helpful, and that they generally prefer frequency compression to be enabled rather than disabled in blind trials (Nyffeler, 2008). The main challenge for successful use of SoundRecover is to ensure that the fitting is optimized for each individual.

Fitting and adjustment of SoundRecover

The initial fitting of frequency compression is based on the audiogram of the hearing instrument user. Two adjustable parameters, the cut-off frequency (the point above which the frequency compression and lowering is applied) and the frequency compression ratio (the amount of compression applied to frequencies above the cutoff), are preset within restricted ranges of 1.5 to 6 kHz, and 1.5:1 to 4:1, respectively. The values of these two parameters are automatically selected according to a rule that operates on the audiogram data. In brief, the cut-off frequency is set to a low value within the range if the audiogram shows relatively severe hearing impairment, or a relatively steep decline of hearing sensitivity towards higher frequencies. Conversely, the cut-off frequency has a relatively high initial setting when the hearing thresholds are not as severe, or the shape of the audiogram is flatter or slightly upward-sloping. Sounds below the cutoff frequency are not affected by SoundRecover, so there is no mixing or overlapping of sounds having different input frequencies.

After initial fitting, the amount of frequency compression that is applied can be adjusted for each user by means of a single adjustment slider in the fitting software. When the strength is varied, the cut-off frequency and the compression ratio are changed together based on an internal calculation. For example, if the user's audiogram slopes fairly uniformly from a threshold level of 60 dB HL at 250 Hz down to 95 dB HL at 4 kHz, the automatic initial fitting will set the cut-off frequency to 2.5 kHz and the compression ratio to 1.8:1. Subsequently, if the strength is decreased, the cut-off frequency will increase, resulting in frequency compression being applied across a narrower range of high frequencies. Conversely, if the strength is increased, the cut-off frequency will decrease. In the previous example, an increase in strength by two steps from the initial default setting will lower the cut-off frequency to 1.8 kHz. In general, the compression ratio changes in the same direction as the cut-off frequency, resulting in a smooth variation in the perceptual effect of frequency compression when the strength is adjusted. However, when the cut-off frequency reaches either the upper limit or the lower limit of the range (1.5 or 6.0 kHz), further strength adjustments result in changes made only to the compression ratio.
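
Reusing compress_frequency from the sketch above, the worked example in this paragraph looks roughly as follows. This is illustrative only: the slider-to-parameter calculation is internal to the fitting software, and the ratio accompanying the 1.8 kHz cut-off is an assumed value:

```python
# Default fitting from the example audiogram: cut-off 2.5 kHz, ratio 1.8:1.
# A 6 kHz fricative peak is delivered at about 4.1 kHz:
print(compress_frequency(6000, cutoff_hz=2500.0, ratio=1.8))  # ~4066 Hz

# Two strength steps later the cut-off drops to 1.8 kHz; with an assumed
# ratio of 2.2:1 the same peak lands near 3.1 kHz:
print(compress_frequency(6000, cutoff_hz=1800.0, ratio=2.2))  # ~3111 Hz
```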

Current research suggests that even users of hearing instruments with mild losses find that SoundRecover can provide comfortable listening if the cut-off frequency is set relatively high, above 4 kHz (Boretzki & Kegel, 2009). This is not surprising, because there is little or no harmonic pitch information present in the high frequencies affected by frequency compression with such settings. On the other hand, there is useful information present in some high-frequency sounds, particularly the fricative consonants of speech. It is plausible that the perception of those sounds would be improved by limited application of frequency compression. Even people with normal hearing could theoretically benefit under certain listening conditions. In particular, when using a telephone, which has an upper frequency limit below 4 kHz, it can be difficult to understand unfamiliar words if they contain certain high frequency phonemes. For example, over the telephone /s/ is easily confused with /f/, and in many cases is not audible at all. Under these conditions, some frequency compression above a relatively high cut-off frequency could improve the listener's ability to hear and to discriminate such speech sounds. Therefore, it is highly likely that SoundRecover, when appropriately fit, could provide benefit to a majority of hearing instrument users.

Boretzki, M. & Kegel, A. (2009, May). SoundRecover - The benefits of SoundRecover for mild hearing loss. Retrieved October 30, 2009, from Phonak Field Study News: www.phonak.co.nz/com_fsn_srmildhl_may09-xx.pdf

Glista, D., Scollie, S., Bagatto, M., Seewald, R., Parsa, V., & Johnson, A. (2009). Evaluation of nonlinear frequency compression: Clinical outcomes. International Journal of Audiology, 48(9), 632-644.

Hogan, C.A. & Turner, C.W. (1998). High-frequency audibility: Benefits for hearing-impaired listeners. Journal of the Acoustical Society of America, 104, 432-441.

Kuk, F., Korhonen, P., Peeters, H., Keenan, D., Jessen, A., & Andersen, H. (2006). Linear frequency transposition: Extending the audibility of high frequency information. The Hearing Review, 13(10). Retrieved October 30, 2009, from www.hearingreview.com/issues/articles/2006-10_08.asp

Kuk, F., Keenan, D., Korhonen, P., & Lau, C. (2009). Efficacy of linear frequency transposition on consonant identification in quiet and in noise. Journal of the American Academy of Audiology, 20(8), 465-479.

McDermott, H.J., Dorkos, V.P., Dean, M.R., & Ching, T.Y. (1999). Improvements in speech perception with use of the AVR TranSonic frequency-transposing hearing aid. Journal of Speech, Language, and Hearing Research, 42, 1323-1335.

Moore, B.C.J. (2001). Dead regions in the cochlea: Diagnosis, perceptual consequences, and implications for the fitting of hearing aids. Trends in Amplification, 5, 1-34.

Nyffeler, M. (2008). Study finds that non-linear frequency compression boosts speech intelligibility. The Hearing Journal, 61(12), 22-26.


Do You Hear Perpetual High Pitched Frequencies?

Many people are hearing high pitched frequencies which are not related to a Vitamin D deficiency or tinnitus. From my research, I've found that these frequencies are associated with your spiritual awakening process; more specifically, with the remembering and RE-REMEMBERING of who you are.

History has repeatedly shown us that we have experienced spontaneous DNA upgrades throughout our lineage, without any gradual changes in between lineages. These high pitched sounds and frequencies may be associated with a DNA upgrade.

Our bodies are mainly composed of water. Recent studies have indicated that sounds and frequencies have an effect on water molecules, as evidenced by Masaru Emoto's work with sound and the restructuring of water molecules. Dr. Leonard Horowitz's work with the holy harmony solfeggios has also indicated there are certain frequencies that affect our genetic structure and ability to heal from within. On a cellular level, these frequencies and sounds are being converted to their most concentrated form of energy, vibration.

Our entire solar system is currently experiencing a dramatic climate change, including the outer planets. This has baffled scientists because we are currently in a solar minimum. As we travel further into the photon belt, the energies will continue to increase. If these photon energies are associated with any type of minimal radiation, they will affect the genetic structure of our DNA.

Past life regression hypnotherapist Dolores Cannon believes these high pitched noises are associated with your body rising in frequency as the Earth shifts into a new dimension.

Many consider these high pitched sounds to be part of our genetic reprogramming as the entire universe is being upgraded on a galactic level. Rest assured that these sounds are beneficial to your genetic and spiritual development.

In the meantime, enjoy the ride!

In5D Addendum March 27, 2019

Gregg Prescott, M.S.
Founder, Webmaster, & Editor, In5D.com

Every day, I write an article about today's high pitched frequency. As a musician, I can hear the specific note and correlate it to what that note means, which chakra it's related to, and which celestial body rules over that note/chakra. These notes and frequencies change virtually every day, sometimes multiple times throughout the day. You can find these reports on In5D under the "ENERGY UPDATES" category or by simply visiting the home page of In5D.com for the latest high pitched frequency update.

For example, if the high pitched frequency is in the key of E Major, this is what it means:

E Major: (Solar Plexus, “I do”, the Sun, confidence, optimism) Noisy shouts of joy, laughing pleasure and not yet complete, full delight lies in E Major.

I go into much further depth on these meanings and what you can do to help clear whichever chakra is being represented for that day.

10 years after writing this article, I can see the importance and relevance of how the Photon Belt is influencing so many different things in our lives, such as the Schumann Resonance, our DNA/RNA, and even Pluto in Capricorn’s rising truth vibration.

There is no turning back and we’re barely on the edge of the Photon Belt. Expect more and more people to start hearing these frequencies on a regular basis!

Sending you all infinite LOVE and Light!

Click here for more articles by Gregg Prescott!

Gregg Prescott, M.S. is the founder and editor of In5D, Zentasia, and BodyMindSoulSpirit. He co-owns In5D Club with his fiancée, Alison Janes. You can find every episode of "The BIGGER Picture with Gregg Prescott" on Bitchute, while all of his In5D Radio shows are on the In5D Youtube channel. He is a visionary, author, and transformational speaker, and promotes spiritual, metaphysical and esoteric conferences in the United States through In5dEvents. Please like and follow In5D on Facebook and Twitter!


Symptoms of high-frequency hearing loss

People with a high-frequency hearing loss may have trouble understanding female and children's voices and experience difficulties hearing birds singing or other high-pitched sounds, e.g. treble sounds when listening to music.

A high-frequency hearing loss also makes it difficult to follow conversations in larger groups, in noisy places, or in places with background noise. People with high-frequency hearing loss may also struggle to understand normal speech because they can have problems hearing consonants such as F, H, and S.



Older people more over-sensitive to sounds

The study revealed that when young adults are in a loud environment – such as a rock concert – their brains become less sensitive to relatively quiet sounds. This allows the listener to hear the relevant sounds (like a guitar riff) better without being distracted by irrelevant sounds.

However, the researchers found that as a person ages, the listener becomes over-sensitive to sounds, hearing both quiet and loud sounds without the ability to ignore or tune out irrelevant auditory information. Without the ability to reduce sensitivity to irrelevant sounds, the individual experiences hearing challenges.

“When the sound environment is loud, the brain activity in younger adults loses sensitivity to really quiet sounds because they’re not that important,” Herrmann said. “Whereas older individuals still stay sensitive to these relatively quiet sounds, even though they’re not important at the time.”


Does the range of sound we can hear decrease as we age?

It's a well-known disease/condition, medically known as presbycusis:

The term presbycusis refers to sensorineural hearing impairment in elderly individuals. Characteristically, presbycusis involves bilateral high-frequency hearing loss associated with difficulty in speech discrimination and central auditory processing of information.

The disease is well studied and there are four known factors, two of which are related to the loss of high frequencies:

Sensory presbycusis: This refers to epithelial atrophy with loss of sensory hair cells and supporting cells in the organ of Corti. This process originates in the basal turn of the cochlea and slowly progresses toward the apex. These changes correlate with a precipitous drop in the high-frequency thresholds, which begins after middle age. The abrupt downward slope of the audiogram begins above the speech frequencies; therefore, speech discrimination is often preserved. Histologically, the atrophy may be limited to only the first few millimeters of the basal end of the cochlea. The process is slowly progressive over time. One theory proposes that these changes are due to the accumulation of lipofuscin pigment granules.

Mechanical (ie, cochlear conductive) presbycusis: This condition results from thickening and secondary stiffening of the basilar membrane of the cochlea. The thickening is more severe in the basal turn of the cochlea where the basilar membrane is narrow. This correlates with a gradually sloping high-frequency sensorineural hearing loss that is slowly progressive. Speech discrimination is average for the given pure-tone average.

There is a full ISO specification (ISO 7029) which describes the expected amount of hearing loss (up to 8 kHz) as a function of age, starting from age 20.
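
The real ISO 7029 tabulates its coefficients per frequency and sex; as a rough illustration of the standard's structure only, here is a sketch in which the α values are placeholders of plausible magnitude, not the normative numbers:

```python
def median_threshold_shift(age_years: float, alpha_db_per_year2: float) -> float:
    """ISO 7029-style model: the median age-related threshold shift grows
    quadratically with age above 18. alpha depends on frequency and sex;
    it is larger at high frequencies, which is why those go first."""
    if age_years <= 18:
        return 0.0
    return alpha_db_per_year2 * (age_years - 18) ** 2

# Placeholder coefficients (illustrative only, NOT the ISO-tabulated values):
ALPHA_DB_PER_YEAR2 = {1000: 0.004, 4000: 0.016, 8000: 0.022}

for freq_hz, alpha in ALPHA_DB_PER_YEAR2.items():
    print(f"{freq_hz} Hz, age 70: ~{median_threshold_shift(70, alpha):.0f} dB shift")
```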

Although age-related hearing loss is not inevitable, it's quite a common condition:

Results showed that the prevalence of central presbyacusis increased with age and that the highest prevalence was a striking 95 percent in the 80+ year age group.

The rest of my answer is based on this book, some parts of which exist online (which I'll quote). I'll be referring to this part of the website specifically.

Sklivvz already answered your first two questions, but here is a graph to illustrate it as well:

Consequently, patients with age-related hearing loss often have normal sensitivity at low frequencies, but progressively poorer sensitivity for higher frequencies, as shown here:

As you can see from the graph, the severity of age-related hearing loss depends on sound frequency: older people need high-pitched sounds to be played more loudly in order to hear them.

3. Does that mean high-frequency sound becomes less harmful as we grow older?

It actually does! But that's because the harm has already been done.

It's not described much on the website, but the book explains that there is a relationship between how loud a sound has to be to harm you and what frequency it is. Sounds that are near our auditory thresholds (under 20 Hz and above 20,000 Hz for normal, healthy hearing) can't harm us even if they are extremely loud. We don't hear them because they cause no mechanical change in the parts of the ear that respond to those frequencies, and consequently no harm is done. In the case of age-related hearing loss, the hair cells in the inner ear that should respond to high-frequency sounds have already stopped responding, so there is nothing left to harm.

Here's a plot of the relationship between loudness and frequency in auditory perception, in which you can see that sounds around 4 kHz can most easily harm us. Human speech is in this range, so we're particularly sensitive to it:
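
That frequency dependence is what standard weighting curves capture. Here is a minimal sketch of the A-weighting curve from IEC 61672 (my addition, not part of the original answer); it approximates the ear's sensitivity, which is greatest in the low-kilohertz region:

```python
import math

def a_weighting_db(f: float) -> float:
    """A-weighting gain in dB (IEC 61672), normalized to 0 dB at 1 kHz.

    The curve attenuates very low and very high frequencies, mirroring
    the ear's reduced sensitivity there.
    """
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00

for f in (100, 1000, 3000, 10000):
    print(f"{f:>6} Hz: {a_weighting_db(f):+6.1f} dB")
# 100 Hz is attenuated by ~19 dB; the curve peaks near 2-3 kHz.
```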

Does this explain that when I was younger, I was more sensitive to sound and slept less well?

Probably not, unless it was mosquitoes that were keeping you up.

Does this fact indicate some trend in how our taste in music changes over time?

