April 15, 2024

Is musical instinct innate? An AI model suggests so

Summary: Researchers made a significant discovery using an artificial neural network model, suggesting that musical instinct may arise naturally in the human brain. After training the network on varied natural sounds from Google’s AudioSet, the team found that certain neurons in the network responded selectively to music, mirroring the behavior of the auditory cortex in real brains.

This spontaneous generation of music-selective neurons indicates that our ability to process music may be an innate cognitive function, formed as an evolutionary adaptation to better process the sounds of nature.

Key facts:

  1. The study used an artificial neural network to demonstrate that music-selective neurons can develop spontaneously without being taught music.
  2. These neurons showed similar behavior to those of the human auditory cortex, responding selectively to various forms of music from different genres.
  3. The research implies that musical ability may be an instinctive brain function, evolved to improve the processing of natural sounds.

Source: KAIST

Music, often called the universal language, is known to be a common component in all cultures. So could “musical instinct” be something shared to some extent despite large environmental differences between cultures?

On January 16, a KAIST research team led by Professor Hawoong Jung of the Department of Physics announced that, using an artificial neural network model, it had identified a principle by which musical instinct can emerge in the human brain without special training.

Neurons in the artificial neural network model showed reactive behaviors similar to those in the auditory cortex of a real brain. Credit: Neuroscience News

Previously, many researchers have attempted to identify the similarities and differences between the music of different cultures, seeking to understand the origin of its universality.

An article published in Science in 2019 revealed that music is produced in all ethnographically distinct cultures, and that similar forms of rhythm and melody are used across them. Neuroscientists had also previously discovered that a specific part of the human brain, the auditory cortex, is responsible for processing musical information.

Professor Jung’s team used an artificial neural network model to demonstrate that the cognitive functions of music are formed spontaneously as a result of processing auditory information received from nature, without being taught music.

The research team trained the artificial neural network on AudioSet, a large-scale collection of sound data provided by Google. Interestingly, the researchers found that certain neurons within the network model responded selectively to music.

In other words, they observed the spontaneous emergence of neurons that reacted minimally to other sounds, such as those of animals, nature, or machines, but responded strongly to various forms of music, both instrumental and vocal.
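The kind of analysis used to flag music-selective units can be sketched as follows. This is an illustrative reconstruction, not the authors’ code: the d′-style selectivity index, the threshold of 2.0, and the toy activations are all assumptions.

```python
import numpy as np

def music_selectivity_index(music_resp, other_resp):
    """d'-like index for one unit: difference of mean responses to music
    vs. non-music clips, normalized by pooled response variability."""
    mu_m, mu_o = music_resp.mean(), other_resp.mean()
    pooled_sd = np.sqrt((music_resp.var() + other_resp.var()) / 2)
    return (mu_m - mu_o) / (pooled_sd + 1e-9)

rng = np.random.default_rng(0)
# Toy activations: 200 units x 100 clips; make the first 10 units music-selective.
music = rng.normal(0.0, 1.0, (200, 100))
other = rng.normal(0.0, 1.0, (200, 100))
music[:10] += 3.0  # these units respond more strongly to music

idx = np.array([music_selectivity_index(music[u], other[u]) for u in range(200)])
selective = np.where(idx > 2.0)[0]  # assumed threshold
print(selective)  # indices of the music-selective units
```

In practice the responses would come from a trained network’s hidden layer rather than random draws, but the selection logic, comparing each unit’s response distribution to music against its response to all other sound categories, is the same.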

These artificial neurons showed reactive behavior similar to neurons in the auditory cortex of a real brain. For example, they responded less to music that had been clipped into short intervals and rearranged.

This indicates that spontaneously generated music-selective neurons encode the temporal structure of music. This property was not limited to a specific musical genre, but emerged in 25 different genres, including classical, pop, rock, jazz, and electronica.

Furthermore, suppressing the activity of the music-selective neurons was found to greatly impair the accuracy with which other natural sounds were recognized. That is, the neural function that processes musical information helps process other sounds, and “musical ability” may be an instinct formed as a result of evolutionary adaptation to better process the sounds of nature.
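An ablation test of this kind can be sketched with a toy model: zero out the “selective” units and measure how much a downstream readout’s accuracy drops. This is a hedged illustration on synthetic data, not the study’s network; the linear readout, unit counts, and injected signal are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: class scores = W @ hidden activations (a linear readout).
n_units, n_classes, n_clips = 50, 5, 400
W = rng.normal(size=(n_classes, n_units))
labels = rng.integers(0, n_classes, n_clips)

# Hidden activations carry label information only in the first 10 "selective" units.
H = rng.normal(size=(n_units, n_clips))
H[:10] += 2.0 * W[labels, :10].T  # inject label-correlated signal

def accuracy(H, W, labels):
    return (W.dot(H).argmax(axis=0) == labels).mean()

acc_full = accuracy(H, W, labels)
H_ablate = H.copy()
H_ablate[:10] = 0.0  # "suppress" the selective units
acc_ablate = accuracy(H_ablate, W, labels)
print(acc_full, acc_ablate)  # accuracy drops after ablation
```

If removing only the music-selective units degrades recognition of non-music sounds, those units are contributing to general sound processing, which is the inference the article draws.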

Professor Hawoong Jung, who advised the research, said: “The results of our study imply that evolutionary pressure has contributed to forming the universal basis for processing musical information in various cultures.”

Regarding the importance of the research, he explained: “We hope that this artificially constructed model with human-like musicality will become an original model for various applications, including AI music generation, music therapy, and musical cognition research.”

He also commented on its limitations, adding: “However, this research does not take into consideration the developmental process that follows music learning, and it should be noted that this is a study on the basis of musical information processing in early development.”

This research, conducted by first author Dr. Gwangsu Kim of the KAIST Department of Physics (current affiliation: MIT Department of Brain and Cognitive Sciences) and Dr. Dong-Kyum Kim (current affiliation: IBS), was published in Nature Communications under the title “Spontaneous emergence of rudimentary music detectors in deep neural networks.”

Funding: This research was supported by the National Research Foundation of Korea.

About this news about AI and music research

Author: Yoonju Hong
Source: KAIST
Contact: Yoonju Hong – KAIST
Image: Image is credited to Neuroscience News.

Original research: Open access.
“Spontaneous emergence of rudimentary music detectors in deep neural networks” by Hawoong Jeong et al., Nature Communications


Spontaneous emergence of rudimentary music detectors in deep neural networks

Music exists in almost all societies, has universal acoustic characteristics, and is processed by distinct neural circuits in humans, even without any musical training.

However, it is still unclear how these innate characteristics arise and what functions they serve. Here, using an artificial deep neural network that models the brain’s auditory information processing, we show that music-tuned units can emerge spontaneously by learning natural sound detection, even without learning music.

Music-selective units encoded the temporal structure of music on multiple time scales, following population-level response characteristics observed in the brain.

We found that the generalization process is critical for the emergence of music selectivity, and that music selectivity can serve as a functional basis for the generalization of natural sounds, thus clarifying its origin.

These findings suggest that evolutionary adaptation to process natural sounds may provide an initial model for our sense of music.
