Musical mind reading: Using only brain scans and EEG data, it is possible to decode what music a test person is listening to, as an experiment shows. A suitably trained artificial intelligence identified the correct piece of music from the non-invasively recorded neuronal signals with a hit rate of 71.8 percent. The results could be a first step towards reading speech from brain waves non-invasively.

Music is deeply rooted in our nature. When we hear familiar sounds, our brain identifies them within fractions of a second. Brainwave measurements show that music can trigger a veritable fireworks display of signals, accompanied by strong emotions and goosebumps. Various research teams have already investigated what brain waves can reveal while a person listens to music – for example about the listener's emotions or about the music itself.

Ian Daly from the School of Computer Science and Electronic Engineering at the University of Essex in Great Britain has now shown that brain waves can be used to tell what music a person is listening to. While previous studies that read speech from brain activity often used invasive methods such as electrocorticography (ECoG), in which electrodes are implanted inside the skull, Daly used data from non-invasive electroencephalography (EEG) measurements.

To increase the accuracy of the predictions, Daly combined the EEG data with functional magnetic resonance imaging (fMRI) measurements, which show blood flow in the brain and thus reveal which brain regions are particularly active in a given person while listening to music. The researcher used this information to select, for further analysis, precisely those EEG data that corresponded to these regions.
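How exactly this selection was implemented is not spelled out here. A minimal sketch of the general idea in Python – with purely hypothetical activation values, channel-to-region assignments and threshold – could look like this:

    # Hypothetical fMRI activation scores per brain region for one subject
    # (e.g. from a music-listening contrast) -- illustrative values only.
    fmri_activation = {"auditory_cortex_L": 2.4, "auditory_cortex_R": 2.1,
                       "frontal": 0.3, "occipital": 0.1}

    # Assumed mapping from EEG electrodes to the region they lie closest to.
    channel_to_region = {"T7": "auditory_cortex_L", "T8": "auditory_cortex_R",
                         "Fz": "frontal", "Oz": "occipital"}

    def select_channels(activation, chan_map, threshold=1.0):
        """Keep only EEG channels over regions the fMRI marks as music-responsive."""
        return [ch for ch, region in chan_map.items()
                if activation.get(region, 0.0) >= threshold]

    print(select_channels(fmri_activation, channel_to_region))  # -> ['T7', 'T8']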

The data came from a previous study that originally focused on the emotions of music listeners. The 18 subjects included in the analysis had listened to 36 short pieces of piano music while their brain activity was recorded by fMRI and EEG. Daly then trained a deep learning model to decode the patterns in the EEG so that it could reconstruct the piece of music the test subject had heard during the measurement.
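The study's exact network architecture is not described in this article. As a rough illustration of the training setup – with an assumed small convolutional network in PyTorch and the music's amplitude envelope as an assumed target – a sketch could look like this:

    import torch
    import torch.nn as nn

    class EEGToEnvelope(nn.Module):
        """Toy decoder: maps a window of multi-channel EEG to an audio amplitude
        envelope. Architecture and sizes are assumptions, not the study's model."""
        def __init__(self, n_channels=20, n_times=256, n_out=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
                nn.ReLU(),
                nn.Conv1d(32, 16, kernel_size=7, padding=3),
                nn.ReLU(),
                nn.Flatten(),
                nn.Linear(16 * n_times, n_out),  # n_out envelope samples per window
            )

        def forward(self, x):      # x: (batch, n_channels, n_times)
            return self.net(x)

    model = EEGToEnvelope()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # One training step on random stand-in data (real data: the selected EEG
    # channels as input, the heard music's amplitude envelope as target).
    eeg = torch.randn(8, 20, 256)
    envelope = torch.randn(8, 64)
    optimizer.zero_grad()
    loss = loss_fn(model(eeg), envelope)
    loss.backward()
    optimizer.step()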

In fact, the model was able to partially reproduce the tempo, rhythm and amplitude of the music. The reconstructions were similar enough to the originals for the algorithm to predict, with a hit rate of 71.8 percent, which of the 36 pieces of music the person had heard.
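The article does not say how the 71.8 percent hit rate was computed. A common approach, sketched here with assumed variable names, is to correlate each reconstruction with the envelopes of all 36 candidate pieces and count how often the best match is the piece that was actually heard:

    import numpy as np

    def identify_piece(reconstruction, candidate_envelopes):
        """Index of the candidate whose envelope correlates best with the
        reconstruction. candidate_envelopes: array of shape (36, n_samples)."""
        corrs = [np.corrcoef(reconstruction, cand)[0, 1]
                 for cand in candidate_envelopes]
        return int(np.argmax(corrs))

    def hit_rate(reconstructions, candidate_envelopes, true_indices):
        """Fraction of trials in which the best-matching candidate is the true piece."""
        hits = sum(identify_piece(r, candidate_envelopes) == t
                   for r, t in zip(reconstructions, true_indices))
        return hits / len(true_indices)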

To validate the results, Daly used an independent sample of 19 other subjects who had listened to the same pieces of music. Since only EEG data and no fMRI data were available from these individuals, Daly used the information from the first sample to determine the relevant EEG data.
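How the information from the first sample was carried over is not detailed here. One plausible reading, sketched with hypothetical channel selections, is to pool the fMRI-guided selections of the first group and reuse the most frequently chosen channels for the new subjects:

    from collections import Counter

    # Channel selections obtained with fMRI guidance in the first sample
    # (hypothetical example values).
    per_subject_selection = [["T7", "T8", "C3"], ["T7", "T8"], ["T8", "C4"]]

    def group_channels(selections, min_fraction=0.5):
        """Keep channels selected in at least `min_fraction` of the fMRI subjects."""
        counts = Counter(ch for sel in selections for ch in sel)
        n = len(selections)
        return [ch for ch, c in counts.items() if c / n >= min_fraction]

    print(group_channels(per_subject_selection))  # -> ['T7', 'T8']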

“Even in the absence of person-specific fMRI data, we were able to identify the music a person was listening to from the EEG data alone,” reports Daly. However, he points out that the localization of the relevant brain responses to music differs from person to person. Accordingly, when the model could not be fitted with person-specific fMRI data, it was less accurate and achieved a hit rate of only 59.2 percent.

Daly sees his model as a first step towards greater goals. “This method has many potential applications,” he says. “We have shown that we can decode music, which indicates that we may one day be able to decode speech from the brain.” This is already possible to some extent, as experiments show – but so far only with invasive technology such as electrodes implanted in the brain.

For people with locked-in syndrome who are unable to communicate with other people due to paralysis, this could open a gateway to the outside world. “Obviously there’s still a long way to go, but we hope that one day if we can successfully decode language, we can use that to build communication tools,” says Daly. (Scientific Reports, 2023, doi: 10.1038/s41598-022-27361-x)

Source: University of Essex

This article was written by Nadja Podbregar

The original version of this article, “Your brain waves reveal what music you’re listening to”, was published by scinexx.