Decoding Brain Language
New AI discoveries to improve communication through non-invasive brain signals
Isabella V, 16 February 2025

AI is opening up new avenues for decoding human language from non-invasive brain recordings. Thanks to innovations developed by the Meta FAIR laboratory in Paris, in collaboration with the Basque Center on Cognition, Brain and Language in San Sebastián, researchers have been able to decode sentences directly from brain signals, markedly improving our understanding of cognitive and communication mechanisms. These studies could have a significant impact in the medical field, particularly in the rehabilitation of patients whose ability to communicate has been impaired by brain injuries. Thanks to AI, the gap between laboratory research and clinical medicine is progressively narrowing, with promising applications also for non-invasive neural devices.

Key points:

  • Decoding brain language through non-invasive signals.
  • International collaborations between neuroscience and AI.
  • Future impacts for medicine, with potential communication devices for people with disabilities.
  • The contribution of open source in accelerating innovation in healthcare.

In the field of neuroscience, research on human language has always been a topic of great interest and, at the same time, one that is difficult to approach. Decoding the brain activity that accompanies language production has long been an ambitious goal, both because of the complexity of the process and because of technical limitations. Today, however, a significant step forward has been made thanks to advanced technologies such as magnetoencephalography (MEG) and electroencephalography (EEG), which allow brain activity to be recorded without invasive procedures. The innovation comes from the Meta FAIR laboratory, which has developed an AI-based system to reconstruct sentences from brain signals recorded while participants typed. The joint use of MEG and EEG made it possible to decode up to 80% of the typed letters, a remarkable result compared with previous non-invasive approaches, which achieved considerably lower accuracy.
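The article does not detail the architecture of Meta's system, but the task framing is easy to illustrate: treat each short window of multi-channel MEG/EEG signal as a feature vector and classify which letter was being typed. The sketch below is a minimal, hypothetical Python example on synthetic data, with a simple linear classifier standing in for the actual deep model; all sizes and signal strengths are assumptions.

```python
# Illustrative sketch only: this is NOT Meta's model, just the task framing.
# Each "trial" is a window of multi-channel signal recorded around one keystroke.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trials, n_channels, n_samples = 1500, 64, 50  # assumed sizes, not from the study
n_letters = 26

# Synthetic recordings: each typed letter adds a faint class-specific pattern
# on top of background noise.
patterns = rng.normal(size=(n_letters, n_channels * n_samples))
y = rng.integers(0, n_letters, size=n_trials)
X = rng.normal(size=(n_trials, n_channels * n_samples)) + 0.2 * patterns[y]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out letter accuracy: {clf.score(X_test, y_test):.2f}")
```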

This discovery offers new opportunities for clinical applications, such as restoring the ability to communicate in people who have suffered brain damage. Currently, the most common solutions for restoring communication after severe brain injury rely on neuroprosthetic devices, which require invasive techniques and surgery. Non-invasive approaches, such as the one explored in the San Sebastián study, could be a significant step forward, making the technology more accessible and safer. However, many obstacles remain, especially in terms of decoding performance and logistics: MEG, for example, requires magnetically shielded rooms, which limits its practicality for everyday use.

Another key area of research concerns the neural mechanisms underlying language production. Studying the human brain during language generation has always been difficult, especially because the motor actions involved, such as mouth and tongue movements, interfere with neuroimaging techniques. Meta’s new study shows how AI can interpret brain signals generated while participants type on a keyboard. The experiments allowed researchers to pinpoint, with unprecedented precision, the exact moment at which thoughts are transformed into words and letters, paving the way toward understanding how the brain “constructs” a thought and converts it into a concrete action.
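One standard way to pinpoint when information appears in brain signals is time-resolved decoding: train a separate classifier at each time point and watch where accuracy rises above chance. The study's own analysis is not specified here; the sketch below is a toy under stated assumptions, where letter information is injected only into a late window to mimic the transition from intention to keystroke.

```python
# Hedged sketch of time-resolved decoding on synthetic data.
# All names and sizes are illustrative assumptions, not the study's settings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 300, 32, 40
n_letters = 5

y = rng.integers(0, n_letters, size=n_trials)
X = rng.normal(size=(n_trials, n_channels, n_times))

# Inject class information only from time step 25 onward, so decoding
# accuracy should jump there: that jump is the "moment" being pinpointed.
patterns = rng.normal(size=(n_letters, n_channels))
X[:, :, 25:] += 0.8 * patterns[y][:, :, None]

for t in range(0, n_times, 5):
    acc = cross_val_score(LogisticRegression(max_iter=500),
                          X[:, :, t], y, cv=3).mean()
    print(f"t={t:2d}  decoding accuracy={acc:.2f}")
```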

In the context of language production, one of the most interesting aspects is the concept of “dynamic neural code”, which emerged from the study. This code allows the brain to chain successive representations, maintaining a continuous flow that connects the abstract meaning of a sentence with the movement of the fingers on the keyboard. Decoding this code represents one of the most difficult challenges for both neuroscience and AI, but the progress made in this field could lead to extraordinary improvements in brain-computer interface technologies.
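The paper's exact characterization of this dynamic code is not reproduced here. Purely as a conceptual toy, the sketch below superimposes the pattern of the current keystroke with a fading trace of the previous one, then shows that two independent linear probes can read out both, illustrating how chained representations could coexist in a single signal.

```python
# Conceptual sketch (assumptions, not the study's method): a signal that
# chains two representations, the letter being typed now plus an attenuated
# trace of the letter typed just before, each recoverable by its own probe.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_channels, n_letters = 600, 48, 8

current = rng.integers(0, n_letters, size=n_trials)
previous = rng.integers(0, n_letters, size=n_trials)
patterns = rng.normal(size=(n_letters, n_channels))

X = (patterns[current] + 0.5 * patterns[previous]
     + rng.normal(size=(n_trials, n_channels)))

# Chance level is 1/8 = 0.125; both labels should decode well above it.
for name, labels in [("current letter", current), ("previous letter", previous)]:
    acc = cross_val_score(LogisticRegression(max_iter=500), X, labels, cv=5).mean()
    print(f"{name}: decoding accuracy {acc:.2f}")
```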

At the same time, research is contributing practical applications to medicine thanks to open-source AI. For example, the French company BrightHeart has used models such as DINOv2, developed by Meta, to analyze fetal heart ultrasounds and identify congenital defects, an application that recently received FDA clearance. Another company, Virgo, has leveraged the same models to achieve breakthrough results in endoscopic video analysis, improving the diagnosis of intestinal diseases such as ulcerative colitis and the detection of polyps.
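To make the open-source angle concrete, here is a minimal sketch of how a publicly released DINOv2 backbone can be reused as a frozen feature extractor for medical images. It mirrors the general approach described above, not BrightHeart's or Virgo's actual pipelines; the file name is a placeholder.

```python
# Minimal sketch: reuse a pretrained DINOv2 backbone as a frozen feature
# extractor. Illustrative only; "ultrasound_frame.png" is a placeholder path.
import torch
from torchvision import transforms
from PIL import Image

# Load a small pretrained DINOv2 model from Meta's public repository.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A single grayscale frame converted to RGB; a real system would process
# whole video streams rather than one still image.
image = Image.open("ultrasound_frame.png").convert("RGB")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    features = model(batch)  # (1, 384) embedding for the ViT-S/14 backbone
print(features.shape)

# A lightweight classifier head trained on labeled scans would then map
# these embeddings to findings (e.g. suspected defect vs. normal).
```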

The combination of neuroscience and AI is not only opening up new avenues in treatment and care, but is also creating new horizons for technological and healthcare innovation, improving the outlook for patients around the world.