Advances in technology allow us to reach milestones that seemed impossible only a few years ago. Such is the case of Ann, an American woman who, at age 30, suffered a stroke affecting the brainstem, the region that connects to the spinal cord. She lost mobility in her arms and legs and even in the muscles that move her vocal cords, leaving her unable to speak. But 20 years later, AI has enabled Ann to “speak” in real time with a synthesized voice.
Amid the misfortune of the stroke, Ann's “luck” was that the region of the motor cortex dedicated to speech was not damaged. Thanks to this, a group of researchers, neuroscientists, and AI experts began working on a brain-computer interface (BCI) that could restore her speech.
They are not the first, nor will they be the last, to work on such a project, but their approach differs from other neuroscience efforts: rather than focusing on specific language areas, as other projects do, it focuses on having the machine capture the signals the brain sends when a person is about to say a word, that is, the intention to speak.
To put it more simply, this BCI activates when Ann tries to say “Hello,” not merely when she thinks about saying it. The fact that, 20 years later, Ann managed to say what she meant is the result of years of training an artificial intelligence, together with a machine connected to her brain capable of recording her neural activity.
To do this, the BCI uses a 253-electrode array connected to her brain that decodes the neural activity of the speech motor cortex. According to Cheol Jun Cho, a UC Berkeley researcher and co-lead author of the study, “basically, we intercept the signal where thought becomes articulation.”
As the University of California, San Francisco (USA) explains, “what we decode comes after the idea has emerged, after we have decided what to say, after we have decided which words to use and how to move the muscles of the vocal tract,” something healthy speakers do instantaneously.
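For readers curious about what “decoding” means here in practice, below is a minimal, purely illustrative Python sketch of the idea: a short window of activity from a 253-electrode array is reduced to features and mapped to a speech unit. The window size, feature choice, number of speech units, and the linear decoder are all assumptions for illustration; the study's actual system uses a trained deep network.

```python
import numpy as np

# Illustrative sketch only: map a window of neural activity from a
# 253-electrode array to a speech unit. Shapes and the linear decoder
# are assumptions, not the study's actual model.

N_ELECTRODES = 253          # electrodes over the speech motor cortex
WINDOW_SIZE = 100           # time samples per decoding window (assumed)

def extract_features(raw_window: np.ndarray) -> np.ndarray:
    """Reduce a (WINDOW_SIZE, N_ELECTRODES) window to one feature per
    electrode, here a simple mean over time (a placeholder choice)."""
    return raw_window.mean(axis=0)

def decode_window(features: np.ndarray, weights: np.ndarray) -> int:
    """Toy linear decoder: score each speech unit (e.g. a phoneme) and
    pick the most likely one."""
    scores = weights @ features          # (n_units,) scores
    return int(np.argmax(scores))

# Example usage with random stand-in data.
rng = np.random.default_rng(0)
window = rng.standard_normal((WINDOW_SIZE, N_ELECTRODES))
weights = rng.standard_normal((40, N_ELECTRODES))  # 40 speech units, assumed
unit = decode_window(extract_features(window), weights)
print(f"decoded speech unit id: {unit}")
```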
But this BCI does not capture these signals out of the box; Ann has spent years training with the AI. At first, Ann had to look at a phrase on a screen, such as “Hello, how are you?”, and then try to pronounce it silently. “This gave us the neural map generated when reading a phrase and trying to reproduce it,” the researchers explain.
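That training protocol can be pictured as collecting paired examples. The sketch below, with hypothetical names and placeholder data, shows how each prompted phrase plus the neural recording of the silent attempt forms one supervised (input, target) pair.

```python
from dataclasses import dataclass
import numpy as np

# Illustrative sketch of the training protocol described above: show a
# prompt, record the silent attempt, and pair the two for supervised
# training. Names, shapes, and data here are assumptions.

@dataclass
class TrainingExample:
    neural_data: np.ndarray  # (time, 253) recording of the silent attempt
    target_text: str         # the phrase shown on screen

def record_attempt(prompt: str, duration_samples: int = 500) -> TrainingExample:
    """Stand-in for one recording session: display the prompt, capture the
    electrode array while the participant silently attempts the phrase."""
    rng = np.random.default_rng()
    neural = rng.standard_normal((duration_samples, 253))  # placeholder data
    return TrainingExample(neural_data=neural, target_text=prompt)

prompts = ["Hello, how are you?", "I would like some water."]
dataset = [record_attempt(p) for p in prompts]
print(f"collected {len(dataset)} (neural, text) training pairs")
```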
YouTube video: https://www.youtube.com/watch?v=mgsokgbbxk
Another notable aspect is latency: in previous models, decoding suffered a delay of about eight seconds, but the system can now capture the intention in about one second, making the speech play back practically in real time.
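The latency improvement comes down to when decoding happens. The toy sketch below contrasts the two regimes: a batch decoder that waits for the full utterance before producing anything, versus a streaming decoder that emits words as each chunk of signal arrives. Chunk sizes and function names are invented for illustration.

```python
# Illustrative contrast between the two latency regimes described above.
# "Chunks" stand in for ~1 s slices of neural signal; word_for() stands
# in for the decoder output.

def batch_decode(full_signal: list[str]) -> list[str]:
    """Decode only once the complete utterance has been collected,
    so the first word appears after the whole attempt (~8 s regime)."""
    return [f"word_for({chunk})" for chunk in full_signal]

def streaming_decode(signal_chunks):
    """Decode each chunk as it arrives, yielding words immediately,
    so output trails the attempt by roughly one chunk (~1 s regime)."""
    for chunk in signal_chunks:
        yield f"word_for({chunk})"

chunks = ["chunk0", "chunk1", "chunk2"]

# Streaming: each word is usable as soon as its chunk is decoded.
for word in streaming_decode(iter(chunks)):
    print("streamed:", word)

# Batch: nothing is usable until every chunk has been collected.
print("batched:", batch_decode(chunks))
```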
The only drawback to this achievement is that the technology is still very invasive, in the sense that it requires being wired to a machine that the patient cannot, for example, leave the room with. But little by little, researchers will continue working on this aspect.