
Saturday, 12 January 2019

AI Can Now Decode Words Directly from Brain Waves

Neuroscientists are teaching computers to read words straight out of people's brains.

Kelly Servick, writing for Science, reported this week on three papers posted to the preprint server bioRxiv in which three different teams of researchers demonstrated that they could decode speech from recordings of neurons firing. In each study, electrodes placed directly on the brain recorded neural activity while brain-surgery patients listened to speech or read words out loud. Then, researchers tried to figure out what the patients were hearing or saying. In each case, researchers were able to convert the brain's electrical activity into at least somewhat-intelligible sound files.

The first paper, posted to bioRxiv on Oct. 10, 2018, describes an experiment in which researchers played recordings of speech to patients with epilepsy who were in the middle of brain surgery. (The neural recordings taken in the experiment had to be very detailed to be interpreted. And that level of detail is available only during the rare circumstances when a brain is exposed to the air and electrodes are placed on it directly, such as in brain surgery.)

As the patients listened to the sound files, the researchers recorded neurons firing in the parts of the patients' brains that process sound. The scientists tried a number of different methods for turning that neuronal firing data into speech and found that "deep learning," in which a multilayered neural network learns to solve a problem from example data with little hand-tuning, worked best. When they played the results through a vocoder, which synthesizes human voices, for a group of 11 listeners, those individuals were able to correctly interpret the words 75 percent of the time.
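To make that pipeline a bit more concrete, here is a minimal sketch of that style of decoder: a small network trained to map recorded neural activity to vocoder parameters, which a vocoder would then turn into audio. The channel counts, network size and random placeholder data below are assumptions for illustration only, not the authors' actual model or data.

```python
# Minimal sketch (not the study's code): decoding vocoder parameters from
# neural recordings with a small feed-forward network in PyTorch.
# The data here are random placeholders; the real studies used intracranial
# recordings and a speech vocoder.
import torch
import torch.nn as nn

n_electrodes = 64        # hypothetical number of recording channels
n_vocoder_params = 32    # hypothetical number of vocoder parameters per frame
n_frames = 1000          # hypothetical number of time frames

# Placeholder data: per-electrode activity and target vocoder frames.
neural = torch.randn(n_frames, n_electrodes)
target = torch.randn(n_frames, n_vocoder_params)

# A small multilayer network: the papers describe deep networks trained to map
# neural activity to acoustic features; this only shows the general shape.
model = nn.Sequential(
    nn.Linear(n_electrodes, 128),
    nn.ReLU(),
    nn.Linear(128, n_vocoder_params),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(neural), target)
    loss.backward()
    optimizer.step()

# The predicted vocoder frames would then be fed to a vocoder to synthesize
# audio, which human listeners rate for intelligibility.
predicted = model(neural).detach()
```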

You can listen to audio from this experiment here.

The second paper, posted Nov. 27, 2018, relied on neural recordings from people undergoing surgery to remove brain tumors. As the patients read single-syllable words out loud, the researchers recorded both the sounds coming out of the participants' mouths and the neurons firing in the speech-producing regions of their brains. Rather than deeply training a computer on each individual patient, these researchers trained an artificial neural network to convert the neural recordings into audio, and showed that the results were at least reasonably intelligible and similar to the recordings made by the microphones. (The audio from this experiment is here but has to be downloaded as a zip file.)
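For a sense of how "similar to the recordings made by the microphones" might be quantified, here is a rough sketch comparing a reconstructed spectrogram with the original recording using frame-wise correlation. This is my own illustration with random placeholder data, a common rough proxy for reconstruction quality rather than the paper's actual evaluation.

```python
# Rough sketch (an assumption, not the paper's method): compare a reconstructed
# audio spectrogram with the original microphone recording, bin by bin.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_freq_bins = 500, 80   # hypothetical spectrogram dimensions

original = rng.standard_normal((n_frames, n_freq_bins))
# Pretend the network's output is the original plus noise.
reconstructed = original + 0.5 * rng.standard_normal((n_frames, n_freq_bins))

# Correlation per frequency bin, averaged; higher means closer to the original.
corrs = [np.corrcoef(original[:, k], reconstructed[:, k])[0, 1]
         for k in range(n_freq_bins)]
print(f"mean spectrogram correlation: {np.mean(corrs):.2f}")
```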

The third paper, posted Aug. 9, 2018, relied on recordings from the part of the brain that converts specific words a person decides to speak into muscle movements. While no recording from this experiment is available online, the researchers reported that they were able to reconstruct entire sentences (also recorded during brain surgery on patients with epilepsy) and that people who listened to those sentences could pick out the correct one on a multiple-choice test (out of 10 choices) 83 percent of the time. That experiment's method relied on identifying the patterns involved in producing individual syllables, rather than whole words.
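For context on that 83 percent figure, a quick calculation shows how far it sits above chance on a 10-choice test, where guessing would succeed only about 10 percent of the time. The trial count below is a hypothetical number chosen purely for illustration.

```python
# Quick illustration (my own numbers, not the paper's): how a 10-alternative
# multiple-choice result of 83 percent compares against pure guessing.
from math import comb

n_trials = 100          # hypothetical number of listening trials
n_correct = 83          # 83 percent correct, as reported
p_chance = 1 / 10       # 10 answer choices

# Probability of doing at least this well by guessing alone (binomial tail).
p_value = sum(comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
              for k in range(n_correct, n_trials + 1))
print(f"chance level: {p_chance:.0%}, observed: {n_correct / n_trials:.0%}")
print(f"probability of guessing this well: {p_value:.2e}")
```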

The goal in all of these experiments is to one day make it possible for people who've lost the ability to speak (due to amyotrophic lateral sclerosis or similar conditions) to speak through a brain-computer interface. However, the science for that application isn't there yet.

Interpreting the neural patterns of a person just imagining speech is more complicated than interpreting the patterns of someone listening to or producing speech, Science reported. (However, the authors of the second paper said that interpreting the brain activity of someone imagining speech may be possible.)

It's also important to keep in mind that these are small studies. The first paper relied on data taken from just five patients, while the second looked at six patients and the third only three. And none of the neural recordings lasted more than an hour.

Still, the science is moving forward, and artificial-speech devices hooked up directly to the brain seem like a real possibility at some point down the road.

Human Relative Was Half-Man, Half-Ape

In a recent study, scientists compared the skull of Little Foot (shown here) with that of other hominins.
Credit: Photo courtesy of the University of the Witwatersrand

The brain of one of the oldest Australopithecus individuals ever found was a little bit ape-like and a little bit human.

In a new study, researchers scanned the interior of a very rare, nearly complete skull of this ancient hominin ancestor. Hominins include modern and extinct humans and all their direct ancestors, including Australopithecus, which lived in Africa between about 4 million and 2 million years ago. Early humans of the genus Homo would eventually evolve from Australopithecus ancestors.

The modern human brain owes a lot to these small, hairy human ancestors, but we know very little about their brains, said Amélie Beaudet, a paleontologist at the University of the Witwatersrand in South Africa.

Between ape and human

Beaudet and her colleagues used micro-computed tomography (micro-CT), a very sensitive version of the same sort of technology a surgeon might use to scan a bum knee. With this tool, the researchers reconstructed the interior of the skull of a very old Australopithecus.

The skull belongs to a fossil dubbed "Little Foot," first found two decades ago in Sterkfontein Caves near Johannesburg. At 3.67 million years old, Little Foot is among the oldest Australopithecus fossils ever found, and its skull is nearly intact. The fossil's discoverers think it may belong to an entirely new Australopithecus species, Live Science reported.

With micro-CT, the research team could see very fine imprints of where the brain once lay against Little Foot's skull, including a record of the paths of veins and arteries, Beaudet told Live Science. Using the skull to infer brain shape in this way is called making an endocast.

Virtual rendering of the brain endocast of "Little Foot," possibly a new species of Australopithecus.
Credit: M. Lotter and R.J. Clarke/Wits University

"I was expecting something quite similar to the other endocasts we knew from Australopithecus, but Little Foot turned out to be a bit different, in accordance with its great age," Beaudet said.

Today's chimpanzees and humans share an ancestor older than Little Foot: some long-lost ape that gave rise to both lineages. Little Foot's brain looks much as scientists predict that ancestor's brain should have looked, Beaudet said: more ape-like than human. Little Foot's visual cortex, in particular, took up a greater proportion of its brain than that area does in the human brain.

In humans, Beaudet said, the visual cortex has been pushed aside to accommodate the expansion of the parietal cortex, an area involved in complex activities like toolmaking.

Changing brains

Little Foot's brain was asymmetrical, with slightly differing protrusions on each side, the researchers found. This is a feature shared with both humans and apes, and it probably indicates that Australopithecus had brain lateralization, meaning that the two sides of its brain performed different functions. The finding suggests that brain lateralization evolved very early in the primate lineage.

Little Foot's brain was different from later Australopithecus specimens, Beaudet said. The visual cortex, in particular, was larger than in later Australopithecus brains. These differences hint that brain evolution was a piecemeal process, occurring in fits and starts across the brain.

The findings will appear in a special issue on Little Foot being published in the Journal of Human Evolution.