Medical

Brain-machine interfaces treat neurological disease

18th October 2017
Enaie Azambuja

Since the 19th century at least, humans have wondered what could be accomplished by linking our brains – smart and flexible but prone to disease and disarray – directly to technology in all its cold, hard precision. Writers of the time dreamed up intelligence enhanced by implanted clockwork and a starship controlled by a transplanted brain. While such visions remain far-fetched, the melding of brains and machines for treating disease and improving human health is now a reality.

Brain-machine interfaces that connect computers and the nervous system can now restore rudimentary vision in people who have lost the ability to see, treat the symptoms of Parkinson’s disease and prevent some epileptic seizures. And there’s more to come.

But the biggest challenge in each of those cases may not be the hardware that science-fiction writers once dwelled on. Instead, it’s trying to understand, on some level at least, what the brain is trying to tell us – and how to speak to it in return.

Like linguists piecing together the first bits of an alien language, researchers must search for signals that indicate an oncoming seizure or where a person wants to move a robotic arm. Improving that communication in parallel with the hardware, researchers say, will drive advances in treating disease or even enhancing our normal capabilities.

The scientific interest in connecting the brain with machines began in earnest in the early 1970s, when computer scientist Jacques Vidal embarked on what he called the Brain Computer Interface project.

As he described in a 1973 review paper, it comprised an electroencephalogram, or EEG, for recording electrical signals from the brain and a series of computers to process that information and translate it into some sort of action, such as playing a simple video game. In the long run, Vidal imagined brain-machine interfaces could control “such external apparatus as prosthetic devices or spaceships.”

Although brain-controlled spaceships remain in the realm of science fiction, the prosthetic device is not. Stanford researchers including Krishna Shenoy, a professor of electrical engineering, and Jaimie Henderson, a professor of neurosurgery, are bringing neural prosthetics closer to clinical reality.

Over the course of nearly two decades, Shenoy, the Hong Seh and Vivian W. M. Lim Professor in the School of Engineering, and Henderson, the John and Jene Blume–Robert and Ruth Halperin Professor, developed a device that, in a clinical research study, gave people paralysed by accident or disease a way to move a pointer on a computer screen and use it to type out messages. In similar research studies, people were able to move robotic arms with signals from the brain.

Reaching those milestones took work on many fronts, including developing the hardware and surgical techniques needed to physically connect the brain to an external computer.

But there was always another equally important challenge, one that Vidal anticipated: taking the brain’s startlingly complex language, encoded in the electrical and chemical signals sent from one of the brain’s billions of neurons on to the next, and extracting messages a computer could understand.

On top of that, researchers like Shenoy and Henderson needed to do all that in real time, so that when a subject’s brain signals the desire to move a pointer on a computer screen, the pointer moves right then, and not a second later.

One of the people that challenge fell to was Paul Nuyujukian, now an assistant professor of bioengineering and neurosurgery. First as a graduate student in Shenoy's research group and then as a postdoctoral fellow in the lab jointly led by Henderson and Shenoy, Nuyujukian helped to build and refine the software algorithms, termed decoders, that translate brain signals into cursor movements.

Actually, “translate” may be too strong a word – the task, as Nuyujukian put it, was a bit like listening to a hundred people speaking a hundred different languages all at once and then trying to find something, anything, in the resulting din one could correlate with a person’s intentions.
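To make that metaphor concrete: a decoder is, at its simplest, a mapping from the activity recorded on many electrodes to an intended movement. The sketch below is a deliberately simplified illustration that fits a linear map from simulated spike counts to 2-D cursor velocity by least squares; the channel count, bin size and the linear model itself are assumptions for illustration, not the algorithms used in these studies.

```python
# Illustrative sketch only: a linear decoder mapping binned spike counts from
# many electrodes to an intended 2-D cursor velocity. Everything here
# (96 channels, simulated tuning, least-squares fit) is an assumption for
# demonstration, not the Stanford group's actual decoder.
import numpy as np

rng = np.random.default_rng(0)

n_channels = 96                                   # e.g. one multielectrode array
n_bins = 2000                                     # bins of training data
true_tuning = rng.normal(size=(n_channels, 2))    # each channel "prefers" a direction

# Simulated training data: cursor velocities and the spike counts they evoke
velocity = rng.normal(size=(n_bins, 2))
spikes = np.maximum(
    0, velocity @ true_tuning.T + rng.normal(scale=0.5, size=(n_bins, n_channels))
)

# Fit a decoding matrix W so that spikes @ W approximates velocity
W, *_ = np.linalg.lstsq(spikes, velocity, rcond=None)

# At run time, each new bin of spike counts becomes a velocity command
new_bin = np.maximum(
    0, np.array([1.0, -0.5]) @ true_tuning.T + rng.normal(scale=0.5, size=n_channels)
)
decoded_velocity = new_bin @ W
print("decoded velocity command:", decoded_velocity)
```

In a real system the decoding step must also run within tight latency bounds, since the cursor has to respond the moment the user forms the intention to move.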

Yet as daunting as that sounds, Nuyujukian and his colleagues found some ingeniously simple ways to solve the problem, first in experiments with monkeys. For example, Nuyujukian and fellow graduate student Vikash Gilja showed that they could better pick out a voice in the crowd if they paid attention to where a monkey was being asked to move the cursor.

“Design insights like that turned out to have a huge impact on performance of the decoder,” said Nuyujukian, who is also a member of Stanford Bio-X and the Stanford Neurosciences Institute.

In fact, it more than doubled the system’s performance in monkeys, and the algorithm the team developed remains the basis of the highest-performing system to date. Nuyujukian went on to adapt those insights to people in a clinical study – a significant challenge in its own right – resulting in devices that helped people with paralysis type at 12 words per minute, a record rate.
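In spirit, that insight amounts to re-labelling the training data: if the monkey was instructed to reach for a particular target, the decoder can be refit as though the intended movement always pointed straight at that target. The sketch below illustrates the general idea under that assumption; the function name and the simple linear refit are placeholders for illustration, not the published method.

```python
# Hedged sketch of the "use the known target" idea: during training, replace
# observed cursor velocities with unit vectors pointing from the cursor toward
# the instructed target, then refit the decoder on those inferred intentions.
# This illustrates the general concept only, not the team's actual algorithm.
import numpy as np

def retrain_with_intention(spikes, cursor_positions, target_positions):
    """Refit a linear decoder assuming the subject always intended to move
    straight toward the instructed target."""
    toward_target = target_positions - cursor_positions
    norms = np.linalg.norm(toward_target, axis=1, keepdims=True)
    intended_velocity = toward_target / np.maximum(norms, 1e-9)  # unit direction vectors
    W, *_ = np.linalg.lstsq(spikes, intended_velocity, rcond=None)
    return W
```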

Although there’s a lot of important work left to do on prosthetics, Nuyujukian said he believes “there are other very real and pressing needs that brain-machine interfaces can solve,” such as the treatment of epilepsy and stroke – conditions in which the brain speaks a language scientists are only beginning to understand.
