Scientists have enabled a paralysed man to grasp, move and drop objects just by using his thoughts.
Researchers in San Francisco developed a robotic arm controlled by brain signals relayed through a computer, allowing a man who could not speak or move to interact with objects.
The device, known as a brain-computer interface, worked for a record seven months without needing to be adjusted. Previous such devices had worked for only a day or two.
It works by using AI that adjusts to the small changes in brain activity that occur each time a person repeats a movement, or in this case simply imagines one, and gradually learns to carry out the action more precisely.
“This blending of learning between humans and AI is the next phase for these brain-computer interfaces,” said professor of neurology Karunesh Ganguly.
“It’s what we need to achieve sophisticated, life-like function,” he added.
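The paper does not set out its algorithm in code, but the principle can be sketched in a few lines of Python. In the toy model below, the channel count, learning rate and drift model are all illustrative assumptions rather than the study's actual system: an adaptive decoder keeps nudging its weights as the brain pattern for the same imagined movement drifts a little each day, while a decoder frozen after its first fit slowly breaks down.

```python
# A minimal sketch, NOT the study's model: tracking slow day-to-day
# drift in neural patterns with a decoder that keeps adapting.
import numpy as np

rng = np.random.default_rng(0)
n_channels = 32   # hypothetical number of sensor channels
n_targets = 3     # decode a 3-D movement intention for the arm

# "True" mapping from neural activity to intended movement; it drifts
# a little each simulated day, standing in for the shifting patterns.
W_true = rng.normal(size=(n_targets, n_channels))
W_frozen = W_true.copy()     # decoder fitted once, then left alone
W_adaptive = W_true.copy()   # decoder that keeps adjusting
lr = 0.2                     # adaptation rate (illustrative)

for day in range(30):
    W_true += 0.02 * rng.normal(size=W_true.shape)  # slow daily drift
    for _ in range(200):                            # imagined movements
        x = rng.normal(size=n_channels)             # neural features
        y = W_true @ x                              # intended movement
        # Nudge the adaptive decoder toward each new observation
        # (a normalised least-mean-squares step); the frozen decoder
        # never changes, so the drift gradually breaks it.
        err = y - W_adaptive @ x
        W_adaptive += lr * np.outer(err, x) / (x @ x)

def error(W):
    x = rng.normal(size=(1000, n_channels))
    return np.linalg.norm(x @ (W_true - W).T, axis=1).mean()

print(f"frozen decoder error after a month:   {error(W_frozen):.2f}")
print(f"adaptive decoder error after a month: {error(W_adaptive):.2f}")
```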

As part of the study published in the journal Cell, the participant had tiny sensors implanted on the surface of his brain that could pick up brain activity when he simply thought about moving.
Prof Ganguly and fellow neurology researcher Nikhilesh Natraj had previously discovered that patterns of brain activity in animals represent specific movements, and that these patterns change from day to day as the animal learns.
Prof Ganguly suspected that the same thing was happening in humans, and that this was why brain-computer interfaces stopped working so quickly: they could no longer recognise the patterns once they had shifted.
To see whether the participant’s brain patterns changed over time, Prof Ganguly asked the man to imagine moving different parts of his body, like his hands, feet, or head.
Although he couldn’t actually move, the participant’s brain could still produce signals for the movement he was imagining. The sensors recorded how his brain’s representations of these movements shifted slightly from day to day.
Researchers then asked the man to imagine himself making simple movements with his fingers, hands, or thumbs over the course of two weeks, while the sensors recorded his brain activity to train the AI.
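Purely as an illustration of this training step, and again not the study's actual pipeline, the sketch below fits the simplest possible classifier to simulated recordings of imagined movements; the movement classes, channel count and noise level are all assumptions.

```python
# Illustrative only: fitting a decoder offline from recorded activity
# while the participant imagines different movements.
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_trials = 32, 600
movements = ["finger", "hand", "thumb"]   # imagined movement classes

# Assume each imagined movement evokes a characteristic pattern plus
# noise; real data would come from the implanted sensors.
prototypes = rng.normal(size=(len(movements), n_channels))
labels = rng.integers(len(movements), size=n_trials)
X = prototypes[labels] + 0.5 * rng.normal(size=(n_trials, n_channels))

# One-vs-all least-squares classifier: the simplest stand-in for the
# AI model trained on the two weeks of recordings.
Y = np.eye(len(movements))[labels]          # one-hot targets
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

predicted = np.argmax(X @ W, axis=1)
print(f"training accuracy: {(predicted == labels).mean():.0%}")
```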
However, when the participant tried to use the robotic arm and hand, the movements were not precise.
To fix this, researchers used a virtual robotic arm that gave the participant feedback on the accuracy of his visualisations, and with practice the arm began to do what he wanted.
It took only a few practice sessions for him to use his new arm in the real world, making it pick up blocks, turn them and move them to new locations. He was also able to open a cabinet, take out a cup and hold it up to a water dispenser.
Months later, the participant was still able to control the robotic arm after a “tune-up” to adjust for how the representations of his movements had shifted since he began using the device.
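The tune-up can be pictured in the same toy terms, with the usual caveat that everything below is an assumption made for illustration: a short calibration block of fresh recordings lets the decoder be refit in one step, catching up with months of gradual drift.

```python
# Illustrative only: a "tune-up" refits the decoder on a small batch
# of fresh recordings after months of gradual drift.
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_targets = 32, 3

W_true = rng.normal(size=(n_targets, n_channels))  # mapping at launch
W_decoder = W_true.copy()                          # decoder as trained
W_true += 0.3 * rng.normal(size=W_true.shape)      # months of drift

# Record a short calibration block: neural features plus the movements
# the participant was asked to imagine.
X = rng.normal(size=(100, n_channels))
Y = X @ W_true.T

# A least-squares refit on just this block brings the decoder back
# into line with the shifted representations.
W_refit, *_ = np.linalg.lstsq(X, Y, rcond=None)
W_decoder = W_refit.T

print(f"mismatch after tune-up: {np.linalg.norm(W_true - W_decoder):.2e}")
```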
Researchers are now refining the AI models to make the robotic arm faster and ready for a home environment.
Prof Ganguly said: “I’m very confident that we’ve learned how to build the system now, and that we can make this work.”