Biomedical engineers at Newcastle University in the UK have developed a prosthetic limb that can “see” objects before it grasps them and can adjust its grip depending on the size and shape of the object.
Although we often say our limbs have a mind of their own, science has now made the figure of speech a reality. The new bionic hand uses artificial intelligence and a camera fitted to the back of the prosthesis to manipulate objects at faster speeds than current prosthetic models.
The new bionic hand relies on deep learning, the branch of AI and machine learning that uses artificial neural networks to make decisions based on large data sets. The researchers “trained” the computer vision system by feeding it images of more than 500 graspable objects, each one scanned from 72 angles. The system was programmed to classify objects according to four possible grasping motions: a two-digit pinch, a three-digit tripod, a sideways full-hand grasp (used when holding a cup, for example) and a palm-down grasp (used to pick up an apple).
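To make the idea concrete, here is a minimal sketch of the decision step described above: given a rough estimate of an object’s size and shape, pick one of the four grasp classes. All names and thresholds here are hypothetical illustrations; the actual Newcastle system uses a deep neural network trained on object images, not hand-written rules like these.

```python
from dataclasses import dataclass
from enum import Enum


class Grasp(Enum):
    """The four grasp classes described in the study."""
    PINCH = "two-digit pinch"
    TRIPOD = "three-digit tripod"
    SIDEWAYS = "sideways full-hand grasp"   # e.g. holding a cup
    PALM_DOWN = "palm-down grasp"           # e.g. picking up an apple


@dataclass
class ObjectEstimate:
    """Toy stand-in for what the vision system extracts from the camera."""
    width_cm: float
    height_cm: float


def choose_grasp(obj: ObjectEstimate) -> Grasp:
    """Hypothetical rule-based stand-in for the trained classifier:
    map rough object geometry to one of the four grasp classes."""
    if obj.width_cm < 2 and obj.height_cm < 2:
        return Grasp.PINCH          # small item, e.g. a coin
    if obj.width_cm < 5:
        return Grasp.TRIPOD         # narrow item, e.g. a pen held upright
    if obj.height_cm > obj.width_cm:
        return Grasp.SIDEWAYS       # tall object, e.g. a cup
    return Grasp.PALM_DOWN          # wide, flat or round object, e.g. an apple
```

In the real system the mapping from camera frame to grasp class is learned from the 500-object training set rather than coded by hand, but the output is the same kind of four-way decision.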
The system was tested with two volunteers with trans-radial (below-the-elbow) amputations, who, after training, were able to pick up and move target objects with an 88 per cent success rate. The researchers also found that participants completed the tasks faster with practice.
“Using computer vision, we have developed a bionic hand which can respond automatically,” said co-author Kianoush Nazarpour, senior lecturer in Biomedical Engineering. “In fact, just like a real hand, the user can reach out and pick up a cup or a biscuit with nothing more than a quick glance in the right direction.”
The new creation is essentially a change of course in the development of prosthetics. Current models are directed by electrical nerve signals coming either from the user’s limb stump or directly from brain activity. That approach does give the user control over the artificial limb, but it is often seen as too slow and ineffective compared with natural bodily movement. The problem is that those prosthetics require users to think consciously and directly about the object in front of them before the signal can be sent, something humans do not normally do; instead, we rely on muscle memory and unconscious behaviour to accomplish the task.
The camera- and AI-assisted bionic arm bypasses that human element needed to figure out how to grab a given object and substitutes it with machine learning. “The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands,” say the study’s authors, whose research is published in the Journal of Neural Engineering. “We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.”
There are over 200,000 people living with limb loss in Canada and almost two million in the United States.
“It’s a stepping stone towards our ultimate goal,” Nazarpour said. “But importantly, it’s cheap and it can be implemented soon because it doesn’t require new prosthetics — we can just adapt the ones we have.”
[Video: Real-time multi-modal prosthetic hand control]