The English idiom is largely the product of AT&T's marketing efforts, helped along by an iconic Diana Ross song. The phrase has since been applied (not always appropriately) to countless different things. Lately, though, it may be taking on a whole new meaning for the prosthetics industry, thanks to the integration of AI into the newest prosthetic hands.

We don't consciously think about how hard to squeeze when we pick up a grape or a bowling ball. Our brain and muscles automatically perform the calculations needed to produce exactly the right amount of pressure for the object at hand. A prosthetic hand, however, has no such innate system. Older prosthetics relied on some type of strap-and-harness mechanism to regulate grip. Modern prosthetics use muscle sensors that read the activity of muscles under the skin to determine grip strength, and the most advanced systems place small sensors directly into the muscles themselves.
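To make the muscle-sensor idea concrete, here is a minimal sketch of proportional myoelectric control: the rectified average of the muscle (EMG) signal is mapped to a commanded grip force. All function names, thresholds, and force values are illustrative assumptions, not the control scheme of any actual prosthesis.

```python
# Hypothetical proportional myoelectric grip control.
# Thresholds and the 60 N maximum force are illustrative assumptions.

def rectified_mean(emg_samples):
    """Average of the absolute EMG sample values (a crude signal envelope)."""
    return sum(abs(s) for s in emg_samples) / len(emg_samples)

def grip_force(emg_samples, threshold=0.05, max_activation=1.0, max_force_n=60.0):
    """Map muscle activation to a commanded grip force in newtons.

    Activation below `threshold` is treated as rest (no grip); above it,
    force scales linearly up to `max_force_n`.
    """
    activation = rectified_mean(emg_samples)
    if activation < threshold:
        return 0.0
    scale = min((activation - threshold) / (max_activation - threshold), 1.0)
    return scale * max_force_n

# A relaxed muscle commands no grip; a strong squeeze commands a firm one.
print(grip_force([0.02, -0.03, 0.01]))  # → 0.0 (below rest threshold)
print(grip_force([0.5, -0.6, 0.55]))    # a moderate, nonzero force
```

The design point is simply that grip strength tracks how hard the wearer tenses the residual muscle, rather than being an on/off switch.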

A research team at Newcastle University, however, is pursuing a new approach: a prosthetic arm with an AI-driven camera mounted on top of the hand. The camera runs a recently developed computer vision system, and the team is using deep learning to teach it to recognize different types of objects (currently around 500). When the wearer reaches for an object, the camera takes a picture, recognizes the object, and moves the hand into the appropriate grasp (a pinch for a pencil, a vertical grip for a water bottle), which the wearer then confirms using the muscle sensors.
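The pipeline described above can be sketched in a few lines: classify the object, choose a hand preshape, and act only once the wearer confirms. The grasp names, object classes, and lookup table below are illustrative assumptions; the Newcastle system uses a trained deep-learning classifier rather than a hard-coded table.

```python
# Hedged sketch of a camera-guided grasp pipeline.
# Class names and grasp types are hypothetical examples.

# Map recognized object classes to a few canonical grasp preshapes.
GRASP_FOR_CLASS = {
    "pencil": "pinch",         # thumb and forefinger
    "water_bottle": "power",   # vertical wrap-around grip
    "credit_card": "lateral",  # thumb against the side of the index finger
    "ball": "tripod",          # three-finger grip
}

def select_grasp(recognized_class, emg_confirmed):
    """Choose a hand preshape from the camera's classification,
    then act only if the wearer confirms via a muscle signal."""
    grasp = GRASP_FOR_CLASS.get(recognized_class, "power")  # fallback grip
    if not emg_confirmed:
        return None  # wearer did not confirm: do nothing
    return grasp

print(select_grasp("pencil", emg_confirmed=True))         # → pinch
print(select_grasp("water_bottle", emg_confirmed=False))  # → None
```

Keeping the wearer's muscle signal as the final confirmation step means the camera only proposes a grasp; the person still decides whether the hand closes.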

The truly remarkable part of this new system is that the hand responds to the user's intent. To pick something up, the user simply moves the hand toward the object; the hand assumes the right grasp and the user confirms it with a muscle signal, all within seconds. It is the closest thing to a real hand on the market, and it continues to learn objects the camera has never seen before. It also reacts roughly ten times faster than current systems that depend on muscle response alone.

The most elegant part of the whole system is its price tag. The camera is an inexpensive Logitech webcam, and the AI software can be trained very cheaply; the entire system is an affordable and immense improvement over anything the market can offer now. Dr. Kianoush Nazarpour said, “The beauty of this system is that it’s much more flexible and the hand is able to pick up novel objects — which is crucial since in everyday life people effortlessly pick up a variety of objects that they have never seen before.” Perhaps with this advancement, the Diana Ross lyric may become a reality for everyone.
