Neural Network Helps These Robots Learn Dexterity

Continuous feedback from a neural network learning system has reduced 14 robots’ failure rate from 34 percent to 18 percent.

These Google Research robots have unprecedented levels of coordination. That means they can manipulate small objects with precision and dexterity, and they do it by spending a long time learning.

At Google Research’s labs, 14 different grasping robots were tasked with picking up and moving objects. Data from all 14, including images from their cameras and readings from sensors in the hardware, was then fed into a deep convolutional neural network (CNN), which learned to predict the outcome of an attempted grasp.
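To make that setup concrete, here is a minimal sketch of what such a grasp-outcome predictor could look like. This is an illustrative PyTorch example, not Google Research’s actual architecture: the layer sizes, the 7-dimensional motion encoding, and the names `GraspSuccessNet` and `motion_dim` are all assumptions made for the sake of the example.

```python
# Minimal sketch of a grasp-outcome predictor, assuming a PyTorch-style
# setup. The architecture and input shapes are illustrative guesses, not
# the network Google Research actually used.
import torch
import torch.nn as nn

class GraspSuccessNet(nn.Module):
    """Predicts the probability that a candidate grasp will succeed,
    given a camera image and a proposed motion command."""
    def __init__(self, motion_dim: int = 7):  # hypothetical 7-number motion encoding
        super().__init__()
        self.vision = nn.Sequential(             # CNN over the RGB image
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(               # fuse image and motion features
            nn.Linear(64 + motion_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),                    # logit for grasp success
        )

    def forward(self, image: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        feats = self.vision(image)
        logit = self.head(torch.cat([feats, motion], dim=-1))
        return torch.sigmoid(logit)              # probability the grasp succeeds

# Training signal comes from logged attempts: frame + motion -> did it work?
net = GraspSuccessNet()
image = torch.randn(8, 3, 128, 128)             # batch of camera frames
motion = torch.randn(8, 7)                      # batch of candidate motions
success = torch.randint(0, 2, (8, 1)).float()   # 1 = grasp succeeded
loss = nn.functional.binary_cross_entropy(net(image, motion), success)
```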

“Our work is predicated on the hypothesis that large datasets will have a transformative effect on robot capability,” researcher Sergey Levine told IEEE Spectrum.

The robot arm uses a monocular RGB camera to “see” its surroundings and a two-fingered gripper, mounted on an arm with seven degrees of freedom, to grab the objects in front of it. Robots need hand-eye coordination to manipulate those objects successfully, and letting them practice allows them to teach themselves to connect the actions they perform with the sensor data that results. This cause-and-effect process isn’t intuitive for the robots, so having multiple bots work on the same problem for a long time lets them learn by doing: they “see” the objects using monocular visual servoing, then pick them up with the two-fingered gripping “hand.”
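One way to picture that servoing loop is as a repeated predict-and-act cycle: sample a batch of candidate motions, score each against the current camera frame with the learned predictor, and execute the most promising one. The sketch below reuses the hypothetical `GraspSuccessNet` from above; random sampling of candidates is an assumption standing in for whatever optimizer the real system uses.

```python
# Sketch of one closed-loop servoing step, reusing the hypothetical
# GraspSuccessNet above: sample candidate motions, score each with the
# network, and execute the best-scoring one. The random-sampling scheme
# is an assumption, not the published method.
import torch

def servo_step(net, image, num_candidates: int = 64):
    """Pick the candidate motion the network predicts is most likely
    to lead to a successful grasp from the current camera frame."""
    candidates = torch.randn(num_candidates, 7)        # random motion proposals
    frames = image.expand(num_candidates, -1, -1, -1)  # same frame for each
    with torch.no_grad():
        scores = net(frames, candidates).squeeze(-1)   # predicted success probs
    return candidates[scores.argmax()]                 # best-scoring motion

# Each control cycle: grab a fresh frame, choose a motion, move the arm a
# little, and repeat -- the hand-eye loop the robots learn to exploit.
frame = torch.randn(1, 3, 128, 128)                    # current camera frame
best_motion = servo_step(net, frame)
```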

The 14 robots performed 800,000 grasps, amounting to roughly 3,000 hours of practice. Continuous feedback from the neural network learning system has reduced the robots’ failure rate from 34 percent to 18 percent.

The feedback also generated unexpected, naturally learned behaviors, such as autonomously moving one object out of a group of objects.

Using multiple robots introduces potential problems too, though: each robot views the scene from a slightly different camera angle and has slightly differently shaped grippers, thanks to human error in assembly and wear and tear on the “fingers.” Because of this, the robots are still only being trained on very specific behaviors, and the ability to pick differently shaped objects out of a box doesn’t mean they can also pick the same objects off a shelf.

On the other hand, the robot grippers are generic, and applying the data to robots with similar grippers would be relatively easy.

The next step in the process is to use the data to make the robots more adaptable, so that they can pick objects up from any surface, not just the one they practiced on.

(Via IEEE Spectrum.)
