When it comes to picking up objects, we humans never think twice about how to grasp an item. However, this is a task that robots tend to struggle with, especially when outside of a structured setting.
Now, researchers from Queensland University of Technology have developed a faster and more accurate method for robots to grasp objects even in changing or cluttered environments.
Using a newly developed artificial neural network and a depth camera, a two-fingered picking robot was able to determine the best grasp for a variety of objects: it scans the scene and the network maps every pixel of the captured depth image to an estimate of grasp quality, then executes the highest-scoring grasp.
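The core idea of per-pixel grasp scoring can be sketched in a few lines. This is a minimal illustration, not the researchers' actual model: `predict_quality_map` is a hypothetical placeholder standing in for the trained network, which would output a grasp-quality score for every pixel of the depth image; the robot would then grasp at the highest-scoring pixel.

```python
import numpy as np

# Hypothetical stand-in for the trained network: it maps a depth image
# to a same-sized map of per-pixel grasp-quality scores in [0, 1].
def predict_quality_map(depth_image: np.ndarray) -> np.ndarray:
    # Placeholder heuristic for illustration: favor pixels closer to
    # the camera (a real network learns this mapping from data).
    inverted = depth_image.max() - depth_image
    return inverted / (inverted.max() + 1e-9)

def best_grasp_pixel(depth_image: np.ndarray):
    """Return the (row, col) of the highest-scoring grasp point."""
    quality = predict_quality_map(depth_image)
    return np.unravel_index(np.argmax(quality), quality.shape)

# Toy 4x4 depth image (meters); the nearest pixel wins under the
# placeholder heuristic above.
depth = np.array([
    [0.80, 0.80, 0.80, 0.80],
    [0.80, 0.55, 0.60, 0.80],
    [0.80, 0.60, 0.62, 0.80],
    [0.80, 0.80, 0.80, 0.80],
])
row, col = best_grasp_pixel(depth)
print(row, col)  # → 1 1
```

Because the network scores the whole image in one pass, the robot can re-scan and re-score as the scene changes, which is what allows grasping objects that move during the picking attempt.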
In real-world tests, the robot achieved an 83 percent grasp success rate on previously unseen objects with adversarial geometry, 88 percent on household objects that were moved during the picking attempt, and 81 percent when picking in dynamic clutter.
QUT researchers say this new method could eventually be used in a variety of industries, from warehouse picking and sorting for online shopping to fruit harvesting.