Robots Can Learn By Doing, Just Like Children Do

The robot uprising will be adorable.

Or at least, the beginning of life for a robot that can “learn” might be adorable. The University of Washington has brought computer scientists and developmental psychologists together to understand how robots can make connections by gathering data the same way babies do. The robots learn much as humans do: by performing simple tasks through trial and error and by repeating what they see humans do.

"If you want people who don't know anything about computer programming to be able to teach a robot, the way to do it is through demonstration -- showing the robot how to clean your dishes, fold your clothes, or do household chores,” said senior author Rajesh Rao, a professor of computer science and engineering at the University of Washington. “But to achieve that goal, you need the robot to be able to understand those actions and perform them on their own."

The research was published in the journal PLOS ONE in November. Its end product was a new probabilistic model for teaching robots to learn through imitation.

Children as young as 18 months use imitation to solve problems and work out how to achieve their goals. Part of that learning comes from play – what looks like a game may also be helping to form a baby’s mental model of the world.

The researchers worked on developing machine algorithms that could do the same thing.

The robot uses a learned probabilistic model to infer what a human wants it to do by watching the actions the human takes. Sensors track the human’s gaze, and the robot uses that information to guide its own head movements.
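To make that idea concrete, here is a minimal sketch of the kind of Bayesian goal inference described above: the robot keeps a belief over candidate goals and updates it each time it observes the human act. The goal names, the likelihood model, and the observed actions are hypothetical simplifications, not the published University of Washington model.

```python
# Toy Bayesian goal inference: maintain a belief over what the human
# is trying to do and update it after each observed action.

GOALS = ["move_object_left", "move_object_right", "pick_up_object"]  # hypothetical goals

def likelihood(action, goal):
    """P(observed action | goal): a toy model that favors goals whose
    name overlaps with the observed action's name."""
    return 0.9 if action in goal else 0.1

def update_belief(belief, action):
    """Bayes' rule: posterior is proportional to likelihood times prior."""
    posterior = {g: likelihood(action, g) * p for g, p in belief.items()}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

belief = {g: 1.0 / len(GOALS) for g in GOALS}  # start with a uniform prior
for observed in ["pick_up", "object"]:         # actions the robot watched
    belief = update_belief(belief, observed)

print(max(belief, key=belief.get))  # -> pick_up_object
```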

(Interestingly, infants who are exposed to blindfolds or other visual barriers quickly learn that they don’t need to follow a blindfolded adult’s gaze, since they know the person can’t see. The University of Washington robot could be taught the same thing.)

When the robot is paired with a human (not blindfolded) who is asked to perform a simple task, the robot can replicate that task and ask the human for feedback or help if it needs it. The end result is that the robot might not do exactly what the human does, but it will get the same job done. For example, a human might push an object, but a robot with a gripper hand might realize that it would be more efficient to pick the object up. The robot performs the task in its own way, having organically learned how to solve the problem.
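As a rough illustration of that idea, the sketch below, again with entirely hypothetical action names and success estimates, shows a robot choosing its own way to achieve an inferred goal and falling back to asking the human for help when none of its actions looks likely to succeed.

```python
# Goal-based imitation sketch: the robot copies the human's *goal*,
# not the exact motion, and picks whichever of its own actions it
# expects to succeed most often given its own body (it has a gripper).

ROBOT_ACTIONS = {
    "push_object": 0.6,       # estimated success probability (hypothetical)
    "grasp_and_place": 0.9,
}

def choose_action(actions=ROBOT_ACTIONS, threshold=0.5):
    """Pick the most promising action; if nothing clears the threshold,
    ask the human for feedback or help instead."""
    best = max(actions, key=actions.get)
    return best if actions[best] >= threshold else "ask_human_for_help"

# The human pushed the object, but grasping suits this robot better.
print(choose_action())  # -> grasp_and_place
```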

"Babies use their own self-experience to interpret the behavior of others -- and so did our robot," said psychology professor and Institute for Learning & Brain Sciences Lab co-director Andrew Meltzoff.

Next, the team wants to explore how to apply their learned probabilistic model to more complicated tasks.