Engineering Newswire: Robots That Learn Like Babies

This Engineering Newswire looks at controlling a telepresence robot with our brains, simulating a bleeding leg wound to train combat medics, and teaching robots to learn like babies.

Telepresence Robot Takes Directions from Brain

Italian and Swiss researchers are giving adults with motor disabilities more independence by combining brain control with a telepresence robot. The team uses the term “shared control” to describe their system, which lets the robot and user work together to achieve a goal.

A non-invasive brainwave headset interprets the user’s thoughts, so the user can tell the robot where to go by imagining the movement of hands and feet. Each imagined movement is associated with a specific command, such as forward, backward, left, or right. When the software system receives the signals, it converts them into commands that the robot understands.
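To make the idea concrete, here is a minimal sketch of how classified motor-imagery signals could be mapped to navigation commands under shared control. The class labels, confidence threshold, and data structures are illustrative assumptions, not the researchers' actual software.

```python
# Hypothetical sketch: mapping motor-imagery classifications to robot commands.
# Labels, thresholds, and interfaces are assumptions for illustration only.

from dataclasses import dataclass
from typing import Optional

# Imagined movements mapped to navigation commands the robot understands.
IMAGERY_TO_COMMAND = {
    "imagine_both_feet": "forward",
    "imagine_both_hands": "backward",
    "imagine_left_hand": "left",
    "imagine_right_hand": "right",
}

@dataclass
class Classification:
    label: str         # which imagined movement the headset's classifier detected
    confidence: float  # classifier confidence in [0, 1]

def decode_command(c: Classification, threshold: float = 0.7) -> Optional[str]:
    """Convert a brain-signal classification into a robot command.

    Returns None when confidence is low; under 'shared control' the robot
    would then keep navigating on its own (e.g. avoiding obstacles).
    """
    if c.confidence < threshold:
        return None
    return IMAGERY_TO_COMMAND.get(c.label)

# A confident "left hand" imagery becomes a 'left' command;
# a low-confidence reading yields None and the robot stays autonomous.
print(decode_command(Classification("imagine_left_hand", 0.85)))  # -> "left"
print(decode_command(Classification("imagine_both_feet", 0.40)))  # -> None
```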

Bleeding Virtual Leg to Help Train Combat Medics

To better prepare combat medics for emergencies in the field, researchers from the University of California, Los Angeles have created the first detailed simulation of a human leg injured by flying shrapnel.

In addition to showing how a ballistic projectile would pass through the leg, the researchers developed a hemorrhage simulation of the injured leg both standing upright and lying on the ground. The simulation is extremely true to life: it includes bone, muscle, and skin, along with a very realistic vascular system that drives the flow of blood.

Robots That Learn Like Babies

University of Washington developmental psychologists recently collaborated with their computer scientist colleagues to teach robots how to learn like infants.

The team has demonstrated that robots can "learn" much like kids do: by amassing data through exploration, watching a human perform a task, and determining how best to carry out that task on their own.

The team used research on babies to develop machine learning algorithms that let a robot explore how its own actions lead to different outcomes. The robot then uses that model to infer what a human wants it to do and to complete the task, and even to "ask" for help if it isn't certain it can.
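As a rough illustration of that explore-then-imitate loop, the sketch below has a robot learn a simple action-to-outcome model through random exploration, then pick the action most likely to reproduce a demonstrated outcome, asking for help when no action is a confident match. The actions, outcomes, and confidence threshold are assumptions for illustration, not the UW team's actual algorithm.

```python
# Hypothetical sketch of the 'learn like a baby' loop described above.
# Actions, outcomes, and thresholds are illustrative assumptions.

import random
from collections import Counter, defaultdict

ACTIONS = ["reach_left", "reach_right", "push", "grasp"]

def simulate_world(action: str) -> str:
    """Stand-in environment: returns the outcome an action produces (with noise)."""
    outcomes = {
        "reach_left": "hand_at_left_object",
        "reach_right": "hand_at_right_object",
        "push": "object_moved",
        "grasp": "object_held",
    }
    # 10% of the time the action fails and nothing visible happens.
    return outcomes[action] if random.random() > 0.1 else "nothing_happened"

# 1) Exploration: try actions and count which outcomes each one produces.
model = defaultdict(Counter)
for _ in range(500):
    a = random.choice(ACTIONS)
    model[a][simulate_world(a)] += 1

def choose_action(demonstrated_outcome: str, min_confidence: float = 0.5) -> str:
    """2) Imitation: pick the action most likely to reproduce the human's outcome.

    If no explored action reliably produces that outcome, 'ask' for help.
    """
    best_action, best_prob = None, 0.0
    for a, counts in model.items():
        prob = counts[demonstrated_outcome] / sum(counts.values())
        if prob > best_prob:
            best_action, best_prob = a, prob
    if best_prob < min_confidence:
        return "ask_for_help"
    return best_action

print(choose_action("object_held"))     # likely "grasp"
print(choose_action("object_stacked"))  # never observed -> "ask_for_help"
```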