Programming Intelligent Underwater Robots

Intelligent underwater robots have increasingly been deployed as vital tools for mapping the ocean floor and monitoring pockets of the sea.

Most autonomous vehicles are commanded by a low-level sequence of instructions, such as a series of waypoints that guide the vehicle along a straight line. These sequences leave little latitude to compensate for failures, except to return to the surface and call for help.
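
Below is a minimal sketch of the kind of low-level script described above; the vehicle interface and command names are invented for illustration and are not an actual AUV command language.

```python
# A minimal, hypothetical sketch of a traditional low-level command
# sequence: the vehicle follows a fixed list of waypoints in order.
# The "vehicle" interface here is invented for illustration only.

waypoint_script = [
    ("goto",   {"lat": -14.05, "lon": 121.77, "depth_m": 30.0}),
    ("goto",   {"lat": -14.06, "lon": 121.78, "depth_m": 30.0}),
    ("sample", {"sensor": "sonar", "duration_s": 120}),
    ("goto",   {"lat": -14.07, "lon": 121.79, "depth_m": 10.0}),
]

def run(script, vehicle):
    """Execute commands in order; any failure aborts the whole mission."""
    for command, args in script:
        if not vehicle.execute(command, args):
            # No latitude to adapt: the only recourse is to surface
            # and call for help, as described above.
            vehicle.execute("surface", {})
            return False
    return True
```

Every contingency has to be anticipated by whoever writes the script, which is part of what makes such sequences slow to author and brittle in the field.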

Autonomous Mars rovers, spacecraft, and air vehicles are commanded in a similar manner; however, writing such a sequence is time-consuming and error-prone. Scientists spend most of their time writing these scripts, or low-level commands, leaving them little time to think about the actual scientific objectives.

To give autonomous underwater vehicles (AUVs) more cognitive capabilities, engineers at the Massachusetts Institute of Technology (MIT) have developed a new programming approach that allows humans to specify high-level goals while the vehicle handles the decision-making needed to best accomplish them.

In March, the team traveled to the western coast of Australia to test the autonomous mission-planning system. For three weeks, the MIT engineers, along with researchers from the Woods Hole Oceanographic Institution, the Australian Centre for Field Robotics, and the University of Rhode Island, tested several classes of AUVs and their ability to work as a team to map the ocean environment.

"Our cruise demonstrated that a large number of autonomous vehicles can be used at the same time to perform much more effective monitoring of the environment," explains Brian Williams, aeronautics and astronautics professor at MIT and principal developer of the mission-planning system.

COGNITIVE REASONING

While on the research cruise, Williams and his team demonstrated that their AUV, a Slocum Glider, could be commanded in terms of goals. These goals included which areas to explore, which areas to avoid, the scientific value of each exploration, and deadlines for performing the explorations.
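
As a rough illustration of what such goals might look like as data, here is a hypothetical sketch in Python; the field names and structure are assumptions, not the MIT system's actual interface.

```python
# Hypothetical sketch of a goal-based mission specification: regions to
# explore or avoid, a scientific value per exploration, and deadlines.
# Names and fields are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ExplorationGoal:
    region: str        # named survey area, e.g. a sector of the reef
    value: float       # relative scientific value of exploring it
    duration_s: float  # estimated time needed to survey the region
    deadline_s: float  # mission time by which the survey must finish

@dataclass
class MissionSpec:
    goals: list[ExplorationGoal]  # candidate explorations to choose from
    avoid: list[str]              # areas the glider must never enter

mission = MissionSpec(
    goals=[
        ExplorationGoal("reef_sector_A", value=5.0, duration_s=1800, deadline_s=7200),
        ExplorationGoal("reef_sector_B", value=3.0, duration_s=1200, deadline_s=3600),
    ],
    avoid=["shipping_lane", "shallow_shoal"],
)
```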

The glider was then able to select which areas to explore and which to skip, to schedule the explorations so that all deadlines were met, and to choose paths that moved it safely between the areas. The glider even decided where to go based on the locations and routes of other vehicles, so that all the vehicles could operate safely at the same time.

Throughout the three-week experiment, the team's Slocum Glider operated safely in the company of other autonomous vehicles. Although the other vehicles used traditional sequences, Williams hopes that all of them will eventually be commanded in terms of high-level goals.

"The first few days at Scott Reef were spent making sure that the Slocum Glider functioned properly, and that its acoustic sonar was doing an effective job of collecting data," explains Williams. "Over subsequent days we added functions that enabled it to be commanded in terms of goals."

First, the team added algorithms that let the Slocum Glider plan energy-efficient paths while navigating safely in close proximity to seamounts and other parts of the reef. They then added a capability that allowed the vehicle to monitor the state of its environment and to coordinate with the ship to learn the other vehicles' locations. The glider used this information to adapt its routes after receiving updates each time it surfaced.
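
The cycle might look something like the following sketch, in which a plain grid search stands in for the system's far more capable planner; the glider and ship interfaces are hypothetical.

```python
# Illustrative sketch of the surface-update cycle described above. A
# simple Dijkstra search over a cost grid stands in for the real planner;
# grid[r][c] is the energy cost of entering a cell, or None where the
# cell is blocked (seamounts, keep-out zones). All APIs are hypothetical.
import heapq

def plan_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = [(0.0, start, [start])]  # (energy cost, cell, path so far)
    visited = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                heapq.heappush(frontier,
                               (cost + grid[nr][nc], (nr, nc), path + [(nr, nc)]))
    return None  # no safe, reachable route

def on_surface(glider, ship, base_grid):
    """Each surfacing: learn the other vehicles' cells, replan around them."""
    grid = [row[:] for row in base_grid]
    for r, c in ship.reported_vehicle_cells():  # hypothetical ship API
        grid[r][c] = None                       # treat as a keep-out cell
    return plan_path(grid, glider.cell(), glider.next_goal_cell())
```

Treating the other vehicles' reported cells as temporary obstacles is one simple way to realize the kind of coordination described above.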

Next, the engineers added the ability to select, order, and schedule scientific goals, such as mapping a certain area of the ocean floor. With this added capability, the glider was able to adapt its goals, activity plan, schedule, and routes every time it came to the surface.
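
One simple way such behavior could be realized, purely as an assumption about how it might work, is a greedy, deadline-aware scheduler over the goal objects sketched earlier:

```python
# Assumed, simplified stand-in for the planner's goal selection: keep the
# goals whose deadlines can still be met, preferring earlier deadlines and
# higher scientific value; drop the rest. Uses ExplorationGoal from the
# earlier sketch.

def schedule(goals, now=0.0, travel_time_s=600.0):
    plan, t = [], now
    # Earliest deadline first; break ties in favor of higher value.
    for g in sorted(goals, key=lambda g: (g.deadline_s, -g.value)):
        finish = t + travel_time_s + g.duration_s
        if finish <= g.deadline_s:
            plan.append(g)  # commit to this goal and advance the clock
            t = finish
        # otherwise drop the goal: its deadline can no longer be met
    return plan

# Re-running schedule() at each surfacing, with an updated 'now', is what
# would let the plan adapt as conditions change.
```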

Finally, the team used these capabilities to have the Slocum Glider achieve its goals while navigating around vehicles operating in the reef at the same time. "To be safe, gliders normally operate far away from reefs and the coast, where there is little concern about collision," explains Williams.

"We demonstrated that the gliders could operate safely and autonomously near places that are scientifically interesting but would be too dangerous for vehicles that operate using traditional approaches."

SEA MEETS SPACE

Williams and his team have been working on high-level programming for about 15 years. After NASA lost contact with the Mars Observer spacecraft just days before its insertion into Mars' orbit in 1993, the agency realized it needed an autonomous system that would allow spacecraft to identify and fix problems without human aid.

By 1999, Williams, who was working at NASA's Ames Research Center, had developed and demonstrated the new system on NASA's Deep Space 1 probe, which successfully performed an asteroid flyby. "The system we demonstrated in March is similar, but the planner is much faster and more capable, and unlike the spacecraft, it reasons about how to move the vehicle around," he adds.

The updated system's hierarchical approach was actually inspired by the Star Trek Enterprise's top-down command structure. In fact, Williams even named his system after the fictional spacecraft.

At the highest level, the vehicle needs to decide which of the scientists' goals it will achieve and which ones it will drop, acting as a communication officer, as Uhura did in the original Star Trek. At the next level, the vehicle needs to act as a captain, coming up with a plan and a schedule for how to achieve its goals, just as Captain Kirk would do. Finally, at the lowest level, the vehicle needs to act as a skilled navigator (Star Trek's Chekov), planning how to move from one location to the next without colliding with the sea floor or other vehicles.
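
A skeletal sketch of such a three-level hierarchy is shown below; the class names follow the analogy, and everything about the implementation is an assumption rather than the actual design of Williams's system.

```python
# Hypothetical skeleton of the three-level hierarchy described above.
# Each level delegates downward; the implementations are toy stand-ins.

class CommunicationOfficer:
    """Highest level ("Uhura"): decide which science goals to keep."""
    def select(self, goals, time_budget_s):
        kept, used = [], 0.0
        for g in sorted(goals, key=lambda g: -g.value):  # most valuable first
            if used + g.duration_s <= time_budget_s:
                kept.append(g)
                used += g.duration_s
        return kept  # everything else is dropped

class Captain:
    """Middle level ("Kirk"): order and schedule the accepted goals."""
    def plan(self, goals):
        return sorted(goals, key=lambda g: g.deadline_s)

class Navigator:
    """Lowest level ("Chekov"): plan safe motion between locations."""
    def route(self, ordered_goals, plan_leg):
        legs, here = [], "start"
        for g in ordered_goals:
            legs.append(plan_leg(here, g.region))  # e.g. the path planner above
            here = g.region
        return legs

def decide(goals, time_budget_s, plan_leg):
    kept = CommunicationOfficer().select(goals, time_budget_s)
    ordered = Captain().plan(kept)
    return Navigator().route(ordered, plan_leg)
```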

NASA's system, Remote Agent, had the ability to plan and schedule activities, as well as to diagnose and repair its hardware; however, the system took eight hours to develop a plan and needed a lot of guidance. "Our current system is able to come up with plans for more complicated systems within a fraction of a second, without needing guidance from a human," says Williams.

FURTHER APPLICATIONS

The cognitive programming system was designed to work in a large number of applications. According to Williams, enabling AUVs to do a better job of monitoring the environment will allow researchers and officials to manage natural resources more effectively.

Aside from underwater vehicles, Williams is also working with collaborators in Brazil to demonstrate how these techniques could be used to monitor crops with autonomous air vehicles.

Additionally, the MIT team is working with the Caltech Jet Propulsion Laboratory to show how similar methods can be used to safely operate Mars rovers.

By providing robots with control over higher-level decision making, engineers and scientists would have more time to focus on scientific objectives, while the autonomous robot determines its own mission plan.

The new programming system would significantly reduce the size of the operational team needed on research cruises, and AUVs would be able to traverse more rugged environments.

Williams affirms, "In the future, we will want all the vehicles commanded in terms of goals, and we will want them to work collectively to achieve these goals."


This article originally appeared in the June 2015 print edition of PD&D.
