Clouds add depth to computer landscapes

A single camera watching the shadows of clouds moving across a landscape can allow a computer to calculate topography

CLOUDS are not normally a boon for image-processing algorithms: their shadows can distort objects in a scene, making those objects difficult for software to recognise.

However, Nathan Jacobs and colleagues at Washington University in St Louis, Missouri, are making shadows work for them, helping them to create a depth map of a scene from a single camera.

Depth maps record the geography of a 3D landscape and represent it in 2D, for uses such as surveillance and atmospheric monitoring. They are usually created with lasers, because adjacent pixels in a camera image do not necessarily correspond to adjacent geographic points: one pixel might lie on a hillside in the near distance, while its neighbour belongs to a more distant landmark.

Enter the clouds: the shadows they cast can hint at real-world geography, Jacobs's team says. By comparing a series of images and recording the time at which a passing shadow changes each pixel's colour, the researchers can estimate the distance between pairs of pixels. "If the wind speed is known you can reconstruct the scene with the right scale," says Jacobs. "That is notoriously difficult from a single camera viewpoint."
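The core idea can be sketched in a few lines of code. This is a minimal illustration, not the team's actual method: it assumes a fixed frame rate, a simple intensity threshold to detect when a shadow darkens a pixel, and a shadow edge sweeping at a known, constant wind speed along one axis; all function names and parameters here are invented for illustration.

```python
import numpy as np

def shadow_arrival_times(frames, threshold=0.5):
    """For each pixel, find the frame index at which a passing cloud
    shadow first darkens it (illustrative intensity-drop threshold)."""
    frames = np.asarray(frames, dtype=float)   # shape: (time, rows, cols)
    baseline = frames.max(axis=0)              # sunlit brightness per pixel
    darkened = frames < threshold * baseline   # True once a shadow covers it
    return darkened.argmax(axis=0)             # first True along time axis

def along_wind_distances(arrival, frame_rate_hz, wind_speed_mps):
    """Convert shadow arrival-time differences into distances, using the
    known wind speed to set the real-world scale."""
    dt = arrival / frame_rate_hz               # seconds until shadow arrives
    return dt * wind_speed_mps                 # metres the shadow travelled

# Toy scene: a shadow edge sweeping left to right across a 1x4 pixel strip
frames = [[[1.0, 1.0, 1.0, 1.0]],
          [[0.2, 1.0, 1.0, 1.0]],
          [[0.2, 0.2, 1.0, 1.0]],
          [[0.2, 0.2, 0.2, 1.0]],
          [[0.2, 0.2, 0.2, 0.2]]]

arrival = shadow_arrival_times(frames)                       # [[1, 2, 3, 4]]
dist = along_wind_distances(arrival, frame_rate_hz=1.0,
                            wind_speed_mps=10.0)             # [[10, 20, 30, 40]]
```

In this toy example the shadow reaches each successive pixel one frame later, so at 10 metres per second the pixels are inferred to sit 10 metres apart along the wind direction. The real system must also cope with unknown wind direction, terrain slope and pixels that are not neighbours on the ground.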

Compared with laser-created maps, average positional error in the cloud map was just 2 per cent, Jacobs says. The work is to be presented at the Computer Vision and Pattern Recognition conference in San Francisco this week.

Issue 2765 of New Scientist magazine