CLOUDS are not normally a boon for image-processing algorithms because their shadows can distort objects in a scene, making them difficult for software to recognise.
Depth maps capture the geometry of a 3D landscape and represent it in 2D, for uses such as surveillance and atmospheric monitoring. They are usually created using lasers, because adjacent pixels in a camera image do not necessarily correspond to adjacent geographic points: one pixel might fall on a hill in the near distance, while an adjoining one captures a more distant landmark.
Enter the clouds: the shadows they cast can hint at real-world geography, Jacobs's team says. By comparing a series of images and recording the times at which passing shadows change each pixel's colour, the team can estimate the relative distance between the points that different pixels represent. "If the wind speed is known you can reconstruct the scene with the right scale," says Jacobs. "That is notoriously difficult from a single camera viewpoint."
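The timing trick can be sketched in a few lines. This is a hypothetical illustration, not the team's actual code: it assumes a fixed camera, a known wind speed, and a shadow sweeping along the wind direction, so that cross-correlating two pixels' brightness histories yields the time lag between shadow arrivals, and lag multiplied by wind speed gives the along-wind separation of the two scene points. The function name and all parameter values are invented for the demo.

```python
import numpy as np

def shadow_lag_distance(series_a, series_b, wind_speed, fps):
    """Estimate the along-wind separation (metres) of two scene points
    from the time lag of a passing cloud shadow in their brightness
    histories. Hypothetical sketch: assumes the shadow moves at the
    wind speed, straight from point A towards point B."""
    a = series_a - series_a.mean()   # remove baseline brightness
    b = series_b - series_b.mean()
    # Full cross-correlation; peak index gives the lag of b relative to a.
    corr = np.correlate(b, a, mode="full")
    lag_frames = corr.argmax() - (len(a) - 1)
    return wind_speed * (lag_frames / fps)

# Synthetic demo: a shadow (a dip in brightness, drawn here as a
# Gaussian pulse) crosses pixel B 12 frames after pixel A.
fps, wind = 10.0, 5.0                         # frames/s, m/s (assumed)
t = np.arange(200)
a = np.exp(-0.5 * ((t - 60) / 5.0) ** 2)      # shadow centred at frame 60
b = np.exp(-0.5 * ((t - 72) / 5.0) ** 2)      # same shadow, 12 frames later
d = shadow_lag_distance(a, b, wind, fps)      # ≈ 12/10 s × 5 m/s = 6 m
```

Repeating this for every pixel pair, and over many cloud passes at different wind directions, is what would let relative lags be stitched into a full depth map.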
Compared with laser-created maps, average positional error in the cloud map was just 2 per cent, Jacobs says. The work is to be presented at the Computer Vision and Pattern Recognition conference in San Francisco this week.