Two sensors that have been proposed for use on a space robot are cameras and radar. Considered individually, neither of these sensors provides enough information for a computer to derive a good surface description of a remote object. Their combination, however, can produce a complete surface model.
The lack of atmosphere in space presents special problems for optical image sensors. Frequently, edges are lost in shadow and surface details are obscured by diffraction effects caused by specularly reflected light. An alternative sensor for space robotic applications is microwave radar. The polarized radar cross-section (RCS) is a simple, well-understood microwave measurement that contains limited information about a scattering object's surface shape.
These two data sets are fused through an error minimization procedure. First, an incomplete surface model is derived from the camera image. Next, the unknown characteristics of the surface are represented by a parameter. Finally, the correct value for this parameter is computed by iteratively generating theoretical predictions of the RCS and comparing them to the observed value.
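The iterative step above can be sketched in a few lines. This is a minimal toy, not the procedure from the text: `predict_rcs` is a hypothetical stand-in for a real forward model (such as a method-of-moments solver), and the shape parameter is assumed to be a single scalar searched over a bracketing interval.

```python
import math

def predict_rcs(p):
    # Hypothetical forward model: RCS as a smooth function of the
    # shape parameter p (an arbitrary placeholder, not a physical model).
    return 2.0 + 0.5 * p * p

def recover_parameter(observed_rcs, lo, hi, tol=1e-6):
    # Golden-section search on the squared prediction error:
    # shrink [lo, hi] until it brackets the minimizing parameter.
    phi = (math.sqrt(5) - 1) / 2
    def err(p):
        return (predict_rcs(p) - observed_rcs) ** 2
    a, b = lo, hi
    while b - a > tol:
        c = b - phi * (b - a)
        d = a + phi * (b - a)
        if err(c) < err(d):
            b = d
        else:
            a = c
    return (a + b) / 2

true_p = 1.3
observed = predict_rcs(true_p)          # simulated, noise-free measurement
estimate = recover_parameter(observed, 0.0, 3.0)
```

A one-dimensional search suffices here because a single parameter is assumed; with several unknown parameters, a gradient-based or multidimensional minimizer would take its place.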
A theoretical RCS may be computed from the surface model in several ways. One such RCS prediction technique is the method of moments. The method of moments can be applied to an unknown surface only if some shape information is available from an independent source. Here, the camera image provides the necessary information. When the method of moments is used to predict the RCS, the error minimization algorithm will converge in most cases.
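The essence of the method of moments — discretizing an integral equation into a matrix equation and solving for an unknown source distribution — can be illustrated on the simplest classical example: the charge on a thin conducting wire held at a fixed potential. This toy (in units with 4·pi·eps0 = 1) is only analogous to the radar formulation, which solves for surface currents instead; every name below is illustrative, not from the source.

```python
import math

N = 20          # number of segments (pulse basis functions)
L = 1.0         # wire length
a = 0.001       # wire radius
V = 1.0         # wire potential
d = L / N       # segment width
centers = [(n + 0.5) * d for n in range(N)]

# Moment matrix: Z[m][n] integrates the 1/r kernel over segment n,
# observed at the center of segment m (exact for the thin-wire kernel).
def kernel(xm, xn):
    return (math.asinh((xm - xn + d / 2) / a)
            - math.asinh((xm - xn - d / 2) / a))

Z = [[kernel(centers[m], centers[n]) for n in range(N)] for m in range(N)]
rhs = [V] * N

# Solve Z * rho = rhs by Gaussian elimination with partial pivoting.
def solve(A, b):
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

rho = solve(Z, rhs)  # line charge density per segment; peaks at the ends
```

The structure carries over directly: in the radar case, the matrix entries couple surface patches through the scattering kernel, and the camera-derived surface model supplies the geometry needed to build that matrix.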
By combining the microwave and optical information in this way, the shapes of some three-dimensional objects have been accurately recovered. Simulations and experiments were performed on plates, ellipsoids, and an arbitrary curved object. Simulations show that the error in the recovered shapes remains small provided the RCS measurement error stays within a modest tolerance. Experiments confirm that the RCS can be measured within this tolerance.
In general, this investigation has shown the usefulness of sensor fusion applied to the shape reconstruction problem in space. Furthermore, a specific framework has been developed and shown to be effective for integrating the two types of sensors typically found on space vehicles.