Distributed Image Compression in Camera Networks
Dense networks of wireless, battery-powered sensors are now feasible thanks to recent hardware advances, but issues such as power consumption still hinder widespread deployment. Fortunately, in a dense network of sensors, cross-sensor correlation can be exploited to reduce communication power consumption. In this thesis, we examine a novel technique for distributed image compression in sensor networks. First, sensors share low-bandwidth descriptors of their fields of view, in the form of image feature points, allowing them to identify a common region of overlap. The overlapping region is then compressed via spatial downsampling, and image super-resolution techniques are employed at the receiver to reconstruct an original-resolution estimate of the common area from the set of low-resolution sensor images. We demonstrate the feasibility of the algorithm with a prototype implementation, and we evaluate its effectiveness on a set of real sensor images gathered with an off-the-shelf digital camera.
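The downsample-then-fuse pipeline described above can be sketched in a few lines. The sketch below is a toy illustration under stated assumptions, not the thesis implementation: the feature-point matching step is omitted (the overlap region is assumed already identified), downsampling is block averaging, and the "super-resolution" step is replaced by a crude stand-in that upsamples each low-resolution view and averages them. The function names (`downsample`, `upsample`, `fuse`) are hypothetical.

```python
import numpy as np

def downsample(img, factor=2):
    """Spatial downsampling: average each factor-by-factor block."""
    h, w = img.shape
    trimmed = img[:h - h % factor, :w - w % factor]
    return trimmed.reshape(
        trimmed.shape[0] // factor, factor,
        trimmed.shape[1] // factor, factor).mean(axis=(1, 3))

def upsample(img, factor=2):
    """Nearest-neighbour upsampling (stand-in for a real interpolator)."""
    return np.kron(img, np.ones((factor, factor)))

def fuse(low_res_views, factor=2):
    """Fuse several low-resolution views of the same overlap region into
    one original-resolution estimate by averaging the upsampled views
    (a crude stand-in for image super-resolution)."""
    return np.mean([upsample(v, factor) for v in low_res_views], axis=0)

# Toy demo: two sensors observe the same 8x8 overlap region, each with
# slightly different noise; the receiver fuses their downsampled views.
rng = np.random.default_rng(0)
scene = rng.random((8, 8))
views = [downsample(scene + 0.01 * rng.standard_normal(scene.shape))
         for _ in range(2)]
estimate = fuse(views)
```

A real implementation would register the views to sub-pixel accuracy before fusion; it is exactly that sub-pixel misalignment between sensors that multi-image super-resolution exploits to recover detail lost in downsampling.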