Example Based Depth from Fog

The presence of fog in an image reduces contrast, which is usually considered a nuisance in imaging applications; we instead treat it as useful information for image enhancement and scene understanding. In this work, we present a new method for estimating depth from fog in a single image and for single-image fog removal. We use an example-based approach trained on data with known fog and depth: a data-driven method and a physics-based model are combined to form the example-based learning framework for single-image fog removal. In addition, we account for varying fog colors by applying a linear transformation of the RGB colorspace. The approach can learn from a variety of scenes and relaxes the common constraint of a fixed camera position. We present depth estimation and fog removal results from single images.
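As background (this exact formulation is not spelled out on this page), the physics-based model referred to above is typically the standard atmospheric scattering model. With original image x, foggy image y, depth d, scattering coefficient β, airlight a, and transmission t, it can be written as:

    y(p) = x(p)\, t(p) + a\,\bigl(1 - t(p)\bigr), \qquad t(p) = e^{-\beta\, d(p)}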

Paper Accepted

Results

    Figure 1 - Transmission Map Comparison

    Comparison of the Dark Channel Prior (DCP) method of He et al. [1] with our example-based method. Upper left: observed foggy image. Upper right: true transmission. Lower left: transmission from the DCP method. Lower right: transmission from our proposed method.
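    For context on what the DCP baseline in Figure 1 computes, the sketch below shows dark-channel-based transmission estimation following He et al. [1]. It is a minimal, unoptimized illustration; the function names are ours, the patch size of 15 and ω = 0.95 are the values suggested in [1], and the input is assumed to be a float RGB image in [0, 1] with a known airlight vector.

        import numpy as np

        def dark_channel(img, patch=15):
            # Per-pixel minimum over the color channels, then a local min
            # filter over patch x patch neighborhoods (edge-padded).
            dark = img.min(axis=2)
            pad = patch // 2
            padded = np.pad(dark, pad, mode='edge')
            out = np.empty_like(dark)
            h, w = dark.shape
            for i in range(h):
                for j in range(w):
                    out[i, j] = padded[i:i + patch, j:j + patch].min()
            return out

        def dcp_transmission(img, airlight, omega=0.95, patch=15):
            # Transmission estimate t = 1 - omega * dark_channel(I / A),
            # as in He et al. [1]; airlight is an RGB triple.
            normalized = img / np.asarray(airlight).reshape(1, 1, 3)
            return 1.0 - omega * dark_channel(normalized, patch)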

    Figure 2 - Dictionary Construction

    The dictionary is constructed in a similar fashion to Freeman et al. [2]. Given a depth map d and the original image x, the foggy image y is synthesized using arbitrary scattering β and airlight a values to create the transmission map t, as sketched below.
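    A minimal sketch of this synthesis step, assuming the standard atmospheric scattering model, a linear-intensity image x in [0, 1], a per-pixel depth map d, and user-chosen β and airlight values (the function name and gray-airlight default are illustrative, not taken from the paper):

        import numpy as np

        def synthesize_fog(x, d, beta=1.0, airlight=0.8):
            # Transmission from depth: t = exp(-beta * d), shape HxW.
            t = np.exp(-beta * d)
            # Composite clean image and airlight: y = x * t + a * (1 - t).
            # airlight may be a scalar or an RGB triple (e.g. colored fog).
            t3 = t[..., None]              # broadcast t over color channels
            y = x * t3 + airlight * (1.0 - t3)
            return y, t

    In this setup, patches of the synthesized foggy image y would presumably be paired with the corresponding patches of t (or d) to form the dictionary, in analogy to the low-/high-resolution patch pairs of Freeman et al. [2].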

References

[1] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," in Proc. IEEE CVPR, 2009, pp. 1956–1963.
[2] W. T. Freeman, T. R. Jones, and E. C. Pasztor, "Example-based super-resolution," IEEE Computer Graphics and Applications, vol. 22, no. 2, pp. 56–65, 2002.