
Wide-FOV Monocentric LF Camera

This work presents the first single-lens wide-field-of-view (FOV) light field (LF) camera. Shown above are two 138° LF panoramas and a depth estimate. These are 2D slices of larger 72 MPix (15 × 15 × 1600 × 200) 4D LFs. The depth estimate is based on a standard local 4D gradient method. The superresolution, parallax pan, and refocus examples below demonstrate more of the 4D structure of these panoramas. Click here for individual Panoramas.
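
For readers unfamiliar with local gradient methods, the sketch below illustrates the idea behind the depth estimate; it is a minimal illustration rather than the implementation used here, and the array layout and function name are assumptions. Disparity at each pixel is recovered as the least-squares slope of the local epipolar lines, i.e. a ratio of angular to spatial derivatives.

```python
import numpy as np

def disparity_from_gradients(lf, eps=1e-6):
    """Illustrative local-gradient disparity estimate for a 4D light field.

    lf : 4D array indexed [v, u, t, s] (angular v, u; spatial t, s),
         e.g. 15 x 15 x 200 x 1600 as for the panoramas above.
    Returns a per-pixel disparity map; metric depth follows from the
    camera's calibrated geometry.
    """
    # A scene point traces a line of constant intensity through each
    # epipolar (u, s) slice; the line's slope encodes its disparity.
    L_u = np.gradient(lf, axis=1)  # derivative along angular dimension u
    L_s = np.gradient(lf, axis=3)  # derivative along spatial dimension s

    # Least-squares slope over the angular window: minimizing
    # sum (L_u - d * L_s)^2 gives d = sum(L_u * L_s) / sum(L_s^2).
    # The sign convention depends on the LF parameterization.
    num = (L_u * L_s).sum(axis=(0, 1))
    den = (L_s * L_s).sum(axis=(0, 1)) + eps
    return num / den
```

In practice one would also aggregate the (v, t) gradient pair and weight estimates by local gradient magnitude, as is typical of local LF depth methods.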

Both wide-FOV imaging and LF capture have been shown to simplify and enhance a range of computer vision tasks, and we expect their combination to find applications spanning autonomous vehicles, virtual and augmented reality capture, and robotics in general.

(left) The optical prototype employs a novel relay system and a rotating arm to emulate a tiled-sensor camera; (top right) the main lens and lenslet array; (bottom right) the monocentric lens (foreground), Lytro Illum (center), and a conventional lens with similar FOV and resolution (background).

Publications

•  G. M. Schuster, D. G. Dansereau, G. Wetzstein, and J. E. Ford, “Panoramic single-aperture multi-sensor light field camera,” Optics Express, vol. 27, no. 26, pp. 37257–37273, 2019. Available here.

•  D. G. Dansereau, G. M. Schuster, J. E. Ford, and G. Wetzstein, “A wide-field-of-view monocentric light field camera,” in Computer Vision and Pattern Recognition (CVPR), 2017. Available here, poster here.

•  G. M. Schuster, I. P. Agurok, J. E. Ford, D. G. Dansereau, and G. Wetzstein, “Panoramic monocentric light field camera,” in International Optical Design Conference (IODC), 2017.

Collaborators

This work was a collaboration between Donald Dansereau and Gordon Wetzstein of the Stanford Computational Imaging Lab, and Joseph Ford, Glenn Schuster, and Ilya Agurok of the Photonic Systems Integration Laboratory at UC San Diego.

Acknowledgments

We thank Kurt Akeley and Lytro for a hardware donation that enabled this work. This work is supported by the NSF/Intel Partnership on Visual and Experiential Computing (Intel #1539120, NSF #IIS-1539120). The authors thank Google ATAP for providing the Omnivision sensor interface, and Julie Chang and Sreenath Krishnan for their help with early optical prototypes. The monocentric lenses used in this work were fabricated within the DARPA SCENICC research program.
