Richardson-Lucy Deblurring for Moving Light Field Cameras
(left) Motion blur in 3-D scenes takes on a complex variety of shapes; (right) we introduce a light field generalization of Richardson-Lucy deblurring that deals correctly with complex 3-D geometry and 6-DOF camera motion. No depth estimation is performed; only the camera's trajectory is required.
We generalize Richardson-Lucy deblurring to 4-D light fields by replacing the convolution steps with light field rendering of motion blur. We include a novel regularization term that maintains parallax information in the light field, and employ 4-D anisotropic total variation to reduce noise and ringing.
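To make the structure concrete, here is a minimal sketch of the iteration with the blur process abstracted as a pair of linear operators. This is an illustration under our own naming, not the paper's implementation; `forward` and `adjoint` are placeholders for the blur simulation and its adjoint.

```python
import numpy as np

def richardson_lucy(measured, forward, adjoint, n_iters=50, eps=1e-8):
    """Richardson-Lucy with the blur process abstracted as a linear
    operator: in conventional RL, `forward` and `adjoint` are 2D
    convolutions with the kernel and its flip; in our generalization
    they are light field renderings of motion blur along the known
    camera trajectory."""
    est = np.full_like(measured, measured.mean())  # flat initial estimate
    for _ in range(n_iters):
        simulated = forward(est)              # simulate the blur process
        ratio = measured / (simulated + eps)  # measured vs. simulated blur
        est = est * adjoint(ratio)            # multiplicative correction
    return est
```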
- Generalization of Richardson-Lucy deblurring to moving light field cameras
- 6-DOF camera motion in arbitrary 3D scenes
- Deblurring of nonuniform apparent motion without depth estimation
- Novel parallax-preserving light field regularization
- Low-dimensional motion model allows efficient convergence
- Mathematical proof that the algorithm converges to the maximum-likelihood (ML) estimate under Poisson noise (see the update rule below)
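For reference, the update rule at the heart of the method, in our own notation rather than the paper's: with a linear blur operator $A$, normalized so its columns sum to one, and measured blurry light field $b$, Richardson-Lucy iterates

```latex
\hat{x}^{(k+1)} = \hat{x}^{(k)} \odot A^{\top}\!\left(\frac{b}{A\,\hat{x}^{(k)}}\right),
```

whose fixed points are stationary points of the Poisson log-likelihood $\sum_i \big[ b_i \log (A\hat{x})_i - (A\hat{x})_i \big]$; the generalization replaces convolution with the light field blur-rendering operator $A$.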
Limitations: Like conventional Richardson-Lucy, this method is not blind. However, we anticipate that it can be extended to blind deblurring by mirroring developments in 2D deconvolution.
Publications
• D. G. Dansereau, A. Eriksson, and J. Leitner, “Richardson-Lucy deblurring for moving light field cameras,” in CVPR Workshop on Light Fields for Computer Vision (CVPR:LF4CV), in press, 2017.
Collaborators
I started this work with Juxi Leitner and Anders Eriksson while at the Australian Centre for Robotic Vision at the Queensland University of Technology, and have continued it from my position at Stanford.
Presentations
Presentation from the 2017 CVPR Light Fields for Computer Vision (LF4CV) workshop.
Acknowledgments
This research was supported by the Australian Research Council through the Centre of Excellence for Robotic Vision (project number CE140100016) and grant DE130101775. Computational resources and services were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia.
Gallery
Richardson-Lucy deblurring iteratively refines an estimate of the deblurred image based on a simulation of the blur process. Conventionally, the forward and backward blur steps are implemented as 2D convolutions with a blur kernel.
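As an illustration of the conventional case, the forward and backward steps can be written as convolution with the kernel and with its 180-degree flip, which is its adjoint. A sketch assuming the generic `richardson_lucy` function above and a known kernel `psf`:

```python
from scipy.signal import fftconvolve

def conv_operators(psf):
    """Conventional forward/backward blur for Richardson-Lucy: 2D
    convolution with the kernel, and with its flipped copy for the
    adjoint (backprojection) step."""
    def forward(img):
        return fftconvolve(img, psf, mode='same')
    def backward(img):
        return fftconvolve(img, psf[::-1, ::-1], mode='same')
    return forward, backward

# Example use with the generic iteration above:
# deblurred = richardson_lucy(blurry, *conv_operators(psf))
```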
In this work we prove that rendered motion blur can replace the 2D convolution step. Motion blur is easy to render from a light field, and complex scenes and 6-DOF camera motion are natively handled.
Camera motion can be easily simulated from a light field. Here the simulated camera translates towards the lorikeet while simultaneously changing its focal length, yielding a vertigo-like effect. Adding up the frames from this sequence yields simulated blur, seen below.
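The blur simulation itself is just a normalized sum of rendered frames. A sketch, where `render_view` and `poses` are hypothetical stand-ins for a light field view renderer and samples of the camera's 6-DOF trajectory over the exposure:

```python
import numpy as np

def simulate_motion_blur(render_view, poses):
    """Simulate motion blur by averaging the frames rendered along a
    camera trajectory, i.e. summing the frames and normalizing by
    their count."""
    frames = [render_view(pose) for pose in poses]
    return np.mean(frames, axis=0)
```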
The blurry light field from the above light field and simulated camera trajectory, alternating with the deblurred light field from our method.
This result demonstrates that, in addition to 6-DOF camera motion, our method natively handles changes in the camera's parameters during an exposure, e.g. the changing zoom in this example.
By mounting a Lytro Illum on a robotic arm, we produce blurry light fields with repeatable and ground-truthed 6-DOF camera motion. This is essential to evaluating deblurring methods in general, as it allows precise reporting and repeatability that are not easily achieved with hand-held sequences.
We anticipate generalizing our method to be blind, jointly estimating the camera's trajectory and the deblurred light field. Ground-truthed camera trajectories will be essential for validating that a blind method correctly estimates the camera's trajectory.
A deblurring result for horizontal camera translation, captured using the Lytro Illum. This situation is not easily handled by 2D methods due to the non-uniform apparent motion of the scene.
In this and all following results, please see the paper for camera velocity, exposure times, and more quantitative results and comparisons to other methods.
A result for camera rotation about z using the arm-mounted Lytro Illum.
A result for camera translation along x using the arm-mounted Lytro Illum.
A result for camera translation along z using the arm-mounted Lytro Illum. We attribute the lower performance near the corners to edge effects and lens distortion. The central portions of the light field are successfully deblurred.
A simulated blurred light field created using the LFSynth rendering tool, and the corresponding deblurring result. Camera motion is horizontal, yielding depth-dependent blur.
As above for camera rotation about y.
As above for camera translation along z, again yielding depth-dependent blur.
As above for camera rotation about z.
Another scene showing blur due to horizontal camera motion.
As above for rotation about z.
As above for translation along z.
As above for rotation about y.
Another view of the Illum on the robot arm.