
Change Detection from Mobile Light Field Cameras

This work presents a simple, closed-form solution to change detection from moving light field cameras. The task is generally complicated by the nonuniform apparent motion that camera movement induces, but by synthesizing views that hold the camera pose fixed across adjacent frames, we reduce the problem to that of a stationary camera.

Above, camera motion between times τ0 and τ1 (left) causes apparent motion in static scene elements like the tree (top insets), making them difficult to disambiguate from genuinely dynamic elements, like the Kiwi. We render a novel view (bottom-right) showing scene content from time τ0 as seen from the point of view of the camera at τ1. Static elements now appear static, opening a family of dynamic-camera problems to static-camera solutions. No 3D model of the scene is required; rather, the geometry implicitly encoded in the light field is exploited directly. In the case of change detection, this process yields a closed-form solution.
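To make the reduction concrete, the sketch below illustrates only the stationary-camera stage that follows view synthesis. It assumes some rendering step has already produced the τ0 scene content as seen from the τ1 pose (here a stand-in array, rendered_t0), after which changes can be flagged by simple per-pixel differencing. The function name, threshold value, and toy data are illustrative assumptions, not the paper's implementation, which derives the change detector in closed form directly from the light field rather than from a single rendered image.

```python
import numpy as np

def detect_changes(rendered_t0, observed_t1, threshold=0.1):
    """Per-pixel change mask between a synthesized and an observed view.

    rendered_t0 : scene content from time tau_0, re-rendered from the camera
                  pose at tau_1 (assumed output of a view-synthesis step).
    observed_t1 : the frame actually captured at tau_1 from that same pose.
    threshold   : illustrative difference threshold (an assumption, not a
                  value from the paper).
    """
    diff = np.abs(observed_t1.astype(np.float64) - rendered_t0.astype(np.float64))
    if diff.ndim == 3:           # collapse colour channels if present
        diff = diff.mean(axis=2)
    return diff > threshold      # True where the scene appears to have changed

if __name__ == "__main__":
    # Toy usage with random stand-in images; a real pipeline would obtain
    # rendered_t0 from the tau_0 light field and the known camera motion.
    rng = np.random.default_rng(0)
    rendered_t0 = rng.random((120, 160))
    observed_t1 = rendered_t0.copy()
    observed_t1[40:60, 70:100] += 0.5    # simulate a dynamic object appearing
    mask = detect_changes(rendered_t0, observed_t1, threshold=0.25)
    print("changed pixels:", int(mask.sum()))
```

Once static elements appear static, any stationary-camera change detector can be substituted for the simple differencing shown here; the sketch only conveys why the reduction makes that possible.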

  • No explicit scene model is formed
  • Closed-form, constant runtime
  • Simple behaviour and failure modes
  • Easily parallelizable: GPU, FPGA, etc.
  • Outperforms competing single-camera methods for common scenes
  • Approach generalizes to other moving-camera problems

Publications

•  D. G. Dansereau, S. B. Williams, and P. I. Corke, “Simple change detection from mobile light field cameras,” Computer Vision and Image Understanding (CVIU), vol. 145C, pp. 160–171, 2016.

•  D. G. Dansereau, S. B. Williams, and P. I. Corke, “Closed-form change detection from moving light field cameras,” in IROS Workshop on Alternative Sensing for Robotic Perception, 2015.

Collaborators

This work was a collaboration between Donald Dansereau and Peter Corke from the Australian Centre for Robotic Vision at QUT, and Stefan Williams from the Australian Centre for Field Robotics Marine Robotics Group, University of Sydney.

Presentations

Presentation from the 2015 IROS Workshop on Alternative Sensing for Robotic Perception

Acknowledgments

This work was supported by the Australian Research Council Discovery Projects funding scheme (Project DP150104440) and the Australian Research Council Centre of Excellence for Robotic Vision (Project CE140100016). Thanks to the reviewers, Dr. Linda Miller, and Dr. Jürgen Leitner for their helpful suggestions.