Learning Underwater Active Perception in Simulation

Underwater robotic operations are challenging and risky. When autonomous underwater vehicles (AUVs) are deployed and operating, communication is severely limited and often impossible. In the context of inspection operations, visibility of the asset is the first factor of success. However, water conditions can vary drastically from one place to another, requiring the mission plan to be adapted every time. If the plan fails, the whole operation is jeopardised, potentially leading to significant losses of time and money.

This work enables AUVs to adapt to the water conditions without requiring any input from an operator, ensuring the collection of high-quality imagery at all times. To this end, we propose an active perception framework based on a multi-layer perceptron (MLP) trained to predict image quality given the distance to a target and the artificial light intensity.
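
A minimal sketch of such a predictor is shown below, assuming a PyTorch implementation. The two inputs match the description above, but the hidden-layer sizes and the sigmoid-bounded quality score are illustrative assumptions, not the exact architecture from the paper.

import torch
import torch.nn as nn

class QualityMLP(nn.Module):
    """Predict an image-quality score from (distance, light intensity).

    Sketch only: the hidden sizes and the [0, 1] quality target are
    assumptions, not the exact architecture used in this work.
    """
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden),    # inputs: target distance, light intensity
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),            # quality score in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Once trained, the predictor can be queried over a grid of candidate
# (distance, intensity) settings to pick the most promising one.
model = QualityMLP()
candidates = torch.cartesian_prod(
    torch.linspace(0.5, 5.0, 10),    # candidate distances to target (m)
    torch.linspace(0.0, 1.0, 10),    # candidate normalised light intensities
)
with torch.no_grad():
    scores = model(candidates)
best_distance, best_intensity = candidates[scores.argmax()]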

In this work:

  • We improve the simulation of underwater imagery within the Blender modelling software, including more accurate light behaviour in water and models of ocean waters (see the image formation sketch after this list).
  • We introduce a method for in-situ water column property estimation using a monocular camera and adjustable illumination.
  • We design a framework providing online guidance suggestions to maintain high-quality data collection and maximise visual coverage in a broad range of water conditions.
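
To make the light behaviour concrete, the sketch below implements a widely used single-scattering underwater image formation model, in which the direct signal decays with distance (Beer-Lambert attenuation) while backscattered veiling light accumulates. This is a standard model from the literature and an assumption here, not necessarily the exact formulation used in our Blender simulation; the coefficients are illustrative.

import numpy as np

def underwater_image(J, depth, beta, B_inf):
    """Single-scattering image formation:
    I = J * exp(-beta * d) + B_inf * (1 - exp(-beta * d))

    J     -- clear-water scene radiance, HxWx3 array in [0, 1]
    depth -- per-pixel camera-to-scene distance (m), HxW array
    beta  -- per-channel attenuation coefficients (1/m), shape (3,)
    B_inf -- per-channel veiling light at infinity, shape (3,)
    """
    t = np.exp(-beta[None, None, :] * depth[:, :, None])  # transmission
    return J * t + B_inf[None, None, :] * (1.0 - t)

# Example: in greenish coastal water, red attenuates fastest, so the
# rendered image shifts towards the veiling-light colour with distance.
J = np.full((4, 4, 3), 0.8)              # bright grey target
depth = np.full((4, 4), 3.0)             # 3 m to the target
beta = np.array([0.60, 0.25, 0.20])      # R, G, B attenuation (assumed)
B_inf = np.array([0.05, 0.25, 0.30])     # veiling-light colour (assumed)
I = underwater_image(J, depth, beta, B_inf)

A forward model of this kind also hints at how in-situ estimation can proceed: images captured at different distances and light intensities constrain the attenuation and backscatter parameters, which can then be fit to the observations.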

The approach was successfully tested and compared in simulation, in both turbid and clear water environments. It offered the best trade-off between image quality and visual coverage compared with classical non-adaptive approaches.

Publications

•  A. Cardaillac and D. G. Dansereau, “Learning Underwater Active Perception in Simulation,” under review, 2025. Preprint here.

Citing

@misc{cardaillac2025learning,
  title={Learning Underwater Active Perception in Simulation},
  author={Alexandre Cardaillac and Donald G. Dansereau},
  year={2025},
  eprint={2504.17817},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2504.17817},
}

This work was carried out in the Robotic Imaging Group at the Australian Centre for Robotics, University of Sydney.

Acknowledgments

The authors would like to thank Peter Roberts and Julien Flack from Advanced Navigation for their support and contribution to this work. This work was supported by the ARC Research Hub in Intelligent Robotic Systems for Real-Time Asset Management (IH210100030).

Downloads

The code and data are available on GitHub here.