Light Stage Object Classifier
Distinguishing visually similar objects such as forged/authentic bills, healthy/unhealthy plants, and real/synthetic fruit like those shown here is beyond the capabilities of even the most sophisticated classifiers. We propose the use of multiplexed illumination to extend the range of objects that can be successfully discriminated.
Our methodology uses the light stage in two ways: first, we model and synthetically relight training samples, allowing joint pattern selection and classifier training in simulation; then, we use the trained patterns and classifier to quickly classify previously unseen samples.
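Because light transport is linear in the illumination, a multiplexed capture can be simulated as a weighted sum of single-illuminant images, which is what makes pattern selection in simulation possible. Below is a minimal sketch of this relighting step; the array shapes, the example pattern, and the function name are illustrative assumptions, not the released code.

```python
import numpy as np

def relight(single_illuminant_stack, pattern_weights):
    """Simulate a multiplexed capture as a weighted sum of
    single-illuminant images; valid because light transport is
    linear in the illumination.

    single_illuminant_stack: (n_lights, H, W) array, one image per
        illumination site
    pattern_weights: (n_lights,) array of per-site intensities in [0, 1]
    """
    return np.tensordot(pattern_weights, single_illuminant_stack, axes=1)

# Example: eight illumination sites; a pattern lighting sites 1, 3, and 6
stack = np.random.rand(8, 480, 640)  # stand-in for captured images
pattern = np.array([1, 0, 1, 0, 0, 1, 0, 0], dtype=float)
simulated = relight(stack, pattern)  # (480, 640) simulated multiplexed image
```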
Publications
• T. Wang and D. G. Dansereau, “Multiplexed illumination for classifying visually similar objects,” Applied Optics, vol. 60, no. 10, pp. B23–B31, Apr. 2021. Available here.
Acknowledgments
We would like to thank the University of Sydney Aerospace, Mechanical and Mechatronic Engineering FabLab for their support.
Dataset and Code
The code is available here. The dataset contains 16,000 10-bit images of five types of real and synthetic fruit. It is split across three files:
- Relightable models [2.8 GB]: high-quality single-illuminant images. These drive the pattern selection and classifier training, and can be used to devise and evaluate new multiplexing schemes.
- SNR-Optimal [1.7 GB]: Captured under inference-time conditions, with more evident noise, and with illumination patterns selected to maximise signal-to-noise ratio (SNR).
- Greedy [837 MB]: Also captured under inference-time conditions; these patterns were trained jointly with the classifier using our proposed greedy pattern selection scheme.
Alternative download link here.
Light Stage Prototype
The light stage prototype used to collect this data features a five-leg design with eight illumination sites distributed across four of its legs: four mounted on upper leg segments and four on lower segments. Each illumination site has four LEDs centered on Red (615 nm), Green (515 nm), Blue (460 nm), and Near-Infrared (NIR, 850 nm) colour bands. We use a Basler acA1920-150um monochrome machine vision camera with an Edmund Optics NIR-VIZ 6 mm infrared-compatible lens.
Relightable Models
We captured each image with a single illuminant active, and imaged each sample in 20 different poses. We used a long (120 ms) exposure duration and averaged each image over multiple exposures to obtain high-SNR images. Filenames are of the form c<i>_p<j>_l<k>.tiff, where:
- <i> denotes colour channel: 1r = red, 2g = green, 3b = blue, 4n = NIR
- <j> denotes the sample pose, 0..19
- <k> denotes the active illumination site, 1..8; shown here are typical images taken with each site active
For example, c1r_p0_l1.tiff was captured with red LEDs on, in the first of 20 poses, with the first illumination site active.
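For scripted access, these filenames can be generated directly from the convention above. Here is a minimal sketch; the helper name, the "models/" directory, and the use of imageio are assumptions for illustration, not part of the dataset spec.

```python
import imageio.v3 as iio

# Channel codes from the naming convention: c1r, c2g, c3b, c4n
CHANNELS = {"red": "1r", "green": "2g", "blue": "3b", "nir": "4n"}

def model_filename(channel, pose, light):
    """Build a relightable-model filename, e.g. c1r_p0_l1.tiff.

    channel: one of "red", "green", "blue", "nir"
    pose:    sample pose, 0..19
    light:   active illumination site, 1..8
    """
    return f"c{CHANNELS[channel]}_p{pose}_l{light}.tiff"

# e.g. load all eight single-illuminant red images for pose 0;
# "models/" is a hypothetical local path
images = [iio.imread(f"models/{model_filename('red', 0, light)}")
          for light in range(1, 9)]
```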
Inference-Time Captures
Each of the Greedy and SNR-Optimal sets contains images captured using trained illumination patterns. Shown here are examples of eight spatial patterns in a single colour channel, for a single pose of a single sample. Filenames are of the form <tag>_p<j>c<i>l<k>.tiff, where:
- <tag> is one of "real" or "fake"
- <j> denotes the sample pose, 0..19
- <i> denotes colour channel: 0 = red, 1 = green, 2 = blue, 3 = NIR
- <k> denotes the index of the illumination pattern, 0..7 (0..3 for the greedy method)
For example, real_p0c1l0.tiff corresponds to a real (not synthetic) fruit sample, in the first of 20 poses, with green LEDs on, and with the first illumination pattern active.
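These filenames can likewise be decoded programmatically. A minimal sketch, assuming a regular-expression parse of the convention above (the function name is hypothetical):

```python
import re

# Inference-time filename convention, e.g. real_p0c1l0.tiff
PATTERN = re.compile(r"(real|fake)_p(\d+)c(\d)l(\d)\.tiff")
COLOURS = ["red", "green", "blue", "nir"]  # channel indices 0..3

def parse_capture(filename):
    """Decode label, pose, colour channel, and pattern index from an
    inference-time capture filename."""
    m = PATTERN.fullmatch(filename)
    if m is None:
        raise ValueError(f"unexpected filename: {filename}")
    tag, pose, channel, pattern = m.groups()
    return {"label": tag,
            "pose": int(pose),
            "channel": COLOURS[int(channel)],
            "pattern": int(pattern)}

print(parse_capture("real_p0c1l0.tiff"))
# {'label': 'real', 'pose': 0, 'channel': 'green', 'pattern': 0}
```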
Citing and Contact
For enquiries, please email twan8752 {at} uni dot sydney dot edu dot au.
If you find this work useful, please cite:

@article{wang2021multiplexed,
  title     = {Multiplexed Illumination for Classifying Visually Similar Objects},
  author    = {Taihua Wang and Donald G. Dansereau},
  journal   = {Applied Optics},
  year      = {2021},
  volume    = {60},
  number    = {10},
  pages     = {B23--B31},
  publisher = {OSA}
}