Automated annotation of corals in natural scene images using multiple texture representations
- Subject Areas
- Computer Vision, Data Mining and Machine Learning
- Keywords
- Annotation, Multi-classifier, Coral reef, Texture features, Rejection
- Copyright
- © 2016 Blanchet et al.
- Licence
- This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
- Cite this article
- 2016. Automated annotation of corals in natural scene images using multiple texture representations. PeerJ Preprints 4:e2026v2 https://doi.org/10.7287/peerj.preprints.2026v2
Abstract
Current coral reef health monitoring programs rely on biodiversity data obtained through the acquisition and annotation of underwater photographs. Manual annotation of these photographs is a necessary step, but has become problematic due to the high volume of images and the high cost of human resources. While automated and reliable multi-spectral annotation methods exist, coral reef images are often limited to visible light, which makes automation difficult. Much of the previous work has focused on popular texture recognition methods, but the results remain unsatisfactory when compared to human performance on the same task. In this work, we present an improved automatic method for coral image annotation that yields consistent accuracy improvements over existing methods. Our method builds on previous work by combining multiple feature representations. We demonstrate that the aggregation of multiple methods outperforms any single method. Furthermore, our proposed system requires virtually no parameter tuning, and supports rejection for improved results. Firstly, the complex texture diversity of corals is handled by combining multiple feature representations: local binary patterns, hue and opponent angle histograms, textons, and deep convolutional activation features. Secondly, these multiple representations are aggregated using a score-level fusion of multiple support vector machines. Thirdly, rejection can optionally be applied to enhance classification results, and allows efficient semi-supervised image annotation in collaboration with human experts.
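The score-level fusion and rejection steps described above can be sketched as follows. This is not the authors' implementation; it is a minimal illustration assuming one SVM per feature representation (e.g. LBP, colour histograms, textons, deep activations), with fused class scores obtained by averaging per-classifier probabilities, and a hypothetical `reject_threshold` parameter that withholds a label when the best fused score is too low.

```python
# Hedged sketch of score-level SVM fusion with optional rejection.
# The class name, parameters, and -1 "rejected" label are illustrative
# assumptions, not the paper's actual interface.
import numpy as np
from sklearn.svm import SVC


class FusedSVMAnnotator:
    """One SVM per feature representation ("view"); class scores are
    fused by averaging probabilities across views."""

    def __init__(self, n_views, reject_threshold=None):
        self.models = [SVC(probability=True) for _ in range(n_views)]
        self.reject_threshold = reject_threshold

    def fit(self, views, y):
        # views: list of (n_samples, n_features_i) arrays, one per representation;
        # y: integer class labels shared by all views.
        for model, X in zip(self.models, views):
            model.fit(X, y)
        return self

    def predict(self, views):
        # Score-level fusion: average the per-view class probabilities.
        scores = np.mean(
            [m.predict_proba(X) for m, X in zip(self.models, views)], axis=0
        )
        labels = self.models[0].classes_[np.argmax(scores, axis=1)]
        if self.reject_threshold is not None:
            # Reject (label -1) samples whose best fused score is below threshold,
            # leaving them for a human expert in a semi-supervised workflow.
            labels = np.where(
                scores.max(axis=1) >= self.reject_threshold, labels, -1
            )
        return labels
```

In use, each underwater image patch would be described by several feature vectors (one per representation), and rejected patches could be routed to human annotators, matching the semi-supervised workflow the abstract describes.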
Author Comment
This version is a minor revision with the following changes: 1) added a small ecological interpretation to our analysis; 2) fixed a broken citation; 3) clarified a few sentences; 4) added justification for the patch size parameter choice; 5) fixed formatting.