Species-specific audio detection: A comparison of three template-based classification algorithms using random forests
A peer-reviewed article of this Preprint also exists.
Author and article information
Abstract
We developed a web-based, cloud-hosted system that allows users to archive, listen to, visualize, and annotate recordings. The system also provides tools to convert these annotations into datasets that can be used to train a computer to detect the presence or absence of a species. The algorithm used by the system was selected after comparing the accuracy and efficiency of three variants of a template-based classification algorithm. The algorithm computes a similarity vector by comparing a template of a species call with time increments across the spectrogram. Statistical features are extracted from this vector and used as input for a Random Forest classifier that predicts the presence or absence of the species in the recording. The fastest algorithm variant had the highest average accuracy and specificity; therefore, it was implemented in the ARBIMON web-based system.
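The pipeline described in the abstract (slide a template across a spectrogram, summarize the resulting similarity vector, and classify the summary) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the use of normalized cross-correlation as the similarity measure and the particular summary statistics are assumptions for demonstration; the final step would feed these features to a Random Forest classifier (e.g., scikit-learn's `RandomForestClassifier`), omitted here to keep the sketch self-contained.

```python
import numpy as np

def similarity_vector(spectrogram, template):
    """Slide the template along the time axis of the spectrogram,
    computing a normalized cross-correlation score at each step.
    Assumes the template spans the same frequency rows it was cut from."""
    n_freq, n_time = spectrogram.shape
    t_freq, t_time = template.shape
    assert t_freq <= n_freq and t_time <= n_time
    # Zero-mean, unit-variance template (epsilon guards flat patches).
    t = (template - template.mean()) / (template.std() + 1e-9)
    sims = []
    for i in range(n_time - t_time + 1):
        win = spectrogram[:t_freq, i:i + t_time]
        w = (win - win.mean()) / (win.std() + 1e-9)
        sims.append(float((w * t).mean()))  # ~1.0 for an exact match
    return np.array(sims)

def features(sims):
    """Statistical features summarizing the similarity vector;
    these would be the input row for the Random Forest classifier."""
    return np.array([sims.max(), sims.mean(), sims.std(),
                     np.median(sims), sims.argmax() / len(sims)])
```

A recording whose spectrogram contains a call matching the template produces a sharp peak in the similarity vector, so features such as the maximum and the spread separate positive from negative recordings.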
Cite this as
2017. Species-specific audio detection: A comparison of three template-based classification algorithms using random forests. PeerJ Preprints 5:e2713v1 https://doi.org/10.7287/peerj.preprints.2713v1
Author comment
This is a submission to PeerJ Computer Science for review.
Additional Information
Competing Interests
The authors declare there are no competing interests.
Author Contributions
Carlos J Corrada Bravo conceived and designed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper.
Rafael Álvarez Berríos conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work.
T. Mitchell Aide conceived and designed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper.
Data Deposition
The following information was supplied regarding data availability:
Figshare
Funding
The authors received no funding for this work.