This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
Cite this article
Simon R, Varkevisser J, Mendoza E, Hochradel K, Scharff C, Riebel K, Halfwerk W. 2019. Development and application of a robotic zebra finch (RoboFinch) to study multimodal cues in vocal communication. PeerJ Preprints 7:e28004v3 https://doi.org/10.7287/peerj.preprints.28004v3
Understanding animal behaviour through psychophysical experimentation is often limited by insufficiently realistic stimulus presentation. Important physical dimensions of signals and cues, especially those outside the spectrum of human perception, can be difficult to standardize and control independently with currently available recording and display techniques (e.g. video displays). Accurate stimulus control is particularly important when studying multimodal signals, as spatial and temporal alignment between stimuli is often crucial. For audiovisual presentations in particular, some of these limitations can be circumvented by employing animal robots, which outperform video presentations whenever realistic three-dimensional stimuli must be presented to animals. Here we report the development of a robotic zebra finch, called RoboFinch, and show how it can be used to study vocal learning in a songbird, the zebra finch.