Vocal emotion recognition in school-age children: normative data for the EmoHI test

Center for Language and Cognition Groningen, University of Groningen, Groningen, Groningen, Netherlands
Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, Groningen, Groningen, Netherlands
CNRS, Lyon Neuroscience Research Center, Université Claude Bernard (Lyon I), Lyon, France
Clinical Neurosciences Department, University of Cambridge, Cambridge, United Kingdom
Hearbase Ltd, Hearing specialists, Kent, United Kingdom
The Ear Institute, University College London, University of London, London, United Kingdom
DOI
10.7287/peerj.preprints.27921v1
Subject Areas
Neuroscience, Cognitive Disorders, Otorhinolaryngology
Keywords
emotion, hearing loss, cochlear implants, perceptual development, cognitive development, vocal emotion recognition, auditory development
Copyright
© 2019 Nagels et al.
License
This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
Cite this article
Nagels L, Gaudrain E, Vickers D, Matos Lopes M, Hendriks P, Başkent D. 2019. Vocal emotion recognition in school-age children: normative data for the EmoHI test. PeerJ Preprints 7:e27921v1

Abstract

Traditionally, emotion recognition research has primarily used pictures and videos, while audio test materials have received less attention and are not always readily available. Particularly for testing vocal emotion recognition in hearing-impaired listeners, the audio quality of assessment materials may be crucial. Here, we present a vocal emotion recognition test with non-language-specific pseudospeech productions of multiple speakers expressing three core emotions (happy, angry, and sad): the EmoHI test. Recorded with high sound quality, the test is suitable for use with populations of children and adults with normal or impaired hearing, and across different languages. In the present study, we obtained normative data for vocal emotion recognition development in normal-hearing school-age (4-12 years) children using the EmoHI test. In addition, we tested Dutch and English children to investigate cross-language effects. Our results show that children’s emotion recognition accuracy improved significantly with age from the youngest age group tested onward (mean accuracy, 4-6 years: 48.9%), but did not reach adult-like values (mean accuracy, adults: 94.1%) even in the oldest age group tested (mean accuracy, 10-12 years: 81.1%). Furthermore, the effect of age on children’s development did not differ across languages. The strong but slow development in children’s ability to recognize vocal emotions emphasizes the role of auditory experience in forming robust representations of vocal emotions. The wide range of age-related performance captured and the lack of significant differences across the tested languages affirm the usability and versatility of the EmoHI test.

Author Comment

This preprint is intended for the collection of the Second International Workshop on Vocal Interactivity in-and-between Humans, Animals and Robots (VIHAR 2019).