PeerJ Award Winners: RiTA 2021

As with the 8th International Conference on Robot Intelligence Technology and Applications (RiTA 2020), PeerJ sponsored an award for the best Three-Minute Thesis (3MT) at the 9th edition of this conference, RiTA 2021! We recently talked to the winner, Dr. WonHyong Lee, about his research interests.

*****

If you attended or presented at RiTA 2021 – either virtually or in-person – then you are invited to submit research articles relevant to the scope of the conference series to a special RiTA 2021 Conference Collection, which will be published by PeerJ Computer Science.

Authors wishing to be included should first submit an abstract via the Conference Collection homepage before the deadline of 29th April 2022. See the announcement blog for full details and submission instructions.

*****

WonHyong Lee, Assistant Professor, Handong Global University, South Korea.

Can you tell us a bit about yourself and your research interests?

I received my doctoral degree from the Department of Electrical Engineering at KAIST (Korea Advanced Institute of Science and Technology) in 2017. My dissertation was titled “Hierarchical Emotional Memory-based Human Robot Social Interaction Framework”. I also worked at the George Washington University as a postdoctoral researcher from 2017 to 2018.

I am interested in creating robots and virtual agents that can interact emotionally and socially with people, and in applying them in real life. Social robots and emotional robots are my main areas of research interest. Recently, with the COVID-19 pandemic and the advent of the “metaverse” era, my interest in VR applications with sociable 3D avatars has also been growing.

I devised and implemented a scenario in which humans and a robot interact socially, and won a video award at the HRI’18 conference. I also implemented Visual Question Answering (VQA) interaction in a real-time robotic system, which received an award at the SMC’17 conference.

If you’d like to know more, please head to my personal website.

What first interested you in this field of research?

I started this research when I entered my master’s program and participated in a project to create a robotic head that could make facial expressions. I studied and implemented how a robot can produce natural, dynamic expressions, and after that I came to believe that emotion generation should be built into robots so that they can cope with situations in human society. At the time I developed a hardware robotic head, but later I expanded my research interest to virtual robotic heads by developing 3D avatars. With the recent development of artificial intelligence technology, the quality of information that robots and virtual agents can recognize keeps increasing, opening up new ways to create and express emotions, so I am looking forward to the blooming of social robots.

Can you briefly explain the research you presented at RiTA 2021?

In a situation where outdoor gatherings are difficult due to COVID-19, my project team developed a VR karaoke system that lets people practice singing alone at home. One purpose of this study is not only to evaluate whether users can sing well, but also to add 3D avatars to the VR environment so that users feel they are singing with other people, such as friends, rather than doing it alone.

Therefore, in addition to pitch and melody recognition, this study also implemented facial expression generation for the 3D avatars, and the effect was verified through surveys.

I would like to thank the project members of this study: JaeHyeok Choi (center), Jonggwun Chong (right), and Woojin Lee (left).

What are your next steps?

This study confirmed that users responded positively to emotional interactions with avatars in VR environments. We would like to explore, through various attempts, applications for effective 3D avatar interaction beyond the karaoke singing-practice setting. In addition, we plan to conduct research to make the facial expressions of 3D avatars more natural and rich.
