PeerJ Award Winners at ECVP 2024

Sep 23, 2024 | Award Winner Interviews

The 46th European Conference on Visual Perception (ECVP) was held in Aberdeen, Scotland, from August 25th to 29th, 2024. The conference welcomed over 800 vision scientists from around the globe, with the majority presenting their research and contributing to an exceptional scientific program.

The event featured 3 keynote lectures, 10 symposia, 17 talk sessions, 7 poster sessions, and 6 tutorials. In addition to the scientific content, attendees enjoyed a vibrant social program, including the Welcome Reception, Illusion Night (with over 35 interactive booths), the Conference Dinner, and the Farewell Party. Scottish culture was on full display, with experiences like whisky tasting and a ceilidh dance.

This year’s ECVP also introduced some exciting new elements: Perceptio-Nite, a networking evening specifically designed for students; a series of lunchtime roundtable discussions addressing key debates in academia and scientific research; and a new keynote format, the Spotlight in Vision Lecture, showcasing groundbreaking new findings in vision science by early/mid-career researchers.

Drs. Constanze Hesse and Mauro Manassi, ECVP Organising Committee. 

 

Yuqing Cai, PhD Candidate at Utrecht University, Netherlands.

Can you tell us a bit about yourself and your research interests?

My primary focus is modeling pupil size changes in response to dynamic and continuous stimuli. I am interested in applying this method to study visual field defects and cognitive processes such as visual attention.

What first interested you in this field of research?

What initially interested me about pupillometric research was that the pupillary response is such a small physiological response that we rarely notice it throughout our lives, yet it reflects so profoundly on our visual functions and cognition. And although pupil size is influenced by many different factors simultaneously, it can still be very informative when the right method is used to study it.

Can you briefly explain the research you presented at ECVP 2024?

The research I presented at ECVP was an application of the dynamic pupil size modeling method we developed (Open-DPSM). In this study, we aimed to examine the spatial biases of covert and overt attention together during free-viewing of videos. In the first experiment, we verified that pupil size changes can indeed be used to index the allocation of covert attention in complex and dynamic environments. In the second experiment, we measured covert attention using the pupillometry method verified in experiment 1 and measured overt attention using gaze position. We concluded that there was a leftward bias of overt attention and a rightward bias of covert attention during free-viewing.

How will you continue to build on this research?

As a direct extension of the presented results, we would like to explore the possible factors that drive the rightward covert attention bias and why it diverges from pseudoneglect studies, in which a general leftward covert attention bias was found. After the current project, we will investigate further applications of the dynamic modeling of pupil size, including some more practical ones. For instance, we intend to test whether the method can be used to detect visual field defects in patients with brain injuries.

Gwenllian Williams, DPhil student at the University of Oxford, UK.

Can you tell us a bit about yourself and your research interests?

I am a final-year PhD student, supervised by Professor Kia Nobre, Dr Sage Boettcher, and Dr Dejan Draschkow. I’m interested in how our expectations about the structure of the world influence how we sample from and interpret our visual environment. My research focuses on visual search, where we look for ‘targets’ amongst ‘distractors’, such as when finding your friend amongst the other people in a busy train station. One way we can more efficiently find targets is by learning and using environmental regularities to guide our behaviour. Regularity-based guidance is typically studied in static displays. My research aims to investigate how this guidance operates over time in dynamic displays which contain regularities in when targets are likely to be found. For example, if you can predict when your friend’s train is likely to arrive, then it may be adaptive to tune your search behaviour to operate most effectively at this time.

What first interested you in this field of research?

I became fascinated when I learnt that our visual perceptions are not always veridical representations of the world. I was particularly interested in how our prior expectations can shape both how we perceive things, as in many visual illusions, and which things we perceive in the first place, through attentional guidance. My passion for innovative experimental design and data analysis stems from my educational background in mathematics and the physical sciences.

Can you briefly explain the research you presented at ECVP 2024?

I am delighted to have received a prize for my poster titled ‘Feature- and motor-based temporal predictions benefit visual search performance in dynamic settings’. In this study, participants searched for targets in a novel dynamic visual search task, each experiencing one of three forms of temporal regularity or a control condition. How long participants had been searching could inform them of 1) the likely colour of the target, 2) the likely motor response to the target, 3) both, or 4) neither. We found that performance improved with all three forms of regularity compared to the control, consistent with participants adaptively preparing for targets of different colours and/or motor responses as a function of time.

How will you continue to build on this research?

I am nearing the end of my PhD and exploring my next steps in research. This project helped address some open questions and will be included in my thesis, along with other related work which focusses more on eye tracking. However, many questions about visual search in dynamic environments remain unanswered!
