The 46th European Conference on Visual Perception (ECVP) was held in Aberdeen, Scotland, from August 25th to 29th, 2024. The conference welcomed over 800 vision scientists from around the globe, with the majority presenting their research and contributing to an exceptional scientific program.
The event featured 3 keynote lectures, 10 symposia, 17 talk sessions, 7 poster sessions, and 6 tutorials. In addition to the scientific content, attendees enjoyed a vibrant social program, including the Welcome Reception, Illusion Night (with over 35 interactive booths), the Conference Dinner, and the Farewell Party. Scottish culture was on full display, with experiences like whisky tasting and a ceilidh dance.
This year’s ECVP also introduced some exciting new elements: Perceptio-Nite, a networking evening specifically designed for students; a series of lunchtime roundtable discussions addressing key debates in academia and scientific research; and a new keynote format, the Spotlight in Vision Lecture, showcasing groundbreaking new findings in vision science by early/mid-career researchers.
Drs. Constanze Hesse and Mauro Manassi, ECVP Organising Committee.
Yuqing Cai PhD Candidate at Utrecht University, The Netherlands.
Can you tell us a bit about yourself and your research interests?
My primary focus is modeling pupil size changes in response to dynamic and continuous stimuli. I am interested in applying this modeling to study visual field defects and cognitive processes such as visual attention.
What first interested you in this field of research?
What initially interested me about pupillometric research was that the pupillary response is such a small physiological response, one whose changes we barely notice for most of our lives, yet it reflects so profoundly on our visual functions and cognition. And although pupil size is influenced by many different factors simultaneously, it can still be very informative when the right methods are used to study it.
Can you briefly explain the research you presented at ECVP 2024?
The research I presented at ECVP was an application of the dynamic pupil size modeling method we developed (Open-DPSM). In the study, we aimed to examine the spatial biases of covert and overt attention together during free viewing of videos. In the first experiment, we verified that pupil size changes can indeed be used to index the allocation of covert attention in complex and dynamic environments. In the second experiment, we measured covert attention using the pupillometry method validated in experiment 1 and measured overt attention using gaze position. We concluded that there was a leftward bias of overt attention and a rightward bias of covert attention during free viewing.
How will you continue to build on this research?
As a direct extension of the presented results, we would like to explore the possible factors that drive the rightward covert attention bias and why it diverges from pseudoneglect studies, in which a general leftward covert attention bias was found. After the current project, we will investigate further applications of the dynamic modeling of pupil size, including more practical ones. For instance, we intend to test whether the method can be used to detect visual field defects in patients with brain injuries.
Gwenllian Williams DPhil student at the University of Oxford, UK.
Can you tell us a bit about yourself and your research interests?
I am a final-year PhD student, supervised by Professor Kia Nobre, Dr Sage Boettcher, and Dr Dejan Draschkow. I’m interested in how our expectations about the structure of the world influence how we sample from and interpret our visual environment. My research focuses on visual search, where we look for ‘targets’ amongst ‘distractors’, such as when finding your friend amongst the other people in a busy train station. One way we can more efficiently find targets is by learning and using environmental regularities to guide our behaviour. Regularity-based guidance is typically studied in static displays. My research aims to investigate how this guidance operates over time in dynamic displays which contain regularities in when targets are likely to be found. For example, if you can predict when your friend’s train is likely to arrive, then it may be adaptive to tune your search behaviour to operate most effectively at this time.
What first interested you in this field of research?
I became fascinated when I learnt that our visual perceptions are not always veridical representations of the world. I was particularly interested in how our prior expectations can shape both how we perceive things, as in many visual illusions, and which things we perceive in the first place, through attentional guidance. My passion for innovative experimental design and data analysis stems from my educational background in mathematics and the physical sciences.
Can you briefly explain the research you presented at ECVP 2024?
I am delighted to have received a prize for my poster titled ‘Feature- and motor-based temporal predictions benefit visual search performance in dynamic settings’. In this study, participants searched for targets in a novel dynamic visual search task, each experiencing one of three forms of temporal regularity or a control condition. How long participants had been searching could inform them of 1) the likely colour of the target, 2) the likely motor response to the target, 3) both, or 4) neither. We found that performance improved with all three forms of regularity compared to the control, consistent with participants adaptively preparing for targets of different colours and/or different motor responses as a function of time.
How will you continue to build on this research?
I am nearing the end of my PhD and exploring my next steps in research. This project helped address some open questions and will be included in my thesis, along with other related work which focusses more on eye tracking. However, many questions about visual search in dynamic environments remain unanswered!
Amit Rawal PhD student at the Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main, Germany; Vrije Universiteit Amsterdam, The Netherlands.
Can you tell us a bit about yourself and your research interests?
I’m a PhD student in the lab of Dr. Rosanne Rademaker. My main line of investigation is visual working memory: its maintenance under naturalistic visual input, its interactions with eye movement behaviour, and its flexible storage under different task demands.
What first interested you in this field of research?
I did my MA and subsequent research work at Taipei Medical University, Taiwan. There, I was involved in projects related to implicit learning, memory, and mental imagery, which also led me to become familiar with working memory research. But it was during the various brainstorming sessions with my supervisor, early in my PhD, that I dug a bit deeper into working memory, and we decided to use naturalistic stimuli and employ eye tracking as our main tool.
Can you briefly explain the research you presented at ECVP 2024?
My ECVP poster was titled “People are sensitive to their uniquely patterned retinal input”. Previous research has shown that there are consistent individual differences in eye movement behaviour. These differences should lead to unique inputs to the retinae, and our research question was whether people are sensitive to these differences. In this experiment, we showed people videos replaying either their own eye movement patterns or someone else’s, and asked them to identify those that seemed to be theirs. Participants came to the lab twice: first to freely view many images while their eye movements were recorded. They then returned on a different day and, on each trial, viewed a video showing parts of an image from the previous session (a “replay”), exactly as viewed by either themselves or a different participant; their task was to report whether it was they who had viewed it or someone else. We found that almost all participants could perform above chance, showed metacognitive insight into their mine/not-mine choices, and responded faster both when reporting that a replay was theirs and when they were correct. Eye data showed that their pupils were more dilated while viewing someone else’s replay, possibly indicating surprise. Finally, on trials where participants viewed someone else’s replay, they tended to mistakenly say it was theirs when that participant’s viewing pattern for the same image was similar to their own. Altogether, our analysis shows that participants are sensitive to input matching their own scene-viewing behaviour and can make explicit judgements about whether this input reflects their behaviour or someone else’s.
How will you continue to build on this research?
The next step is to explore which dimensions of viewing behaviour (fixated locations, fixation durations, saccade amplitudes, etc.) affect participants’ mine/not-mine judgements. There are also several control analyses left to be done, some of which were suggested by people who visited my poster at ECVP. Following this, we will submit the work for peer review.
To build on the finding that people are sensitive to an input matching their viewing tendencies, we have another line of experiments aimed at studying how matching or mismatching replays may act as distractors as people remember a visual feature in a working memory task. Moving forward, I would like to continue studying individual differences in ocular behaviour and their interactions with higher cognitive functions.