The conference organisers were delighted to welcome delegates to the 45th European Conference on Visual Perception (ECVP 2023), held for the first time on the beautiful island of Cyprus, in the city of Paphos. ECVP 2023 saw approximately 600 scientists from 45 different countries, across all career stages, travel to Cyprus. Keynotes were given by Professor Manuela Piazza (Perception Lecture) and Professor David Burr (Rank Prize Lecture), alongside 8 excellent member-initiated symposia, 5 tutorials, 120 oral presentations and 356 poster presentations. We also had 7 Illusion & Demo Night contributions, including one from the Paradox Museum in Limassol, which is dedicated to visual illusions and demonstrations. We hope that ECVP 2023 was one to remember, and that attendees enjoyed the content, the social events and our beautiful island.
Kyriaki Mikellidou, Organiser
Rania Tachmatzidou PhD Candidate. Panteion University of Social and Political Sciences, Greece.
Can you tell us a bit about yourself and your research interests?
I am Rania Tachmatzidou, a PhD candidate at the Department of Psychology, Panteion University of Social and Political Sciences, Athens, Greece, and a member of the Multisensory and Temporal Processing Lab (MultiTime Lab). I completed my MSc in Cognitive Science at the Department of History and Philosophy of Science, University of Athens. I also hold a BA from the same department and a BSc in Dietetics from the University of Hertfordshire.
What first interested you in this field of research?
My main research interests centre on multisensory integration and the bodily self. I am also fascinated by the study of time perception, so I decided to pursue a Master’s degree in Cognitive Science. During that time, I was lucky enough to meet Dr. Argiro Vatakis, who is an excellent mentor. She welcomed me to the MultiTime Lab, and we started working together on some of the lab’s very first projects. A few weeks ago, I had the chance to present our latest published work at the 45th European Conference on Visual Perception.
Can you briefly explain the research you presented at ECVP 2023?
The poster, titled “The influence of scene violations and attention on timing”, was about the effect of semantic and syntactic violations of naturalistic scenes on timing estimations. In the real world, the arrangement of objects follows rules regarding their spatial relations within a scene (syntactic rules) as well as their contextual fit (semantic rules). Violations of semantic rules have been found to influence time perception, leading to duration overestimations of scenes that include them. We therefore investigated whether syntactic violations affect time in the same way. To do so, we used an oddball paradigm with natural scenes with and without violations. We presented participants with a stream of 9 natural scenes, 8 with no violations and 1 with a semantic or syntactic violation, and asked them to report whether they perceived this “odd” scene as lasting longer or shorter than the no-violation scenes. Interestingly, participants tended to overestimate the duration of scenes with syntactic violations but underestimated the duration of scenes with semantic violations. We believe this difference is due to the different processing of the two violation types. We tend to try to “solve” semantic violations, and that effort draws our attention away from the timing task. Syntactic violations are so odd that we do not try to solve them, so the classic oddball effect can emerge. You can find the full version of our paper here: https://www.nature.com/articles/s41598-023-37030-2.
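The trial structure described above (a stream of 9 scenes, one of which is an oddball with a violation) can be sketched roughly as follows. This is an illustrative sketch only, not the authors' actual experiment code; the scene filenames, durations, and the rule keeping the oddball away from the stream edges are hypothetical parameters.

```python
import random

def make_trial(standard_scenes, oddball_scene, n_stream=9, duration=0.5):
    """Build one oddball trial: a list of (scene, duration) pairs with
    exactly one violation scene embedded among no-violation scenes."""
    # Keep the oddball away from the very start and end of the stream
    # (a common design choice; the actual constraint is an assumption here).
    oddball_pos = random.randint(2, n_stream - 2)
    stream = []
    for i in range(n_stream):
        scene = oddball_scene if i == oddball_pos else random.choice(standard_scenes)
        stream.append((scene, duration))
    return stream, oddball_pos

# Example: 8 hypothetical no-violation scenes plus one violation scene.
trial, pos = make_trial([f"scene_{i}.png" for i in range(8)], "violation.png")
```

In the real study the oddball's perceived (not physical) duration is the dependent variable, so the participant's longer/shorter judgement would be collected after each stream.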
What are your next steps?
These very interesting results made us want to extend our investigation of scene violations and timing. We are already working on our next steps, and we aim to look more thoroughly at the effect of semantic and syntactic violations on timing. If you are interested in our research, you can follow me and the MultiTime Lab.
Antonella Pomè Marie Curie Fellow. Heinrich Heine Universität, Germany.
Can you tell us a bit about yourself and your research interests?
My research is focused on the study of perceptual systems and sensorimotor integration, using investigation techniques based on psychophysics, eye-movement recordings, pupillary activity, and virtual reality. My scientific training took place under the guidance of Prof. Burr (University of Florence), and I have additional research experience at the Centre for Research in Autism and Education (UCL), at Johns Hopkins University and at the Heinrich Heine Universität.
To effectively interact with the external world, our motor and perceptual systems have to combine previous perceptual experiences with current sensory inputs. My research focuses mainly on the study of how visual stimuli are processed efficiently to create a stable and reliable representation of the visual world.
What first interested you in this field of research?
I have always been very curious about how we make sense of the huge number of things we see around us. We move our eyes at least 3 times per second, and the brain is constantly bombarded with very different information. How do we combine this information to get a stable and reliable representation of the visual world we are experiencing? “The autistic world” can be quite different from ours, and at present little is known about how socio-communicative difficulties and behavioural stereotypies in adulthood arise from childhood.
Can you briefly explain the research you presented at ECVP 2023?
One of the main challenges for the brain is to distinguish between external motion and motion generated by our own eye movements. How do we achieve a stable representation of the world despite the high-speed motion on the retina produced by our own eye movements? The sensorimotor system must counteract this self-produced stimulation to prevent the perception of disturbing motion with every eye movement. We have recently demonstrated that at the time of saccade initiation, a prediction is made about the motion vector contingent on the requested saccade size, and that sensitivity for this motion vector is selectively reduced. Here, we wondered about motion omission in autism, because difficulty in generating and updating predictions is a core symptom of ASD. We hypothesized that the magnitude of motion omission might vary according to participants’ autism symptom severity. We used a paradigm in which motion stimulation is restricted to the intra-saccadic period by presenting gratings that drift faster than the flicker fusion frequency. Participants, covering a wide range of autistic traits, were presented either with gratings that mimicked the natural motion velocity for the corresponding saccade vector or with gratings that moved at unnatural velocities (e.g. 100°/s). Participants had to judge the location of the grating, which was presented in the upper or the lower part of the visual field. Motion prediction sensitivity was reduced as a function of autism severity. Moreover, when tested with an adaptation paradigm, participants with high autistic traits showed difficulties in predicting the upcoming motion associated with the requested eye movement, resulting in less adaptation, probably reflecting inefficiencies in assigning weight to predictive information.
Problems in effectively adapting to and calibrating against observed sensory evidence could lead to both hypersensitivities and hyposensitivities in perception, which can be very disturbing and stressful for autistic people. We conclude that oculomotor inflexibility might increase the perceptual overstimulation that is experienced in ASD.
What are your next steps?
We are now writing a manuscript to be submitted to an international journal, while performing further analyses based on the useful comments of my peers at ECVP 2023. The next step will be to extend the results of the study to the clinical population. I am now writing a new grant proposal to continue investigating the combination of action and perception in autistic symptomatology (with and without an official diagnosis), both to provide a theoretical framework and to develop strategies to improve the everyday life of individuals with sensorimotor disturbances.
Anna Bruns PhD candidate. New York University, USA.
Can you tell us a bit about yourself and your research interests?
My academic background is very interdisciplinary: I have a BA in applied math from UC Berkeley, where I was a research assistant to psychologist Tania Lombrozo and to art historian Sugata Ray, and an MA in Experimental Humanities (coursework in philosophy, psychology, art therapy and UX design), and I have held two full-time industry jobs at the intersection of art and tech. I’m thrilled to have found a niche in the field of empirical aesthetics. My current research focuses on the relationship between beauty and emotion.
What first interested you in this field of research?
I’ve been interested in art and aesthetic preference since I was a kid. When I discovered the field of empirical aesthetics through Semir Zeki’s work at the end of my undergrad, I knew it was a field I wanted to pursue. That sentiment was actually solidified after I read critiques of the field by people like John Hyman. I liked the way the study of beauty motivated talk between people who study art and people who study the mind and brain.
Can you briefly explain the research you presented at ECVP 2023?
In a study we conducted on the relationship between emotion and beauty, we found that participants’ happiness and the perceived happiness of an image both predict increased beauty ratings of that image. We were interested in whether text accompanying the image might modulate this effect, and in this project we found that it does. Participants rated images under four conditions: without text, with a redundant description of the image, with a compelling story about the image, and with a text box where they wrote a story about the image. We found that emotion affects image beauty twice as much with accompanying text as without, but only if the text is merely descriptive rather than telling a compelling story.
What are your next steps?
Next we plan to run a fifth condition, where the text is more reminiscent of canonical museum wall text (with artist details, etc.), so that results might become more relevant to museum studies. Then we’d like to get a better handle on the relationship between level of engagement and beauty experience. I’d generally like to continue studying aesthetic response to art across modalities, so using both images and songs as stimuli and comparing effects between stimulus types.
Philippe Blondé Postdoctoral Researcher. University of Iceland, Iceland.
Can you tell us a bit about yourself and your research interests?
My name is Philippe Blondé, I am a postdoc at the Icelandic Vision Lab of the University of Iceland (www.visionlab.is). I am working with Prof. Árni Kristjánsson at the Icelandic Vision Lab and Dr. David Pascucci at EPFL in Lausanne, Switzerland, on a project funded by the Icelandic Research Fund (IRF). I obtained my PhD in cognitive sciences at the Université Paris Descartes in 2021 after completing a thesis on the influence of mind wandering on episodic memory encoding. My research focuses mainly on attentional, perceptual and memory processes, with a particular interest in Bayesian models of cognition.
What first interested you in this field of research?
What interests me in particular is the way in which we can frame the functioning of the visual system in terms of Bayesian inference. The idea is that our behaviour is guided by our prior knowledge about the world (for example, if I’m looking for the family cat, I will check their known favourite location first), but that this prior knowledge is then updated according to the data collected from the environment (for example, if the cat starts to prefer another location, I will implicitly learn this and tend to check the new location first). This principle is particularly interesting in the context of research on visual attention, as it postulates that the visual system operates based on a priority map and that visual search is primarily directed towards the places where the desired event is most likely to appear. Although some studies suggest that the visual system does not necessarily function in an optimally Bayesian way, the basic principle remains useful for understanding how we process information from the environment to guide our behaviour.
Can you briefly explain the research you presented at ECVP 2023?
This research focuses on implicit statistical learning, i.e., the ability of the visual system to extract and use statistical information from the environment to guide goal-directed behaviour. Previous studies have shown that when a target has a higher probability of appearing in a specific position on the screen, participants will detect it more quickly, even in the absence of explicit information about this statistical bias. Interestingly, this search bias persists even when the probabilities of the target’s possible locations become equivalent. In addition, previous studies have suggested that statistical information can be extracted from different stimulus characteristics, such as black/white contrasts, leading to the learning of different biases and their association with different stimulus types. In this study, we wanted to take this concept a step further by assessing whether the visual system is capable of binding statistical information about locations to a particular colour distribution. The idea is that, if statistical information from colour distributions can be learned, the characteristics of the distribution (such as its mean, variance or shape) could influence the learning process and give us insight into which environmental features lead to the strongest implicit statistical learning.
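The location-probability manipulation described above can be sketched as follows. This is a minimal illustrative sketch, not the study's code: the quadrant labels and the 75% "rich location" probability are hypothetical, and the binding to a specific colour distribution is only indicated by a parameter name.

```python
import random

def sample_target_quadrant(rich_quadrant, p_rich=0.75,
                           quadrants=("NE", "NW", "SE", "SW")):
    """Pick the target's quadrant for one trial: the 'rich' quadrant
    (associated with a given distractor colour distribution) with
    probability p_rich, otherwise uniformly among the others."""
    if random.random() < p_rich:
        return rich_quadrant
    return random.choice([q for q in quadrants if q != rich_quadrant])

# Over many trials the rich quadrant dominates; this regularity is what
# participants would implicitly learn, producing faster search there.
counts = {q: 0 for q in ("NE", "NW", "SE", "SW")}
for _ in range(10000):
    counts[sample_target_quadrant("NE")] += 1
```

In the actual design, different colour distributions of the on-screen stimuli would be paired with different rich locations, so that learning the bias requires binding location statistics to colour statistics.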
What are your next steps?
The results I presented at ECVP demonstrate that the visual system can learn probabilistic information about the location of a target based on the colour distribution of the stimuli on-screen. It will now be interesting to see whether manipulations of the characteristics (e.g., mean, variance or shape) of a distribution can affect the acquisition of probabilistic location information. For example, if the variance of the distribution is lower, perhaps the distribution will appear more distinct, and the bias might appear sooner, or it may be stronger. There are many parameters to play with in this paradigm, and we plan to conduct a series of experiments which we hope will be included in a paper examining in detail how we extract statistical information from colour distributions.