Working in an institute with a long and rich history of research on visual perception, I was inadvertently drawn into this area. I ran a set of experiments in my lab in which I used an eye-tracking system (made by Tobii) to record human gaze behavior and an EEG system (made by BrainVision) to record brain activity while participants looked at different visual stimuli. I recorded from around 50 participants.
In each recording session, I presented the participant with different sets of stimuli, each aimed at answering a different kind of question. Below, you can find a short description of each paradigm. We are in the late stages of the data analysis and plan to publish the results and the code we have developed in the near future.
In this set of recordings, participants watched a few short clips. Each clip was GoPro or dash-cam footage of cars, motorbikes, or bicycles. The aim was to analyze human gaze behavior while driving. This was inspired by my interest in self-driving car algorithms, which I have described in more detail here.
If you are interested in seeing samples of such recordings, you can check the following short clips:
Driving in a busy street in India.
Driving in a busy street in Vietnam.
GoPro footage of a bicycle ride.
This problem was my main motivation for getting into the area of visual perception. It was partly driven by my love of photography and what I have read, done, and learned about the art of photography. The question itself is one that has kept many people, in particular visual artists, busy: why do people look at a photograph the way they do, and how can the viewer's gaze be guided using the contents of a painting or photograph and its various compositional elements? While contemplating this question and preparing the technical and administrative details of the experiments, I read a very interesting book called The Art of Photography, written by Bruce Barnbaum. In the book, he discusses the various compositional elements that photographers can use in a photograph, such as contrast, texture, depth of field, and color. Summarizing his own experience as a professional photographer spanning decades, and also discussing other schools of thought, he provides intuitive guidelines for aspiring photographers on how to compose their photographs. I contacted him, and the ensuing meetings and correspondence led to a collaboration and to the inception of this set of experiments. He was kind enough to let me use his own photos, including the ones he has used in his books.
Even before starting the recordings, I knew this would be a very difficult problem to solve, but the reality exceeded my expectations. I devised a free-viewing paradigm, meaning the participants had no minimum or maximum viewing time. They had a computer mouse available to them, and they could click to advance to the next photograph. The resulting recordings have provided a rich dataset of human gaze behavior. Our own analysis is not finished yet, although it has already provided interesting insights into this problem. We plan to make the results of our preliminary analysis, along with the recorded gaze behavior, publicly available to interested researchers. Until then, you can see samples of the gaze behavior in the following short clips, just to get a taste of the complexities we faced while analyzing the eye-tracking data; a minimal sketch of the free-viewing presentation loop follows the clips.
Dominant Color (Can you guess what you are looking at?)
Buckskin, view from the Centre of the Earth
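For illustration, here is a minimal sketch of the kind of free-viewing presentation loop described above, written with PsychoPy. The file names and the debounce delay are hypothetical; the actual experiment code (including eye-tracker and EEG synchronization) will be published together with the data.

```python
# Minimal free-viewing sketch: each photo stays on screen until the
# participant clicks the mouse, with no minimum or maximum viewing time.
from psychopy import visual, event, core

win = visual.Window(fullscr=True, color='grey', units='pix')
mouse = event.Mouse(win=win)

photo_files = ['photo01.jpg', 'photo02.jpg']  # hypothetical file names

for path in photo_files:
    photo = visual.ImageStim(win, image=path)
    mouse.clickReset()
    # Free viewing: draw the photo until any mouse button is pressed.
    while not any(mouse.getPressed()):
        photo.draw()
        win.flip()
    core.wait(0.2)  # debounce so one click does not skip several photos

win.close()
core.quit()
```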
Our brains are extremely adept at correctly interpreting visual inputs, an ability that has eluded the engineers and scientists who have tried to replicate it in AI. But sometimes there are situations in which no single interpretation exists. Under such conditions, our percept switches involuntarily, a phenomenon called multistable perception. Possibly the most well-known example is the Necker cube, but in the realm of neuroscience the most widely used stimulus is probably binocular rivalry; you can look at this paper. One stimulus of particular utility in studying multistable perception is the Adelson-Movshon grid. It consists of three sets of moving parallel lines, angled 120 degrees with respect to each other. You can check one of them here. Make the video fullscreen, and you can experience this phenomenon yourself: at each moment in time, we perceive just one of the sets of lines, with the 'diamonds' moving in the other direction.
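To give a concrete picture of the stimulus, here is a minimal sketch that renders three superimposed line gratings, 120 degrees apart, each drifting along its own normal. The spatial frequency, line width, and speed are illustrative values, not the parameters we used in the experiments.

```python
# Sketch of a tristable stimulus: three drifting sets of parallel lines,
# 120 degrees apart, superimposed on a light background.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

size = 256          # image size in pixels
freq = 8.0          # line sets per image width (illustrative)
speed = 0.02        # drift in cycles per frame (illustrative)

y, x = np.mgrid[0:size, 0:size] / size
angles = np.deg2rad([0.0, 120.0, 240.0])

fig, ax = plt.subplots()
im = ax.imshow(np.ones((size, size)), cmap='gray', vmin=0, vmax=1)
ax.axis('off')

def frame(t):
    gratings = []
    for a in angles:
        # Phase along this grating's normal, drifting over time.
        phase = (x * np.cos(a) + y * np.sin(a)) * freq + t * speed
        # Thin dark lines: a square wave with a low duty cycle.
        gratings.append((np.mod(phase, 1.0) > 0.1).astype(float))
    im.set_data(np.min(gratings, axis=0))  # superimpose the three line sets
    return [im]

anim = FuncAnimation(fig, frame, frames=400, interval=40, blit=True)
plt.show()
```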
Using this stimulus to study multistable perception was inspired by a visit and the subsequent talk by the first author of this paper, an alumnus of MPI Bio-Kyb. In their paper, they asked the participants to report on their percept. Given the nature of the stimulus, it is possible to recognize the perceived direction using the eye-tracking system without any need for the participant to report it, thereby removing the possibility that the act of reporting affects the involuntary switching. Originally, I was planning to automate the recognition of the percept in each recording, but in practice there were so many anomalies and so much unexpected gaze behavior that we decided to do this manually. You can see an example of such recordings here.
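For the curious, the automated recognition we originally had in mind was along the following lines: discard saccades, then match the remaining slow, pursuit-like gaze velocity against the three grating directions. This is only a sketch; the function name, sampling rate, and velocity threshold are assumptions, not the code from our pipeline.

```python
# Sketch: classify the perceived motion direction from raw gaze samples by
# matching slow (pursuit-like) gaze velocity to the nearest grating direction.
import numpy as np

def classify_percept(gaze_xy, fs=300.0, directions_deg=(0.0, 120.0, 240.0),
                     saccade_thresh=50.0):
    """gaze_xy: (N, 2) gaze positions in degrees of visual angle.
    fs: sampling rate in Hz (illustrative). Returns the index of the
    nearest direction in directions_deg, or None if no pursuit is found."""
    vel = np.diff(gaze_xy, axis=0) * fs               # velocity in deg/s
    speed = np.linalg.norm(vel, axis=1)
    slow = vel[speed < saccade_thresh]                # drop saccadic samples
    if len(slow) == 0:
        return None                                   # no usable pursuit
    mean_dir = np.arctan2(slow[:, 1].mean(), slow[:, 0].mean())
    dirs = np.deg2rad(directions_deg)
    diffs = np.angle(np.exp(1j * (dirs - mean_dir)))  # wrapped angular error
    return int(np.argmin(np.abs(diffs)))
```

In practice, blinks, drift, and the anomalous gaze patterns mentioned above made a simple rule like this unreliable, which is why we fell back on manual labeling.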
Adding the 64-channel EEG recordings to the gaze behavior gives us a unique insight into the multistable perception phenomenon under tristable conditions. We are in the late stages of data analysis, and the results will be publicly available soon.
Last updated on November 22, 2021.