Even today’s most advanced models of visual cortex are not able to fully predict the brain’s responses.
Understanding how the brain processes visual input is a long-standing goal in neuroscience. Addressing this question in a quantitative, testable, and reproducible way requires accurate predictive models of neural population responses to natural stimuli. However, even today's most advanced models of visual cortex only account for a fraction of the observed neuronal activity. Furthermore, because there is an abundance of models and metrics but no widely used reference dataset, it is challenging to compare models on equal footing. This makes it difficult to determine the current state-of-the-art model.
Benchmarking the predictive performance of your model on large-scale datasets.
The SENSORIUM 2023 competition offers a publicly available, large-scale dataset consisting of the activity of over 38,000 neurons in the primary visual cortex of five different mice in response to around 1,800 natural scene videos (each around 10 seconds long). The dataset also includes additional behavioral measurements such as running speed, pupil dilation, and eye movements. The performance of predictive models can be automatically evaluated by submitting predicted neural responses to our website, which displays the performance of all submissions in a leaderboard for easy comparison.
The SENSORIUM 2022 competition addressed this issue for models of static stimuli (images). This year we add a temporal component and aim to inspire the community to establish a benchmark dataset and models for dynamic stimuli (videos). See the whitepaper and the data section for more details about the data.
Currently, we offer two benchmark tracks: a main track and a bonus track. The focus of the main track is to predict neuronal activity for dynamic stimuli, while the focus of the bonus track is to generalize to out-of-domain (OOD) stimuli.
We provide data from 5 mice: more than 38,000 neurons, around 1,800 natural scene videos (each around 10 seconds long), and additional behavioral measurements (pupil center, pupil dilation, and running speed). The goal is to train a network that predicts the neuronal activity of these neurons. For more details about the data or metrics, see the corresponding sections. Please note that this year there is no separate track for behavior; it is up to each team whether to use the behavioral data or not.
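The metrics are defined in the whitepaper and the metrics section; purely as an illustration, a common evaluation choice for benchmarks of this kind is the Pearson correlation between predicted and observed responses, computed independently per neuron. The sketch below assumes responses arranged as a `(time, neurons)` array; the function name and shapes are illustrative, not the official evaluation code.

```python
import numpy as np

def per_neuron_correlation(pred, obs):
    """Pearson correlation between predicted and observed responses,
    computed independently for each neuron.

    pred, obs: arrays of shape (time, neurons).
    Returns an array of shape (neurons,)."""
    pred = pred - pred.mean(axis=0)
    obs = obs - obs.mean(axis=0)
    num = (pred * obs).sum(axis=0)
    denom = np.sqrt((pred ** 2).sum(axis=0) * (obs ** 2).sum(axis=0))
    return num / denom

# Toy example: 100 time bins, 3 neurons (made-up data)
rng = np.random.default_rng(0)
obs = rng.poisson(2.0, size=(100, 3)).astype(float)
pred = obs + rng.normal(0.0, 1.0, size=obs.shape)  # noisy but correlated
corrs = per_neuron_correlation(pred, obs)
print(corrs.shape)  # (3,)
```

Averaging such per-neuron correlations over the population gives a single summary score; the official metric may differ, so consult the whitepaper before relying on this.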
We believe that, from a biological perspective, it is crucial to have models that are not only well-performing but also generalizable. Hence, we establish a bonus track with five out-of-domain (OOD) stimuli to measure the models' generalization capabilities. This track will have one winning team.
Let’s predict how the brain processes what we see!
To ensure you have a swift and easy start participating in our competition we prepared a starting kit for you. The starting kit (link to Github https://github.com/ecker-lab/sensorium_2023) includes a comprehensive 3-step manual with code examples for installing required software, downloading and inspecting the competition data, and training and submitting a model to the competition website.
Q: What is this competition about?
We are looking for the best neural predictive model that can predict the activity of thousands of neurons in the primary visual cortex of mice in response to videos.
Q: Why are neural predictive models interesting?
Accurate models of neuronal activity can serve as phenomenological digital twins for the visual cortex, allowing computational neuroscientists to derive new hypotheses about biological vision “in silico”, enabling systems neuroscientists to test them “in vivo”. On top of that, these models are relevant to machine learning researchers who use them to bridge the gap between biological and machine vision.
Q: Where will the results be presented?
We are happy to announce that we are part of the NeurIPS 2023 competition track! We’ll host a workshop at NeurIPS in December 2023 to present the winners and overall results of this competition.
Q: Are there plans for future data and competition releases?
We intend to launch the competition at NeurIPS this year but plan to keep the website open for new challenges, so that it becomes a valuable resource for data-driven neural system identification models in mouse visual cortex and beyond.
2023-09-15 Submission deadline extension
We decided to extend the competition deadline to Oct. 15. This gives everyone another full month to improve their models or develop new approaches. Happy coding!
We’ve recently discovered a few issues with the competition code, which we want to communicate for transparency:
Details on normalization: We recently discovered a minor bug in our data export code that causes the neuronal response to be normalized per video frame instead of by one single number for each neuron when using the officially provided data loader. Since the evaluation code also uses that data loader, the code – albeit using a non-standard normalization – is self-consistent. However, if you are using a custom data loader, it might result in unexpected results. Thus we recommend using the official data loader.
We tested how much prediction results are affected by the non-standard normalization compared to normalizing each neuron by a single number and found the differences to be minor. Therefore, we decided not to change anything for now and will conclude the competition with this non-standard normalization. We apologize for any inconvenience this might have caused.
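To make the difference between the two normalization schemes concrete, here is a minimal numpy sketch. The array shapes and numbers are made up for illustration; this is not the actual export or data-loader code.

```python
import numpy as np

# Toy responses: 2 neurons x 5 video frames (hypothetical values).
responses = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
                      [10.0, 20.0, 30.0, 40.0, 50.0]])

# Per-frame normalization (the effect of the bug): each frame (column)
# is scaled by a statistic computed across neurons for that frame.
per_frame = responses / responses.std(axis=0, keepdims=True)

# Per-neuron normalization (the intended, standard scheme): each
# neuron (row) is scaled by one single number.
per_neuron = responses / responses.std(axis=1, keepdims=True)
```

The two schemes generally give different results, which is why mixing the official data loader (and its normalization) with a custom loader can produce unexpected evaluation scores.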
2023-07-21 New dataset
As promised, we have updated the dataset. You can access the new dataset here: https://gin.g-node.org/pollytur/sensorium_2023_dataset.
The previous dataset stays online, and you can still use it for the competition. We added a new competition to CodaLab: Sensorium 2023 - Main Track [new].
2023-06-22 Data release
Unfortunately our data release for the Sensorium 2023 competition accidentally included the secret test set. Although we took the file offline immediately after finding out about it, it is not clear how many people gained access to the dataset. We therefore consider it compromised. To ensure a fair continuation of the competition, we will take the following actions:
We would like to apologize to all participants for the extra work and hassle this may cause. Thank you to Kaiwen Deng for immediately reporting this issue to us!
- Polina Turishcheva (University of Göttingen)
- Eric Y. Wang (Baylor College of Medicine)
- Konstantin F. Willeke (University of Tübingen)
- Paul G. Fahey (Baylor College of Medicine)
- Laura Hansel (University of Göttingen)
- Michaela Vystrčilová (University of Göttingen)
- Mohammad Bashiri (University of Tübingen)
- Zhiwei Ding (Baylor College of Medicine)
- Kayla Ponder (Baylor College of Medicine)
- Alexander Ecker (University of Göttingen)
- Andreas S. Tolias (Baylor College of Medicine)
- Fabian H. Sinz (University of Göttingen)