Research
My research investigates the computational principles of biological vision—both to understand how the brain works and to build more human-like AI systems. Working at the intersection of neuroscience, cognitive science, and artificial intelligence, my lab tackles a fundamental question: How can we bridge the gap between biological and artificial vision to advance both brain science and AI?
Human-AI Alignment in Vision
We develop methods to quantify and improve the alignment between deep neural networks and human visual processing. Our recent work reveals that despite impressive performance, current vision models process information fundamentally differently from humans—a critical gap for both building robust AI systems and understanding brain mechanisms. Importantly, our harmonization procedure shows that alignment can be dramatically improved without changing network architectures, suggesting the misalignment stems from training procedures rather than structural limitations. This insight has motivated us to adopt a developmental psychology approach: we identify the learning principles and developmental trajectories that shape human vision, then incorporate these principles into AI training to create models that not only perform well but see the world as humans do. Funded by NSF.
Cognitive Benchmarks for AI Visual Reasoning
We develop rigorous cognitive-psychology-inspired benchmarks to evaluate fundamental gaps between human and machine vision. These benchmarks reveal systematic failures in modern AI. For example, our Pathfinder challenge shows that feedforward networks fail at contour integration tasks that humans solve effortlessly—a finding later confirmed by Google DeepMind, which showed that even state-of-the-art transformers fail while our brain-inspired recurrent models succeed. Our compositional reasoning benchmark reveals AI's inability to flexibly combine visual concepts, while our 3D-PC benchmark demonstrates failures in visual perspective-taking—a key signature of theory of mind. Even seemingly simple same-different judgments expose how neural networks struggle with basic visual relationships. Critically, this work not only reveals AI limitations but also helps identify brain mechanisms underlying relational processing. Funded by ONR.
Cortical Feedback and Visual Reasoning
We reverse-engineer how feedback connections in the brain enable complex visual reasoning and mental simulation. Our cognitive benchmarks reveal systematic failures of feedforward networks—from contour integration to relational judgments—pinpointing which computations require recurrent processing. These insights guide our experimental work: our neurophysiology studies show that same-different tasks that challenge feedforward AI engage distinct neural dynamics in primates, while our recent work reveals that both monkeys and recurrent neural networks use internal "mental simulations" to solve challenging visual tasks. By identifying where feedforward processing fails, we pinpoint the computational role of cortical feedback. This work is reshaping our understanding of how biological vision achieves robust reasoning through recurrent processing. Funded by ONR.
Explainable AI for Scientific Discovery
In collaboration with the Artificial and Natural Intelligence Toulouse Institute, we create tools to understand and interpret deep learning models. Our CRAFT framework and MACO approach help researchers open the "black box" of AI. CRAFT provides concept-based explanations revealing both "what" and "where" models look, while MACO enables feature visualization for state-of-the-art deep networks. These methods are implemented in our open-source Xplique toolbox, making explainability accessible to the broader research community. Critically, our tools reveal when AI learns deceptive strategies—for instance, in histopathology, we showed that models claiming superhuman cancer diagnosis actually relied on spurious correlations rather than meaningful biological features. See these tools in action: LENS explains what ImageNet models actually see, and LeafLens reveals how AI identifies plant species from cleared leaves. Building on this work, we are developing methods to identify computational mechanisms learned by foundation models—an effort outlined in our perspective on moving from prediction to understanding in brain science. Funded by ANR and NSF.
Teaching
I teach computational courses at the interface between natural and artificial intelligence, bridging neuroscience, cognitive science, and AI.
CPSY 1291: Computational Methods for Mind, Brain & Behavior
Advanced Undergraduate/Graduate • Fall Semester
A broad introduction to NeuroAI combining lectures with hands-on programming assignments. Students explore computational models of brain and cognition, classical machine learning algorithms, and modern deep learning architectures.
CPSY 1950: Deep Learning in Brains, Minds & Machines
Advanced Undergraduate/Graduate • Spring Semester
A seminar-style exploration of cutting-edge research at the intersection of natural and artificial intelligence. Students engage with recent papers and develop critical perspectives on how biological and artificial systems process information.
Selected Recent Publications
- T. Serre & E. Pavlick • From Prediction to Understanding: Will AI Foundation Models Transform Brain Science? • Neuron • 2025
- P. Roelfsema & T. Serre • Feature binding in biological and artificial vision • Trends in Cognitive Sciences • 2025
- D. Linsley et al. • The 3D-PC: A benchmark for visual perspective taking in humans and machines • ICLR • 2025
- D. Linsley, P. Feng & T. Serre • Better artificial intelligence does not mean better models of biology • Trends in Cognitive Sciences • 2025
- S. Shahamatdar et al. • Deceptive learning in histopathology • Histopathology • 2024
- A. Ahuja et al. • Monkeys engage in visual simulation to solve complex problems • Current Biology • 2024
See the lab publications page for a complete list.
Selected Talks
DIC-ISC-CRIA Seminar, Montreal • February 2026
Examining how cortical feedback facilitates compositional visual reasoning, distinguishing biological from artificial vision. Same-different judgments—fundamental symbolic operations that newborn ducklings master from single examples yet challenge state-of-the-art feedforward networks—illustrate this gap. The talk presents computational evidence that cortical feedback contributes essential mechanisms for the compositional reasoning capabilities that connect perception and abstract thought.
Simons Foundation Workshop on "Self Supervised Learning" • May 2025
Exploring how self-supervised learning can bridge the gap between artificial neural networks and biological vision systems. This talk presents our latest work on developing training procedures that align deep neural networks with primate visual processing, demonstrating that alignment can be dramatically improved without changing network architectures.
MindCORE Vision Seminar, University of Pennsylvania • April 2024
Demonstrating how feedforward neural networks struggle with visual reasoning problems that appear simple to humans. This talk presents our computational neuroscience model of feedback circuitry in the visual cortex, showing how it can be transformed into a modern deep recurrent network that addresses weaknesses of current state-of-the-art feedforward networks—providing evidence that neuroscience can offer powerful new concepts for AI.
Active Grants
- High-performance compute cluster for brain science
NIH S10OD036341 • 2025 – 2030 • PI
Supporting acquisition of a state-of-the-art high-performance computing cluster for large-scale computational neuroscience modeling and deep learning experiments.
- One vision: Computational alignment of deep neural networks with humans
NSF IIS-2402875 • 2024 – 2028 • co-PI (Serre/Linsley)
Developing methods to quantify and improve alignment between deep neural networks and human visual processing using developmental psychology approaches.
- Brain-inspired deep learning models of visual reasoning
ONR N00014-24-1-2026 • 2023 – 2028 • PI
Investigating computational principles underlying visual reasoning to develop more capable AI through cognitive benchmarks and brain-inspired architectures.
- Brown Postdoctoral Training Program in Computational Psychiatry
NIH/NIMH 5T32MH126388 • 2021 – 2026 • co-PI (Frank/Rasmussen/Serre)
Training postdoctoral fellows at the intersection of computational neuroscience, machine learning, and psychiatry to develop computational methods for understanding psychiatric disorders.
- REPRISM: Flexible embodied problem-solving by manipulating the representational prism
ONR MURI N00014-24-1-2603 • 2024 – 2027 • co-I (PI: Konidaris)
Developing vision systems that adaptively change representations based on task demands for flexible and generalizable problem-solving.
- SEA-CROGS: Scalable, efficient and accelerated causal reasoning operators, graphs and spikes
DOE DE-SC0023191 • 2022 – 2027 • co-I (PI: Maxey)
Creating scalable causal reasoning methods combining graph-based representations, spiking neural networks, and causal inference for Earth system science.
- The next generation of operator regression networks: Theory, algorithms, applications
ONR N00014-22-1-2795 • 2022 – 2027 • co-I (PI: Karniadakis)
Advancing theoretical foundations and practical applications of operator regression networks for solving partial differential equations and modeling physical systems.