CPSY 1950 Deep Learning in Brains, Minds & Machines

Spring 2026 • Advanced Undergraduate/Graduate

A seminar-style exploration of cutting-edge research at the intersection of natural and artificial intelligence. Students engage with recent papers and develop critical perspectives on how biological and artificial systems process information.

Format: Tuesday lectures + Thursday mini-conferences (lightning talks) • Project posters • Guest lectures


Course Information

Instructor: Thomas Serre
Email: thomas_serre@brown.edu
Time: Tuesday & Thursday, 2:30pm-3:50pm
Location: Friedman Hall 108
Office Hours: Wednesdays, 1:00-2:00pm @ Carney Innovation Hub (Room 402)
Communication: Ed Discussion

View Full Syllabus


Course Updates (Spring 2026): This course has been substantially revamped for Spring 2026 to reflect rapid changes in NeuroAI. As a result, the schedule and specific activities may be adjusted during the semester. Updates will be posted on the course website. Thank you in advance for your flexibility and patience.


Overview

This course uses a lecture + mini-conference (lightning talks) format:

  • Tuesdays (80 min): Instructor lecture introducing the week’s theme
  • Thursdays (80 min): Student lightning talks (a dozen presentations, about 4–5 min each) + synthesis/discussion

Students work in rotating small groups throughout the semester, culminating in project poster presentations. The course covers modern AI capabilities, explainable AI (XAI), NeuroAI, and cognitive and neural alignment.

Key Dates

  • First Class: Thursday, January 22, 2026
  • Bootcamp: Week 2 (async – no class January 27–29)
  • Lightning Talks: Weeks 3-9
  • Spring Break: Week 10 (March 24-26)
  • Project Studio: Week 11 (March 31 & April 2)
  • Project Posters: Week 12 (April 7-9) — 📋 Final project specification
  • Guest Lectures: Weeks 13-14
  • Final Exam: Tuesday, May 12, 2026, 9:00am

Group & Presentation Plan

  • Lightning talks are prepared in small groups.
  • Groups are not fixed: students will form new groups for each lightning week (rotating teams over the semester).

Schedule

Week 1 — Course kickoff

Thu 1/22 — Course Kickoff

NeuroAI goals, course structure, and how we will simulate scientific conferences (lightning talks and projects).

Google Slides

No readings or pre-class activities for Week 1—this is the introductory session.

Week 2 — Foundations of natural and artificial intelligence

Async Bootcamp (no in-person class this week)

Goal: Put everyone on a level playing field for Week 3. Complete the Required Core first (mandatory for all), then any conditional tracks that match your background. You do not need to do every track.

Note on access to papers: For any paper link below, please use the Brown Library proxy links (you may be prompted to log in with Brown credentials off campus).

Required Core — Mandatory for all students

⚠️ Complete both items below before moving on to any conditional tracks. Students with a strong background in cognitive neuroscience may skim, but should still review to ensure familiarity with the course’s framing and terminology.
Expected time: ~90 min total

  1. Marr’s Levels of Analysis (video)
    Nancy Kanwisher: “How can we study the human mind and brain? Marr’s levels of analysis”

  2. Coarse brain organization (video; focus on gross organization + functions)
    MIT OCW 9.13 (Spring 2019) Lecture 2: “Neuroanatomy”
    Viewing guidance: Watch for the high-level tour of major brain divisions/structures and what they do. Pay particular attention to concepts of receptive fields, neural tuning, and cortical maps — these are fundamental for understanding how neural networks relate to biological systems. You do not need to know fine anatomical details.

Conditional Track A — Cognitive neuroscience foundations

Complete this track if you have little or no prior coursework in neuroscience or cognitive neuroscience (fewer than 1–2 courses).
Target time: up to ~180 min.

Conditional Track B — Linear algebra intuition (refresher)

Complete this track if vectors/matrices feel rusty.
Target time: ~60 min total.
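As a quick self-check (a toy sketch added here for self-assessment, not part of the official track materials): if the matrix-vector operations below feel comfortable, you can likely skip this track.

```python
import numpy as np

# Self-check: a matrix is a linear map; matrix-vector multiplication applies it.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # rotation by 90 degrees counterclockwise
v = np.array([1.0, 0.0])     # unit vector along the x-axis

w = R @ v                    # each entry of w is a dot product of a row of R with v
print(w)                     # prints [0. 1.]: the x-axis rotated onto the y-axis

# Diagonal matrices stretch space along the coordinate axes.
S = np.diag([2.0, 0.5])
print(S @ np.array([1.0, 1.0]))  # prints [2.  0.5]
```

If you can also explain why `R` has no real eigenvectors while `S` has two, you are in good shape.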

Conditional Track C — Deep learning intuition (refresher)

Complete this track if backprop/gradient descent are unfamiliar.
Target time: ~60 min total.
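As a quick self-check (a toy example added for self-assessment, not an official course exercise): if you can predict what this minimal gradient-descent loop does before running it, the refresher is probably optional.

```python
# Minimal gradient descent on the loss f(w) = (w - 3)**2.
# The gradient is f'(w) = 2 * (w - 3); stepping against it moves w toward 3.
w = 0.0      # initial parameter value
lr = 0.1     # learning rate (step size)

for _ in range(100):
    grad = 2 * (w - 3)   # analytic gradient of the loss at the current w
    w -= lr * grad       # update rule: w <- w - lr * grad

print(round(w, 4))       # prints 3.0: converged to the minimum
```

Backpropagation is the same idea at scale: it computes these gradients automatically for every parameter in a network via the chain rule.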

Bootcamp Checkpoint (all students; due Sun 2/1, 2:00pm)

Reading: Crick (1989). The recent excitement about neural networks. Nature

Reading response due Sun 2/1, 2:00pm: Submit on Canvas

Optional background papers (only if you want extra framing)

Week 3 — The three levers of deep learning

Tue 2/3 — Lecture: The three levers of deep learning

Architecture, learning objectives, and data shape representations, behavior, and generalization across modalities.

Slides · Lecture capture

Reading (complete before class): Serre (2019). Deep Learning: The Good, the Bad, and the Ugly. Annual Review of Vision Science

Reading response due Tue 2/3, 2:00pm: Submit on Canvas

Thu 2/5 — Lightning talks

Student lightning talks on the three levers of deep learning (architecture, objectives, data).

Lightning talk presentations · List of lightning talks (Google Sheet)

Reflection due Sun 2/8, 2:00pm: Submit on Canvas

Week 4 — Scaling and emerging capabilities

Tue 2/10 — Lecture: Scaling and emerging capabilities

Pretraining and fine-tuning/transfer; in-context learning and reasoning; what "emergence" claims mean and how to evaluate them critically.

Slides

Reading (complete before class): Firestone (2020). Performance vs. competence in human–machine comparisons. PNAS

Reading response due Tue 2/10, 2:00pm: Submit on Canvas

Thu 2/12 — Lightning talks

Student lightning talks on scaling and emerging capabilities.

List of lightning talks (Google Sheet)

Week 5 — Prediction vs. understanding

Tue 2/17 — No lecture (university holiday)

No class.

Thu 2/19 — Background reading and required viewing

The Feb 19 class was cancelled due to instructor illness; complete the background reading and required viewing below asynchronously.

Background paper (complete before or by Thu): Serre & Pavlick (2025). From Prediction to Understanding: Will AI Foundation Models Transform Brain Science? Neuron

Required viewing (watch both):

Paper response due Sun 2/22, 2:00pm: Submit on Canvas

Week 6 — Representation-level interpretability

Tue 2/24 — Lecture: Representation-level interpretability

Feature visualization, concept-based methods, sparse/dictionary approaches (incl. SAEs); what we can and can't reliably "name" in representations.

Guest lecturer: Dr Thomas Fel (Kempner Institute, Harvard) — thomasfel.fr

Slides · Lecture capture

Reading (complete before class): Olah et al. (2018). The Building Blocks of Interpretability. Distill

Reading response due Tue 2/24, 2:00pm: Submit on Canvas

Thu 2/26 — Lightning talks

Student lightning talks on features, concepts, and sparse methods.

📑 Lightning talk presentations · 📋 List of lightning talks (Google Sheet)

Read: Thorpe (1989). Local vs. Distributed Coding. Intellectica

Reflection due Sun 3/1, 2:00pm: Submit on Canvas

Week 7 — Mechanistic interpretability

Tue 3/3 — Lecture: Mechanistic interpretability

Circuits, causal interventions, and standards of evidence for mechanistic claims.

Guest lecturer: Michael Lepori (CS, Brown) — lepori.xyz

Slides · Lecture capture

Watch: Jensen Huang & Ilya Sutskever, Discovering the World Model Through LLM (YouTube)

Read: nostalgebraist (2020). Interpreting GPT: The Logit Lens. LessWrong

Reading response due Tue 3/3, 2:00pm: Submit on Canvas

Thu 3/5 — Lightning talks

Student lightning talks on circuits and causal interventions.

Lightning talk presentations · List of lightning talks (Google Sheet)

Read: Anthropic. Tracing the Thoughts of a Large Language Model

Reflection due Sun 3/8, 2:00pm: Submit on Canvas

Week 8 — Neural alignment

Tue 3/10 — Lecture: Neural alignment and model-to-brain mapping

Predicting neural data across measurement modalities; encoding/decoding and representational similarity; what alignment can and cannot justify.

Slides · Lecture capture

Reading (complete before class):

Reading response due Tue 3/10, 2:00pm: Submit on Canvas
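For intuition ahead of this lecture, here is a minimal sketch of one representational-similarity comparison, using made-up toy data (the `rdm` helper is hypothetical, not the lab's actual pipeline): build a representational dissimilarity matrix (RDM) for each system, then correlate their off-diagonal entries.

```python
import numpy as np

# Toy RSA sketch: compare the representational geometry of two "systems"
# (e.g., a model layer vs. a neural recording) over the same 5 stimuli.
rng = np.random.default_rng(0)
model_reps = rng.normal(size=(5, 10))                            # 5 stimuli x 10 features
brain_reps = 2.0 * model_reps + 0.05 * rng.normal(size=(5, 10))  # scaled copy + small noise

def rdm(X):
    # Representational dissimilarity matrix: pairwise Euclidean distances.
    diffs = X[:, None, :] - X[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

# Correlate only the upper triangles (the diagonal is trivially zero).
iu = np.triu_indices(5, k=1)
r = np.corrcoef(rdm(model_reps)[iu], rdm(brain_reps)[iu])[0, 1]
print(f"RDM correlation: {r:.2f}")  # close to 1 here, since the geometries match by construction
```

Note that RDM correlation is invariant to the overall scaling of either system's features, which is part of why a high score alone cannot settle what alignment does and does not justify.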

Thu 3/12 — Lightning talks

Student lightning talks on model-to-brain mapping.

Lightning talk presentations · List of lightning talks (Google Sheet)

Read: Linsley, Feng & Serre (2025). Better artificial intelligence does not mean better models of biology. Trends in Cognitive Sciences

Reflection due Sun 3/15, 2:00pm: Submit on Canvas

Week 9 — Behavioral and cognitive alignment

Tue 3/17 — Lecture: Behavioral and cognitive alignment

Treating models as participants in cognitive tasks; behavioral signatures beyond accuracy (generalization, planning, decision making, cognitive control); confounds and best practices.

Slides · Lecture capture

Note: The beginning of the lecture is missing from this recording — the capture was not started until partway through class.

Reading (complete before class): Binz et al. (2025). A foundation model to predict and capture human cognition. Nature

Reading response due Tue 3/17, 2:00pm: Submit on Canvas

Thu 3/19 — Lightning talks

Student lightning talks on cognitive and behavioral evaluation.

Lightning talk presentations · List of lightning talks (Google Sheet)

Read: Brady et al. (2025). Dual-process theory and decision-making in large language models. Nature Reviews Psychology

Reflection due Sun 3/22, 2:00pm: Submit on Canvas

Week 10 — Spring Break

Tue 3/24 — Spring Break

No class

Thu 3/26 — Spring Break

No class

Week 11 — Project studio

Tue 3/31 — Project studio I

Project jumpstart: walk through Tekin's demo notebook, pick a task from Psych-101, and begin running your replication pipeline. In-class working time.

Slides · Lecture capture · Final project specification

Thu 4/2 — Project studio II

Continue project work: complete runs and draft poster. TA sign-off required to access frontier models.

Final project specification

Week 12 — Project poster presentations

Tue 4/7 — Project poster mini-conf A

Location: Kasper Multipurpose Room. Students present project findings in posters; structured peer feedback and synthesis discussion. Poster PDF due by 2:00pm on Canvas.

Final project specification

Thu 4/9 — Project poster mini-conf B

Location: Kasper Multipurpose Room. Students present project findings in posters; structured peer feedback and synthesis discussion. Peer feedback due end of day on Gradescope.

Week 13 — Frontier topics in NeuroAI

Tue 4/14 — Guest lecture: Greta Tuckute (Kempner Institute, Harvard)

How the human brain represents and processes language—and what we learn by comparing brain and machine representations during language understanding.

Guest lecturer: Dr Greta Tuckute (Kempner Institute, Harvard) — tuckute.com

Slides (PDF) · Lecture capture

Reading (complete before class): Tuckute, Kanwisher & Fedorenko (2024). Language in Brains, Minds, and Machines. Annual Review of Neuroscience

Reading response due Tue 4/14, 2:00pm: Submit on Canvas

Thu 4/16 — Guest lecture: Rufin VanRullen (CNRS, France)

Global workspace theory, consciousness, and deep learning.

Guest lecturer: Dr Rufin VanRullen (CNRS, France) — rufinv.github.io

Lecture capture

Reading (complete before Thu lecture): VanRullen & Kanai (2021). Deep Learning and the Global Workspace Theory. Trends in Neurosciences

Reflection due Sun 4/19, 2:00pm: Submit on Canvas

Week 14 — Frontier topics in NeuroAI

Tue 4/21 — Lecture: The role of feedback in biological and artificial vision

Neuroscience-inspired recurrent models of vision: recurrent architectures motivated by cortical circuitry, how they go beyond feedforward DNNs, and links to behavioral and neural benchmarks (Serre lab).

Reading (complete before class): Read Kim et al. (2020) first, then Linsley et al. (2020). Prioritize main claims and Figures 1–2; skim implementation details.

Reading response due Tue 4/21, 2:00pm: Submit on Canvas

Thu 4/23 — Guest lecture: Victor Boutin (CNRS, France)

Generative models, energy-based models, and cognitive science.

Guest lecturer: Dr Victor Boutin (CNRS, France) — victorboutin.github.io

No new reading for Thursday; come with questions for our guest.

Course-wrapping reflection due Sun 4/26, 2:00pm: Submit on Canvas

Final Exam

Tuesday, May 12, 2026, 9:00am