CPSY 1950 Deep Learning in Brains, Minds & Machines
A seminar-style exploration of cutting-edge research at the intersection of natural and artificial intelligence. Students engage with recent papers and develop critical perspectives on how biological and artificial systems process information.
Format: Tuesday lectures + Thursday mini-conferences (lightning talks) • Project posters • Guest lectures
Course Information
Instructor: Thomas Serre
Email: thomas_serre@brown.edu
Time: Tuesday & Thursday, 2:30pm-3:50pm
Location: Friedman Hall 108
Office Hours: Wednesdays, 1:00-2:00pm @ Carney Innovation Hub (Room 402)
Communication: Ed Discussion
Course Updates (Spring 2026): This course has been substantially revamped for Spring 2026 to reflect rapid changes in NeuroAI. As a result, the schedule and specific activities may be adjusted during the semester. Updates will be posted on the course website. Thank you in advance for your flexibility and patience.
Overview
This course uses a lecture + mini-conference (lightning talks) format:
- Tuesdays (80 min): Instructor lecture introducing the week’s theme
- Thursdays (80 min): Student lightning talks (~15 presentations, 3 minutes each) + synthesis/discussion
Students work in rotating small groups throughout the semester, culminating in project poster presentations. The course covers modern AI capabilities, explainable AI (XAI), NeuroAI, and cognitive and neural alignment.
Key Dates
- First Class: Thursday, January 22, 2026
- Bootcamp: Week 2 (async – no class January 27–29)
- Lightning Talks: Weeks 3-9
- Spring Break: Week 10 (March 24-26)
- Project Posters: Weeks 11-12 (April 7-9)
- Guest Lectures: Weeks 13-14
- Final Exam: Tuesday, May 12, 2026, 9:00am
Group & Presentation Plan
- Lightning talks are prepared in small groups.
- Groups are not fixed: students will form new groups for each lightning week (rotating teams over the semester).
Schedule
Week 1 — Course kickoff
Thu 1/22 — Course Kickoff
NeuroAI goals, course structure, and how we will simulate scientific conferences (lightning talks and projects).
No readings or pre-class activities for Week 1—this is the introductory session.
Week 2 — Foundations of natural and artificial intelligence
Async Bootcamp (no in-person class this week)
Goal: Put everyone on a level playing field for Week 3. Complete the Required Core first (mandatory for all), then the conditional tracks that match your background. You do not need to do every track.
Note on access to papers: For any paper link below, please use the Brown Library proxy links (you may be prompted to log in with Brown credentials off campus).
Required Core — Mandatory for all students
⚠️ Complete both items below before moving on to any conditional tracks. Students with a strong background in cognitive neuroscience may skim, but should still review to ensure familiarity with the course’s framing and terminology.
Expected time: ~90 min total
- Marr’s Levels of Analysis (video): Nancy Kanwisher, “How can we study the human mind and brain? Marr’s levels of analysis”
- Coarse brain organization (video; focus on gross organization + functions): MIT OCW 9.13 (Spring 2019) Lecture 2, “Neuroanatomy”
Viewing guidance: Watch for the high-level tour of major brain divisions/structures and what they do. Pay particular attention to concepts of receptive fields, neural tuning, and cortical maps — these are fundamental for understanding how neural networks relate to biological systems. You do not need to know fine anatomical details.
Conditional Track A — Cognitive neuroscience foundations
Complete this track if you have not taken at least 1–2 courses in neuroscience or cognitive neuroscience.
Target time: up to ~180 min.
- A1) Neurons, spikes, firing rate (high-level; no ion-channel deep dive): BrainFacts, “How neurons communicate”
- A2) Quick map of structures + functions: NIH/NINDS, “Brain basics — know your brain”
- A3) Methods lectures (videos)
- MIT OCW 9.13 Lecture 4: “Cognitive neuroscience methods I”
- MIT OCW 9.13 Lecture 5: “Cognitive neuroscience methods II”
Note: Don’t worry if the jargon feels overwhelming; you don’t need to memorize the details right now, and you can return to these videos later as the concepts come up. These lectures include a section on Marr’s levels of analysis; you may skip it if it feels redundant with the Required Core, though a second pass can be helpful if you haven’t taken cognitive science courses.
- A4) Optional “what measures what?” cheat-sheets (short skims)
Conditional Track B — Linear algebra intuition (refresher)
Complete this track if vectors/matrices feel rusty.
Target time: ~60 min total.
- 3Blue1Brown: “Vectors, what even are they?”
- 3Blue1Brown: “Linear combinations, span, and basis vectors”
- 3Blue1Brown: “Linear transformations and matrices”
- 3Blue1Brown: “Matrix multiplication as composition”
Conditional Track C — Deep learning intuition (refresher)
Complete this track if backprop/gradient descent are unfamiliar.
Target time: ~60 min total.
- 3Blue1Brown: “What is a neural network?”
- 3Blue1Brown: “Gradient descent”
- 3Blue1Brown: “Backpropagation”
Bootcamp Checkpoint (all students; due Sun 2/1, 2:00pm)
Reading: Crick (1989) — The recent excitement about neural networks (Nature) (Brown Library proxy link)
Reading response due Sun 2/1, 2:00pm: Submit on Canvas
Optional background papers (only if you want extra framing)
- Cichy & Kaiser (2019) — Trends in Cognitive Sciences (Brown Library proxy link)
- Doerig et al. (2023) — Nature Reviews Neuroscience (Brown Library proxy link)
Week 3 — The three levers of deep learning
Tue 2/3 — Lecture: The three levers of deep learning
Architecture, learning objectives, and data shape representations, behavior, and generalization across modalities.
Reading (complete before class): Serre (2019) — Deep Learning: The Good, the Bad, and the Ugly (Annual Review of Vision Science)
Reading response due Tue 2/3, 2:00pm: Submit on Canvas
Thu 2/5 — Lightning talks
Student lightning talks on the three levers of deep learning (architecture, objectives, data).
Lightning talk presentations · List of lightning talks (Google Sheet)
Reflection due Sun 2/8, 2:00pm: Submit on Canvas
Week 4 — Scaling and emerging capabilities
Tue 2/10 — Lecture: Scaling and emerging capabilities
Pretraining and fine-tuning/transfer; in-context learning and reasoning; what "emergence" claims mean and how to evaluate them critically.
Reading (complete before class): Firestone (2020) — Performance vs. competence in human–machine comparisons (PNAS)
Reading response due Tue 2/10, 2:00pm: Submit on Canvas
Thu 2/12 — Lightning talks
Student lightning talks on scaling and emerging capabilities.
Week 5 — Prediction vs Understanding
Tue 2/17 — No lecture (university holiday)
No class.
Thu 2/19 — Background reading and required viewing
Class on Feb 19 was cancelled (instructor sick).
Background paper (complete before or by Thu):
Serre, T. & Pavlick, E. (2025). From Prediction to Understanding: Will AI Foundation Models Transform Brain Science? Neuron (Brown Library proxy)
Required viewing (watch both):
- Marcus, G. (2024). Keynote at AGI-24. Machine Learning Street Talk. Watch from ~5:00 to ~35:00 (Marcus's talk starts after a brief introduction).
- LeCun, Y. (2024). Objective-Driven AI: Towards AI Systems That Can Learn, Remember, Reason, and Plan. Ding Shum Lecture, Harvard CMSA. Watch the first ~36 minutes. Note: it gets fairly technical in places — if it starts feeling dry, you can stop around the 36-minute mark; you'll have what you need.
Paper response due Sun 2/22, 2:00pm: Submit on Canvas
Week 6 — Representation-level interpretability
Tue 2/24 — Lecture: Representation-level interpretability
Feature visualization, concept-based methods, sparse/dictionary approaches (incl. SAEs); what we can and can't reliably "name" in representations.
Guest lecturer: Dr Thomas Fel (Kempner Institute, Harvard) — thomasfel.fr
Reading (complete before class): Olah, C., Satyanarayan, A., Johnson, I., Carter, S., Schubert, L., Ye, K., & Mordvintsev, A. (2018). The Building Blocks of Interpretability. Distill
Reading response due Tue 2/24, 2:00pm: Submit on Canvas
Thu 2/26 — Lightning talks
Student lightning talks on features, concepts, and sparse methods.
📑 Lightning talk presentations · 📋 List of lightning talks (Google Sheet)
Read: Thorpe, S.J. (1989). Local vs. Distributed Coding. Intellectica, 8, 3–40.
Reflection due Sun 3/1, 2:00pm: Submit on Canvas
Week 7 — Mechanistic interpretability
Tue 3/3 — Lecture: Mechanistic interpretability
Circuits, causal interventions, and standards of evidence for mechanistic claims.
Guest lecturer: Michael Lepori (CS, Brown) — lepori.xyz
Watch: Jensen Huang & Ilya Sutskever, Discovering the World Model Through LLM (YouTube)
Read: nostalgebraist (2020). Interpreting GPT: The Logit Lens (LessWrong)
Reading response due Tue 3/3, 2:00pm: Submit on Canvas
Thu 3/5 — Lightning talks
Student lightning talks on circuits and causal interventions.
Week 8 — Neural alignment
Tue 3/10 — Lecture: Neural alignment and model-to-brain mapping
Predicting neural data across measurement modalities; encoding/decoding and representational similarity; what alignment can and cannot justify.
Thu 3/12 — Lightning talks
Student lightning talks on model-to-brain mapping.
Week 9 — Behavioral and cognitive alignment
Tue 3/17 — Lecture: Behavioral and cognitive alignment
Treating models as participants in cognitive tasks; behavioral signatures beyond accuracy (generalization, planning, decision making, cognitive control); confounds and best practices.
Thu 3/19 — Lightning talks
Student lightning talks on cognitive and behavioral evaluation.
Week 10 — Spring Break
Tue 3/24 — Spring Break
No class
Thu 3/26 — Spring Break
No class
Week 11 — Project studio
Tue 3/31 — Project studio I
Project launch and evaluation design; in-class time for groups to plan, run pilot tests, and produce first results/figures.
Thu 4/2 — Project studio II
Continue project work: complete runs and draft poster.
Week 12 — Project poster presentations
Tue 4/7 — Project poster mini-conf A
Students present project findings in posters (17 posters); structured peer feedback and synthesis discussion.
Thu 4/9 — Project poster mini-conf B
Students present project findings in posters (17 posters); structured peer feedback and synthesis discussion.
Week 13 — Guest lectures
Tue 4/14 — Guest lecture: Greta Tuckute (Kempner Institute, Harvard)
Details TBD
Thu 4/16 — Guest lecture: Rufin VanRullen (CNRS, France)
Frontier topics in NeuroAI: global workspace / consciousness & deep learning.
Week 14 — Guest lectures
Tue 4/21 — Guest lecture (TBD)
Details TBD
Thu 4/23 — Guest lecture: Victor Boutin (CNRS, France)
Frontier topics in NeuroAI: generative models, EBMs, cognitive science. Plus course wrap-up and final exam briefing.
Final Exam
Tuesday, May 12, 2026, 9:00am