Course Outline
CPSY 1950 — Course Overview (Spring 2026)
Core format
- T/Th, 80 minutes
- Tue = lecture introducing the week’s theme (conceptual, figure-first)
- Thu = mini-conference (lightning talks) on the same theme
- ~15 lightning talks (2-minute talk + 30-second transition; ≈ 37–38 min total)
- ~20–30 min synthesis discussion/activity in the remaining class time
Group & presentation plan
- Lightning talks are prepared in small groups.
- Groups are not fixed: students will form new groups for each lightning week (rotating teams over the semester).
Schedule
Week 1 — Course kickoff
Thu 1/22 — Course Kickoff
NeuroAI goals, course structure, and how we will simulate scientific conferences (lightning talks and posters).
No readings or pre-class activities for Week 1; this is the introductory session.
Week 2 — Bootcamp (async; replaces Tue/Thu lectures)
Tue 1/27 — Bootcamp I (async; completed during normal Tue class time)
Assigned reading (all students):
Deep learning intuition (conditional):
Optional for everyone, but MANDATORY for students who have not taken a deep learning course.
- 3Blue1Brown Ch.1: What is a neural network?
- 3Blue1Brown Ch.2: Gradient descent
- 3Blue1Brown Ch.3: Backpropagation
Linear algebra bootcamp (conditional):
Optional for everyone, but MANDATORY for students who have taken neither linear algebra nor any ML/AI course that made serious use of vectors and matrices.
- 3Blue1Brown: Vectors, what even are they?
- 3Blue1Brown: Linear combinations, span, and basis vectors
- 3Blue1Brown: Linear transformations and matrices
- 3Blue1Brown: Matrix multiplication as composition
Neuroscience intro video (conditional):
Optional for everyone, but MANDATORY for students who have not taken an intro course in neuroscience, cognitive science, or cognitive neuroscience.
Textbook-style foundations reading (conditional, skim):
Optional for everyone, but MANDATORY for the same students who are required to watch the neuroscience intro video.
Thu 1/29 — Bootcamp II (async; completed during normal Thu class time)
Assigned reading (all students):
Week 3 — The three levers of deep learning
Tue 2/3 — Lecture: The Three Levers of Deep Learning
How architecture, learning objectives, and experience (data/scale) shape representations, behavior, and generalization across modalities; a toy training-loop sketch follows the notes below.
Note: Connect back to Nancy Kanwisher's examples of color and fruit detection in primates as objective functions to be optimized, i.e., the goal of the system/agent.
Note: Incorporate world models in reinforcement learning. See Ha & Schmidhuber (2018), World Models.
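For students who want a concrete anchor, here is a minimal PyTorch sketch of the three levers as the three explicit choices in any training setup. All names, shapes, and numbers are hypothetical, for illustration only, not course-required code.

```python
# Toy illustration of the "three levers"; everything here is hypothetical.
import torch
import torch.nn as nn

# Lever 1: architecture. Swap this module to change the inductive biases.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# Lever 2: learning objective. Swap the loss to change what gets optimized.
objective = nn.CrossEntropyLoss()

# Lever 3: experience. The data distribution and its scale.
x = torch.randn(256, 32)             # stand-in for real experience
y = torch.randint(0, 10, (256,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    optimizer.zero_grad()
    loss = objective(model(x), y)    # representations emerge from this loop
    loss.backward()
    optimizer.step()
```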
Thu 2/5 — Lightning Mini-Conf 1: Three Levers of DL
Details TBD
Week 4 — Scaling and emerging capabilities
Tue 2/10 — Lecture: Scaling and Emerging Capabilities
Pretraining and fine-tuning/transfer; in-context learning and reasoning; what 'emergence' claims mean and how to evaluate them critically.
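One tool for evaluating such claims critically is to fit a smooth scaling curve (e.g., a saturating power law) to performance versus scale before accepting a discontinuous "emergence" story. A minimal sketch with made-up numbers, assuming NumPy and SciPy:

```python
# Fit a power law L(N) = a * N**(-alpha) + c to loss vs. model size,
# a standard first step when evaluating scaling/"emergence" claims.
# All data points below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, alpha, c):
    return a * n ** (-alpha) + c

n_params = np.array([1e6, 1e7, 1e8, 1e9, 1e10])   # hypothetical model sizes
losses = np.array([4.2, 3.3, 2.7, 2.3, 2.05])     # hypothetical eval losses

(a, alpha, c), _ = curve_fit(power_law, n_params, losses, p0=[10.0, 0.1, 1.0])
print(f"fitted exponent alpha = {alpha:.3f}, irreducible loss c = {c:.2f}")
```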
Thu 2/12 — Lightning Mini-Conf 2: Scaling & Emergence
Details TBD
Week 5 — Prediction vs. understanding
Tue 2/17 — No lecture (university holiday)
No class.
Thu 2/19 — Background reading and required viewing
Background paper: Serre, T., & Pavlick, E. (2025). From Prediction to Understanding: Will AI Foundation Models Transform Brain Science? Neuron. (Access via Brown Library proxy.)
Required viewing (watch both):
- Marcus, G. (2024). Keynote at AGI-24. Machine Learning Street Talk. Watch from ~5:00 to ~35:00.
- LeCun, Y. (2024). Objective-Driven AI. Ding Shum Lecture, Harvard CMSA. Watch the first ~36 minutes.
Paper response due Thu 2/19, 2:00pm: 📝 Submit on Canvas
Week 6 — Representation-level interpretability
Tue 2/24 — Lecture: Representation-Level Interpretability
Feature visualization, concept-based methods, sparse/dictionary approaches (incl. SAEs); what we can and can't reliably name in representations.
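For orientation, a minimal sparse autoencoder (SAE) sketch of the dictionary-learning idea: reconstruct activations through an overcomplete code with a sparsity penalty. PyTorch assumed; all dimensions and coefficients are hypothetical.

```python
# Minimal sparse autoencoder (SAE) sketch; shapes are hypothetical.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_act=512, d_dict=2048):
        super().__init__()
        self.encoder = nn.Linear(d_act, d_dict)   # activations -> sparse codes
        self.decoder = nn.Linear(d_dict, d_act)   # codes -> reconstruction

    def forward(self, acts):
        codes = torch.relu(self.encoder(acts))    # nonnegative feature codes
        return self.decoder(codes), codes

sae = SparseAutoencoder()
acts = torch.randn(64, 512)                       # stand-in for real activations
recon, codes = sae(acts)
# Reconstruction error plus an L1 penalty encourages sparse, nameable features.
loss = ((recon - acts) ** 2).mean() + 1e-3 * codes.abs().mean()
```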
Thu 2/26 — Lightning Mini-Conf 3: Representation Interpretability
Details TBD
Week 7 — Mechanistic interpretability
Tue 3/3 — Lecture: Mechanistic Interpretability
Circuits, causal interventions, and standards of evidence for mechanistic claims.
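A minimal sketch of one causal intervention, activation patching: cache a layer's activation from a "clean" run, patch it into a "corrupted" run, and ask how much of the clean behavior is restored. The toy model below is hypothetical; PyTorch forward hooks assumed.

```python
# Activation patching on a hypothetical toy model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
layer = model[1]                      # intervene on the ReLU's output

clean, corrupted = torch.randn(1, 16), torch.randn(1, 16)

cache = {}
h = layer.register_forward_hook(lambda m, i, o: cache.update(act=o.detach()))
clean_out = model(clean)              # caches the clean activation
h.remove()

# A forward hook that returns a tensor replaces the layer's output.
h = layer.register_forward_hook(lambda m, i, o: cache["act"])
patched_out = model(corrupted)        # corrupted input, clean activation
h.remove()

# If patching moves the output toward the clean run, this layer carries
# causally relevant information for the behavior under study.
print(clean_out, patched_out)
```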
Thu 3/5 — Lightning Mini-Conf 4: Mechanistic Interpretability
Details TBD
Week 8 — Neural alignment
Tue 3/10 — Lecture: Neural Alignment and Model-to-Brain Mapping
Predicting neural data across measurement modalities; encoding/decoding and representational similarity; what alignment can and cannot justify.
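A minimal representational similarity analysis (RSA) sketch: build a representational dissimilarity matrix (RDM) for the model and one for the brain over the same stimuli, then correlate them. The data below are random stand-ins; NumPy and SciPy assumed.

```python
# RSA sketch with random stand-ins for model features and neural responses.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
model_feats = rng.standard_normal((50, 512))   # 50 stimuli x model features
brain_resps = rng.standard_normal((50, 100))   # 50 stimuli x voxels/neurons

# Condensed RDMs: pairwise correlation distance between stimulus patterns.
model_rdm = pdist(model_feats, metric="correlation")
brain_rdm = pdist(brain_resps, metric="correlation")

rho, p = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RDM correlation: rho = {rho:.3f} (p = {p:.3g})")
```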
Thu 3/12 — Lightning Mini-Conf 5: Neural Alignment
Details TBD
Week 9 — Behavioral and cognitive alignment
Tue 3/17 — Lecture: Behavioral and Cognitive Alignment
Treating models as participants in cognitive tasks; behavioral signatures beyond accuracy (generalization, planning, decision making, cognitive control); confounds and best practices.
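A minimal sketch of one such best practice: compare item-level behavior rather than mean accuracy alone, since two systems can match on the average while disagreeing item by item. All data below are synthetic stand-ins.

```python
# Item-level behavioral comparison; all data are synthetic stand-ins.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_items = 40
human_acc = rng.uniform(0.3, 1.0, n_items)   # hypothetical per-item human accuracy
model_acc = np.clip(human_acc + rng.normal(0.0, 0.15, n_items), 0.0, 1.0)

# Matching means is weak evidence; item-level consistency is the stronger signature.
print("human mean:", human_acc.mean().round(2), "model mean:", model_acc.mean().round(2))
r, p = pearsonr(human_acc, model_acc)
print(f"item-level consistency: r = {r:.2f} (p = {p:.3g})")
```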
Thu 3/19 — Lightning Mini-Conf 6: Behavioral Alignment
Details TBD
Week 10 — Spring Break
Tue 3/24 — Spring Break
No class
Thu 3/26 — Spring Break
No class
Week 11 — Project studio
Tue 3/31 — Project Studio I
Project launch and evaluation design; in-class time for groups to plan, run pilot tests, and produce first results/figures.
Thu 4/2 — Project Studio II
Continue project work: complete runs and draft poster.
Week 12 — Project poster presentations
Tue 4/7 — Project Poster Mini-Conf A
Poster session (17 posters): students present project findings, followed by structured peer feedback and a synthesis discussion.
Thu 4/9 — Project Poster Mini-Conf B
Poster session (17 posters): students present project findings, followed by structured peer feedback and a synthesis discussion.
Week 13 — Guest lectures
Tue 4/14 — Guest lecture (TBD)
Details TBD
Thu 4/16 — Guest lecture: Rufin VanRullen
Frontier topics in NeuroAI: global workspace theory, consciousness, and deep learning.
Week 14 — Guest lectures
Tue 4/21 — Guest lecture (TBD)
Details TBD
Thu 4/23 — Guest lecture: Victor Boutin
Frontier topics in NeuroAI: generative models, energy-based models (EBMs), and cognitive science. Includes course wrap-up and final exam briefing.
Final Exam
Tuesday, May 12, 2026, 9:00am