Credit assignment under constraints in brains and machines

Jonathan Kadmon

The Hebrew University
Friday, January 16, 2026 at 12:00pm
Warren Hall room 205A

We do not know how the brain assigns credit to synapses during learning, but we do know that it almost certainly does not use backpropagation. Existing evidence instead points to low-dimensional, indirect error signals and to a diversity of synaptic plasticity rules that are often incompatible with gradient descent. In this talk, I will show that taking these constraints seriously yields new insights into how biological networks may learn and directly inspires new algorithms for training artificial networks.

I will focus on two such departures from standard machine learning. First, I will examine the dimensionality of error feedback. Using exact theory in linear networks together with simulations in modern architectures, I will show that effective learning is possible when layers receive only low-dimensional teaching signals, provided these are aligned with task-relevant directions. This perspective leads to new training algorithms that significantly reduce training compute, for example cutting the FLOPs of vision transformer (ViT) training by more than 20%, while revealing error dimensionality as an inductive bias that shapes learned representations.
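To make the notion of low-dimensional error feedback concrete, here is a minimal sketch (my own illustration, not the algorithm from the talk) of a two-layer linear network whose hidden layer receives its teaching signal through a fixed rank-k feedback path rather than the transposed forward weights used by backpropagation. All dimensions, feedback matrices, and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear network y = W2 @ W1 @ x trained on a linear teacher.
# All sizes and the learning rate are illustrative choices.
d_in, d_hid, d_out, k = 20, 50, 10, 2        # k = dimensionality of the error feedback
n, lr, steps = 500, 1e-2, 2000

X = rng.standard_normal((d_in, n))
T = rng.standard_normal((d_out, d_in))        # teacher mapping
Y = T @ X

W1 = 0.1 * rng.standard_normal((d_hid, d_in))
W2 = 0.1 * rng.standard_normal((d_out, d_hid))

# Fixed low-rank feedback path: the output error is compressed to k dimensions
# before reaching the hidden layer (replacing the W2.T used by backpropagation).
P = rng.standard_normal((k, d_out)) / np.sqrt(d_out)   # compresses the output error
B = rng.standard_normal((d_hid, k)) / np.sqrt(k)       # broadcasts it to the hidden layer

for _ in range(steps):
    H = W1 @ X
    E = W2 @ H - Y                            # full output error
    W2 -= lr * (E @ H.T) / n                  # output layer: ordinary gradient step
    delta = B @ (P @ E)                       # hidden layer sees only a k-dim error signal
    W1 -= lr * (delta @ X.T) / n

print("final mean squared error:", np.mean((W2 @ W1 @ X - Y) ** 2))
```

A random P is used here only for brevity; the abstract's point is that aligning the low-dimensional feedback with task-relevant output directions is what makes such compressed teaching signals effective.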

Second, I will turn to synaptic learning rules. Allowing mixtures of Hebbian, anti-Hebbian, and gradient-like plasticity naturally introduces non-gradient components into learning dynamics. I will show that these components can either destabilize learning or, in specific regimes, accelerate it by helping networks escape flat or saddle-like regions of the loss landscape.
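As a rough illustration of what a non-gradient plasticity mixture could look like (again my own construction, not the rules analyzed in the talk), the sketch below tags each synapse of a linear layer as Hebbian or anti-Hebbian and adds that term to a standard gradient step; the tagging, coefficients, and learning rate are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# One linear layer y = W @ x with a hypothetical mixed plasticity rule:
# a gradient term plus a per-synapse Hebbian or anti-Hebbian term.
d_out, d_in = 10, 20
W = 0.1 * rng.standard_normal((d_out, d_in))

# Each synapse is randomly tagged Hebbian (+1) or anti-Hebbian (-1).
sign = rng.choice([+1.0, -1.0], size=W.shape)

def mixed_update(W, x, y_target, lr=1e-2, g_coef=1.0, h_coef=0.1):
    y = W @ x
    grad_term = np.outer(y - y_target, x)    # gradient of 0.5 * ||y - y_target||^2
    hebb_term = sign * np.outer(y, x)        # +/- (post * pre), per synapse
    # hebb_term is not the gradient of the task loss, so it contributes a
    # non-gradient component to the weight dynamics.
    return W - lr * (g_coef * grad_term + h_coef * hebb_term)

# Example step on a random input/target pair.
x = rng.standard_normal(d_in)
y_target = rng.standard_normal(d_out)
W = mixed_update(W, x, y_target)
```

Whether such a non-gradient term destabilizes learning or helps the dynamics move off flat or saddle-like regions depends on the coefficients and the loss landscape, which is the regime question the talk addresses.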

Together, these results highlight how studying learning constraints observed in the brain can open new perspectives on both neural and artificial learning.