Scale-covariant neural representations of time

Marc Howard

Boston University
Wednesday, December 4, 2024 at 12:00pm
Warren Hall room 205A and via Zoom

A substantial body of work over more than a decade has converged on a tractable computational model of how the brain represents the time of past events in the ongoing firing of neurons. Populations of neurons in a variety of brain regions code for time, but with two distinct forms of receptive fields. So-called temporal context cells, observed in a variety of brain regions, respond to incoming stimuli and then relax at a continuous spectrum of rates. Temporal context cells can be understood as computing the real Laplace transform of the past leading up to the present, and the continuity of neural time constants gives rise to a continuous representation of the time of past events. So-called time cells, originally characterized in the hippocampus but now observed in a variety of brain regions, fire sequentially in response to a triggering stimulus, with temporal receptive fields that evenly tile the log time axis. This logarithmic choice of time constants renders the representation scale-covariant: convolutional neural networks operating over a logarithmic neural representation of time naturally show zero-shot generalization to rescaled inputs. Implications for computer vision will be discussed.
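The scale-covariance property mentioned above can be illustrated with a minimal sketch (not from the talk; the decay-rate grid and impulse timing below are illustrative assumptions). For a single impulse presented t seconds in the past, a temporal context cell with decay rate s carries exp(-s*t), the real Laplace transform of that event. If the decay rates are spaced evenly on a log axis, rescaling the elapsed time by a factor k merely translates the population pattern along the log-s axis by log(k):

```python
import numpy as np

# Decay rates spaced evenly on a log axis. The grid is chosen so that a
# factor-of-2 rescaling corresponds to exactly 4 grid steps (illustrative choice).
s = 0.01 * 2.0 ** (np.arange(64) / 4.0)

def population_state(t_since_event, s):
    """Laplace-domain population state for one impulse t_since_event seconds ago:
    each cell with decay rate s holds exp(-s * t)."""
    return np.exp(-s * t_since_event)

t = 3.0            # elapsed time since the event
k = 2.0            # rescale elapsed time by a factor of 2

state = population_state(t, s)
rescaled = population_state(k * t, s)

# Scale covariance: rescaling time by k shifts the pattern along the log-s grid
# by log2(k) / log2(grid ratio) = 4 cells, leaving its shape unchanged.
shift = int(round(np.log(k) / np.log(s[1] / s[0])))
print(shift)                                             # 4
print(np.allclose(rescaled[:-shift], state[shift:]))     # True
```

Because rescaled inputs only translate the pattern over log time rather than deform it, a convolutional (translation-equivariant) network applied over this axis responds to a slowed or sped-up input with a shifted copy of its original feature map, which is the zero-shot generalization the abstract describes.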