Seminars

11:00 am

Equilibrium Propagation in the Diffusive FitzHugh-Nagumo Model

Jack Kendall

Warren Hall room 205A and via Zoom (see note below to request the Zoom link)

The FitzHugh-Nagumo model with diffusive coupling is known to admit a variational (energy-based) formulation, a consequence of its underlying skew-gradient structure. We show that, because stationary solutions of the diffusive FitzHugh-Nagumo model are described by self-adjoint operators, the methods of equilibrium propagation for performing credit assignment can be applied. Further, for networks with the topology of a deep residual neural network, we show that the steady-state solutions also admit a Hamiltonian description, so the methods of Hamiltonian Echo Backpropagation can be applied as well. We end by deriving an explicit layer-wise Hamiltonian recurrence relation governing inference in such models.
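As a concrete illustration of the stationary solutions in question, the following minimal sketch (illustrative, not the speaker's code; all parameter values are assumptions, chosen in the non-oscillatory regime so that a stable steady state exists) relaxes a 1-D diffusive FitzHugh-Nagumo lattice to a steady state by explicit Euler integration. Equilibrium propagation would contrast such a free steady state with a second, weakly output-nudged one to estimate parameter gradients.

```python
import numpy as np

# Minimal sketch (illustrative, not the speaker's code): relax a 1-D
# diffusive FitzHugh-Nagumo lattice to a stationary solution by explicit
# Euler integration. All parameter values (a, b, eps, D, I) are
# assumptions, chosen in the non-oscillatory regime.

def relax_fhn(n=100, steps=20000, dt=0.01, D=1.0, a=0.7, b=0.8,
              eps=0.08, I=0.0, seed=0):
    rng = np.random.default_rng(seed)
    v = 0.1 * rng.standard_normal(n)   # fast (voltage-like) variable
    w = np.zeros(n)                    # slow (recovery) variable
    for _ in range(steps):
        lap = np.roll(v, 1) - 2 * v + np.roll(v, -1)  # periodic 1-D Laplacian
        dv = D * lap + v - v**3 / 3 - w + I
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
    # stationarity check: both time derivatives should be near zero
    lap = np.roll(v, 1) - 2 * v + np.roll(v, -1)
    res = max(np.abs(D * lap + v - v**3 / 3 - w + I).max(),
              np.abs(eps * (v + a - b * w)).max())
    return v, w, res

v_star, w_star, residual = relax_fhn()
print(f"stationarity residual: {residual:.2e}")
```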

To request the Zoom link, send an email to jteeters@berkeley.edu. Also indicate if you would like to be added to the Redwood Seminar mailing list.

12:00 pm

Beyond Interpolation: Automated Discovery of Symmetry Groups via Tensor Factorization

Ben Dongsung Huh

Warren Hall room 205A and via Zoom (see note below to request the Zoom link)

Standard deep learning models generalize via interpolation, relying on an implicit bias toward smooth functions to fit training data. While effective in-distribution, this approach often fails out of distribution (OOD) because it does not capture the underlying generative structure of the data. To achieve robust extrapolation, a model cannot merely approximate the training manifold; it must identify the global algebraic laws, formally defined as symmetry groups, that govern the data. Recovering these exact, full-rank structures requires a fundamental shift from simple interpolation to automated symmetry discovery.

In this talk, I will present a theoretical framework for the HyperCube model, which formulates this discovery as a differentiable tensor factorization problem. We analyze the model’s optimization landscape and prove that it carries an unusual inductive bias: in contrast to the typical implicit low-rank bias of deep learning, the HyperCube objective exerts a variational pressure toward unitary, full-rank representations. We show how this mechanism rigidly enforces discrete group axioms through continuous optimization, effectively recovering the exact algebraic structure hidden within the data.
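To make the factorization setup concrete, here is a minimal sketch of the general idea: the multiplication table of a small group, here Z/4, is encoded as a binary tensor T[i,j,k] = 1 iff g_i g_j = g_k, and three factors are fit to it by gradient descent. This is a plain CP-style decomposition, not the specific HyperCube parameterization or its regularizer; the choice of group, rank, and hyperparameters are all assumptions.

```python
import numpy as np

# Minimal sketch (not the HyperCube model itself): fit a CP-style
# three-factor decomposition T[i,j,k] ~ sum_a A[i,a] B[j,a] C[k,a]
# to the binary multiplication table of the cyclic group Z/4.
n, r, lr, steps = 4, 8, 0.01, 20000   # rank r = 2n is an assumption

T = np.zeros((n, n, n))
for i in range(n):
    for j in range(n):
        T[i, j, (i + j) % n] = 1.0    # group law: g_i g_j = g_{i+j mod n}

rng = np.random.default_rng(0)
A = 0.1 * rng.standard_normal((n, r))
B = 0.1 * rng.standard_normal((n, r))
C = 0.1 * rng.standard_normal((n, r))

for _ in range(steps):
    err = np.einsum('ia,ja,ka->ijk', A, B, C) - T   # reconstruction error
    gA = np.einsum('ijk,ja,ka->ia', err, B, C)      # dL/dA for L = 0.5*err^2
    gB = np.einsum('ijk,ia,ka->ja', err, A, C)
    gC = np.einsum('ijk,ia,ja->ka', err, A, B)
    A -= lr * gA
    B -= lr * gB
    C -= lr * gC

final_err = np.abs(np.einsum('ia,ja,ka->ijk', A, B, C) - T).max()
print(f"max reconstruction error: {final_err:.3e}")  # should approach 0
```

Per the abstract, the HyperCube objective, unlike this plain least-squares fit, additionally biases the learned factors toward unitary, full-rank group representations; this sketch only demonstrates the factorization target.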

To request the Zoom link, send an email to jteeters@berkeley.edu. Also indicate if you would like to be added to the Redwood Seminar mailing list.

12:00 pm

To be announced

Chris Rozell

Warren Hall room 205A and via Zoom (see note below to request the Zoom link)

To be announced.

To request the Zoom link, send an email to jteeters@berkeley.edu. Also indicate if you would like to be added to the Redwood Seminar mailing list.