Beyond Interpolation: Automated Discovery of Symmetry Groups via Tensor Factorization
Ben Dongsung Huh
Warren Hall, Room 205A, and via Zoom (see the note below to request the Zoom link)
Standard deep learning models generalize via interpolation, relying on an implicit bias toward smooth functions to fit the training data. While effective in-distribution, this approach often fails out-of-distribution (OOD) because it does not capture the underlying generative structure of the data. To extrapolate robustly, a model cannot merely approximate the training manifold; it must identify the global algebraic laws, formally defined as symmetry groups, that govern the data. Recovering these exact, full-rank structures requires a fundamental shift from simple interpolation to automated symmetry discovery.
In this talk, I will present a theoretical framework for the HyperCube model, which formulates this discovery as a differentiable tensor factorization problem. We analyze the model's optimization landscape and prove that it carries a distinctive inductive bias: in contrast to the typical implicit low-rank bias of deep learning, the HyperCube objective exerts a variational pressure toward unitary, full-rank representations. We show how this mechanism rigidly enforces the discrete group axioms through continuous optimization, effectively recovering the exact algebraic structure hidden within the data.
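
For intuition, the following minimal sketch shows what casting symmetry discovery as a differentiable tensor factorization can look like: the multiplication table of a small group (the cyclic group Z_5, used here purely as a stand-in) is encoded as a binary tensor and fit by gradient descent. The PyTorch setup, the factor names A, B, C, and the plain reconstruction loss are illustrative assumptions; they do not reproduce the HyperCube objective or the specific regularization analyzed in the talk.

    # Illustrative sketch only: CP-style factorization of a group
    # multiplication table, not the HyperCube objective itself.
    import torch

    n = 5  # group order; Z_5 under addition mod 5 as a toy example

    # Binary tensor T[i, j, k] = 1 iff element i composed with element j gives element k.
    T = torch.zeros(n, n, n)
    for i in range(n):
        for j in range(n):
            T[i, j, (i + j) % n] = 1.0

    # Three learnable factor matrices, one per tensor mode.
    A = torch.randn(n, n, requires_grad=True)
    B = torch.randn(n, n, requires_grad=True)
    C = torch.randn(n, n, requires_grad=True)

    opt = torch.optim.Adam([A, B, C], lr=1e-2)
    for step in range(5000):
        opt.zero_grad()
        # Reconstruction: T_hat[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
        T_hat = torch.einsum('ir,jr,kr->ijk', A, B, C)
        loss = ((T_hat - T) ** 2).mean()
        loss.backward()
        opt.step()

    print(f'final reconstruction loss: {loss.item():.2e}')

A plain squared-error loss like this only encourages fitting the table; the talk concerns how the HyperCube objective additionally biases the factors toward unitary, full-rank solutions so that the recovered structure satisfies the group axioms.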
—
To request the Zoom link, send an email to jteeters@berkeley.edu. Please also indicate whether you would like to be added to the Redwood Seminar mailing list.