Spiking Manifesto
Eugene Izhikevich
125 LKS and Zoom (see note below to request the zoom link)
Practically everything computers do, they do better, faster, and more power-efficiently than the brain. For example, a calculator performs numerical computations more energy-efficiently than any human. Yet modern AI models are a thousand times less efficient than the brain. These models rely on ever-larger dimensions to maximize their representational capacity, requiring GPUs to multiply huge matrices. In contrast, the brain’s spiking neural networks exhibit factorially explosive encoding capacity even at small sizes. They compute through the polychronization of spikes rather than explicit matrix-vector products, resulting in lower energy requirements. This manifesto proposes a framework for thinking about popular AI models in terms of spiking networks and polychronization, and for interpreting spiking activity as nature’s way of implementing look-up tables. It suggests a path toward converting AI models into a novel class of architectures with much smaller size yet combinatorially large representational capacity, offering the promise of a thousandfold improvement in performance. The presentation is based on Izhikevich (2025), https://arxiv.org/pdf/2512.11843
Bio:
Founder and CEO of SpikeCore, San Diego, CA
Founder and Chairman of the Board of Brain Corp, San Diego, CA
Founder and Editor-in-Chief of Scholarpedia – the peer-reviewed encyclopedia
—
To request the Zoom link, send an email to jteeters@berkeley.edu. Also indicate if you would like to be added to the Redwood Seminar mailing list.
A Small-Scale System for Autoregressive Program Synthesis Enabling Controlled Experimentation
Russell Webb
Warren 205A and Zoom (see note below to request the zoom link)
What research can be pursued with small models trained to complete true programs? Researchers typically study program synthesis with large language models (LLMs), which introduce issues such as uncertainty about what is in or out of distribution, poorly understood fine-tuning and tokenization effects, and high compute and storage demands for running experiments. We present a system called Cadmus, which includes an integer virtual machine (VM), a dataset composed of true programs spanning diverse tasks, and an autoregressive transformer model trained for under $200 of compute cost. The system can be used to study program completion, out-of-distribution representations, inductive reasoning, and instruction following in a setting where researchers have effective, affordable, fine-grained control over the training distribution and the ability to inspect and instrument models. Smaller models working on complex reasoning tasks enable instrumentation and investigations that may be prohibitively expensive with larger models. To demonstrate that these tasks are complex enough to be of interest, we show that Cadmus models outperform GPT-5 (achieving 100% accuracy versus 95% for GPT-5) even on the simple task of completing correct integer-arithmetic programs in our domain-specific language (DSL), while providing transparency into the dataset’s relationship to the problem. We also show that GPT-5 brings unknown priors into its reasoning process when solving the same tasks, a confounding factor that prevents the use of large-scale LLMs for investigations in which the training set’s relationship to the task must be fully understood.
—
To request the Zoom link, send an email to jteeters@berkeley.edu. Also indicate if you would like to be added to the Redwood Seminar mailing list.
To be announced
Chris Rozell
Warren Hall room 205A and via Zoom (see note below to request the zoom link)
To be announced.
—
To request the Zoom link, send an email to jteeters@berkeley.edu. Also indicate if you would like to be added to the Redwood Seminar mailing list.