As transistors shrink to nanoscale dimensions, it is becoming increasingly difficult to make the current computing paradigm work. At two dozen nanometers wide, a transistor’s “freeway” can carry only ten “lanes” of electron traffic. With so few lanes, a few “potholes” (dopant atoms introduced during fabrication) or “accidents” (electrons trapped during operation) may bring traffic to a complete halt, with disastrous consequences. To avoid disaster, the industry recently switched from planar transistors to three-dimensional ones. These transistors’ “double-decker freeway” made it possible to shrink the device’s width while increasing—rather than decreasing—the number of traffic lanes. Thus, the probability that traffic halts completely is kept vanishingly small. Going 3D, however, increases the fabrication process’s complexity. As a consequence, after decreasing exponentially for the past half century, the cost of a transistor rose for the very first time last year.
I’ll make a case for accommodating heterogeneity (potholes) and stochasticity (accidents) by combining analog computation with digital communication. It appears that the brain uses this unique mix of analog and digital techniques to deal with traffic jams in its ion channels, biology’s single-lane nanoscale transistors. To support my case, I’ll present a Kalman-filter-based brain-machine interface and a three-degree-of-freedom robot-arm controller implemented on a chip that combines analog computation with digital communication much as the brain does. A formal theory for approximating arbitrary nonlinear dynamical systems with networks of spiking neurons was used to derive the weights applied to synaptic inputs (analog computation) triggered by spikes that the chip’s silicon neurons receive from each other (digital communication). This neuromorphic computing paradigm was robust to heterogeneity (transistor-to-transistor dopant fluctuations) and stochasticity (randomly dropped spikes), suggesting that it may well prove more cost-effective than the current computing paradigm as transistors scale down to a few nanometers.
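The abstract does not name the formal theory, but the approach it describes—solving for decoding weights so that a heterogeneous population of neurons collectively represents a variable—can be illustrated with a minimal least-squares sketch. Everything below (the rectified-linear tuning curves, the gains, biases, and encoder signs) is an illustrative assumption, not the chip’s actual model:

```python
import numpy as np

# Illustrative sketch: derive weights that decode a scalar x from the
# activity of a heterogeneous neuron population (tuning curves vary
# neuron to neuron, standing in for transistor-to-transistor mismatch).
rng = np.random.default_rng(0)
n_neurons, n_samples = 50, 200

# Each neuron gets a random preferred direction, gain, and bias.
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def rates(x):
    """Rectified-linear firing rates of all neurons for scalar input x."""
    return np.maximum(0.0, gains * (encoders * x) + biases)

# Sample the represented variable over its range and solve a least-squares
# problem for decoding weights d, so that sum_i d_i * a_i(x) approximates x.
xs = np.linspace(-1.0, 1.0, n_samples)
A = np.array([rates(x) for x in xs])          # shape: (samples, neurons)
d, *_ = np.linalg.lstsq(A, xs, rcond=None)

# Decode: despite the heterogeneous tuning, the weighted sum tracks x.
x_hat = A @ d
rmse = float(np.sqrt(np.mean((x_hat - xs) ** 2)))
```

Because the weights are fit to whatever tuning curves the population happens to have, the decoded estimate degrades gracefully rather than catastrophically when individual neurons are mismatched or drop out, which is the robustness property the abstract attributes to the neuromorphic approach.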
Bio:
Kwabena Boahen received the B.S. and M.S.E. degrees in electrical and computer engineering from Johns Hopkins University, Baltimore, MD, both in 1989, and the Ph.D. degree in computation and neural systems from the California Institute of Technology, Pasadena, CA, in 1997. He was on the bioengineering faculty of the University of Pennsylvania from 1997 to 2005, where he held the first Skirkanich Term Junior Chair. He is presently Professor of Bioengineering at Stanford University, with a courtesy appointment in Electrical Engineering. He is the founding director of Stanford’s Brains in Silicon Laboratory, which develops silicon integrated circuits that emulate the way neurons compute, linking the seemingly disparate fields of electronics and computer science with neurobiology and medicine.
Prof. Boahen’s contributions to the field of neuromorphic engineering include a silicon retina that could be used to give the blind sight, a self-organizing chip that emulates the way the developing brain wires itself up, and a mixed analog-digital hardware platform (Neurogrid) that simulates a million cortical neurons in real time—rivaling a supercomputer while consuming only a few watts. He has received several distinguished honors, including a Fellowship from the Packard Foundation (1999), a CAREER award from the National Science Foundation (2001), a Young Investigator Award from the Office of Naval Research (2002), a Pioneer Award from the National Institutes of Health (2006), and a Transformative Research Award from the National Institutes of Health (2011). He is a Fellow of the Institute of Electrical and Electronics Engineers (2015) and of the American Institute for Medical and Biological Engineering (2015). His 2007 TED talk, “A computer that works like the brain”, has been viewed half a million times.