Background
As an undergraduate at UCSB, I majored in biophysical chemistry and computer science, with emphases in genetics and computer architecture, respectively. I was initially focused on synthetic biology and enzyme design but later pivoted to neuromorphic computing, focusing on temporal logic paradigms and spiking hardware technologies. After the pivot to CS/EE, I spent a great deal of time researching learning rules for temporal, spiking encoding schemes and, in the process, became interested in the problems of transformation invariance and one-shot learning. Through this line of research, I learned about the principles of concept manifolds in biology and, from there, Bruno Olshausen’s work on form and motion pathways in the visual system as well as Pentti Kanerva’s book Sparse Distributed Memory, which, along with Tony Plate’s thesis, laid the foundation for the field of Hyperdimensional Computing (HDC)/Vector Symbolic Architectures (VSAs). For the rest of my time as an undergraduate, I focused on developing a computational paradigm combining temporal, delay-based computing with HDC and on designing hardware architectures capable of implementing such designs.
After graduating from UCSB, I went to work at Lawrence Berkeley National Laboratory (LBL) in the Computer Architecture Group (CAG), doing neuromorphic hardware-software co-design. There I focused on designing energy-efficient high-performance computing (HPC) systems with post-Moore (non-CMOS) technologies, primarily superconducting, spiking single flux quantum (SFQ) hardware. My work remained mostly centered on developing dedicated architectures for HDC/VSA pattern-matching algorithms in highly area-, energy-, and bandwidth-constrained applications such as edge and intermittent computing.
I decided to do a PhD in neuroscience, as opposed to CS/EE, as I feel that the future of computing lies in biological neural systems. The human brain runs on ~20 W yet outperforms cutting-edge AI and supercomputers on a huge assortment of tasks, despite processing information at a vastly slower rate. Neural circuits are also highly robust and flexible, capable of sustaining massive amounts of damage before functionality is compromised and of operating correctly over an incredibly broad range of architectural and biophysical parameters. I’m very interested in studying the architectural and algorithmic principles underlying these capabilities in neural circuits.
Outside of my PhD research, I’m a passionate technical diver specializing in cave and deep diving. I’ve explored multiple underwater cave systems in Thailand, reaching a maximum depth of 102 m on open circuit. I also love drawing, snowboarding, backpacking, and exploring.