On-device learning has emerged as a prevailing trend that enables intelligence on edge devices for IoT applications. However, multiple gaps remain before large-scale deployments in the real world, including dynamic and changing environments, limited hardware resources, and heterogeneous sensor modalities. Hyperdimensional Computing (HDC) is an emerging computing paradigm inspired by the brain. In HDC, all data is represented using high-dimensional, low-precision (often binary) vectors known as “hypervectors,” which can be manipulated through simple element-wise operations to perform tasks like memorization and learning. HDC is well suited to edge applications due to its single-pass training capability and hardware efficiency.
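To make the element-wise operations concrete, here is a minimal sketch of the core binary-HDC primitives: binding via XOR, bundling via majority vote, and Hamming similarity. The dimensionality and function names are illustrative, not tied to any specific HDC library.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; random hypervectors are near-orthogonal

def random_hv():
    """Draw a random binary hypervector."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Binding via element-wise XOR: associates two concepts, and is invertible."""
    return a ^ b

def bundle(hvs):
    """Bundling via element-wise majority vote: superposes a set of hypervectors."""
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

def similarity(a, b):
    """Normalized Hamming similarity in [0, 1]; ~0.5 for unrelated hypervectors."""
    return 1.0 - np.count_nonzero(a ^ b) / D

# A bundled hypervector stays similar to each of its components,
# which is what makes simple memorization possible.
x, y, z = random_hv(), random_hv(), random_hv()
m = bundle([x, y, z])
```

Because every primitive is a single element-wise pass over low-precision vectors, these operations map naturally onto resource-constrained edge hardware.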
In this talk, I will present two of our recent works that use HDC to address the above-mentioned challenges for real-world IoT applications at the edge.
In the first work, we design and deploy an on-device lifelong learning system called LifeHD that continuously adapts to dynamic environments. LifeHD is designed for edge scenarios with (i) streaming data input, (ii) lack of supervision, and (iii) limited on-board resources. We utilize a two-tier associative memory organization to intelligently store and manage HDC vectors, which represent historical patterns as cluster centroids. We implement LifeHD on off-the-shelf edge platforms and show that LifeHD improves unsupervised clustering accuracy by up to 74.8% compared to state-of-the-art NN-based unsupervised lifelong learning baselines, with as much as 34.3x better energy efficiency.
In the second work, we design MultimodalHD, a novel solution for Multimodal Federated Learning (MFL) using HDC. MultimodalHD uses a static HD encoder to encode raw sensory data from different modalities into high-dimensional, low-precision hypervectors. These multimodal hypervectors are then fed to an attentive fusion module on local clients and a proximity-based aggregation strategy on the cloud. Results show that MultimodalHD delivers comparable (if not better) accuracy compared to state-of-the-art MFL algorithms, while being 2x–8x more efficient in training time.
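A static HD encoder can be sketched as a fixed random projection followed by sign binarization. Because the projection is non-trainable and fully determined by a shared seed, clients never need to exchange encoder weights; the projection style, seeds, and modality setup below are illustrative assumptions, not MultimodalHD's exact encoder.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(2)

def make_encoder(n_features, seed):
    """Static (non-trainable) HD encoder: fixed random projection + sign binarization.

    The projection matrix is generated once from a seed that all parties
    agree on, so the encoder itself is never communicated or updated.
    """
    proj = np.random.default_rng(seed).standard_normal((n_features, D))
    def encode(x):
        return (x @ proj > 0).astype(np.uint8)
    return encode

# One encoder per modality, e.g. a 3-axis accelerometer and a 3-axis gyroscope
# (hypothetical modalities for illustration).
enc_acc = make_encoder(3, seed=42)
enc_gyr = make_encoder(3, seed=43)

sample_acc = rng.standard_normal(3)
sample_gyr = rng.standard_normal(3)
hv_acc, hv_gyr = enc_acc(sample_acc), enc_gyr(sample_gyr)
```

The resulting per-modality hypervectors all live in the same D-dimensional binary space, which is what allows a downstream fusion module to combine heterogeneous modalities uniformly. The encoding is also locality-preserving: nearby inputs map to similar hypervectors.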
Bio: Xiaofan Yu is a Ph.D. candidate in the Department of Computer Science and Engineering at the University of California, San Diego, under the supervision of Prof. Tajana Rosing. Her research focuses on the intersection of embedded systems and edge computing, with an emphasis on advancing next-generation on-device intelligence for Internet of Things (IoT) applications. Xiaofan has successfully collaborated with multiple faculty members and industrial labs, and her work has earned her recognition as an EECS Rising Star in 2022, a CPS Rising Star in 2023, and an ML & Systems Rising Star in 2024. Xiaofan received her B.S. degree from Peking University, China in 2018 and her M.S. degree from the University of California, San Diego in 2020.