Exploring Vector Symbolic Architectures for Applications in Computer Vision and Signal Processing

Kenny Schlegel

Chemnitz University of Technology, Chemnitz, Germany
Wednesday, July 6, 2022 at 12:00pm
Evans Hall Room 560 and via Zoom (see below to obtain Zoom link)

Vector Symbolic Architectures (VSAs) combine a high-dimensional vector space with a set of carefully designed operators to perform symbolic computations with large numerical vectors. Major goals are exploiting their representational power and their ability to deal with fuzziness and ambiguity. The basis of a VSA is a set of high-dimensional vectors that represent entities or data as symbols. From these vectors and the operators, compositional structures can be created without losing the underlying original symbols and their relations. VSA principles have already been applied in several applications, mostly with the simple structure of superimposed role-filler pairs.
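
To make the role-filler mechanism concrete, here is a minimal Python sketch using one common VSA flavour, the MAP architecture (bipolar vectors; element-wise multiplication for binding, addition for bundling). The dimensionality, symbol names, and decoding by nearest-neighbour search are illustrative choices, not the specific setup of any of the cited papers.

import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (illustrative)

def random_hv():
    # Random bipolar hypervector in {-1, +1}^D
    return rng.choice([-1, 1], size=D)

# Atomic symbols: roles and fillers
roles   = {name: random_hv() for name in ["color", "shape"]}
fillers = {name: random_hv() for name in ["red", "circle"]}

# Binding = element-wise multiplication, bundling = element-wise addition.
# The record "color=red, shape=circle" becomes a single D-dimensional vector.
record = roles["color"] * fillers["red"] + roles["shape"] * fillers["circle"]

# Unbinding: multiplying with a role vector yields a noisy copy of its filler,
# which is recovered by nearest-neighbour search over the known fillers.
query = record * roles["color"]
best = max(fillers, key=lambda name: np.dot(query, fillers[name]) / D)
print(best)  # -> "red"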

In this talk, I will first give an overview of our VSA comparison [1], in which we experimentally compared different existing VSA implementations.
Second, I will describe our experience applying VSAs in computer vision and signal processing, specifically visual place recognition and time series classification. There, we also build on the structure of superimposed role-filler pairs and use it to improve existing algorithms. For example, in visual place recognition, we can enrich the descriptor vector of an image with additional information, such as spatial-semantic information, without increasing the size of the resulting vector representation [2] (see the sketch below). This saves computational cost and can improve performance.
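
The following Python sketch illustrates only the underlying VSA idea behind such descriptor enrichment, not the exact pipeline of [2]: role-bound semantic components are superimposed onto a holistic image descriptor, so the combined representation keeps the original dimensionality and is still compared with a single dot product. All inputs, class names, and the dimensionality are hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(1)
D = 4096  # descriptor dimensionality stays fixed (illustrative)

def random_hv():
    return rng.choice([-1.0, 1.0], size=D)

def normalize(v):
    return v / np.linalg.norm(v)

# Hypothetical inputs: a holistic image descriptor plus per-class semantic
# descriptors (e.g., pooled features of "building" and "vegetation" regions).
holistic = normalize(rng.standard_normal(D))
semantic = {"building":  normalize(rng.standard_normal(D)),
            "vegetation": normalize(rng.standard_normal(D))}

# Fixed random role vectors, one per semantic class (shared across all images).
roles = {name: random_hv() for name in semantic}

# Enriched descriptor: holistic part plus role-bound semantic parts,
# superimposed into a single vector of unchanged dimensionality D.
enriched = normalize(holistic + sum(roles[c] * semantic[c] for c in semantic))

# Place recognition then compares the enriched descriptors of two images
# with the usual cosine/dot-product similarity, at no extra storage cost.
def similarity(a, b):
    return float(np.dot(a, b))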
In another application, we integrated VSA principles into a state-of-the-art time series classification algorithm to provide an explicit global time encoding [3]. This prevents the original method from failing in special cases where global temporal context is required to distinguish signals. Moreover, this temporal encoding can also improve results on multiple datasets from a standard time series classification benchmark ensemble.
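
As a rough illustration of explicit time encoding (not the exact HDC-MiniROCKET pipeline of [3]), the sketch below binds a per-timestep feature vector to a timestamp hypervector obtained by fractional power encoding and bundles all steps into one vector; two series with the same feature content but different temporal order then yield different encodings. The feature shapes, scale parameter, and dimensionality are assumptions for the example.

import numpy as np

rng = np.random.default_rng(2)
D = 2048  # dimensionality (illustrative)

# FHRR-style base vector of random phases; a timestamp t in [0, 1] is encoded
# by fractional power encoding, i.e., scaling every phase proportionally to t.
base_phases = rng.uniform(-np.pi, np.pi, size=D)
scale = 10.0  # controls how fast similarity decays with temporal distance

def time_hv(t):
    # Complex unit vector whose phases grow linearly with the timestamp t
    return np.exp(1j * scale * t * base_phases)

def encode_series(features):
    # features: array of shape (T, D), one (hypothetical) feature vector per step.
    # Binding (element-wise complex multiplication) attaches the timestamp,
    # bundling (summation) superimposes all steps into a single vector.
    timestamps = np.linspace(0.0, 1.0, features.shape[0])
    return sum(time_hv(t) * f for t, f in zip(timestamps, features))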

[1] Schlegel, K., Neubert, P. & Protzel, P. (2021) A comparison of Vector Symbolic Architectures. Artificial Intelligence Review. DOI: 10.1007/s10462-021-10110-3, Online: https://link.springer.com/article/10.1007/s10462-021-10110-3

[2] Neubert, P., Schubert, S., Schlegel, K. & Protzel, P. (2021) Vector Semantic Representations as Descriptors for Visual Place Recognition. In Proc. of Robotics: Science and Systems (RSS). DOI: 10.15607/RSS.2021.XVII.083, Online: http://www.roboticsproceedings.org/rss17/p083.pdf

[3] Schlegel, K., Neubert, P. & Protzel, P. (2022) HDC-MiniROCKET: Explicit Time Encoding in Time Series Classification with Hyperdimensional Computing. In Proc. of International Joint Conference on Neural Networks (IJCNN), to appear. Preprint: https://arxiv.org/pdf/2202.08055.pdf