Symbolic binding with distributed representations

Friedrich Sommer

Redwood Center for Theoretical Neuroscience, UC Berkeley
Friday, October 30, 2020 at 11:00pm
Zoom

Connectionism is a movement in psychology that began in the 1980s, aiming to understand cognition using neural network models operating on distributed representations. Connectionism not only greatly advanced traditional neural networks, e.g., by working out error backpropagation; it also identified their limitations. Importantly, traditional neural networks lack a full set of operations required for symbolic reasoning; in particular, they lack an operation for variable binding, such as key-value binding. Connectionists offered two types of binding operations for vector representations: (i) the outer product, or Tensor Product Representation (TPR), in which the representation dimension grows exponentially with the number of bound items; (ii) dimensionality-preserving “reduced” representations, such as the Hadamard product or circular convolution used in vector-symbolic architectures (VSA). We have investigated relationships between the different types of connectionist binding operations. Further, we have explored efficient binding methods for sparse representations, which are relevant for neuromorphic computing and neuroscience modeling.
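To make the two families of binding operations concrete, here is a minimal NumPy sketch (illustrative only, not code from the talk or paper; all names and parameters are assumptions): binding by outer product squares the dimension, while the Hadamard product and circular convolution keep it fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
key, value = rng.standard_normal(N), rng.standard_normal(N)

# (i) Tensor Product Representation: one binding already needs N**2 dimensions
tpr = np.outer(key, value)                    # shape (N, N)

# (ii) Dimensionality-preserving "reduced" bindings used in VSA
hadamard = key * value                        # shape (N,)
circ_conv = np.real(np.fft.ifft(np.fft.fft(key) * np.fft.fft(value)))  # shape (N,)
```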

First, I will describe theoretical results obtained by using compressed sensing (CS) conditions as a tool to create equivalent pairs of sparse and dense vectors (a numerical sketch follows the list):
– Under CS conditions, the Hadamard product of dense vectors is mathematically equivalent to the tensor product of the corresponding sparse vectors. If CS conditions are not met, the Hadamard product is a lossy representation of the TPR.
– Under CS conditions, the concatenation of sparse vectors is equivalent to the (protected) sum operation of the corresponding dense vectors commonly used in VSA. Thus, concatenation is essentially a bundling operation, not a binding operation.
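Both equivalences can be checked numerically. The sketch below is a minimal illustration under assumed CS-style Gaussian encodings (the matrix Phi, the permutation-based protection, and all parameters are my assumptions, not the authors' construction): it verifies that (a) the Hadamard product of dense encodings equals a face-splitting (row-wise Khatri-Rao) measurement of the tensor product of the underlying sparse vectors, and (b) concatenating the sparse vectors corresponds to a protected sum of their dense encodings.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 200, 60, 3            # sparse dim, dense dim, sparsity (illustrative)

def sparse_vec(N, K, rng):
    """Random K-sparse vector of dimension N."""
    v = np.zeros(N)
    v[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    return v

a, b = sparse_vec(N, K, rng), sparse_vec(N, K, rng)

# CS-style random measurement matrix mapping sparse -> dense
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x, y = Phi @ a, Phi @ b         # dense counterparts of a and b

# (a) Hadamard product of dense vectors = compressed measurement of a (x) b:
#     (Phi a)_i (Phi b)_i = sum_{j,k} Phi_ij Phi_ik a_j b_k
face_split = np.einsum('ij,ik->ijk', Phi, Phi).reshape(M, N * N)
assert np.allclose(x * y, face_split @ np.kron(a, b))

# (b) Concatenation of sparse vectors = protected sum of dense vectors,
#     here protecting the second summand with a random permutation Pi
Pi = np.eye(N)[rng.permutation(N)]
lhs = np.hstack([Phi, Phi @ Pi]) @ np.concatenate([a, b])
assert np.allclose(lhs, x + Phi @ (Pi @ b))
```

Since a (x) b is only K**2-sparse, it can in principle be recovered from the M-dimensional Hadamard product by standard sparse decoding when the CS conditions hold; when they fail, the Hadamard product retains only a lossy summary of the TPR.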

Second, I will describe comparisons of different methods for dimensionality- and sparsity-preserving binding. Particularly promising is circular convolution on sparse distributed representations with block structure, similar to Laiho et al. (2015). In sparse block codes, each block is one-hot, which can be achieved by local competition, as in orientation hypercolumns in visual cortex. We have tested this binding method on a symbolic reasoning task and a classification problem.
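A minimal sketch of this style of binding, in the spirit of sparse block codes (the details and parameter choices here are mine, not the implementation from the talk): with one-hot blocks, blockwise circular convolution amounts to adding the active indices modulo the block length, so the bound vector is again a sparse block code and can be unbound exactly by blockwise circular correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
B, L = 8, 16                    # number of blocks, block length (illustrative)

def random_block_code(B, L, rng):
    """Sparse block code: exactly one active unit in each of B blocks."""
    v = np.zeros((B, L))
    v[np.arange(B), rng.integers(L, size=B)] = 1.0
    return v

def bind(u, v):
    """Blockwise circular convolution (via FFT per block)."""
    return np.real(np.fft.ifft(np.fft.fft(u, axis=1) * np.fft.fft(v, axis=1), axis=1))

def unbind(w, v):
    """Blockwise circular correlation inverts binding with v."""
    return np.real(np.fft.ifft(np.fft.fft(w, axis=1) * np.conj(np.fft.fft(v, axis=1)), axis=1))

key, value = random_block_code(B, L, rng), random_block_code(B, L, rng)
bound = bind(key, value)        # again one-hot per block: indices add mod L
assert np.allclose(unbind(bound, key), value)   # exact recovery
```

Note that binding preserves both the dimensionality (B * L) and the sparsity (one active unit per block), which is what makes the scheme attractive for the neuromorphic and neuroscience settings mentioned above.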

arXiv link: https://arxiv.org/abs/2009.06734