(Slides) What is a reasonable architecture for an algorithmic view of the mind? Is it akin to a single giant deep network, or is it more like several small modules connected by some graph? How is memory captured: is it some kind of lookup table? Take a simple event like meeting someone over coffee: how would your mind remember who the person was and what was discussed? Such information needs to be organized and indexed so that it can be quickly accessed in the future, say if I met the same person again.
We propose that information about such events and inputs is stored as a sketch: a compact representation, produced by a random subspace embedding, from which the original input and its basic statistics can be approximately reconstructed up to some level of accuracy. The sketching mechanism implicitly captures high-level object-oriented abstractions such as classes, attributes, references, type information, and modules in a knowledge graph, without these ideas being explicitly built into its operations. We will see how ideas based on sketching lead to an initial version of a very simplified architecture for an algorithmic view of the mind. We will also see how a simplified implementation of a Neural Memory that stores sketches using locality-sensitive hashing can almost double the capacity of BERT with only a small amount of Neural Memory, while adding less than 1% to the FLOPs. (Based on https://arxiv.org/abs/1905.12730, http://proceedings.mlr.press/v130/panigrahy21a/panigrahy21a.pdf, https://arxiv.org/abs/1910.06718, https://theory.stanford.edu/~rinap/papers/sketchmodules10.pdf.)
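To make the sketching idea concrete, here is a minimal illustration (not the authors' implementation) of storing an input as a sketch via a random subspace embedding and then approximately recovering it; the choice of a Gaussian projection matrix, the dimensions, and the linear decoding rule are assumptions made only for this example.

```python
# Minimal sketching example: project an input vector into a random subspace
# (the "sketch"), then approximately reconstruct it with the transpose.
# Dimensions, projection, and decoder are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

d, k = 1024, 256                                     # input dim, sketch dim (k < d)
R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))   # random projection; E[R.T @ R] = I

x = rng.normal(size=d)      # an "event" encoded as a vector
sketch = R @ x              # compact representation that would be stored in memory

x_hat = R.T @ sketch        # crude linear decoding of the sketch
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.2f}")   # approximate, not exact
```

The point is only that a low-dimensional random embedding retains enough information to recover the input and its coarse statistics approximately, which is the property the talk builds on.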
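In the same spirit, the sketch below shows a toy locality-sensitive-hashing memory: hidden states are hashed into buckets by random hyperplanes (SimHash), and stored vectors from the matching bucket are mixed back into the representation. The hash width, bucket handling, and mixing rule are assumptions for illustration and are not the BERT experiment described in the referenced papers.

```python
# Toy LSH-based memory: write hidden states into hash buckets, read back the
# average of the matching bucket. All design choices here are illustrative.
import numpy as np
from collections import defaultdict

class LSHMemory:
    def __init__(self, dim, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))   # random hyperplanes (SimHash)
        self.buckets = defaultdict(list)               # hash code -> stored vectors

    def _hash(self, v):
        return tuple((self.planes @ v > 0).astype(int))

    def write(self, v):
        self.buckets[self._hash(v)].append(v)

    def read(self, v):
        hits = self.buckets.get(self._hash(v), [])
        if not hits:
            return np.zeros_like(v)
        return np.mean(hits, axis=0)                   # cheap lookup, no large matmuls

dim = 64
mem = LSHMemory(dim)
rng = np.random.default_rng(1)
h = rng.normal(size=dim)                    # a hidden state seen during one pass
mem.write(h)                                # store it in the memory
query = h + 0.05 * rng.normal(size=dim)     # a similar state encountered later
out = query + mem.read(query)               # retrieved memory mixed back in
```

Because reads and writes reduce to a hash and a bucket lookup, a memory of this kind can be attached to a model with only a small additive cost, which is the intuition behind the "less than 1% extra FLOPs" claim.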