Now out in the November issue of the Journal of Vision, a new paper by Redwood alum Dylan Paiton and collaborators investigates how competition through network interactions can lead to adversarial robustness and selectivity in sparse coding neural networks. The authors compare layers defined by the Locally Competitive Algorithm (LCA) with purely feedforward layers, finding that LCA yields representations that are more adversarially robust and more selective for semantically meaningful features in the data. This work further establishes the benefit of using recurrence and inhibition for sparse inference, mechanisms missing from many popular neural network architectures. You can view this paper at the link above or by visiting our publications page.
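
For readers unfamiliar with the LCA, the core idea is that units compete through lateral inhibition while settling on a sparse code. Below is a minimal illustrative sketch of that dynamic, not the paper's implementation: the dictionary `Phi`, input `x`, threshold `lam`, and time constant `tau` are all toy assumptions chosen for demonstration.

```python
import numpy as np

# Toy sketch of Locally Competitive Algorithm (LCA) sparse inference.
# All sizes and hyperparameters below are illustrative assumptions,
# not values from the paper.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((16, 32))
Phi /= np.linalg.norm(Phi, axis=0)      # unit-norm dictionary elements
x = rng.standard_normal(16)             # toy input signal

lam, tau, n_steps = 0.1, 10.0, 200
b = Phi.T @ x                           # feedforward drive to each unit
G = Phi.T @ Phi - np.eye(32)            # lateral inhibition (competition) matrix


def soft_threshold(u, lam):
    """Sparsifying activation: small potentials produce zero output."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)


u = np.zeros(32)                        # membrane potentials
for _ in range(n_steps):
    a = soft_threshold(u, lam)          # active units inhibit their neighbors
    u += (b - u - G @ a) / tau          # leaky integration with competition

a = soft_threshold(u, lam)
print("active units:", int((a != 0).sum()), "of", a.size)
```

The key contrast with a purely feedforward layer is the `G @ a` term: active units suppress others that code for overlapping features, so the network settles on a small, competitive subset rather than activating every unit whose feedforward drive exceeds threshold.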