Lee CW, Olshausen BA (1996). A nonlinear Hebbian network that learns to detect disparity in random-dot stereograms. Neural Computation, 8: 545-566.

An intrinsic limitation of linear Hebbian networks is that they can learn only from the linear pairwise correlations within an input stream. To explore what higher-order structure can be learned with a nonlinear Hebbian network, we constructed a model network containing a simple form of nonlinearity and applied it to the problem of learning to detect the disparities present in random-dot stereograms. The network consists of three layers, with nonlinear, sigmoidal activation functions in the second-layer units. The nonlinearities allow the second layer to transform the pixel-based representation of the input into a new representation based on coupled pairs of left-right inputs. The third layer of the network then clusters the patterns occurring on the second-layer outputs according to their disparity via a standard competitive learning rule. Analysis of the network dynamics shows that the second-layer units' nonlinearities interact with the Hebbian learning rule to expand the region over which pairs of left-right inputs are stable. The learning rule is neurobiologically inspired and plausible, and the model may shed light on how the nervous system learns to use coincidence detection in general.
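The architecture described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: the stereogram generator, layer sizes, learning rates, and the Oja-style weight normalization are all assumptions chosen to keep the sketch self-contained. It shows the three-layer layout the abstract describes: sigmoidal second-layer units trained with a Hebbian rule on left-right pixel pairs, and a third layer trained with standard competitive (winner-take-all) learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_stereogram(n_pixels, disparity):
    """Toy random-dot line stereogram: the right image is the left
    image circularly shifted by `disparity` pixels (an assumption
    for illustration, not the paper's stimulus generator)."""
    left = rng.integers(0, 2, n_pixels).astype(float)
    right = np.roll(left, disparity)
    return left, right

N = 16                 # pixels per eye (assumed)
DISPARITIES = [0, 1]   # disparities presented to the network (assumed)
H = 8                  # second-layer sigmoidal units (assumed)
K = len(DISPARITIES)   # third-layer competitive units

W1 = rng.normal(0.0, 0.1, (H, 2 * N))  # input -> sigmoidal layer
W2 = rng.normal(0.0, 0.1, (K, H))      # sigmoidal -> competitive layer
eta1, eta2 = 0.05, 0.1

for _ in range(2000):
    d = rng.choice(DISPARITIES)
    left, right = make_stereogram(N, d)
    x = np.concatenate([left, right])   # concatenated left-right input
    h = sigmoid(W1 @ x)                 # nonlinear second-layer response
    # Hebbian update; row-wise normalization (Oja-like, an assumption
    # here) keeps the weights bounded.
    W1 += eta1 * np.outer(h, x)
    W1 /= np.linalg.norm(W1, axis=1, keepdims=True)
    # Standard competitive learning: the winning third-layer unit
    # moves its weight vector toward the current second-layer pattern,
    # clustering patterns by disparity.
    win = int(np.argmax(W2 @ h))
    W2[win] += eta2 * (h - W2[win])
```

Under these assumptions, each third-layer weight vector drifts toward the centroid of the second-layer patterns it wins, so units come to specialize on disparity classes; the actual stability analysis of the left-right pairings is worked out in the paper itself.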