Recent advances in volume electron microscopy make it possible to image neuronal tissue volumes containing hundreds of thousands of neurons at sufficient resolution to discern even the finest neuronal processes. Accurate 3-D segmentation of the densely packed processes in these petavoxel-scale volumes is the key bottleneck in reconstructing large-scale neural circuits.
We introduce a new machine learning approach for image segmentation, based on a joint energy model over image features and novel local binary shape descriptors. These descriptors compactly represent rich shape information at multiple scales, including interactions between multiple objects. Our approach, which does not rely on any hand-designed features, reflects the inherent combinatorial nature of dense image segmentation problems. We propose efficient algorithms for learning deep neural networks to model the joint energy, and for local optimization of this energy in the space of supervoxel agglomerations.
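To make these ideas concrete, the following is a minimal, hypothetical sketch rather than the paper's actual method: it assumes a "local binary shape descriptor" records, for a fixed set of point-pair offsets around a center voxel, whether the offset points share the center's segment label in a candidate segmentation (offsets at several radii would capture multiple scales), and it assumes the energy over supervoxel agglomerations is optimized by greedily accepting energy-decreasing merges. The names `binary_shape_descriptor`, `greedy_agglomeration`, and `energy_fn` are illustrative placeholders.

```python
import numpy as np

def binary_shape_descriptor(seg, center, offset_pairs):
    """Illustrative descriptor (assumption, not the paper's definition):
    one bit per offset pair, set when both offset points lie in the
    same segment as the center voxel of the candidate segmentation `seg`."""
    cz, cy, cx = center
    label = seg[cz, cy, cx]
    bits = []
    for (dz1, dy1, dx1), (dz2, dy2, dx2) in offset_pairs:
        p1 = seg[cz + dz1, cy + dy1, cx + dx1]
        p2 = seg[cz + dz2, cy + dy2, cx + dx2]
        bits.append(int(p1 == label and p2 == label))
    return np.array(bits, dtype=np.uint8)

def greedy_agglomeration(supervoxel_graph, energy_fn):
    """Illustrative local optimization (assumption): greedily merge adjacent
    supervoxels whenever the merge lowers the learned energy.
    `supervoxel_graph` maps each supervoxel id to its neighbor set;
    `energy_fn(assignment)` scores a candidate agglomeration (lower is better)."""
    assignment = {sv: sv for sv in supervoxel_graph}   # each supervoxel starts as its own object
    current = energy_fn(assignment)
    improved = True
    while improved:
        improved = False
        for a, neighbors in supervoxel_graph.items():
            for b in neighbors:
                candidate = dict(assignment)
                target, source = candidate[a], candidate[b]
                if target == source:
                    continue                           # already in the same object
                for sv, obj in candidate.items():      # tentatively merge b's object into a's
                    if obj == source:
                        candidate[sv] = target
                e = energy_fn(candidate)
                if e < current:                        # accept only energy-decreasing merges
                    assignment, current, improved = candidate, e, True
    return assignment
```

In practice the energy would be produced by the learned deep network over image features and shape descriptors; the greedy loop above is only one simple instance of local search over agglomerations.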
We demonstrate the effectiveness of our approach on 3-D biological data, where rich shape information appears to be critical for resolving ambiguities. On two challenging 3-D electron microscopy datasets highly relevant to ongoing efforts towards large-scale dense mapping of neural circuits, we achieve state-of-the-art segmentation accuracy.