Activity of neuronal populations in several cortical regions can be described by a Dynamic Neural Field (DNF) equation. A DNF is an activation function, continuous in time and space, defined over a metric space spanned by perceptual (e.g., color, retinal location, orientation) or motor (e.g., head orientation, movement direction) dimensions, in which the neurons of the underlying population have their receptive fields. Attractor properties of the localised-bump solution of the DNF equation allow us to build cognitive architectures using DNFs as “building blocks”. Such architectures have been shown to form scene representations and maps, learn sequences of actions and perceptual states, generate spatial-language descriptions, and orchestrate the behavior of a robotic agent. DNF architectures lend themselves to implementation in neuromorphic hardware, making use of the inherently parallel, event-driven, in-memory nature of this computing substrate. In this talk, I will give an overview of the DNF framework, showing our latest examples of neuro-dynamic architectures that control neuromorphic robotic agents in a closed behavioral loop and build representations of places, sequences, and sensorimotor mappings in spiking neural networks on neuromorphic chips. I will review the computational primitives that allow us to develop cognitive, behaving architectures in neuromorphic hardware, and will contrast and link this computing framework with other neuronally inspired approaches to building cognitive systems.
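For readers less familiar with the formalism, a commonly used Amari-type form of such a field equation is sketched below; this is a standard illustration of the framework, not necessarily the exact model presented in the talk:

\tau \, \dot{u}(x,t) = -u(x,t) + h + S(x,t) + \int w(x - x')\, f\big(u(x',t)\big)\, dx'

Here u(x,t) is the field activation at feature-space location x, h < 0 is the resting level, S(x,t) is external input, w is a lateral-interaction kernel with local excitation and surround inhibition, and f is a sigmoidal output nonlinearity. The localised activation bump mentioned above is a stable attractor solution of this dynamics.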