Changes in an animal’s behavioral state, such as arousal and movements, induce complex modulations of the baseline input currents to sensory areas, eliciting sensory modality-specific effects. A simple computational principle explaining the effects of baseline modulations on recurrent cortical circuits is lacking. We investigate the benefits of baseline modulations, referred to as biases in machine learning, using a reservoir computing approach in neural networks with random, fixed synaptic weights. Bias modulations, which control only the mean and variance of the bias distribution, unlock a set of new network phases and phenomena, including chaos enhancement, neural hysteresis, and ergodicity breaking. Strikingly, we find that such simple bias modulations enable reservoir networks to perform multiple tasks without any optimization of the network couplings. What is the expressivity of networks with learned biases? We show that networks with random weights can be trained to perform multiple tasks and to predict dynamical systems solely by learning biases. Overall, we introduce a biologically motivated theory of bias modulation and learning that opens new directions for brain-inspired artificial intelligence and sheds new light on the role of behavioral modulations of cortical activity.
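To make the bias-learning idea concrete, the sketch below shows a generic discrete-time rate network with frozen random couplings in which only the per-unit biases are trained; it is a minimal illustration of the principle, not the paper's actual model, tasks, or training procedure, and all names and hyperparameters (N, g, w_out, the sine-wave target) are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' implementation):
# dynamics x_{t+1} = tanh(J x_t + b) with J random and frozen;
# only the biases b are trained so that a fixed linear readout
# reproduces a target signal (here, a sine wave).
import torch

torch.manual_seed(0)
N, T, g = 200, 100, 1.5                  # network size, trajectory length, coupling gain

J = g * torch.randn(N, N) / N**0.5       # fixed random recurrent couplings (not trained)
w_out = torch.randn(N) / N**0.5          # fixed random linear readout (not trained)
b = torch.zeros(N, requires_grad=True)   # biases: the only trained parameters

target = torch.sin(torch.linspace(0, 4 * torch.pi, T))  # target output trace

opt = torch.optim.Adam([b], lr=1e-2)
for step in range(500):
    x = torch.zeros(N)
    outputs = []
    for t in range(T):
        x = torch.tanh(J @ x + b)        # recurrent dynamics with learned biases
        outputs.append(w_out @ x)        # scalar readout at each time step
    loss = torch.mean((torch.stack(outputs) - target) ** 2)
    opt.zero_grad()
    loss.backward()                      # gradients flow only into b
    opt.step()

print(f"final MSE: {loss.item():.4f}")
```

In this toy setting the couplings J and the readout w_out never change; all task information is absorbed by the bias vector, mirroring the abstract's claim that random-weight networks can be adapted to tasks through biases alone.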