Adaptation to an oriented stimulus changes both the gain and the preferred orientation of neural responses in V1. Neurons tuned near the adapted orientation are suppressed, and their preferred orientations shift away from the adapter. Theoretical work has suggested that shifts in preferred orientation aid efficient coding of orientation content, but it has remained unclear precisely what is being optimized or how that optimization is computed. I will describe a model in which the weights of divisive normalization are dynamically adjusted to homeostatically maintain response products across the neural population. I will demonstrate that this adjustment can be carried out by a very simple learning rule. Model simulations closely match existing data, and the model matches physiological data better than several alternative models involving homeostatic maintenance of response correlations or covariances, as well as feedforward gain-control models. Finally, I will describe two psychophysical experiments using a “contingent adaptation” paradigm whose results are consistent with the model.
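To make the idea concrete, here is a minimal numerical sketch (not the author's actual model) of divisive normalization in which the normalization weights are nudged by a simple local rule toward a homeostatic target on pairwise response products. All specifics here are illustrative assumptions: the number of neurons, the von Mises tuning curves, the semi-saturation constant, the learning rate, and the choice of a uniform stimulus ensemble for setting the target.

```python
import numpy as np

# Illustrative sketch, not the published model. Assumed parameters:
n = 32                                  # number of orientation-tuned neurons
prefs = np.linspace(0.0, np.pi, n, endpoint=False)  # preferred orientations
sigma = 0.1                             # semi-saturation constant (assumed)
eta = 0.002                             # learning rate (assumed)

def drive(theta):
    """Feedforward drive: von Mises tuning curves with period pi (orientation)."""
    return np.exp(3.0 * (np.cos(2.0 * (prefs - theta)) - 1.0))

def normalize(d, w):
    """Divisive normalization: r_i = d_i / (sigma^2 + sum_j w_ij * d_j)."""
    return d / (sigma**2 + w @ d)

# Homeostatic target: average pairwise response products under a uniform
# ensemble of orientations, computed with uniform normalization weights.
w0 = np.ones((n, n)) / n
thetas = np.linspace(0.0, np.pi, 180, endpoint=False)
target = np.mean([np.outer(r, r) for r in
                  (normalize(drive(t), w0) for t in thetas)], axis=0)

# Adapt to a single orientation. The local rule raises w_ij when the
# response product r_i * r_j exceeds its target, and lowers it otherwise.
adapter = np.pi / 2
w = w0.copy()
for _ in range(1000):
    r = normalize(drive(adapter), w)
    w += eta * (np.outer(r, r) - target)
    w = np.clip(w, 0.0, None)           # keep normalization weights nonnegative

r_before = normalize(drive(adapter), w0)
r_after = normalize(drive(adapter), w)
i_near = np.argmin(np.abs(prefs - adapter))   # neuron tuned near the adapter
print("response near adapter, before vs after:",
      r_before[i_near], r_after[i_near])
```

In this sketch the feedback is homeostatic: strongly co-active pairs drive their shared normalization weight up, which divisively suppresses neurons tuned near the adapter until the response products return toward their target levels.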