Subjective ratings given by observers are a critical part of research in image and video quality assessment. In this talk, I first review different approaches to collecting subjective ratings and identify potential pitfalls that are often overlooked. Using existing and newly collected data, I demonstrate statistically the non-linear use of the rating scale, changes in ratings over the course of an experiment, individual differences in how observers use rating features, and biases introduced by allowing observers to decide how many ratings they give. The findings show that seemingly minor design choices can have a substantial influence on the data, and researchers should be aware of these pitfalls to ensure reliable results.

In the second part of the talk, I propose an experimental and statistical method to quantify individual differences in quality ratings. Sixteen observers rated a set of 256 images, each degraded by three different types of colour distortion. Bayesian mixed-effects modelling showed that models accounting for individual differences had higher predictive power than models that omitted them. Moreover, the differences between individuals were meaningful and stable. Lastly, clustering the observers revealed three distinct groups. This suggests that individual differences may present an opportunity for tailored content delivery in the future.
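The core modelling claim, that accounting for per-observer effects improves out-of-sample prediction, can be illustrated with a toy simulation. This is a sketch under stated assumptions, not the study's actual Bayesian mixed-effects model: the data are synthetic, the sizes are arbitrary, and a simple per-observer offset stands in for the full random-effects structure.

```python
import numpy as np

rng = np.random.default_rng(0)

n_obs, n_img = 16, 64                      # hypothetical: 16 observers, 64 images
img_quality = rng.uniform(1, 5, n_img)     # latent "true" quality per image
obs_bias = rng.normal(0, 0.6, n_obs)       # stable individual rating offsets

# Each observer rates every image: true quality + personal offset + noise.
ratings = (img_quality[None, :] + obs_bias[:, None]
           + rng.normal(0, 0.3, (n_obs, n_img)))

# Hold out every second image for evaluation.
train, test = np.arange(0, n_img, 2), np.arange(1, n_img, 2)

# Model A (no individual differences): predict each held-out rating with the
# image's consensus mean across observers.
img_mean_test = ratings[:, test].mean(axis=0)
pred_a = np.broadcast_to(img_mean_test, (n_obs, len(test)))

# Model B adds a per-observer offset estimated from the training images only.
offset = (ratings[:, train] - ratings[:, train].mean(axis=0)).mean(axis=1)
pred_b = img_mean_test[None, :] + offset[:, None]

mse_a = np.mean((ratings[:, test] - pred_a) ** 2)
mse_b = np.mean((ratings[:, test] - pred_b) ** 2)
print(f"MSE without individual differences: {mse_a:.3f}")
print(f"MSE with individual differences:    {mse_b:.3f}")
```

Because the simulated observer offsets are larger than the rating noise, the offset-aware model achieves a clearly lower held-out error, mirroring the predictive advantage reported for the mixed-effects models in the talk.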