Weak supervision describes the use of noisy or error-prone data labels for training supervised learning models.
It can be expensive or impractical to create or obtain highly accurate labels for a large dataset. Weak supervision offers the alternative of using a larger quantity of somewhat less accurate labels.
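As a concrete illustration, the sketch below shows one common form of weak supervision: cheap heuristic "labeling functions" generate noisy labels for unlabeled text, which are then aggregated and used to train an ordinary supervised classifier. The task, texts, and labeling functions here are hypothetical, and the majority-vote aggregation is a deliberate simplification of what production systems do.

```python
# A minimal weak-supervision sketch on a toy spam-detection task.
# All labeling functions and texts below are hypothetical examples.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

SPAM, HAM, ABSTAIN = 1, 0, -1

# Heuristic labeling functions: cheap, noisy sources of labels.
def lf_contains_free(text):
    return SPAM if "free" in text.lower() else ABSTAIN

def lf_contains_meeting(text):
    return HAM if "meeting" in text.lower() else ABSTAIN

def lf_many_exclamations(text):
    return SPAM if text.count("!") >= 2 else ABSTAIN

labeling_functions = [lf_contains_free, lf_contains_meeting, lf_many_exclamations]

def weak_label(text):
    """Majority vote over non-abstaining labeling functions; None if all abstain."""
    votes = [v for v in (lf(text) for lf in labeling_functions) if v != ABSTAIN]
    if not votes:
        return None
    return SPAM if sum(votes) > len(votes) / 2 else HAM

# Unlabeled corpus: weak labels are produced programmatically, not by hand.
texts = [
    "Free money!! Click now!!",
    "Team meeting moved to 3pm",
    "Get a free trial today",
    "Notes from yesterday's meeting",
]
labeled = [(t, weak_label(t)) for t in texts]
labeled = [(t, y) for t, y in labeled if y is not None]

# Train an ordinary supervised model on the noisy labels.
X = CountVectorizer().fit_transform(t for t, _ in labeled)
y = [lbl for _, lbl in labeled]
model = LogisticRegression().fit(X, y)
```

The trade-off this code makes explicit is the one described above: each individual label may be wrong, but labels can be generated for arbitrarily many examples at essentially no cost, and the downstream model is trained exactly as it would be on hand-labeled data.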