TR2003-127

Unsupervised Improvement of Visual Detectors Using Co-Training


    •  Anat Levin, Paul Viola and Yoav Freund, "Unsupervised Improvement of Visual Detectors Using Co-Training", Tech. Rep. TR2003-127, Mitsubishi Electric Research Laboratories, Cambridge, MA, October 2003.
      @techreport{MERL_TR2003-127,
        author = {Anat Levin and Paul Viola and Yoav Freund},
        title = {Unsupervised Improvement of Visual Detectors Using Co-Training},
        institution = {MERL - Mitsubishi Electric Research Laboratories},
        address = {Cambridge, MA 02139},
        number = {TR2003-127},
        month = oct,
        year = 2003,
        url = {https://www.merl.com/publications/TR2003-127/}
      }
  • Research Areas: Artificial Intelligence, Computer Vision, Machine Learning

Abstract:

One significant challenge in the construction of visual detection systems is the acquisition of sufficient labeled data. This paper describes a new technique for training visual detectors that requires only a small quantity of labeled data and then uses unlabeled data to improve performance over time. Unsupervised improvement is based on the co-training framework of Blum and Mitchell, in which two disparate classifiers are trained simultaneously. Unlabeled examples that are confidently labeled by one classifier are added, with their labels, to the training set of the other classifier. Experiments are presented on the realistic task of automobile detection in roadway surveillance video. In this application, co-training reduces the false positive rate by a factor of 2 to 11 relative to a classifier trained with labeled data alone.
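
To make the co-training loop concrete, the following is a minimal sketch in Python. It is not the paper's detector: it assumes two generic logistic-regression classifiers, a synthetic dataset split into two feature views, and an arbitrary confidence threshold, purely to illustrate how confidently labeled examples flow from one classifier's predictions into the other classifier's training set.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    rng = np.random.RandomState(0)

    # Synthetic stand-in for image data: split the features into two
    # disparate "views", one per classifier.
    X, y = make_classification(n_samples=2000, n_features=20,
                               n_informative=10, random_state=0)
    views = (X[:, :10], X[:, 10:])

    # Small labeled seed set; everything else is treated as unlabeled.
    labeled = rng.choice(len(y), size=50, replace=False)
    unlabeled = np.setdiff1d(np.arange(len(y)), labeled)

    train_idx = [labeled.copy(), labeled.copy()]   # per-classifier training indices
    train_lbl = [y[labeled].copy(), y[labeled].copy()]
    clfs = [LogisticRegression(max_iter=1000), LogisticRegression(max_iter=1000)]

    CONF = 0.95   # confidence threshold for transferring labels (assumed value)

    for _ in range(10):                            # co-training rounds
        for i in range(2):
            clfs[i].fit(views[i][train_idx[i]], train_lbl[i])
        for i, j in ((0, 1), (1, 0)):
            if len(unlabeled) == 0:
                break
            proba = clfs[i].predict_proba(views[i][unlabeled])
            confident = proba.max(axis=1) >= CONF
            pick = unlabeled[confident]
            if len(pick) == 0:
                continue
            # Examples confidently labeled by classifier i are added, with their
            # predicted labels, to classifier j's training set.
            train_idx[j] = np.concatenate([train_idx[j], pick])
            train_lbl[j] = np.concatenate([train_lbl[j],
                                           clfs[i].predict(views[i][pick])])
            unlabeled = np.setdiff1d(unlabeled, pick)

    for i in range(2):
        print(f"view-{i} classifier accuracy: {clfs[i].score(views[i], y):.3f}")

In practice the two views would come from genuinely different feature channels, matching the abstract's "disparate classifiers", and the confidence threshold would be chosen with more care; the structure of the exchange loop is the point of the sketch.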