Bounding the asymmetric error of a convex combination of classifiers.

Minh-Tri Pham, Tat-Jen Cham

Authors

  • Minh-Tri Pham, Cambridge Research Laboratory, Toshiba Research Europe Ltd
  • Tat-Jen Cham, School of Computer Engineering, Nanyang Technological University

DOI:

https://doi.org/10.15625/1813-9663/28/4/2363

Keywords:

asymmetric error, asymmetric boosting, imbalanced classification, Rademacher complexity

Abstract

The asymmetric error is a weighted error that trades off the false positive rate against the false negative rate of a binary classifier. It has recently been used to address the imbalanced classification problem, e.g., in asymmetric boosting. However, to date, the relationship between an empirical asymmetric error and its generalization counterpart has not been addressed. Bounds on the classical generalization error are not directly applicable, since different penalties are attached to the false positive rate and the false negative rate, and the class probability is typically ignored in the training set. In this paper, we present a bound on the expected asymmetric error of any convex combination of classifiers in terms of its empirical asymmetric error. We also show that this bound generalizes one of the latest (and tightest) bounds on the classification error of the combined classifier.
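
As a rough illustration of the quantity the bound concerns (a sketch, not code from the paper), the snippet below computes the empirical asymmetric error of a convex combination of binary classifiers. The cost weights c_fp and c_fn, the toy data, and the stump classifiers are assumptions made only for this example.

```python
import numpy as np

def asymmetric_error(y_true, y_pred, c_fp=0.3, c_fn=0.7):
    """Weighted trade-off between false positive rate and false negative rate.
    The cost weights c_fp, c_fn are illustrative placeholders."""
    pos, neg = (y_true == 1), (y_true == -1)
    fnr = np.mean(y_pred[pos] == -1) if pos.any() else 0.0  # missed positives
    fpr = np.mean(y_pred[neg] == 1) if neg.any() else 0.0   # false alarms
    return c_fp * fpr + c_fn * fnr

def combined_predict(classifiers, alphas, X):
    """Sign of a convex combination sum_t alpha_t h_t(x), with alpha_t >= 0 summing to 1."""
    alphas = np.asarray(alphas, dtype=float)
    alphas = alphas / alphas.sum()  # enforce the convex-combination constraint
    scores = sum(a * h(X) for a, h in zip(alphas, classifiers))
    return np.where(scores >= 0, 1, -1)

# Toy usage: two decision-stump-like classifiers on 1-D data with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=200)
y = np.where(X + 0.3 * rng.normal(size=200) > 0, 1, -1)
h1 = lambda x: np.where(x > -0.1, 1, -1)
h2 = lambda x: np.where(x > 0.2, 1, -1)
y_hat = combined_predict([h1, h2], alphas=[0.6, 0.4], X=X)
print(asymmetric_error(y, y_hat))  # empirical asymmetric error of the combination
```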


Published

18-12-2012

How to Cite

[1] M.-T. Pham and T.-J. Cham, “Bounding the asymmetric error of a convex combination of classifiers,” JCC, vol. 28, no. 4, pp. 310–322, Dec. 2012.

Issue

Vol. 28 No. 4 (2012)

Section

Computer Science