Bounding the asymmetric error of a convex combination of classifiers

Minh-Tri Pham, Tat-Jen Cham


Asymmetric error is an error measure that trades off the false positive rate against the false negative rate of a binary classifier. It has recently been used to address the imbalanced classification problem, e.g., in asymmetric boosting. However, to date, the relationship between an empirical asymmetric error and its generalization counterpart has not been addressed. Bounds on the classical generalization error are not directly applicable, since different penalties are associated with the false positive rate and the false negative rate, and the class probability is typically ignored in the training set. In this paper, we present a bound on the expected asymmetric error of any convex combination of classifiers in terms of its empirical asymmetric error. We also show that the bound generalizes one of the latest (and tightest) bounds on the classification error of the combined classifier.
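To make the quantity being bounded concrete, the following sketch computes an empirical asymmetric error (a cost-weighted combination of the false positive and false negative rates) for the sign of a convex combination of base classifiers. The function names, the cost parameters `c_fp`/`c_fn`, and the ±1 label convention are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def asymmetric_error(y_true, y_pred, c_fp, c_fn):
    # Cost-weighted combination of false positive and false negative rates.
    # Labels are assumed to be in {-1, +1}; c_fp and c_fn are the
    # (illustrative) penalties on the two error types.
    neg = y_true == -1
    pos = y_true == 1
    fpr = np.mean(y_pred[neg] == 1) if neg.any() else 0.0
    fnr = np.mean(y_pred[pos] == -1) if pos.any() else 0.0
    return c_fp * fpr + c_fn * fnr

def combined_predict(classifiers, weights, X):
    # Convex combination of base classifiers: normalize the weights to
    # sum to one, then take the sign of the weighted vote.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    score = sum(wi * clf(X) for wi, clf in zip(w, classifiers))
    return np.where(score >= 0, 1, -1)
```

For example, with two base classifiers `clf1 = lambda X: np.where(X > 0, 1, -1)` and `clf2 = lambda X: np.ones_like(X)` and weights `[0.7, 0.3]`, `combined_predict` follows `clf1` because its normalized weight dominates the vote. Weighting `c_fn` higher than `c_fp` penalizes missed positives more, which is the typical choice on imbalanced data.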


asymmetric error, asymmetric boosting, imbalanced classification, Rademacher complexity


Journal of Computer Science and Cybernetics ISSN: 1813-9663

Published by Vietnam Academy of Science and Technology