LEARNING OPTIMAL THRESHOLD FOR BAYESIAN POSTERIOR PROBABILITIES TO MITIGATE THE CLASS IMBALANCE PROBLEM

Do Thai Nghe, Do Thanh Nghi, Lars Schmidt-Thieme

Authors

  • Do Thai Nghe, Publishing House for Science and Technology
  • Do Thanh Nghi
  • Lars Schmidt-Thieme

DOI:

https://doi.org/10.15625/0866-708X/48/4/1155

Abstract


Class imbalance is one of the problems that degrade a classifier's performance. Researchers have introduced many methods to tackle this problem, including pre-processing, internal classifier processing, and post-processing, the last of which mainly relies on posterior probabilities. The Bayesian Network (BN) is known as a classifier that produces good posterior probabilities. This study proposes two methods that utilize Bayesian posterior probabilities to deal with imbalanced data.
In the first method, we optimize the threshold on the posterior probabilities produced by BNs to maximize the F1-Measure. Once the optimal threshold is found, we use it for the final classification. We investigate this method on several Bayesian classifiers such as Naive Bayes (NB), BN, Tree-Augmented Naive Bayes (TAN), BN-Augmented Naive Bayes (BAN), and Markov Blanket BN. In the second method, instead of learning a threshold for each classifier separately as in the former, we combine these classifiers in a voting ensemble. Experimental results on 20 benchmark imbalanced datasets collected from the UCI repository show that our methods significantly outperform the baseline NB. These methods also perform as well as the state-of-the-art sampling methods, and significantly better in certain cases.
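
As an illustration of the first method, the following is a minimal sketch (not the paper's code) of learning the F1-maximizing threshold on posterior probabilities. It assumes scikit-learn's GaussianNB as a stand-in for the Bayesian classifiers studied in the paper; the helper name learn_threshold and the threshold grid are hypothetical choices:

    import numpy as np
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    def learn_threshold(model, X_val, y_val, grid=np.linspace(0.01, 0.99, 99)):
        """Return the threshold on P(y=1|x) that maximizes F1 on validation data."""
        probs = model.predict_proba(X_val)[:, 1]  # posterior of the minority class
        scores = [f1_score(y_val, (probs >= t).astype(int), zero_division=0)
                  for t in grid]
        return float(grid[int(np.argmax(scores))])

    # Usage on an imbalanced binary dataset (X, y), minority class labeled 1:
    # X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, test_size=0.3)
    # nb = GaussianNB().fit(X_tr, y_tr)
    # t_star = learn_threshold(nb, X_val, y_val)
    # y_pred = (nb.predict_proba(X_new)[:, 1] >= t_star).astype(int)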
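
The second method can be sketched in the same style: several probabilistic classifiers are combined by voting, and the learned threshold is then applied to the ensemble's averaged posteriors. Again this is a sketch under stated assumptions; scikit-learn's Naive Bayes variants stand in for the NB, BN, TAN, BAN, and Markov Blanket BN learners of the paper:

    from sklearn.ensemble import VotingClassifier
    from sklearn.naive_bayes import BernoulliNB, GaussianNB

    # Soft voting averages the members' posterior probabilities, so the
    # learned-threshold step above applies unchanged to the ensemble.
    ensemble = VotingClassifier(
        estimators=[("gnb", GaussianNB()), ("bnb", BernoulliNB())],
        voting="soft",
    )
    # ensemble.fit(X_tr, y_tr)
    # t_star = learn_threshold(ensemble, X_val, y_val)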


Published

25-06-2012

How to Cite

[1]
D. Thai Nghe, D. Thanh Nghi, and L. Schmidt-Thieme, “LEARNING OPTIMAL THRESHOLD FOR BAYESIAN POSTERIOR PROBABILITIES TO MITIGATE THE CLASS IMBALANCE PROBLEM”, Vietnam J. Sci. Technol., vol. 48, no. 4, Jun. 2012.

Issue

Vol. 48 No. 4

Section

Articles