If I use undersampling in case of an unbalanced binary target variable to train a model, the prediction method calculates probabilities under the assumption of a balanced data set. How can I convert these probabilities into actual probabilities for the unbalanced data? Is there a conversion argument/function implemented in the mlr package or another package? For example:
library(mlr)

# Simulated unbalanced data: class "0" is the minority (~10%)
a <- data.frame(y = factor(sample(0:1, prob = c(0.1, 0.9), replace = TRUE, size = 100)))
a$x <- as.numeric(a$y) + rnorm(n = 100, sd = 1)

task <- makeClassifTask(data = a, target = "y", positive = "0")
learner <- makeLearner("classif.binomial", predict.type = "prob")
# Keep only 10% of the majority class "1" during training
learner <- makeUndersampleWrapper(learner, usw.rate = 0.1, usw.cl = "1")
model <- train(learner, task, subset = 1:50)
pred <- predict(model, task, subset = 51:100)
head(pred$data)
A very simple yet powerful method has been proposed by [Dal Pozzolo et al., 2015].
It is specifically designed to tackle the issue of calibration (i.e. transforming the predicted probabilities of your classifier into actual probabilities for the unbalanced data) when undersampling is used.
You just have to correct your predicted probability p_s using the following formula:
p = beta * p_s / ((beta-1) * p_s + 1)
where beta is the ratio of the number of majority-class instances after undersampling to the number of majority-class instances in the original training set.
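Applied to the mlr example above, the correction is a one-liner. Here is a minimal sketch, assuming that the prediction data frame contains a prob.0 column for the positive class "0" (as predict.type = "prob" produces) and that beta equals the usw.rate of 0.1, i.e. the fraction of majority-class cases kept by the wrapper:

# beta: fraction of majority-class ("1") cases kept by the undersampling wrapper;
# with usw.rate = 0.1 in the example above, this is 0.1
beta <- 0.1

# p_s: predicted probability of the positive class from the model trained on
# the undersampled data
p_s <- pred$data$prob.0

# p: corrected probability for the original, unbalanced class distribution
p <- beta * p_s / ((beta - 1) * p_s + 1)

head(data.frame(p_s = p_s, p_corrected = p))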
Other methods
Other methods that are not specifically focused on the undersampling bias have also been proposed. Among these, the most popular are:
- Platt’s scaling or sigmoid method (Platt, 1999)
- Isotonic regression-based method (Zadrozny and Elkan, 2001)
They are both implemented in R; a simple version of each can be written with base functions, as sketched below.
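A minimal sketch, assuming a hypothetical held-out calibration set with raw classifier scores score and 0/1 labels y_true (names not from the question): Platt scaling is fit as a logistic regression of the labels on the scores, and isotonic regression with isoreg() from base R's stats package.

# Hypothetical held-out calibration data: raw classifier scores and 0/1 labels
set.seed(1)
score  <- runif(200)
y_true <- rbinom(200, size = 1, prob = score)

# Platt scaling: logistic regression of the labels on the scores
platt <- glm(y_true ~ score, family = binomial)
platt_calibrated <- predict(platt, newdata = data.frame(score = score), type = "response")

# Isotonic regression: monotone step function mapping scores to probabilities
iso <- isoreg(score, y_true)
iso_fun <- as.stepfun(iso)
iso_calibrated <- iso_fun(score)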