Selecting tuning parameters with caret using standard deviation of a custom metric

Posted 2019-04-11 10:19

I'm using caret with a custom fitting metric, but I need to maximize not just this metric but the lower bound of its confidence interval, i.e. something like mean(metric) - k * stddev(metric). I know how to do this manually, but is there a way to tell caret to automatically select the best parameters using this function?

Tags: r r-caret
2 Answers
beautiful°
#2 · 2019-04-11 10:43

There is a more basic example in caret's help for the train function:

library(caret)
library(earth)      # provides the "earth" (MARS) model
library(mlbench)    # provides the BostonHousing data
data(BostonHousing)

# Custom summary function: mean absolute deviation of the residuals,
# computed on each resample's held-out predictions.
madSummary <- function (data,
                        lev = NULL,
                        model = NULL) {
  out <- mad(data$obs - data$pred, 
             na.rm = TRUE)  
  names(out) <- "MAD"
  out
}

# Use the custom metric for resampling and tune over the MARS grid,
# minimizing MAD (maximize = FALSE).
robustControl <- trainControl(summaryFunction = madSummary)
marsGrid <- expand.grid(degree = 1, nprune = (1:10) * 2)

earthFit <- train(medv ~ .,
                  data = BostonHousing, 
                  method = "earth",
                  tuneGrid = marsGrid,
                  metric = "MAD",
                  maximize = FALSE,
                  trControl = robustControl)
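
The summary function alone does not give the mean(metric) - k * stddev(metric) rule asked about above; in caret, that kind of rule belongs in the selectionFunction argument of trainControl rather than in the summaryFunction. Here is a rough sketch, not a drop-in answer: the name lowerBound and the factor k are invented for illustration, and it assumes caret hands the selection function the per-tuning-parameter summary table with an "<metric>SD" column, the same table caret::oneSE reads.

# Hypothetical selection function with the same contract as caret::best:
# x is the resampling summary (one row per tuning combination, with mean
# and SD columns per metric), metric is the metric name, maximize is the
# search direction; the return value is the row index of the chosen row.
lowerBound <- function(x, metric, maximize, k = 1) {
  sdCol <- paste0(metric, "SD")
  # pessimistic bound: mean - k*SD when maximizing, mean + k*SD when minimizing
  score <- if (maximize) {
    x[, metric] - k * x[, sdCol]
  } else {
    x[, metric] + k * x[, sdCol]
  }
  if (maximize) which.max(score) else which.min(score)
}

robustControl <- trainControl(summaryFunction = madSummary,
                              selectionFunction = lowerBound)

With that control object, the rest of the train() call above stays the same.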
欢心
#3 · 2019-04-11 10:52

Yes, you can define your own selection metric through the "summaryFunction" parameter of your "trainControl" object and then with the "metric" parameter of your call to train(). Details on this are pretty well documented in the "Alternate Performance Metrics" section on caret's model tuning page: http://caret.r-forge.r-project.org/training.html

I don't think you gave enough information for anyone to write exactly what you're looking for, but here is an example using the code from the twoClassSummary function:

> library(caret)
> data(Titanic)
> 
> #an example custom function
> roc <- function (data, lev = NULL, model = NULL) {
+   require(pROC)
+   if (!all(levels(data[, "pred"]) == levels(data[, "obs"]))) 
+     stop("levels of observed and predicted data do not match")
+   rocObject <- try(pROC::roc(data$obs, data[, lev[1]]), silent = TRUE)
+   rocAUC <- if (class(rocObject)[1] == "try-error") 
+     NA
+   else rocObject$auc
+   out <- c(rocAUC, sensitivity(data[, "pred"], data[, "obs"], lev[1]), specificity(data[, "pred"], data[, "obs"], lev[2]))
+   names(out) <- c("ROC", "Sens", "Spec")
+   out
+ }
> 
> #your train control specs
> tc <- trainControl(method="cv",classProbs=TRUE,summaryFunction=roc)
> #your model with the selection metric specified
> train(Survived~.,data=data.frame(Titanic),method="rf",trControl=tc,metric="ROC")
32 samples
 4 predictors
 2 classes: 'No', 'Yes' 

No pre-processing
Resampling: Cross-Validation (10 fold) 

Summary of sample sizes: 28, 29, 30, 30, 28, 28, ... 

Resampling results across tuning parameters:

  mtry  ROC    Sens  Spec  ROC SD  Sens SD  Spec SD
  2     0.9    0.2   0.25  0.175   0.35     0.425  
  4     0.85   0.4   0.6   0.211   0.459    0.459  
  6     0.875  0.35  0.6   0.212   0.412    0.459  

ROC was used to select the optimal model using  the largest value.
The final value used for the model was mtry = 2. 