I'm using the R package randomForest to do a regression on some biological data. My training data size is 38772 x 201.
I just wondered: what would be a good value for the number of trees, ntree, and the number of variables per level, mtry? Is there an approximate formula to find such parameter values?
Each row in my input data is a 200-character string representing an amino acid sequence, and I want to build a regression model that uses such sequences to predict the distances between the proteins.
The default for mtry is quite sensible, so there is not really a need to muck with it. There is a function, tuneRF, for optimizing this parameter. However, be aware that it may cause bias.
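For what it's worth, a minimal sketch of calling tuneRF, assuming a training data frame train_df with the numeric response in a column dist (both names are hypothetical):

    library(randomForest)

    # Hypothetical data: predictors in x, numeric response in y.
    x <- train_df[, setdiff(names(train_df), "dist")]
    y <- train_df$dist

    # tuneRF grows small forests at several mtry values and reports their OOB error.
    tuned <- tuneRF(x, y, ntreeTry = 501, stepFactor = 1.5, improve = 0.01)
    tuned  # matrix of mtry values and corresponding OOB errors

Setting doBest = TRUE instead returns a forest fit at the best mtry found.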
There is no optimization for the number of bootstrap replicates. I often start with ntree=501 and then plot the random forest object. This will show you the error convergence based on the OOB error. You want enough trees to stabilize the error, but not so many that you over-correlate the ensemble, which leads to overfitting.

Here is the caveat: variable interactions stabilize at a slower rate than error, so if you have a large number of independent variables you need more replicates. I would keep ntree an odd number so ties can be broken.
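A rough sketch of that workflow, reusing the hypothetical train_df / dist names from above:

    library(randomForest)

    # Fit with an odd number of trees, then look at how the OOB error settles down.
    rf_fit <- randomForest(dist ~ ., data = train_df, ntree = 501)
    plot(rf_fit)    # OOB MSE plotted against the number of trees
    # rf_fit$mse holds the same OOB error sequence if you prefer to inspect it directly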
For the dimensions of your problem I would start with ntree=1501. I would also recommend looking into one of the published variable-selection approaches to reduce the number of your independent variables.

One nice trick that I use is to initially take the square root of the number of predictors and plug that value in for mtry. It is usually around the same value that the tuneRF function in randomForest would pick.
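The square-root starting point is a one-liner (reusing the hypothetical x and y from the earlier sketch; with 200 predictors it gives roughly 14):

    mtry_start <- floor(sqrt(ncol(x)))                             # about 14 for 200 predictors
    rf_fit <- randomForest(x, y, ntree = 1501, mtry = mtry_start)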
I use the code below to check accuracy as I play around with ntree and mtry (changing the parameters):
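A minimal sketch of such a check, assuming hypothetical train_df / test_df data frames with the response in a dist column; the ntree and mtry values are just placeholders to vary:

    library(randomForest)

    rf_fit <- randomForest(dist ~ ., data = train_df, ntree = 1501, mtry = 14)
    pred   <- predict(rf_fit, newdata = test_df)

    rmse <- sqrt(mean((test_df$dist - pred)^2))   # hold-out RMSE
    r2   <- cor(test_df$dist, pred)^2             # squared correlation with the truth
    c(RMSE = rmse, R2 = r2)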
The short answer is no.
The randomForest function of course has default values for both ntree and mtry. The default for mtry is often (but not always) sensible, while generally people will want to increase ntree from its default of 500 quite a bit.
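For reference, those defaults are ntree = 500 and, for regression, mtry = floor(p/3) (floor(sqrt(p)) for classification), so increasing ntree is just an explicit argument (hypothetical train_df / dist names):

    library(randomForest)

    rf_default <- randomForest(dist ~ ., data = train_df)                # ntree = 500, mtry = p/3
    rf_more    <- randomForest(dist ~ ., data = train_df, ntree = 2000)  # same mtry, more trees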
The "correct" value for ntree generally isn't much of a concern, as it will be quite apparent with a little tinkering that the predictions from the model won't change much after a certain number of trees.
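One quick way to see this for yourself, sketched with the hypothetical train_df / test_df / dist names: fit a small forest and a large one and compare their hold-out predictions.

    library(randomForest)

    rf_small <- randomForest(dist ~ ., data = train_df, ntree = 250)
    rf_big   <- randomForest(dist ~ ., data = train_df, ntree = 2500)

    # A correlation essentially equal to 1 means extra trees are no longer changing the predictions.
    cor(predict(rf_small, newdata = test_df), predict(rf_big, newdata = test_df))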
You can spend (read: waste) a lot of time tinkering with things like mtry (and sampsize and maxnodes and nodesize, etc.), probably to some benefit, but in my experience not a lot. However, every data set will be different. Sometimes you may see a big difference, sometimes none at all.
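Those knobs are all plain arguments to randomForest(); the values below are arbitrary placeholders, not recommendations:

    library(randomForest)

    rf_tuned <- randomForest(dist ~ ., data = train_df,
                             ntree    = 1001,
                             mtry     = 20,
                             nodesize = 10,       # minimum size of terminal nodes
                             maxnodes = 5000,     # upper bound on terminal nodes per tree
                             sampsize = 20000)    # rows sampled for each tree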
The caret package has a very general function train that allows you to do a simple grid search over parameter values like mtry for a wide variety of models. My only caution would be that doing this with fairly large data sets is likely to get time consuming fairly quickly, so watch out for that.
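A sketch of that kind of grid search with caret's train; method = "rf" wraps randomForest, and the grid values (and the train_df / dist names) are hypothetical:

    library(caret)

    ctrl <- trainControl(method = "cv", number = 5)
    grid <- expand.grid(mtry = c(10, 14, 20, 40, 67))

    fit <- train(dist ~ ., data = train_df,
                 method    = "rf",
                 ntree     = 501,       # passed through to randomForest
                 trControl = ctrl,
                 tuneGrid  = grid)
    fit$bestTune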
Also, somehow I forgot that the randomForest package itself has a tuneRF function that is specifically for searching for the "optimal" value for mtry.

Could this paper help? Limiting the Number of Trees in Random Forests. They never use more than 200 trees.