Are there any ready-to-use libraries or packages for Python or R to reduce the number of levels of large categorical factors?
I want to achieve something similar to R: "Binning" categorical variables, but encode only the most frequent top-k levels and lump the rest into "other".
Here is an example in R using `data.table` a bit, but it should be easy without `data.table` also.

I do not think you want to do it this way. Grouping many levels into one group might make that feature less predictive. What you want to do instead is put all the levels that would go into "Other" into a cluster based on a similarity metric. Some of them might cluster with your top-k levels, and some might cluster together, giving the best performance.
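The `data.table` code from the first answer above was not included; a minimal sketch of the top-k-plus-"Other" recoding it describes might look like this (the table `dt`, column `f`, and `k = 5` are made-up assumptions):

```r
library(data.table)

dt <- data.table(f = sample(letters, 1000, replace = TRUE))
k <- 5

# Count occurrences per level, sort descending, and take the k most frequent.
top_k <- dt[, .N, by = f][order(-N)][1:k, f]

# Recode every level outside the top k as "Other".
dt[, f_binned := factor(ifelse(f %in% top_k, as.character(f), "Other"))]
```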
I had a similar issue and ended up answering it myself here. For my similarity metric, I used the proximity matrix from a random forest regression fit on all features except that one. The difference in my solution is that some of my top-k most common levels may be clustered together, since I use k-medoids to cluster. You would want to alter the clustering algorithm so that your medoids are the top-k levels you have chosen.
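A rough sketch of that idea, under stated assumptions: a data frame `df` with target `y` and high-cardinality factor `f` (all hypothetical names), using the `randomForest` and `cluster` packages. This is plain k-medoids via `pam()`, not the modified version with fixed medoids the answer describes.

```r
library(randomForest)
library(cluster)

# Random forest fit on all features except f, with the proximity matrix
# (observation-by-observation similarity) computed as a by-product.
rf <- randomForest(y ~ . - f, data = df, proximity = TRUE)

# Average observation-level proximities within each pair of levels of f
# to get a level-by-level similarity, then turn it into a dissimilarity.
lev <- levels(df$f)
sim <- matrix(0, length(lev), length(lev), dimnames = list(lev, lev))
for (i in lev) for (j in lev) {
  sim[i, j] <- mean(rf$proximity[df$f == i, df$f == j])
}
diss <- as.dist(1 - sim)

# k-medoids on the level dissimilarities; 10 is an arbitrary target
# number of grouped levels.
cl <- pam(diss, k = 10)
df$f_grouped <- factor(cl$clustering[as.character(df$f)])
```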
The R package forcats has `fct_lump()` for this purpose: call `fct_lump(f, n)`, where `f` is the factor and `n` is the number of most common levels to be preserved. The remaining levels are recoded to `Other`.
Here's an approach using base R:
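The original base-R code was not preserved in the post; one way to do it with `table()` alone (the factor `f` and `k = 5` are assumptions):

```r
# Assumed inputs: a factor f and the number of levels to keep, k.
f <- factor(sample(letters, 1000, replace = TRUE))
k <- 5

# Names of the k most frequent levels.
top_k <- names(sort(table(f), decreasing = TRUE))[1:k]

# Recode all remaining levels to "Other".
f_binned <- factor(ifelse(f %in% top_k, as.character(f), "Other"))
```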