How to handle categorical features for Decision Trees and Random Forests

Posted 2019-04-12 02:16

I am trying to build decision tree and random forest classifiers on the UCI bank marketing data set -> https://archive.ics.uci.edu/ml/datasets/bank+marketing. There are many categorical features (with string values) in the data set.

In the Spark ML documentation, it is mentioned that categorical variables can be converted to numeric by indexing with either StringIndexer or VectorIndexer. I chose StringIndexer (VectorIndexer requires a vector feature column, and VectorAssembler, which builds that vector column, accepts only numeric inputs). With this approach, each level of a categorical feature is assigned a numeric value based on its frequency (0 for the most frequent label of a categorical feature).
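To make the frequency-based indexing concrete, here is a minimal plain-Python sketch of what StringIndexer's default `frequencyDesc` ordering does (this is an illustration, not Spark code; the tie-breaking by alphabetical order matches Spark's documented behaviour):

```python
from collections import Counter

def string_index(values):
    """Mimic StringIndexer: map each label to an index ordered by
    descending frequency (most frequent label -> 0.0)."""
    freq = Counter(values)
    # Sort by frequency (descending), breaking ties alphabetically,
    # as StringIndexer's default "frequencyDesc" order does.
    ordered = sorted(freq, key=lambda label: (-freq[label], label))
    mapping = {label: float(i) for i, label in enumerate(ordered)}
    return [mapping[v] for v in values], mapping

jobs = ["admin.", "technician", "admin.", "services", "admin.", "technician"]
indexed, mapping = string_index(jobs)
print(mapping)   # {'admin.': 0.0, 'technician': 1.0, 'services': 2.0}
print(indexed)   # [0.0, 1.0, 0.0, 2.0, 0.0, 1.0]
```

Note that the resulting numbers encode frequency, not any inherent order of the categories.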

My question is: how will the Random Forest or Decision Tree algorithm understand that these new features (derived from categorical features) are different from continuous variables? Will an indexed feature be treated as continuous by the algorithm? Is this the right approach, or should I go ahead with one-hot encoding for the categorical features?

I read some of the answers on this forum, but I didn't get clarity on the last part.

3 Answers
冷血范
#2 · 2019-04-12 02:27

One-hot encoding should be done for categorical variables with more than two categories.

To understand why, you should know the difference between the two sub-categories of categorical data: ordinal data and nominal data.

Ordinal data: the values have some ordering between them. Example: customer feedback (excellent, good, neutral, bad, very bad). As you can see, there is a clear ordering (excellent > good > neutral > bad > very bad). In this case StringIndexer alone is sufficient for modelling purposes.

Nominal data: the values have no defined ordering between them. Example: colours (black, blue, white, ...). In this case StringIndexer alone is NOT sufficient, and one-hot encoding is required after string indexing.

After string indexing, let's assume the output is:

 id | colour   | categoryIndex
----|----------|---------------
 0  | black    | 0.0
 1  | white    | 1.0
 2  | yellow   | 2.0
 3  | red      | 3.0

Then without one-hot encoding, the machine learning algorithm will assume red > yellow > white > black, which we know is not true. OneHotEncoder() helps us avoid this situation.
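The idea can be sketched in plain Python: each index is expanded into a binary indicator vector, so no artificial ordering between the colours survives (note that Spark's actual OneHotEncoder drops the last category by default, `dropLast=True`, which this simplified sketch does not do):

```python
def one_hot(index, num_categories):
    """Expand a single category index into a binary indicator vector,
    so no artificial ordering (red > yellow > ...) is implied."""
    vec = [0.0] * num_categories
    vec[int(index)] = 1.0
    return vec

colours = {"black": 0.0, "white": 1.0, "yellow": 2.0, "red": 3.0}
for name, idx in colours.items():
    print(name, one_hot(idx, len(colours)))
# black [1.0, 0.0, 0.0, 0.0]
# white [0.0, 1.0, 0.0, 0.0]
# yellow [0.0, 0.0, 1.0, 0.0]
# red [0.0, 0.0, 0.0, 1.0]
```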

So to answer your question,

Will indexed feature be considered as continuous in the algorithm?

It will be considered a continuous variable.

Is it the right approach? Or should I go ahead with one-hot encoding for categorical features?

It depends on your understanding of the data. Although Random Forest and some boosting methods don't require one-hot encoding, most ML algorithms need it.

Refer: https://spark.apache.org/docs/latest/ml-features.html#onehotencoder

叼着烟拽天下
#3 · 2019-04-12 02:30

In short, Spark's RandomForest does NOT require OneHotEncoder for categorical features created by StringIndexer or VectorIndexer.

Longer explanation: in general, decision trees can handle both ordinal and nominal types of data. However, depending on the implementation, OneHotEncoder may still be required (as it is in Python's scikit-learn).
Luckily, Spark's implementation of RandomForest honors categorical features if they are properly handled, and OneHotEncoder is NOT required! Proper handling means that the categorical feature columns carry the corresponding metadata, so RF knows what it is working with. Features created by StringIndexer or VectorIndexer carry metadata in the DataFrame marking them as generated by the indexer and as categorical.

欢心
#4 · 2019-04-12 02:39

According to vdep's answer, StringIndexer is enough for ordinal data. However, StringIndexer sorts labels by frequency, so an intended ordering such as "excellent > good > neutral > bad > very bad" may come out as "good, excellent, neutral, ...". So for ordinal data, StringIndexer does not preserve the intended order.

Secondly, for nominal data, the documentation tells us that:

for a binary classification problem with one categorical feature with three categories A, B and C whose corresponding proportions of label 1 are 0.2, 0.6 and 0.4, the categorical features are ordered as A, C, B. The two split candidates are A | C, B and A , C | B where | denotes the split.

The "corresponding proportions of label 1" is same as the label frequency? So I am confused of the feasibility with the StringInder to DecisionTree in Spark.
