Handling unseen categorical variables and MaxBins

Posted 2019-08-08 20:24

Question:

Below is the code I have for a RandomForest multiclass-classification model. I am reading from a CSV file and applying various transformations, as shown in the code.

  1. I am calculating the maximum number of categories and then passing it as the maxBins parameter to RF. This takes a lot of time! Is there a parameter to set, or an easier way, to make the model infer the maximum number of categories automatically? It can exceed 1000 and I cannot omit those columns.

  2. How do I handle unseen labels in new data at prediction time, since StringIndexer will not work in that case? The code below only splits the existing data, but I will be introducing new data in the future.

    // Imports needed by the snippet below
    import org.apache.spark.ml.{Pipeline, PipelineStage}
    import org.apache.spark.ml.classification.RandomForestClassifier
    import org.apache.spark.ml.feature.{IndexToString, StringIndexer, VectorAssembler}
    
    // Need to predict 2 classes
    val cols_to_predict = Array("Label1", "Label2")
    
    // ID col
    val omit_cols=Array("Key")
    
    // reading the csv file
    val data = sqlContext.read
    .format("com.databricks.spark.csv")
    .option("header", "true") // Use first line of all files as header
    .option("inferSchema", "true") // Automatically infer data types
    .load("abc.csv")
    .cache()
    
    // creating a features DF by dropping the labels so that I can run all
    // the cols through StringIndexer
    val features = data.drop("Label1").drop("Label2").drop("Key")
    
    // Since I do not know the maximum number of categories in advance,
    // I compute it and use it for the maxBins parameter in RF
    val distinct_col_counts = features.columns.map(x => data.select(x).distinct().count).max
    
    val transformers: Array[org.apache.spark.ml.PipelineStage] = features.columns.map(
      cname => new StringIndexer().setInputCol(cname).setOutputCol(s"${cname}_index").fit(features)
    )
    val assembler  = new VectorAssembler()
      .setInputCols(features.columns.map(cname => s"${cname}_index"))
      .setOutputCol("features")
    
    // Index the label columns under new names so the output columns
    // do not collide with the original Label1/Label2 columns
    val labelIndexer2 = new StringIndexer()
      .setInputCol("Label2")
      .setOutputCol("Label2_index")
      .fit(data)
    
    val labelIndexer1 = new StringIndexer()
      .setInputCol("Label1")
      .setOutputCol("Label1_index")
      .fit(data)
    
    val rf = new RandomForestClassifier()
      .setLabelCol("Label1_index") // train on the indexed label
      .setFeaturesCol("features")
      .setNumTrees(100)
      .setMaxBins(distinct_col_counts.toInt)
    
    val labelConverter = new IndexToString()
      .setInputCol("prediction")
      .setOutputCol("predictedLabel")
      .setLabels(labelIndexer1.labels)
    
    // Split into train and test
    val Array(trainingData, testData) = data.randomSplit(Array(0.7, 0.3))
    trainingData.cache()
    testData.cache()
    
    // Running only for one label for now Label1
    val stages: Array[org.apache.spark.ml.PipelineStage] = transformers :+ labelIndexer1 :+ assembler :+ rf :+ labelConverter // :+ labelIndexer2
    
    val pipeline=new Pipeline().setStages(stages)
    val model=pipeline.fit(trainingData)
    val predictions = model.transform(testData)
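
For question 1, one way to speed up the max-categories computation is to collect the distinct counts for all columns in a single aggregation, so Spark scans the data once instead of running one `distinct().count` job per column. A sketch, assuming the same `features` DataFrame as above and Spark 2.x (`approx_count_distinct` trades a little accuracy for speed; substitute `countDistinct` if the exact maximum matters):

```scala
import org.apache.spark.sql.functions.approx_count_distinct

// One pass over the data: an (approximate) distinct count per feature column
val countsRow = features
  .agg(approx_count_distinct(features.columns.head),
       features.columns.tail.map(c => approx_count_distinct(c)): _*)
  .head()

// The largest cardinality drives maxBins
val maxCategories = (0 until countsRow.length).map(countsRow.getLong).max

rf.setMaxBins(maxCategories.toInt)
```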
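
For question 2, `StringIndexer` has a `handleInvalid` parameter: `"skip"` drops rows with unseen values, and `"keep"` (available since Spark 2.2) maps every unseen value to one extra index instead of throwing. A sketch of the transformer construction with this option, using the same column names as above:

```scala
import org.apache.spark.ml.PipelineStage
import org.apache.spark.ml.feature.StringIndexer

val transformers: Array[PipelineStage] = features.columns.map { cname =>
  new StringIndexer()
    .setInputCol(cname)
    .setOutputCol(s"${cname}_index")
    .setHandleInvalid("keep") // unseen categories map to one extra index
    .fit(features)
}
```

Note that with `"keep"` each indexed column gains one extra bucket for the unseen values, so maxBins should be at least the maximum cardinality plus one.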