How to save and load an MLlib model in Apache Spark

Posted 2019-01-23 08:37

I trained a classification model in Apache Spark (using pyspark); the trained model is a LogisticRegressionModel object. Now I want to make predictions on new data, so I would like to store the model and read it back into a new program to make those predictions. Any idea how to store the model? I'm thinking of maybe pickle, but I'm a newbie to both Python and Spark, so I'd like to hear what the community thinks.

UPDATE: I also needed a decision tree classifier. To load it back, I had to import DecisionTreeModel: from pyspark.mllib.tree import DecisionTree, DecisionTreeModel
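For completeness, a minimal sketch of the same save/load round trip for a decision tree model. It assumes a running SparkContext sc and an RDD of LabeledPoint called training_data; the path, parameters, and variable names here are only illustrative.

from pyspark.mllib.tree import DecisionTree, DecisionTreeModel

# train a small decision tree classifier (parameters are placeholders)
dt_model = DecisionTree.trainClassifier(training_data, numClasses=2, categoricalFeaturesInfo={})

# persist it to disk (or HDFS), then load it back in another program
dt_model.save(sc, "dt_model.model")
same_dt_model = DecisionTreeModel.load(sc, "dt_model.model")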

1 Answer
Luminary・发光体
Answered · 2019-01-23 09:10

You can save your model by using the save method that MLlib models provide.

# let lrm be a LogisticRegressionModel
lrm.save(sc, "lrm_model.model")

After storing it, you can load it back in another application.

from pyspark.mllib.classification import LogisticRegressionModel

sameModel = LogisticRegressionModel.load(sc, "lrm_model.model")
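For reference, a minimal end-to-end sketch (train, save, load, predict), assuming only a running SparkContext sc; the toy data, path, and variable names are illustrative.

from pyspark.mllib.classification import LogisticRegressionWithLBFGS, LogisticRegressionModel
from pyspark.mllib.regression import LabeledPoint

# toy training set: each LabeledPoint is a label plus a feature vector
data = sc.parallelize([
    LabeledPoint(0.0, [0.0, 1.0]),
    LabeledPoint(1.0, [1.0, 0.0]),
])

# train, persist, reload, and predict on a new feature vector
lrm = LogisticRegressionWithLBFGS.train(data)
lrm.save(sc, "lrm_model.model")
sameModel = LogisticRegressionModel.load(sc, "lrm_model.model")
print(sameModel.predict([1.0, 0.0]))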

As @zero323 stated before, there is another way to achieve this, which is to use the Predictive Model Markup Language (PMML).

PMML is an XML-based file format developed by the Data Mining Group to provide a way for applications to describe and exchange models produced by data mining and machine learning algorithms.
