I'm using scikit-learn to cluster text documents, specifically the CountVectorizer, TfidfTransformer and MiniBatchKMeans classes.
New text documents are added to the system all the time, which means that I need to use the classes above to transform the text and predict a cluster. My question is: how should I store the data on disk?
Should I simply pickle the vectorizer, transformer and kmeans objects?
Should I just save the data? If so, how do I add it back to the vectorizer, transformer and kmeans objects?
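For reference, here is a simplified sketch of the flow I have in mind (the corpus, n_clusters and other parameters are just placeholders):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.cluster import MiniBatchKMeans

docs = ["first document", "second document"]  # placeholder corpus

vectorizer = CountVectorizer()
transformer = TfidfTransformer()
kmeans = MiniBatchKMeans(n_clusters=10)

# Fit everything on the current corpus.
counts = vectorizer.fit_transform(docs)
tfidf = transformer.fit_transform(counts)
kmeans.fit(tfidf)

# For each new document that arrives: transform and predict a cluster.
new_tfidf = transformer.transform(vectorizer.transform(["a brand new document"]))
cluster_id = kmeans.predict(new_tfidf)
```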
Any help would be greatly appreciated.
It depends on what you want to do.
If you want to find some fixed cluster centers on a training set and then re-use them later to compute cluster assignments for new data, then pickling the models (or just saving the vocabulary of the vectorizer, the other models' constructor parameters and the cluster center positions) is fine.
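A minimal sketch of both options, assuming `vectorizer`, `transformer` and `kmeans` are already fitted (file names are placeholders, and the manual reload assumes default TfidfTransformer settings):

```python
import pickle

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import pairwise_distances_argmin
from sklearn.preprocessing import normalize

# Option A: pickle the fitted estimators directly.
with open("text_clustering.pkl", "wb") as f:
    pickle.dump((vectorizer, transformer, kmeans), f)

# Option B: persist only the fitted state you actually need.
state = {
    "vocabulary": vectorizer.vocabulary_,  # term -> column index
    "idf": transformer.idf_,               # per-feature idf weights
    "centers": kmeans.cluster_centers_,    # fixed centroids
}
with open("clustering_state.pkl", "wb") as f:
    pickle.dump(state, f)

# Later: rebuild a vectorizer around the fixed vocabulary and assign new
# documents to the nearest stored centroid.
vec = CountVectorizer(vocabulary=state["vocabulary"])
counts = vec.transform(["a brand new document"])
tfidf = normalize(counts.multiply(state["idf"]))  # counts * idf, then L2 norm
labels = pairwise_distances_argmin(tfidf, state["centers"])
```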
If instead you want the clustering itself to incorporate the new data, you should retrain the whole pipeline on the union of the new and the old data, so that the vectorizer's vocabulary can grow new features (dimensions) for the new words and the clustering algorithm can find cluster centers that better match the structure of the complete dataset.
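A sketch of that retraining route, assuming `old_docs` and `new_docs` are plain lists of strings and `n_clusters` is a placeholder:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.cluster import MiniBatchKMeans

all_docs = old_docs + new_docs  # refit on the complete corpus

vectorizer = CountVectorizer()
transformer = TfidfTransformer()
kmeans = MiniBatchKMeans(n_clusters=10)

# The vocabulary (and hence the feature space) is rebuilt from scratch,
# and the cluster centers are re-estimated on all documents.
tfidf = transformer.fit_transform(vectorizer.fit_transform(all_docs))
labels = kmeans.fit_predict(tfidf)
```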
Note that in the future we will provide hashing vectorizers (see for instance this pull request on hashing transformers as a first building block), so storing the vocabulary won't be necessary any more (but you will lose the ability to introspect the "meaning" of the feature dimensions).
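For illustration, here is a sketch of how the stateless hashing route looks with the HashingVectorizer that has since been added to sklearn.feature_extraction.text (the parameters shown are illustrative):

```python
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.cluster import MiniBatchKMeans

# The hasher is stateless: there is no vocabulary_ to save, and a fresh
# instance built with the same parameters maps new documents to the same columns.
hasher = HashingVectorizer(alternate_sign=False, norm=None)
transformer = TfidfTransformer()
kmeans = MiniBatchKMeans(n_clusters=10)  # n_clusters is a placeholder

docs = ["first document", "second document"]  # placeholder corpus
tfidf = transformer.fit_transform(hasher.transform(docs))
kmeans.fit(tfidf)

# Only transformer and kmeans carry fitted state worth persisting.
```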
As for pickling the models vs. using your own representation for their parameters, I answered that part in your previous question here: Persist Tf-Idf data
Yeah, I think the general answer with sk-learn is to pickle and pray.
It seems to me that this is super fragile, compared to having a documented serialization format that doesn't depend on implementation details. But maybe they know this, and won't make backwards-incompatible changes to their classes?