SGD Classifier partial_fit with a different number of features

Published 2019-07-31 04:28

I am trying to perform SGD classification on one-hot encoded data. I did a fit() on my training set and want to call partial_fit() on a smaller batch of data at a later time. I understand the error is thrown because of the dimension change between the fit() data and the partial_fit() data.

I also understand that I need to transform my hot_new_df the same way, but I am unsure how.

In [29] is where I call fit()

In [32] is where I call partial_fit()

I have just presented a hypothetical example here; my actual data has about 40K rows and ~200 columns.

Jupyter QtConsole 4.3.1
Python 3.6.2 |Anaconda custom (64-bit)| (default, Sep 21 2017, 18:29:43) 
Type 'copyright', 'credits' or 'license' for more information
IPython 6.1.0 -- An enhanced Interactive Python. Type '?' for help.

In [27]: import pandas as pd
    ...: 
    ...: input_df = pd.DataFrame(dict(fruit=['Apple', 'Orange', 'Pine'], 
    ...:                              color=['Red', 'Orange','Green'],
    ...:                              is_sweet = [0,0,1],
    ...:                              country=['USA','India','Asia'],
    ...:                              is_valid = ['Valid', 'Valid', 'Invalid']))
    ...: input_df
Out[27]: 
    color country   fruit  is_sweet is_valid
0     Red     USA   Apple         0    Valid
1  Orange   India  Orange         0    Valid
2   Green    Asia    Pine         1  Invalid

In [28]: hot_df = pd.get_dummies(input_df, columns=['fruit','color','country'])
    ...: hot_df
Out[28]: 
   is_sweet is_valid  fruit_Apple  fruit_Orange  fruit_Pine  color_Green  \
0         0    Valid            1             0           0            0   
1         0    Valid            0             1           0            0   
2         1  Invalid            0             0           1            1   

   color_Orange  color_Red  country_Asia  country_India  country_USA  
0             0          1             0              0            1  
1             1          0             0              1            0  
2             0          0             1              0            0  

In [29]: from sklearn.linear_model import SGDClassifier
    ...: from sklearn.model_selection import train_test_split
    ...: 
    ...: X_train, X_test, y_train, y_test = train_test_split(hot_df.drop(['is_valid'], axis=1),
    ...:                                                     hot_df['is_valid'],
    ...:                                                     test_size=0.1)
    ...: clf = SGDClassifier(loss="log", penalty="l2")
    ...: clf.fit(X_train, y_train)
    ...: clf
/Users/praj3/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/stochastic_gradient.py:84: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.stochastic_gradient.SGDClassifier'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
  "and default tol will be 1e-3." % type(self), FutureWarning)
Out[29]: 
SGDClassifier(alpha=0.0001, average=False, class_weight=None, epsilon=0.1,
       eta0=0.0, fit_intercept=True, l1_ratio=0.15,
       learning_rate='optimal', loss='log', max_iter=5, n_iter=None,
       n_jobs=1, penalty='l2', power_t=0.5, random_state=None,
       shuffle=True, tol=None, verbose=0, warm_start=False)

In [30]: new_df = pd.DataFrame(dict(fruit=['Banana'],
    ...:                            color=['Red'],
    ...:                            is_sweet=[1],
    ...:                            country=['India'],
    ...:                            is_valid=['Invalid']))
    ...: new_df
Out[30]: 
  color country   fruit  is_sweet is_valid
0   Red   India  Banana         1  Invalid

In [31]: hot_new_df = pd.get_dummies(new_df, columns=['fruit','color','country'])
    ...: hot_new_df
Out[31]: 
   is_sweet is_valid  fruit_Banana  color_Red  country_India
0         1  Invalid             1          1              1

In [32]: clf.partial_fit(hot_new_df.drop(['is_valid'], axis=1), hot_new_df['is_valid'])
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-32-088a54ade6f8> in <module>()
----> 1 clf.partial_fit(hot_new_df.drop(['is_valid'], axis=1), hot_new_df['is_valid'])

~/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/stochastic_gradient.py in partial_fit(self, X, y, classes, sample_weight)
    543                                  learning_rate=self.learning_rate, max_iter=1,
    544                                  classes=classes, sample_weight=sample_weight,
--> 545                                  coef_init=None, intercept_init=None)
    546 
    547     def fit(self, X, y, coef_init=None, intercept_init=None,

~/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/stochastic_gradient.py in _partial_fit(self, X, y, alpha, C, loss, learning_rate, max_iter, classes, sample_weight, coef_init, intercept_init)
    381         elif n_features != self.coef_.shape[-1]:
    382             raise ValueError("Number of features %d does not match previous "
--> 383                              "data %d." % (n_features, self.coef_.shape[-1]))
    384 
    385         self.loss_function_ = self._get_loss_function(loss)

ValueError: Number of features 4 does not match previous data 10.

In [33]: 

1 Answer

Luminary・发光体 · 2019-07-31 04:32

You should use the sklearn.preprocessing.OneHotEncoder. The documentation for this can be found here.

Do your train_test_split before encoding; usage will then be something like this:

from sklearn.preprocessing import OneHotEncoder

# handle_unknown='ignore' lets later batches contain categories
# (e.g. 'Banana') that were not present when the encoder was fit;
# they simply encode as all-zero columns instead of raising an error
encoder = OneHotEncoder(handle_unknown='ignore')
encoder.fit(X_train)

X_train = encoder.transform(X_train)
X_test = encoder.transform(X_test)
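A minimal end-to-end sketch of this approach, using the toy data from the question. This assumes scikit-learn ≥ 0.20 (where OneHotEncoder accepts string columns and `handle_unknown='ignore'`); the `encode` helper is just for illustration, and the default hinge loss is used instead of the question's `loss='log'` (renamed `'log_loss'` in newer releases):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import OneHotEncoder

train_df = pd.DataFrame(dict(fruit=['Apple', 'Orange', 'Pine'],
                             color=['Red', 'Orange', 'Green'],
                             is_sweet=[0, 0, 1],
                             country=['USA', 'India', 'Asia'],
                             is_valid=['Valid', 'Valid', 'Invalid']))
X_train = train_df.drop(['is_valid'], axis=1)
y_train = train_df['is_valid']

# Fit the encoder ONCE on the categorical columns; categories never
# seen during fit will later encode as all zeros instead of erroring.
cat_cols = ['fruit', 'color', 'country']
encoder = OneHotEncoder(handle_unknown='ignore')
encoder.fit(X_train[cat_cols])

def encode(df):
    # Keep the numeric column as-is, one-hot encode the categoricals,
    # so every batch has the same 10 features (1 numeric + 9 dummies)
    cats = encoder.transform(df[cat_cols]).toarray()
    return np.hstack([df[['is_sweet']].to_numpy(), cats])

clf = SGDClassifier(penalty='l2')
clf.fit(encode(X_train), y_train)

# A later batch with an unseen fruit still encodes to the same width,
# so partial_fit no longer raises the "Number of features" ValueError.
new_df = pd.DataFrame(dict(fruit=['Banana'], color=['Red'],
                           is_sweet=[1], country=['India'],
                           is_valid=['Invalid']))
X_new = encode(new_df.drop(['is_valid'], axis=1))
clf.partial_fit(X_new, new_df['is_valid'])
```

The key design point is that the encoder is fit exactly once and only transform() is called on later batches, which pins the feature space to what the classifier saw at fit() time.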

I hope this helps!
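As an aside (not part of the original answer): if you want to stay with pd.get_dummies, you can also align a new batch to the training columns with DataFrame.reindex, which adds missing dummy columns as 0 and drops columns the model never saw:

```python
import pandas as pd

input_df = pd.DataFrame(dict(fruit=['Apple', 'Orange', 'Pine'],
                             color=['Red', 'Orange', 'Green'],
                             is_sweet=[0, 0, 1],
                             country=['USA', 'India', 'Asia']))
# Remember the column layout produced at training time
train_cols = pd.get_dummies(input_df, columns=['fruit', 'color', 'country']).columns

new_df = pd.DataFrame(dict(fruit=['Banana'], color=['Red'],
                           is_sweet=[1], country=['India']))
hot_new = pd.get_dummies(new_df, columns=['fruit', 'color', 'country'])

# Missing training columns are filled with 0; unseen columns
# (e.g. fruit_Banana) are dropped, matching the fit() feature space
aligned = hot_new.reindex(columns=train_cols, fill_value=0)
```

This is simpler for quick experiments, but the OneHotEncoder approach above is the more robust choice for a real pipeline.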
