scikit-learn - multinomial logistic regression with probabilities as a target variable

Posted 2019-07-29 06:12

I'm implementing a multinomial logistic regression model in Python using scikit-learn. However, I'd like to use a probability distribution over the classes as my target variable. As an example, say the target is a 3-class variable that looks as follows:

   class_1  class_2  class_3
0      0.0      0.0      1.0
1      1.0      0.0      0.0
2      0.0      0.5      0.5
3      0.2      0.3      0.5
4      0.5      0.1      0.4

The values in every row sum to 1.
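
For concreteness, a target like this can be built as a pandas DataFrame (a hypothetical reconstruction of the table above, not code from the question):

import numpy as np
import pandas as pd

# Hypothetical reconstruction of the target table above: each row is a
# probability distribution over the three classes.
probabilities = pd.DataFrame({
    'class_1': [0.0, 1.0, 0.0, 0.2, 0.5],
    'class_2': [0.0, 0.0, 0.5, 0.3, 0.1],
    'class_3': [1.0, 0.0, 0.5, 0.5, 0.4],
})

assert np.allclose(probabilities.sum(axis=1), 1.0)  # every row sums to 1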

How could I fit a model like this? When I try:

from sklearn.linear_model import LogisticRegression

model = LogisticRegression(solver='saga', multi_class='multinomial')
model.fit(X, probabilities)

I get an error saying:

ValueError: bad input shape (10000, 3)

I know this is related to the fact that fit expects a vector of labels, not a matrix. But I can't collapse the probabilities matrix into a single label vector, since the classes are not exclusive.

2 Answers
Evening l夕情丶
Answer 2 · 2019-07-29 06:49

You need to fit the model with hard class labels alongside the training data; the logistic regression model will then give you probabilities in return when you call predict_proba(X), which returns a matrix of shape [n_samples, n_classes]. If you just use predict(X), it returns an array of the most probable class per sample, of shape [n_samples,].
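
A minimal sketch of that workflow, assuming hypothetical toy data (X, y, and the solver settings below are placeholders, not part of the original answer):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: 100 samples, 4 features, hard labels in {0, 1, 2}.
rng = np.random.RandomState(0)
X = rng.randn(100, 4)
y = rng.randint(0, 3, size=100)

model = LogisticRegression(solver='saga', multi_class='multinomial')
model.fit(X, y)                  # fit on one hard label per sample

proba = model.predict_proba(X)   # shape (n_samples, n_classes)
labels = model.predict(X)        # shape (n_samples,): most probable class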

女痞
Answer 3 · 2019-07-29 07:07

You can't use cross-entropy loss with non-indicator (soft) probability targets in scikit-learn; this is not implemented and not supported by the API. It is a limitation of scikit-learn.

For logistic regression you can approximate it by upsampling instances according to the probabilities of their labels. For example, you can upsample every instance 10x: if a training instance has probability 0.2 for class 1 and probability 0.8 for class 2, generate 10 training instances from it: 8 with class 2 and 2 with class 1. It won't be as efficient as a native implementation, but in the limit you'll be optimizing the same objective function.

You can do something like this:

import numpy as np
from sklearn.utils import check_random_state

def expand_dataset(X, y_proba, factor=10, random_state=None):
    """
    Convert a dataset with float multiclass probabilities to a dataset
    with indicator (hard) labels by duplicating X rows and sampling
    true labels from each row's probability distribution.
    """
    rng = check_random_state(random_state)
    n_classes = y_proba.shape[1]
    classes = np.arange(n_classes, dtype=int)
    for x, probs in zip(X, y_proba):
        # Draw `factor` hard labels for this row, weighted by its class
        # probabilities, and yield one (x, label) pair per draw.
        for label in rng.choice(classes, size=factor, p=probs):
            yield x, label

See a more complete example here: https://github.com/TeamHG-Memex/eli5/blob/8cde96878f14c8f46e10627190abd9eb9e705ed4/eli5/lime/utils.py#L16
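
For completeness, a hedged usage sketch under the question's setup (X and probabilities stand in for the question's feature matrix and soft-label matrix; both are converted to numpy arrays because expand_dataset iterates over rows, and iterating a pandas DataFrame yields column names instead):

import numpy as np
from sklearn.linear_model import LogisticRegression

pairs = expand_dataset(np.asarray(X), np.asarray(probabilities),
                       factor=10, random_state=0)
X_expanded, y_expanded = map(np.asarray, zip(*pairs))

model = LogisticRegression(solver='saga', multi_class='multinomial')
model.fit(X_expanded, y_expanded)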

Alternatively, you can implement your logistic regression using libraries like TensorFlow or PyTorch; unlike scikit-learn, these frameworks make it easy to define any loss, and cross-entropy is available out of the box.
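
For illustration, here is a minimal PyTorch sketch (dimensions, tensor names, and training settings are assumptions for demonstration): logistic regression is a single linear layer, and the cross-entropy against a full target distribution is written out directly.

import torch

# Hypothetical dimensions: 4 features, 3 classes as in the question.
n_features, n_classes = 4, 3
model = torch.nn.Linear(n_features, n_classes)  # multinomial logistic regression
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

# Placeholder data: X_t is (n_samples, n_features); y_proba_t is
# (n_samples, n_classes) with rows summing to 1, like the question's target.
X_t = torch.randn(100, n_features)
y_proba_t = torch.softmax(torch.randn(100, n_classes), dim=1)

for _ in range(200):
    optimizer.zero_grad()
    log_probs = torch.log_softmax(model(X_t), dim=1)
    # Cross-entropy with soft targets: -sum_k p_k * log(q_k), averaged.
    loss = -(y_proba_t * log_probs).sum(dim=1).mean()
    loss.backward()
    optimizer.step()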
