If using a library like scikit-learn, how do I assign more weight to certain features in the input to a classifier like SVM? Is this something people do, or is there another solution to my problem?
First of all - you probably should not do it. The whole point of machine learning is to use statistical analysis to assign optimal weights. By weighting features manually you are interfering with that process, so you need really strong evidence that this emphasis is crucial to whatever you are modeling, and that for some reason your model is currently missing it.
That being said - there is no general answer; it is purely model specific. Some models will let you weight features directly. In a random forest you could bias the distribution from which candidate features are sampled at each split towards the ones you are interested in. In an SVM it is enough to multiply a given feature by a constant - remember being told to normalize your features for SVM? This is why: the scale of a feature 'steers' the classifier, and features with larger values are preferred. The same trick works for any weight norm-regularized model (regularized logistic regression, ridge regression, lasso, etc.); see the sketch below.
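For the SVM case, here is a minimal sketch of the scaling trick. It assumes the Iris toy dataset for illustration, and the `feature_weights` array of per-feature multipliers is a hypothetical knob you pick yourself - it is not a scikit-learn parameter:

```python
# Sketch: "steering" an SVM by rescaling features after normalization.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardize first so every feature starts on an equal footing.
scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

# Then multiply the features you want emphasized by a constant > 1.
# Here feature 2 gets 3x the scale of the others (an arbitrary choice).
feature_weights = np.array([1.0, 1.0, 3.0, 1.0])
clf = SVC(kernel="linear").fit(X_train_s * feature_weights, y_train)
print(clf.score(X_test_s * feature_weights, y_test))
```

Note that the multipliers are applied to both training and test data, and only after ordinary standardization, so the rescaling encodes a deliberate preference rather than an artifact of the raw units.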