I have 7 classes that need to be classified and I have 10 features. Is there an optimal value of k that I should use in this case, or do I have to run KNN for a range of values of k (say, between 1 and 10) and determine the best value empirically?
In addition to the article I posted in the comments, there is this one as well, which suggests:
It's going to depend a lot on your individual case; sometimes it is best to run through each possible value of k and decide for yourself.
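That sweep over candidate values of k can be sketched with scikit-learn's `cross_val_score` (the dataset below is synthetic and merely stands in for your 7-class, 10-feature data; the range of k values tried is an arbitrary choice):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in: 7 classes, 10 features, as in the question.
X, y = make_classification(n_samples=700, n_features=10, n_informative=8,
                           n_classes=7, random_state=0)

# Try each candidate k and record its mean cross-validated accuracy.
scores = {}
for k in range(1, 21):
    clf = KNeighborsClassifier(n_neighbors=k)
    scores[k] = cross_val_score(clf, X, y, cv=5).mean()

# Keep the k with the best score.
best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))
```

On your real data you would replace the synthetic `X, y` with your own feature matrix and labels; the loop itself is unchanged.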
An important thing to note about the k-NN algorithm is that the number of features and the number of classes play no part in determining the value of k. k-NN is an instance-based classifier that labels test data using a distance metric: a test sample is assigned to Class-1 if more of its nearest training samples belong to Class-1 than to any other class. For example, if k = 5, the 5 closest training samples are selected according to the distance metric, and a majority vote over their class labels is taken. If 3 of those samples belong to Class-1 and 2 belong to Class-5, the test sample is classified as Class-1. So the value of k is simply the number of training samples consulted when classifying a test sample.
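The voting just described can be sketched in a few lines of NumPy (the training points and labels below are made up so that the 3-votes-versus-2 example works out exactly):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=5):
    """Classify x by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance metric
    nearest = np.argsort(dists)[:k]               # indices of the k closest samples
    votes = Counter(y_train[nearest])             # tally votes per class
    return votes.most_common(1)[0][0]

# Toy data: the 5 nearest neighbours of the test point are 3 samples of
# class 1 and 2 samples of class 5, so the vote yields class 1.
X_train = np.array([[0.0], [0.1], [0.2], [0.9], [1.0], [5.0], [6.0]])
y_train = np.array([1, 1, 1, 5, 5, 2, 2])
print(knn_predict(X_train, y_train, np.array([0.0])))  # → 1
```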
Coming to your question: k-NN is non-parametric, and a common rule of thumb for choosing k is k = sqrt(N)/2, where N is the number of samples in your training dataset. Another tip is to keep the value of k odd, so that majority voting is less likely to tie between classes. If ties still occur frequently, that points to the classes overlapping heavily in feature space, and a simple classification algorithm such as k-NN will give poor classification performance.
In KNN, finding the value of k is not easy. A small value of k means that noise will have a higher influence on the result, while a large value makes it computationally expensive.
Data scientists usually choose:
1. An odd value of k if the number of classes is 2.
2. Another simple approach: set k = sqrt(n), where n is the number of data points in the training data.
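The two heuristics above can be combined in a small helper (a sketch; rounding up to the nearest odd integer is my assumption, not a standard rule):

```python
import math

def heuristic_k(n_samples):
    """Rule-of-thumb starting k: sqrt(n), nudged to the nearest odd integer.

    Note: this is only a starting point; cross-validation should still be
    used to confirm the choice on the actual dataset.
    """
    k = max(1, round(math.sqrt(n_samples)))
    return k if k % 2 == 1 else k + 1  # odd k reduces the chance of tied votes

print(heuristic_k(700))  # sqrt(700) ≈ 26.5 → 27
```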
Hope this will help you.