I have a dataset with 510 samples for training and 127 samples for testing; each sample has 7680 features. I want to design a model that predicts the height (cm) - the label - from the training data. I tried SVM, but it gives very poor results. Could you look at my code and give me some comments? You can try it on your machine using the dataset and the runnable code below.
import numpy as np
from sklearn.svm import SVR

# Training data
train_X = np.loadtxt('trainX.txt')  # 510 x 7680
train_Y = np.loadtxt('trainY.txt')  # 510 x 1
# Test data
test_X = np.loadtxt('testX.txt')    # 127 x 7680
test_Y = np.loadtxt('testY.txt')    # 127 x 1

# RBF-kernel SVR with a large penalty parameter
my_svr = SVR(C=1000, epsilon=0.2)
my_svr.fit(train_X, train_Y)

p_regression = my_svr.predict(test_X)
print(p_regression)
print(test_Y)
Some results:
p_regression
[15.67367165 16.35094166 13.10510262 14.03943211 12.7116549 11.45071423
13.27225207 9.44959181 10.45775627 13.23953143 14.95568324 11.35994414
10.69531821 12.42556347 14.54712287 12.25965911 9.04101931 14.03604126
12.41237627 13.51951317 10.36302674 9.86389635 11.41448842 15.67146184
14.74764672 11.22794536 12.04429175 12.48199183 14.29790809 16.21724184
10.94478135 9.68210872 14.8663311 8.62974573 15.17281425 12.97230127
9.46515876 16.24388177 10.35742683 15.65336366 11.04652502 16.35094166
14.03943211 10.29066405 13.27225207 9.44959181 10.45775627 13.23953143
14.95568324 11.35994414 10.69531821 12.42556347 14.54712287 12.25965911
9.04101931 14.03604126 12.41237627 13.51951317 10.36302674 9.86389635
11.41448842 15.67146184 14.74764672 11.22794536 12.04429175 12.48199183
14.29790809 16.21724184 10.94478135 9.68210872 14.8663311 8.62974573
15.17281425 12.97230127 9.46515876 16.24388177 10.35742683 15.65336366
11.04652502 16.35094166 14.03943211 10.29066405 13.27225207 9.44959181
10.45775627 13.23953143 14.95568324 11.35994414 10.69531821 12.42556347
14.54712287 12.25965911 9.04101931 14.03604126 12.41237627 13.51951317
10.36302674 9.86389635 11.41448842 15.67146184 14.74764672 11.22794536
12.04429175 12.48199183 14.29790809 16.21724184 10.94478135 9.68210872
14.8663311 8.62974573 15.17281425 12.97230127 9.46515876 16.24388177
10.35742683 15.65336366 11.04652502 16.35094166 14.03943211 10.29066405
13.27225207 9.44959181 10.45775627 13.23953143 14.95568324 11.35994414
10.69531821]
test_Y
[13. 14. 13. 15. 15. 17. 13. 17. 16. 12. 17. 6. 4. 3. 4. 6. 6. 8.
9. 18. 3. 6. 4. 6. 7. 8. 11. 11. 13. 12. 12. 14. 13. 12. 15. 15.
16. 15. 17. 18. 17. 14. 15. 17. 13. 17. 16. 12. 17. 6. 4. 3. 4. 6.
6. 8. 9. 18. 3. 6. 4. 6. 7. 8. 11. 11. 13. 12. 12. 14. 13. 12.
15. 15. 16. 15. 17. 18. 17. 14. 15. 17. 13. 17. 16. 12. 17. 6. 4. 3.
4. 6. 6. 8. 9. 18. 3. 6. 4. 6. 7. 8. 11. 11. 13. 12. 12. 14.
13. 12. 15. 15. 16. 15. 17. 18. 17. 14. 15. 17. 13. 17. 16. 12. 17. 6.
4.]
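Rather than eyeballing the two printed arrays, the gap between predictions and labels can be quantified with an error metric. A minimal sketch (the short arrays below are placeholders for the full `p_regression` and `test_Y`):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Placeholder values; substitute p_regression and test_Y from the script above
p = np.array([15.7, 16.4, 13.1, 14.0])
y = np.array([13.0, 14.0, 13.0, 15.0])

mse = mean_squared_error(y, p)  # average squared error
mae = mean_absolute_error(y, p)  # average absolute error, in cm
print(f"MSE: {mse:.3f}, MAE: {mae:.3f}")
```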
Here is a similar approach. We will split the data into train and test sets. The train set will be used for tuning hyperparameters and for fitting different models. Then we will choose the best model (in terms of MSE) and predict values for the test set. All trained (fitted) models will be saved as pickle files, so they can be loaded later using joblib.load().
Output:
Code:
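(The original code from this answer is not reproduced above; what follows is only a minimal sketch of the workflow it describes - split, tune with GridSearchCV, persist with joblib - using synthetic stand-in data and a hypothetical parameter grid, since the real trainX.txt / trainY.txt files are not available here.)

```python
import numpy as np
import joblib
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in data (smaller than the real 510 x 7680 matrix)
rng = np.random.RandomState(0)
X = rng.randn(200, 50)
y = X[:, 0] * 3 + rng.randn(200) * 0.1

# Hold out a test set; tune hyperparameters on the training portion only
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Scaling matters a lot for SVR; the grid values here are illustrative
pipe = make_pipeline(StandardScaler(), SVR())
param_grid = {"svr__C": [1, 10, 100], "svr__epsilon": [0.01, 0.1]}
search = GridSearchCV(pipe, param_grid, scoring="neg_mean_squared_error", cv=3)
search.fit(X_train, y_train)

# Persist the best model as a pickle file and reload it with joblib
joblib.dump(search.best_estimator_, "best_svr.pkl")
model = joblib.load("best_svr.pkl")
test_mse = np.mean((model.predict(X_test) - y_test) ** 2)
print("test MSE:", test_mse)
```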
I agree with @George - "there is something "wrong" with the test set". I got similar MSE results - approx. 21. I also tried putting the train and test datasets together and feeding them to GridSearchCV.
Here are the results of those attempts:
Also, different splits give very different test scores:
Here is the full code:
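To illustrate the point about split sensitivity, here is a small sketch (on synthetic stand-in data, not the author's dataset) that refits the same model on several random splits and compares test MSEs:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

# Synthetic stand-in data; the point is only to show split-to-split variance
rng = np.random.RandomState(42)
X = rng.randn(120, 30)
y = X[:, 0] + rng.randn(120) * 0.5

scores = []
for seed in range(5):
    # A new random split each iteration; only random_state changes
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed)
    model = make_pipeline(StandardScaler(), SVR(C=10)).fit(X_tr, y_tr)
    scores.append(mean_squared_error(y_te, model.predict(X_te)))

print("MSE per split:", [round(s, 3) for s in scores])
print("spread:", round(max(scores) - min(scores), 3))
```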
PS Sorry for using the name classifier instead of regressor - I just reused my old code, where I was searching for the best classifier.

Judging by your dataset, your feature dimensionality is too high. It is better to use a feature-grouping (dimensionality-reduction) algorithm before you start processing with SVM.
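As one concrete way to do that (the specific choice of PCA and the component count are assumptions, not part of the original answer), the features can be projected to a handful of principal components before the SVR is fitted:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for a 510 x 7680 matrix (smaller so it runs quickly)
rng = np.random.RandomState(0)
X = rng.randn(100, 500)
y = X[:, :5].sum(axis=1) + rng.randn(100) * 0.1

# Scale, reduce 500 features to 20 principal components, then fit the SVR
model = make_pipeline(StandardScaler(), PCA(n_components=20), SVR(C=10))
model.fit(X, y)
print("train predictions shape:", model.predict(X).shape)
```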