Keras model gives the same output for every input

Posted 2019-08-17 23:33

I have a Keras model for predicting moves in a game. The input shape is (160, 120, 1), and the model below ends with 9 output nodes:

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Flatten
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.optimizers import Adam

def alexnet_model(n_classes=9, l2_reg=0., weights=None):
    # Initialize model (l2_reg and weights are accepted but unused here)
    alexnet = Sequential()

    # convolutional feature extractor
    alexnet.add(Conv2D(24, (11, 11), input_shape=(160, 120, 1), activation='relu'))
    alexnet.add(MaxPooling2D(pool_size=(2, 2)))
    alexnet.add(BatchNormalization())
    alexnet.add(Conv2D(36, (5, 5), activation='relu'))
    alexnet.add(MaxPooling2D(pool_size=(2, 2)))
    alexnet.add(Conv2D(48, (3, 3), activation='relu'))
    alexnet.add(Conv2D(54, (3, 3), activation='relu'))
    alexnet.add(MaxPooling2D(pool_size=(2, 2)))

    # fully connected classifier head
    alexnet.add(Flatten())
    alexnet.add(Dense(300, activation='tanh'))
    alexnet.add(Dropout(0.5))
    alexnet.add(Dense(200, activation='tanh'))
    alexnet.add(Dropout(0.5))
    alexnet.add(Dense(100, activation='tanh'))
    alexnet.add(Dropout(0.5))
    alexnet.add(Dense(n_classes, activation='softmax'))

    optimizer = Adam(lr=1e-3)
    alexnet.compile(loss='categorical_crossentropy', optimizer=optimizer)
    alexnet.summary()

    return alexnet

Then I run the training script below. My X has a shape of (12862, 160, 120, 1) and Y of (12862, 9); the held-out test_x and test_y have shapes (1000, 160, 120, 1) and (1000, 9).

import numpy as np

# alexnet_model is the function defined above
# what to start at
START_NUMBER = 60

# what to end at
hm_data = 111

# use a previous model to begin?
START_FRESH = False
WIDTH = 160
HEIGHT = 120
LR = 1e-3
EPOCHS = 1

MODEL_NAME = 'model_new.h5'
EXISTING_MODEL_NAME = ''

model = alexnet_model()

X = []
Y = []
for i in range(EPOCHS):
    # each entry of training_data_1.npy is [image, one-hot label]
    # (newer NumPy versions may need np.load(..., allow_pickle=True))
    train_data = np.load('training_data_1.npy')
    print(len(train_data))
    train = train_data[0:12862]
    test = train_data[-1000:]

    X = np.array([sample[0] for sample in train]).reshape(-1, WIDTH, HEIGHT, 1)
    Y = np.array([sample[1] for sample in train])

    test_x = np.array([sample[0] for sample in test]).reshape(-1, WIDTH, HEIGHT, 1)
    test_y = np.array([sample[1] for sample in test])
    print(X.shape)

    model.fit(X, Y, batch_size=16, epochs=10, validation_data=(test_x, test_y), verbose=1)
    model.save(MODEL_NAME)

# tensorboard --logdir=foo:C:/Users/H/Desktop/ai-gaming-phase5/log

After testing the model, I get this output:

array([[2.8518048e-01, 5.5075828e-03, 7.3730588e-02, 5.3255934e-02,
        1.0635615e-01, 6.4690344e-02, 9.1519929e-08, 7.0413840e-08,
        4.1127869e-01]], dtype=float32)

with this line of code:

model.predict(X[100].reshape(-1, 160, 120, 1))

I know it is not good practice to test the model on its own training data X, but it doesn't matter which picture I use: I always get the same output. Just for reference, these are my Y values (a decoding sketch follows the list):

w = [1,0,0,0,0,0,0,0,0]
s = [0,1,0,0,0,0,0,0,0]
a = [0,0,1,0,0,0,0,0,0]
d = [0,0,0,1,0,0,0,0,0]
wa = [0,0,0,0,1,0,0,0,0]
wd = [0,0,0,0,0,1,0,0,0]
sa = [0,0,0,0,0,0,1,0,0]
sd = [0,0,0,0,0,0,0,1,0]
nk = [0,0,0,0,0,0,0,0,1]
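As a quick sanity check, a prediction can be decoded back to a key name with np.argmax. This is a minimal sketch, not from the original post; it assumes the model and X from the training script and the label order just listed:

import numpy as np

# key names in the same order as the one-hot vectors above
KEYS = ['w', 's', 'a', 'd', 'wa', 'wd', 'sa', 'sd', 'nk']

pred = model.predict(X[100].reshape(-1, 160, 120, 1))
print(KEYS[np.argmax(pred[0])])  # the output shown above would decode to 'nk'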

I tried another model, but it still doesn't work. Here is the number of training samples for each class:

Counter({'[1, 0, 0, 0, 0, 0, 0, 0, 0]': 5000,
         '[0, 0, 0, 0, 0, 0, 0, 0, 1]': 5000,
         '[0, 0, 0, 0, 1, 0, 0, 0, 0]': 1183,
         '[0, 0, 0, 0, 0, 1, 0, 0, 0]': 982,
         '[0, 0, 1, 0, 0, 0, 0, 0, 0]': 832,
         '[0, 0, 0, 1, 0, 0, 0, 0, 0]': 764,
         '[0, 1, 0, 0, 0, 0, 0, 0, 0]': 101})

I think the problem is in the model, but I don't know how to change it. Could it be a problem of too little training data? The loss value is also not going down: loss: 1.7416 - val_loss: 1.4639. It only decreases slightly, and sometimes it even goes back up.

2 Answers

forever°为你锁心 · 2019-08-18 00:32

Solved! Normalizing the training data alone didn't work. I decreased the number of nodes and layers, and then everything worked fine. I guess it was an overfitting problem.
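For reference, here is a minimal sketch of what such a reduced architecture could look like. The exact layer sizes the poster ended up with are not given, so the numbers below are assumptions:

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Flatten
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.optimizers import Adam

def small_model(n_classes=9):
    # hypothetical downsized variant: fewer filters and fewer dense layers
    model = Sequential()
    model.add(Conv2D(16, (5, 5), input_shape=(160, 120, 1), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(32, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(64, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(n_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-3))
    return model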

兄弟一词,经得起流年. · 2019-08-18 00:37

From what your code shows, and since you mention that the loss is decreasing very slowly, my best guess is that the input data (which I take to be images) is not normalized, and this prevents a smooth gradient flow. Try normalizing it. One simple way of doing so:

# scale pixel values from [0, 255] down to [0, 1]
X = X.astype('float32') / 255.0
test_x = test_x.astype('float32') / 255.0
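Note that the same / 255.0 scaling then has to be applied to every image you later pass to model.predict, otherwise the model sees inputs on a different scale than it was trained on.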

Further, you may need to account for the class imbalance in the training data by passing the class_weight argument to the fit method (see the Keras docs for how it is used); a sketch follows.
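For example, here is a minimal sketch, assuming X, Y, test_x, test_y, and model from the training script above; inverse-frequency ("balanced") weighting is one common choice, not the only one:

import numpy as np

# integer class index of every training sample
y_indices = np.argmax(Y, axis=1)

# weight each class inversely to its frequency; classes with no samples are skipped
counts = np.bincount(y_indices, minlength=9)
class_weight = {i: len(y_indices) / (9 * c) for i, c in enumerate(counts) if c > 0}

model.fit(X, Y, batch_size=16, epochs=10,
          validation_data=(test_x, test_y),
          class_weight=class_weight)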
