How to train a CNN on a mix of images and data using ImageAugmentation in TFLearn

Posted 2019-05-13 19:59

I would like to train a convolutional neural network in TFLearn/TensorFlow on a mix of images (pixel data) and additional non-image data. Because I only have a small number of images, I need to use ImageAugmentation to increase the number of image samples passed to the network. But that means I can only feed image data as the input, and I have to add the non-image data at a later stage, presumably before the fully connected layer. I can't work out how to do this: it seems I can only tell the network which data to use when I call model.fit({'input': ...}), and I can't pass a concatenation of both types of data there, because input_data feeds directly into the image augmentation. Is there any concatenation I can do mid-network to add the extra data, or any other alternative that lets me use ImageAugmentation together with the non-image data I need to train the network? My code, with some comments, is below. Many thanks.

import numpy as np
import tensorflow as tf
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
from tflearn.data_augmentation import ImageAugmentation

# px_train: pixel data, data_train: additional non-image data
px_train, data_train, px_cv, data_cv, labels_train, labels_cv = prepare_data(path, filename)

img_aug = ImageAugmentation()
img_aug.add_random_flip_leftright()
img_aug.add_random_rotation(max_angle = 89.)
img_aug.add_random_blur(sigma_max=3.)
img_aug.add_random_flip_updown()
img_aug.add_random_90degrees_rotation(rotations = [0, 1, 2, 3])

#I can only pass image data here to apply data_augmentation 
convnet = input_data(shape = [None, 96, 96, 1], name = 'input', data_augmentation = img_aug)

convnet = conv_2d(convnet, 32, 2, activation = 'relu')
convnet = max_pool_2d(convnet, 2)                                   

convnet = conv_2d(convnet, 64, 2, activation = 'relu')
convnet = max_pool_2d(convnet, 2)                                   

convnet = tf.reshape(convnet, [-1, 24*24*64])    
#convnet = tf.concat((convnet, conv_feat), 1)
#If I concatenated data like above, where could I tell Tensorflow to assign the variable conv_feat to my 'data_train' values?

convnet = fully_connected(convnet, 1024, activation = 'relu')
convnet = dropout(convnet, 0.8)

convnet = fully_connected(convnet, 99, activation = 'softmax')
convnet = regression(convnet, optimizer = 'adam', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'labels')

model = tflearn.DNN(convnet)

# I can't add an additional 'input' key here to pass my 'data_train'; TFLearn gives an error.
model.fit({'input': np.array(px_train).reshape(-1, 96, 96, 1)}, {'labels': labels_train}, n_epoch = 50, validation_set = ({'input': np.array(px_cv).reshape(-1, 96, 96, 1)}, {'labels': labels_cv}), snapshot_step = 500, show_metric = True, run_id = 'Test')

1 Answer

何必那么认真 · 2019-05-13 20:38

If you look at the documentation for the model.fit method (http://tflearn.org/models/dnn/), you'll see that to give multiple inputs to model.fit you just need to pass them as a list, i.e. model.fit([X1, X2], Y). That way X1 is fed to the first input_data layer you have and X2 is fed to the second input_data layer.

If you are looking to concatenate different layers, take a look at the merge layer in TFLearn: http://tflearn.org/layers/merge_ops/

Edit 1:

I think the following code should run, though you may want to merge your layers in a different way than I do here.

import numpy as np
import tensorflow as tf
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
from tflearn.layers.merge_ops import merge
from tflearn.data_augmentation import ImageAugmentation

img_aug = ImageAugmentation()
img_aug.add_random_flip_leftright()
img_aug.add_random_rotation(max_angle = 89.)
img_aug.add_random_blur(sigma_max=3.)
img_aug.add_random_flip_updown()
img_aug.add_random_90degrees_rotation(rotations = [0, 1, 2, 3])

# First input: image pixels, with augmentation applied
convnet = input_data(shape = [None, 96, 96, 1], data_augmentation = img_aug)
# Second input: the non-image features, here assumed to be 120 per sample
convfeat = input_data(shape = [None, 120])

convnet = conv_2d(convnet, 32, 2, activation = 'relu')
convnet = max_pool_2d(convnet, 2)                                   

convnet = conv_2d(convnet, 64, 2, activation = 'relu')
convnet = max_pool_2d(convnet, 2)                                   

# The merged tensors must have the same rank, so project the 4-D conv output to 2-D first
convnet = fully_connected(convnet, 120)
convnet = merge([convnet, convfeat], 'concat')

convnet = fully_connected(convnet, 1024, activation = 'relu')
convnet = dropout(convnet, 0.8)

convnet = fully_connected(convnet, 99, activation = 'softmax')
convnet = regression(convnet, optimizer = 'adam', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'labels')

model = tflearn.DNN(convnet)

# Give multiple inputs as a list
model.fit([np.array(px_train).reshape(-1, 96, 96, 1), np.array(data_train).reshape(-1, 120)], 
           labels_train, 
           n_epoch = 50, 
           validation_set = ([np.array(px_cv).reshape(-1, 96, 96, 1), np.array(data_cv).reshape(-1, 120)], labels_cv), 
           snapshot_step = 500, 
           show_metric = True, 
           run_id = 'Test')
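
Once the model is trained, prediction follows the same convention: pass both input arrays as a list, in the same order as the input_data layers were defined. A minimal sketch, assuming px_test and data_test are hypothetical test arrays shaped like the training data:

# Hypothetical test arrays; both inputs must be supplied, image batch first
preds = model.predict([np.array(px_test).reshape(-1, 96, 96, 1),
                       np.array(data_test).reshape(-1, 120)])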