I am having trouble finding the correct way of passing multiple inputs to a model. The model has 2 inputs:
- noise image of shape (256, 256, 3)
- input image of shape (256, 256, 3)
and 1 output:
- output image of shape (256, 256, 3)
I am producing the images via ImageDataGenerator:
x_data_gen = ImageDataGenerator(
    horizontal_flip=True,
    validation_split=0.2)
And I am producing the samples via a Python generator:
def image_sampler(datagen, batch_size, subset="training"):
    for imgs in datagen.flow_from_directory('data/r_cropped', batch_size=batch_size, class_mode=None, seed=1, subset=subset):
        g_y = []
        noises = []
        bw_images = []
        for i in imgs:
            # append the original image to the expected output
            g_y.append(i/255.0)
            noises.append(generate_noise(1, 256, 3)[0])
            bw_images.append(iu_rgb2gray(i))
        yield(np.array([noises, bw_images]), np.array(g_y))
When trying to train the model with:
generator.fit_generator(
    image_sampler(x_data_gen, 32),
    validation_data=image_sampler(x_data_gen, 32, "validation"),
    epochs=EPOCHS,
    steps_per_epoch=540,
    validation_steps=160)
I receive an error stating:
Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays
While the message is quite clear, I do not understand how to fix the generation process to solve it.
I tried:
yield([noises, bw_images], np.array(g_y))
but this didn't work as it would reach a different error:
AttributeError: 'list' object has no attribute 'shape'
What am I missing?
When you have multiple inputs/outputs, you should pass them as a list of NumPy arrays. So your second approach is correct, but you forgot to convert the lists to NumPy arrays before yielding them:
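# convert each input list to a NumPy array before yielding
yield([np.array(noises), np.array(bw_images)], np.array(g_y))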
A more verbose approach to make sure everything is correct is to choose names for the input and output layers. A minimal sketch of what that could look like with the functional API is shown below; the layer names, the Concatenate/Conv2D body, and the generator variable are only illustrative placeholders for your actual architecture:
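from keras.layers import Input, Concatenate, Conv2D
from keras.models import Model

# name the two inputs and the output explicitly
noise_in = Input(shape=(256, 256, 3), name='noise_input')
image_in = Input(shape=(256, 256, 3), name='image_input')

x = Concatenate()([noise_in, image_in])
# ... the rest of your architecture goes here ...
out = Conv2D(3, (1, 1), activation='sigmoid', name='output_image')(x)

generator = Model(inputs=[noise_in, image_in], outputs=out)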
Then, use those names in your generator function, yielding dicts whose keys match the name arguments of the corresponding layers (the names below follow the sketch above):
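yield({'noise_input': np.array(noises),
       'image_input': np.array(bw_images)},
      {'output_image': np.array(g_y)})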
By doing so, you are making sure that the mapping is done correctly.