I'm trying to follow the Deep Autoencoder example in Keras. I'm getting a dimension mismatch exception, but for the life of me I can't figure out why. It works when I use only a single encoding layer, but not when I stack several.
Exception: Input 0 is incompatible with layer dense_18: expected shape=(None, 128), found shape=(None, 32)
The error is on the line decoder = Model(input=encoded_input, output=decoder_layer(encoded_input))
from keras.layers import Dense, Input
from keras.models import Model
import numpy as np
# this is the size of the encoded representations
encoding_dim = 32
# INPUT LAYER
input_img = Input(shape=(784,))
# ENCODER LAYERS
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim*4, activation='relu')(input_img)  # 784 -> 128
encoded = Dense(encoding_dim*2, activation='relu')(encoded)    # 128 -> 64
encoded = Dense(encoding_dim, activation='relu')(encoded)      # 64 -> 32
# DECODER LAYERS
# "decoded" is the lossy reconstruction of the input
decoded = Dense(encoding_dim*2, activation='relu')(encoded)  # 32 -> 64
decoded = Dense(encoding_dim*4, activation='relu')(decoded)  # 64 -> 128
decoded = Dense(784, activation='sigmoid')(decoded)          # 128 -> 784
# FULL AUTOENCODER MODEL
autoencoder = Model(input=input_img, output=decoded)
# SEPARATE ENCODER MODEL
encoder = Model(input=input_img, output=encoded)
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(input=encoded_input, output=decoder_layer(encoded_input))
# COMPILE
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
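For reference, my current suspicion (untested, so take it as a guess): autoencoder.layers[-1] is only the final Dense(784) layer, whose input is 128-dimensional (the output of the Dense(encoding_dim*4) layer before it), so calling it directly on the 32-dimensional encoded_input would explain the mismatch. Here is a sketch of what I think the decoder model would need to look like, chaining the last three layers instead of only the last one:

# Possible fix (my guess, untested): push the 32-dim encoded input through
# all three decoder layers (32 -> 64 -> 128 -> 784), not just the last one
deco = autoencoder.layers[-3](encoded_input)  # Dense(encoding_dim*2): 32 -> 64
deco = autoencoder.layers[-2](deco)           # Dense(encoding_dim*4): 64 -> 128
deco = autoencoder.layers[-1](deco)           # Dense(784): 128 -> 784
decoder = Model(input=encoded_input, output=deco)

Is that the right way to rebuild the standalone decoder, or is there a cleaner approach?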