Keras Model With CuDNNLSTM Layers Doesn't Work

Posted 2019-06-05 05:05

Question:

I have used an AWS p3 instance to train the following model using GPU acceleration:

from keras.layers import CuDNNLSTM, Dropout, Dense
from keras.models import Model

# 'inputs' is a Keras Input tensor defined earlier (shape omitted here)
x = CuDNNLSTM(128, return_sequences=True)(inputs)
x = Dropout(0.2)(x)
x = CuDNNLSTM(128, return_sequences=False)(x)
x = Dropout(0.2)(x)
predictions = Dense(1, activation='tanh')(x)
model = Model(inputs=inputs, outputs=predictions)

After training I saved the model with Keras' save_model function and moved it to a separate production server that doesn't have a GPU.

When I attempt to predict using the model on the production server it fails with the following error:

No OpKernel was registered to support Op 'CudnnRNN' with these attrs. Registered devices: [CPU], Registered kernels:

I'm guessing this is because the production server doesn't have GPU support, but I was hoping this wouldn't be a problem. Is there any way I can use this model on a production server without a GPU?

Answer 1:

No, you can't: CuDNN requires a CUDA-capable GPU, so the CudnnRNN op has no CPU kernel. You have to replace your CuDNNLSTM layers with standard LSTM ones.
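One possible approach, sketched below under the assumption of Keras 2.x: rebuild the same architecture on the CPU machine with plain LSTM layers configured to be CuDNN-compatible (recurrent_activation='sigmoid'), then load only the trained weights. The file name trained_model.h5 and the timesteps/features values are placeholders, not from the original question.

from keras.layers import Input, LSTM, Dropout, Dense
from keras.models import Model

# Placeholder input shape; use the same shape the model was trained with.
timesteps, features = 50, 10
inputs = Input(shape=(timesteps, features))

# recurrent_activation='sigmoid' mirrors the fixed CuDNN implementation,
# which keeps the LSTM weight layout compatible with the saved CuDNNLSTM weights.
x = LSTM(128, activation='tanh', recurrent_activation='sigmoid',
         return_sequences=True)(inputs)
x = Dropout(0.2)(x)
x = LSTM(128, activation='tanh', recurrent_activation='sigmoid',
         return_sequences=False)(x)
x = Dropout(0.2)(x)
predictions = Dense(1, activation='tanh')(x)
cpu_model = Model(inputs=inputs, outputs=predictions)

# Load only the weights; loading the full saved model would try to
# re-instantiate the CuDNNLSTM layers and fail on a CPU-only machine.
cpu_model.load_weights('trained_model.h5')  # hypothetical file name

If the weight shapes line up, cpu_model.predict() should then run on the CPU-only production server, albeit more slowly than the GPU-trained original.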



Answer 2:

Try:

pip install tensorflow-gpu