In Keras, the high-level deep learning library, there are multiple types of recurrent layers; these include LSTM (Long Short-Term Memory) and CuDNNLSTM. According to the Keras documentation, a CuDNNLSTM is a:
Fast LSTM implementation backed by CuDNN.
Can only be run on GPU, with the TensorFlow backend.
It is my belief that Keras automatically uses the GPU wherever possible. According to the TensorFlow build instructions, to have a working TensorFlow GPU backend, you will need CuDNN:
The following NVIDIA software must be installed on your system:
- NVIDIA's Cuda Toolkit (>= 7.0). We recommend version 9.0. For details, see NVIDIA's documentation. Ensure that you append the relevant Cuda pathnames to the LD_LIBRARY_PATH environment variable as described in the NVIDIA documentation.
- The NVIDIA drivers associated with NVIDIA's Cuda Toolkit.
- cuDNN (>= v3). We recommend version 6.0. For details, see NVIDIA's documentation, particularly the description of appending the appropriate pathname to your LD_LIBRARY_PATH environment variable.
Therefore, how would a CuDNNLSTM differ in any way from a normal LSTM using a TensorFlow GPU backend? Will CuDNNLSTM be automatically selected and replace the normal LSTM when an available TensorFlow GPU backend is found?
Why don't you try it out for yourself and see?
In my case, training a model with LSTM took 10 minutes 30 seconds; after simply switching the call from LSTM() to CuDNNLSTM(), training took less than a minute. I also noticed that switching to CuDNNLSTM() substantially speeds up model.evaluate() and model.predict() as well.
TL;DR: the difference is a 15x speed-up in model training time!
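A minimal sketch of the one-line swap described above (layer sizes and input shape are hypothetical, chosen only for illustration):

```python
# Minimal sketch of the LSTM -> CuDNNLSTM swap; the 128-unit size and the
# (100 timesteps, 32 features) input shape are hypothetical examples.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM

def build_model(recurrent_layer=LSTM):
    model = Sequential([
        recurrent_layer(128, input_shape=(100, 32)),  # 100 timesteps, 32 features
        Dense(10, activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam",
                  metrics=["acc"])
    return model

model = build_model()  # portable LSTM: runs on CPU or GPU
# With standalone Keras on a TF 1.x GPU backend, the fast path is just:
#   from keras.layers import CuDNNLSTM
#   model = build_model(CuDNNLSTM)  # GPU-only, cuDNN-backed
```

Note that in TF 2.x the separate CuDNNLSTM class was removed: keras.layers.LSTM dispatches to the cuDNN kernel automatically when running on a GPU with the default layer arguments.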
Performance benchmark: comparison across several test machines, one iteration of training on 612235 samples.
keras.layers.LSTM
Intel i5-4690 CPU only:
612235/612235 [==============================] - 3755s 6ms/step - loss: 2.7339 - acc: 0.5067 - val_loss: 2.1149 - val_acc: 0.6175
GTX 950 & Intel i5-4690:
612235/612235 [==============================] - 1417s 2ms/step - loss: 2.7007 - acc: 0.5137 - val_loss: 2.0983 - val_acc: 0.6199
2.5x gain with GPU.
GTX 970 & Intel i5-4690:
612235/612235 [==============================] - 1322s 2ms/step - loss: 1.9214 - acc: 0.6442 - val_loss: 1.8808 - val_acc: 0.6461
Negligible further gain with a more powerful GPU.
RTX 2070 & Intel i7-9700K:
612235/612235 [==============================] - 1012s 2ms/step - loss: 2.7268 - acc: 0.5111 - val_loss: 2.1162 - val_acc: 0.6234
Very minimal gain even with major hardware upgrades!
keras.layers.CuDNNLSTM
RTX 2070 & Intel i7-9700K:
612235/612235 [==============================] - 69s 112us/step - loss: 1.9139 - acc: 0.6437 - val_loss: 1.8668 - val_acc: 0.6469
54x gain over CPU!
15x gain over the traditional (non-cuDNN) LSTM implementation!
GPUs are good for massively parallel computation. Most linear algebra operations can be parallelized to improve performance: vector and matrix operations such as matrix multiplication, which dominate training steps like gradient descent, can be executed in parallel on large matrices with GPU support. CUDA (Compute Unified Device Architecture) provides an interface that allows these operations to take advantage of GPU parallelism, and cuDNN implements optimized kernels for large matrix operations on the GPU using CUDA.
Here, CuDNNLSTM is designed for CUDA parallel processing and cannot run without a GPU, whereas LSTM is designed for ordinary CPUs. The faster execution time comes from this parallelism.