Following the upgrade to Keras 2.0.9, I have been using the multi_gpu_model
utility but I can't save my models or best weights using
model.save('path')
The error I get is
TypeError: can't pickle module objects
I suspect there is some problem gaining access to the model object. Is there a workaround for this issue?
It needs a little workaround: load the multi_gpu_model weights into the regular model's weights, e.g. as in the sketch below.
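A minimal sketch of that idea; the model architecture, file names, and dummy data below are purely illustrative:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model  # keras.utils.training_utils in some 2.0.x versions

# Illustrative single-GPU model; replace with your own architecture.
def build_model():
    m = Sequential()
    m.add(Dense(64, activation='relu', input_shape=(100,)))
    m.add(Dense(10, activation='softmax'))
    return m

model = build_model()                           # the regular (single-GPU) model
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(loss='categorical_crossentropy', optimizer='adam')

# Dummy data just so the sketch runs end to end.
x = np.random.random((1024, 100))
y = np.random.random((1024, 10))
parallel_model.fit(x, y, epochs=1, batch_size=256)

# Copy the trained weights back into the regular model and save that one;
# it is the multi-GPU wrapper, not the underlying model, that fails to save.
model.set_weights(parallel_model.get_weights())
model.save('model.h5')
```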
Reference: https://github.com/fchollet/keras/issues/8123
To be honest, the easiest approach is to examine the multi-GPU parallel model using its summary() (the parallel model is simply the model after applying the multi_gpu_model function). The summary clearly shows the actual model as one of its layers (I think the penultimate one; I am not at my computer right now), and you can use the name of that layer to save the model.
Often it's called sequential_1, but if you are using a published architecture, it may be 'googlenet' or 'alexnet'. You will see the name of the layer in the summary.
Then it's simple to just save that layer, for example:
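A short sketch, assuming parallel_model is the output of multi_gpu_model and the inner model shows up in the summary as 'sequential_1' (the layer name and file path are illustrative):

```python
parallel_model.summary()  # the wrapped single-GPU model appears as a single layer

# Pull that layer out by the name shown in the summary and save it directly.
inner_model = parallel_model.get_layer('sequential_1')
inner_model.save('model.h5')
```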
Maxim's approach works, but I think it's overkill.
Remember: you will need to compile both the model and the parallel model.
Workaround
Here's a patched version that doesn't fail while saving:
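The sketch below reconstructs such a patch from the stock Keras 2.0.9 implementation of multi_gpu_model, with the tensorflow import hoisted to module level so it is no longer captured by the get_slice closure; treat it as an approximation rather than the exact original code:

```python
from keras.layers import Lambda, concatenate
from keras.models import Model
import tensorflow as tf  # module-level import: `tf` is now a global, not part of a closure


def multi_gpu_model(model, gpus):
    if gpus <= 1:
        return model

    def get_slice(data, i, parts):
        # Slice the batch dimension so each replica gets a share of the batch.
        shape = tf.shape(data)
        batch_size = shape[:1]
        input_shape = shape[1:]
        step = batch_size // parts
        if i == gpus - 1:
            size = batch_size - step * i
        else:
            size = step
        size = tf.concat([size, input_shape], axis=0)
        stride = tf.concat([step, input_shape * 0], axis=0)
        start = stride * i
        return tf.slice(data, start, size)

    all_outputs = [[] for _ in model.outputs]

    # Place a copy of the model on each GPU, each working on a slice of the input.
    for i in range(gpus):
        with tf.device('/gpu:%d' % i):
            with tf.name_scope('replica_%d' % i):
                inputs = []
                for x in model.inputs:
                    input_shape = tuple(x.get_shape().as_list())[1:]
                    slice_i = Lambda(get_slice,
                                     output_shape=input_shape,
                                     arguments={'i': i, 'parts': gpus})(x)
                    inputs.append(slice_i)

                # Apply the model on the slice (creates a replica on this device).
                outputs = model(inputs)
                if not isinstance(outputs, list):
                    outputs = [outputs]

                for o, output in enumerate(outputs):
                    all_outputs[o].append(output)

    # Merge the per-replica outputs back together on the CPU.
    with tf.device('/cpu:0'):
        merged = [concatenate(outputs, axis=0) for outputs in all_outputs]
        return Model(model.inputs, merged)
```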
You can use this `multi_gpu_model` function until the bug is fixed in Keras. Also, when loading the model, it's important to provide the tensorflow module object:
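For example (the file name is just illustrative):

```python
import tensorflow as tf
from keras.models import load_model

# The saved get_slice lambdas reference `tf`, so pass the module in
# as a custom object when deserializing.
model = load_model('multi_gpu_model.h5', custom_objects={'tf': tf})
```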
How it works
The problem is the `import tensorflow` line in the middle of `multi_gpu_model`:
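Schematically, the stock utility looks roughly like this (heavily abridged):

```python
def multi_gpu_model(model, gpus):
    import tensorflow as tf           # <-- imported inside the function

    def get_slice(data, i, parts):
        # The body uses `tf`, so the closure captures the module object
        # itself, and model.save() later tries to serialize it.
        shape = tf.shape(data)
        ...                           # slicing logic omitted

    ...                               # replica construction omitted
```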
This creates a closure for the `get_slice` lambda function, which includes the number of GPUs (that's ok) and the tensorflow module (not ok). Saving the model tries to serialize all layers, including the ones that call `get_slice`, and it fails exactly because `tf` is in the closure.
The solution is to move the import out of `multi_gpu_model`, so that `tf` becomes a global object, though it is still needed for `get_slice` to work. This fixes the problem of saving, but when loading one has to provide `tf` explicitly.