I'm looking at porting from a different production machine learning framework to TensorFlow. In our current system, for both training and inference, we load a copy of our model onto every GPU on the machine.
I would like to keep this way of load-balancing for now. Where can I find a simple example of loading one copy of a TF model onto each GPU that's available on a machine?
Here's an example from https://github.com/rafaljozefowicz/lm/blob/master/language_model.py#L21
You wrap your model creation code in a `_forward` function, and then call it once for each GPU, placing each call on a different device.
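
Here's a minimal sketch of that pattern in TF1-style graph mode, as in the linked file. The `available_gpus` helper and the body of `_forward` are hypothetical stand-ins; the key parts are the `tf.device` placement and the shared variable scope:

```python
import tensorflow as tf
from tensorflow.python.client import device_lib


def available_gpus():
    # List the GPU devices TensorFlow can see, e.g. ['/device:GPU:0', ...].
    return [d.name for d in device_lib.list_local_devices()
            if d.device_type == 'GPU']


def _forward(inputs):
    # Hypothetical stand-in for your own model-construction code.
    hidden = tf.layers.dense(inputs, 128, activation=tf.nn.relu)
    return tf.layers.dense(hidden, 10)


inputs = tf.placeholder(tf.float32, [None, 784])
tower_outputs = []
for i, gpu in enumerate(available_gpus()):
    # reuse=i > 0 makes every tower after the first share the first
    # tower's variables, so each GPU holds a copy of one model rather
    # than an independent model.
    with tf.device(gpu), tf.variable_scope('model', reuse=i > 0):
        tower_outputs.append(_forward(inputs))
```

For training you'd typically also split each input batch across the towers (e.g. with `tf.split`) and average the per-tower gradients before applying them; for inference you can simply route requests to the different towers.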