Configuration/Flags for TF-Slim across multiple GPUs/machines

Posted 2019-07-07 06:01

Question:

I am curious whether there are examples of running TF-Slim models (models/slim) with deployment/model_deploy.py across multiple GPUs on multiple machines. The documentation is pretty good, but I am missing a couple of pieces: specifically, what needs to be passed for worker_device and ps_device, and what additionally needs to be run on each machine?
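For context, here is a minimal sketch of how I understand the pieces fit together, assuming a placeholder cluster of one parameter server and two worker machines (the hostnames, ports, and flag values below are made up, not from any official example). As far as I can tell from the model_deploy.py source, DeploymentConfig derives the worker_device and ps_device strings itself from replica_id, num_replicas, and num_ps_tasks:

```python
import tensorflow as tf  # TF 1.x
from deployment import model_deploy  # from tensorflow/models/research/slim

# Hypothetical cluster: one parameter server, two worker machines.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Every machine runs one server; job_name/task_index differ per machine.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# DeploymentConfig builds the worker_device/ps_device strings internally
# (e.g. "/job:worker/replica:0/task:0" and "/job:ps" devices) from these
# arguments, so they are normally not written by hand.
config = model_deploy.DeploymentConfig(
    num_clones=2,      # GPUs used on this machine
    clone_on_cpu=False,
    replica_id=0,      # this worker's task index
    num_replicas=2,    # total number of worker machines
    num_ps_tasks=1)    # total number of parameter servers

# Variables go on the ps device; model clones go on this worker's GPUs.
with tf.device(config.variables_device()):
    global_step = tf.train.get_or_create_global_step()
```

Is this roughly the intended usage, and what else has to run per machine?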

An example like the one at the bottom of the distributed page would be awesome. https://www.tensorflow.org/how_tos/distributed/
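For what it's worth, here is a rough sketch in the spirit of the example on that page, using the same placeholder cluster as above: the ps processes just serve variables, while each worker would build the slim graph and train against its own server. The flag names here are my own, not from any slim script:

```python
import argparse
import tensorflow as tf  # TF 1.x

parser = argparse.ArgumentParser()
parser.add_argument("--job_name", choices=["ps", "worker"], required=True)
parser.add_argument("--task_index", type=int, default=0)
args = parser.parse_args()

# The same ClusterSpec is given to every process in the cluster.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})
server = tf.train.Server(cluster,
                         job_name=args.job_name,
                         task_index=args.task_index)

if args.job_name == "ps":
    # Parameter servers only host variables; they block here forever.
    server.join()
else:
    # Workers would build the model with model_deploy (as sketched above)
    # and train with server.target as the session master, e.g.
    # slim.learning.train(train_op, logdir, master=server.target,
    #                     is_chief=(args.task_index == 0))
    pass
```

Presumably each machine then launches this once per task, e.g. --job_name=ps --task_index=0 on the ps host and --job_name=worker --task_index=0 or 1 on the workers; confirmation either way would be appreciated.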