Based on the documentation, the default GPU is the one with the lowest ID:
If you have more than one GPU in your system, the GPU with the lowest ID will be selected by default.
Is it possible to change this default from the command line or with one line of code?
Suever's answer correctly shows how to pin your operations to a particular GPU. However, if you are running multiple TensorFlow programs on the same machine, it is recommended that you set the `CUDA_VISIBLE_DEVICES` environment variable to expose different GPUs before starting the processes. Otherwise, TensorFlow will attempt to allocate almost the entire memory on all of the available GPUs, which prevents other processes from using those GPUs (even if the current process isn't using them).

Note that if you use `CUDA_VISIBLE_DEVICES`, the device names `"/gpu:0"`, `"/gpu:1"`, etc. refer to the 0th and 1st visible devices in the current process.

As is stated in the documentation, you can use `tf.device('/gpu:id')` to specify a device other than the default.

Just to be clear regarding the use of the environment variable
`CUDA_VISIBLE_DEVICES`: to run a script `my_script.py` on GPU 1 only, in the Linux terminal you can use the following command, along with more examples illustrating the syntax:
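A sketch of the invocation, using the standard `CUDA_VISIBLE_DEVICES` syntax (this assumes a Linux shell and that `my_script.py` is the script named above):

```shell
# Only GPU 1 is visible to TensorFlow inside this process:
CUDA_VISIBLE_DEVICES=1 python my_script.py

# More examples of the syntax:
CUDA_VISIBLE_DEVICES=0 python my_script.py     # only GPU 0 is visible
CUDA_VISIBLE_DEVICES=0,1 python my_script.py   # GPUs 0 and 1 are visible
CUDA_VISIBLE_DEVICES="" python my_script.py    # no GPUs visible; runs on the CPU
```

The variable is read by the CUDA driver, not by TensorFlow itself, so the same syntax works for any CUDA program.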
FYI:

If you want to run your code on the second GPU (this assumes your machine has two GPUs), you can use the following trick:

Open a terminal.
Open tmux by typing `tmux` (you can install it with `sudo apt-get install tmux`).

Note: by default, TensorFlow uses the first GPU, so with the above trick you can run your other code on the second GPU, separately.
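The GPU selection itself can also be done with one line of Python inside the tmux session, before TensorFlow is imported; a minimal sketch (assuming the second GPU has ID 1, as above):

```python
import os

# Expose only the second GPU (physical id 1) to this process. This must
# run before `import tensorflow`, because TensorFlow enumerates the
# visible devices when it initializes CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# After `import tensorflow as tf`, the single visible device is named
# "/gpu:0" inside this process, so `with tf.device('/gpu:0'):` pins
# operations to what is physically the second GPU.
```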
Hope this helps!