In PyTorch, if I don't specify anything about using the CPU or GPU, and my machine supports CUDA (torch.cuda.is_available() == True):
- What is my script using, CPU or GPU?
- If CPU, what should I do to make it run on GPU? Do I need to rewrite everything?
- If GPU, will this script crash if torch.cuda.is_available() == False?
- Does this do anything about making the training faster?
- I'm aware of Porting PyTorch code from CPU to GPU, but that is old. Has this situation changed in v0.4 or the upcoming v1.0?
You should write your code so that it will use GPU processing if torch.cuda.is_available() == True.
So:
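For example, a minimal sketch of that check (the use_cuda flag name is just illustrative):

```python
import torch

# True only when a CUDA-capable GPU and a CUDA build of PyTorch are both present
use_cuda = torch.cuda.is_available()
print("Running on GPU" if use_cuda else "Running on CPU")
```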
My way is like this (for PyTorch versions below 0.4):
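Roughly like the sketch below; the model and data here are placeholders, and Variable is the wrapper the pre-0.4 API required:

```python
import torch
import torch.nn as nn
from torch.autograd import Variable  # required wrapper before 0.4

use_cuda = torch.cuda.is_available()

model = nn.Linear(10, 2)               # placeholder model
inputs = Variable(torch.randn(8, 10))  # placeholder batch

# Before 0.4, models and tensors are moved to the GPU explicitly with .cuda()
if use_cuda:
    model = model.cuda()
    inputs = inputs.cuda()

outputs = model(inputs)
```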
UPDATE for PyTorch 0.4:
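A sketch of the device-agnostic pattern introduced in 0.4 (the model and data names are placeholders):

```python
import torch
import torch.nn as nn

# At the beginning of the script: pick the device once
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Then move every new tensor or module to that device with .to()
# (.to() does not copy if the object is already on the desired device)
data = torch.randn(8, 10)            # placeholder batch
inputs = data.to(device)
model = nn.Linear(10, 2).to(device)  # placeholder model

outputs = model(inputs)
```

With this pattern the same script runs unchanged on a CPU-only machine, because the device falls back to "cpu" when CUDA is unavailable.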
This pattern is from the PyTorch 0.4.0 Migration Guide.
PyTorch defaults to the CPU, unless you use the .cuda() methods on your models and the torch.cuda.XTensor variants of PyTorch's tensors.
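Concretely, those variants look like this in pre-0.4 style code (a small sketch, with an illustrative dtype variable):

```python
import torch

# Pre-0.4 idiom: pick the tensor type up front
dtype = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor

x = torch.randn(5, 3).type(dtype)  # on the GPU only if the cuda variant was chosen
print(x.type())                    # 'torch.cuda.FloatTensor' or 'torch.FloatTensor'
```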