Using CUDA with PyTorch?

Posted 2020-05-21 05:52

I have searched here but only found outdated posts.

I want to run the training on my GPU. I found on some forums that I need to apply .cuda() to anything I want to use CUDA with (I've applied it to everything I could without making the program crash). Surprisingly, this makes the training even slower.

Then I found that you could use torch.set_default_tensor_type('torch.cuda.FloatTensor') to use CUDA. With both enabled, nothing changes. What is happening?

Is there a way to reliably enable CUDA on the whole model?

EDIT: This was flagged as a duplicate. It isn't. The post I was linked to didn't answer all of my questions.

Also, what does MyModel() mean? I need more tangible examples, like code examples. (This is the post I am referring to)

Tags: pytorch, torch
1 Answer
别忘想泡老子
#2 · 2020-05-21 06:01

You can use the tensor.to(device) method to move a tensor to a device.
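
For example (a minimal sketch, assuming a CUDA-capable GPU is available; the tensor here is just a placeholder):

import torch

x = torch.randn(3, 4)   # created on the CPU by default
x = x.to("cuda")        # .to() returns a copy on the GPU; it does not move the tensor in place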

The .to() method is also used to move a whole model to a device, like in the post you linked to.
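
For instance (again just a sketch; MyModel here is a placeholder, which is presumably what MyModel() stands for in the post you linked to: any model you define by subclassing nn.Module):

import torch.nn as nn

class MyModel(nn.Module):          # placeholder model, replace with your own
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

model = MyModel()
model.to("cuda")                   # moves all parameters and buffers to the GPU, in place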

Another possibility is to set the device of a tensor during creation using the device= keyword argument, like in t = torch.tensor(some_list, device=device)

To set the device dynamically in your code, you can use

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

to set cuda as your device if possible.
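
Putting it together, a typical training setup looks roughly like this (a sketch with made-up model, data and hyperparameters; the point is simply that the model and every batch end up on the same device):

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)                     # move the model's parameters once
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

inputs = torch.randn(64, 10)                            # stand-in data, created on the CPU
targets = torch.randint(0, 2, (64,))

for epoch in range(5):
    x = inputs.to(device)                               # move each batch to the same device as the model
    y = targets.to(device)
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()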

There are various code examples in the PyTorch Tutorials and in the documentation that could help you.
