Torch CUDA - generates processes on two GPUs

Posted 2019-07-02 16:16

When I run:

require 'cutorch'

in Lua, it automatically starts a process that allocates memory on two of my GPUs. For example, I get the following output in nvidia-smi:

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1      6091    C   /home/msmith/torch/install/bin/qlua             98MiB |
|    2      6091    C   /home/msmith/torch/install/bin/qlua             99MiB |
+-----------------------------------------------------------------------------+

I would like to be able to control which GPU the process goes on. I have tried:

cutorch.setDevice(<Device Number>)

but this just creates more processes on the GPU.
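For reference, this is roughly what I am running (a sketch; the tensor below is just a placeholder for my actual code):

-- select device 2 (cutorch device numbers are 1-based)
require 'cutorch'
cutorch.setDevice(2)
-- subsequent CUDA allocations should land on device 2
local x = torch.CudaTensor(1024):fill(0)
print('current device: ' .. cutorch.getDevice())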

Thanks.

Tags: luajit torch
2 Answers
萌系小妹纸
#2 · 2019-07-02 16:26

As the other answer says, you can choose the GPU by setting the CUDA_VISIBLE_DEVICES environment variable on the command line before launching Torch or your Lua script. This is a general CUDA mechanism and works with any application, not only Torch. Note that the numbering used here can clash with the number passed to cutorch.setDevice(): CUDA_VISIBLE_DEVICES uses 0-based physical device IDs, while cutorch is 1-based. You can select several specific GPUs with a comma-separated list, for example:

CUDA_VISIBLE_DEVICES=1,2

This will result in Torch seeing, and therefore running on, only GPUs 1 and 2. More information can be found here:

https://devblogs.nvidia.com/parallelforall/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/
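To confirm what Torch actually sees after setting the variable, a quick check like the following can be used (a sketch; check.lua is just a hypothetical file name):

-- run as:  CUDA_VISIBLE_DEVICES=1,2 luajit check.lua
require 'cutorch'
print('visible GPUs: ' .. cutorch.getDeviceCount())         -- expect 2 here
print('current device (1-based): ' .. cutorch.getDevice())
-- with the variable set as above, cutorch device 1 maps to physical GPU 1
-- and cutorch device 2 to physical GPU 2; nvidia-smi keeps the physical IDs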

不美不萌又怎样
#3 · 2019-07-02 16:28

You can control which GPU your process will run on before launching it using the CUDA_VISIBLE_DEVICES environment variable, e.g. to run only on GPU 0:

export CUDA_VISIBLE_DEVICES=0
luajit your-script.lua
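Inside the script itself, cutorch then only sees that one device, so cutorch.setDevice(1) refers to it (a minimal sketch; your-script.lua stands for whatever your actual script is):

-- your-script.lua, launched with CUDA_VISIBLE_DEVICES=0 as above
require 'cutorch'
cutorch.setDevice(1)                      -- 1-based: the only visible GPU
local t = torch.CudaTensor(4, 4):zero()   -- allocated on physical GPU 0
print(cutorch.getDevice())                -- prints 1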