I have four NVIDIA GTX 1080 graphics cards, and when I initialize a session I see the following console output:
Adding visible gpu devices: 0, 1, 2, 3
Device interconnect StreamExecutor with strength 1 edge matrix:
0 1 2 3
0: N Y N N
1: Y N N N
2: N N N Y
3: N N Y N
I also have two NVIDIA Tesla M60 graphics cards, and their initialization looks like this:
Adding visible gpu devices: 0, 1, 2, 3
Device interconnect StreamExecutor with strength 1 edge matrix:
0 1 2 3
0: N N N N
1: N N N N
2: N N N N
3: N N N N
I also noticed that this output changed for the 1080 GPUs after the last update, from 1.6 to 1.8. It looked something like this (I cannot remember precisely, just from memory):
Adding visible gpu devices: 0, 1, 2, 3
Device interconnect StreamExecutor with strength 1 edge matrix:
0 1 2 3 0 1 2 3
0: Y N N N 0: N N Y N
1: N Y N N or 1: N N N Y
2: N N Y N 2: Y N N N
3: N N N Y 3: N Y N N
My questions are:
- what is this Device interconnect?
- what influence does it have on computation power?
- why does it differ for different GPUs?
- can it change over time due to hardware reasons (failures, driver inconsistencies, ...)?
TL;DR
what is this Device interconnect?
As stated by Almog David in the comments, this tells you if one GPU has direct memory access to the other.
what influence does it have on computation power?
The only effect this has is on multi-GPU training: data transfer is faster if the two GPUs have a device interconnect.
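For example, a cross-device copy like the one below is exactly the kind of transfer that benefits from peer access. This is a minimal TF 1.x sketch, assuming at least two visible GPUs; the shapes and ops are arbitrary:
import tensorflow as tf

# Produce a tensor on GPU 0 and consume it on GPU 1.
# If the matrix shows Y for this pair, the copy can go directly
# between the cards via DMA; otherwise it is typically staged
# through host memory.
with tf.device('/gpu:0'):
    a = tf.random_normal([1000, 1000])
with tf.device('/gpu:1'):
    b = tf.matmul(a, a)

with tf.Session() as sess:
    sess.run(b)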
why does it differ for different GPUs?
This depends on the topology of the hardware setup. A motherboard only has so many PCIe slots that are connected by the same bus (check the topology with nvidia-smi topo -m).
can it change over time due to hardware reasons (failures, driver inconsistencies, ...)?
I don't think the order can change over time, unless NVIDIA changes the default enumeration scheme. There is a little more detail here.
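On a related note, as far as I know you can pin the enumeration order to the physical PCIe layout with the CUDA_DEVICE_ORDER environment variable (a CUDA setting, not a TensorFlow one). A minimal sketch:
import os

# Must be set before CUDA is initialized, so set it before importing tensorflow.
# The default is FASTEST_FIRST; PCI_BUS_ID follows the bus layout instead.
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'

import tensorflow as tf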
Explanation
This message is generated in the BaseGPUDeviceFactory::CreateDevices function. It iterates through each pair of devices in the given order and calls cuDeviceCanAccessPeer. As Almog David mentions in the comments, this just indicates whether you can perform DMA between devices.
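You can reproduce roughly the same matrix outside of TensorFlow. Below is a minimal sketch assuming PyCUDA is installed (its Device.can_access_peer wraps the same driver call); it is only an illustration, not how TensorFlow itself builds the log:
import pycuda.driver as drv

drv.init()
n = drv.Device.count()
devices = [drv.Device(i) for i in range(n)]

# Print a Y/N peer-access matrix similar to TensorFlow's log output.
print('   ' + ' '.join(str(j) for j in range(n)))
for i, dev in enumerate(devices):
    row = ['Y' if i != j and dev.can_access_peer(peer) else 'N'
           for j, peer in enumerate(devices)]
    print('%d: %s' % (i, ' '.join(row)))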
You can perform a little test to check that the order matters. Consider the following snippet:
# test.py
import tensorflow as tf

# allow growth so the session only takes the GPU resources it needs
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
Now let's check the output with a different device order in CUDA_VISIBLE_DEVICES:
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python3 test.py
...
2019-03-26 15:26:16.111423: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1, 2, 3
2019-03-26 15:26:18.635894: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-26 15:26:18.635965: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1 2 3
2019-03-26 15:26:18.635974: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N Y N N
2019-03-26 15:26:18.635982: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: Y N N N
2019-03-26 15:26:18.635987: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 2: N N N Y
2019-03-26 15:26:18.636010: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 3: N N Y N
...
$ CUDA_VISIBLE_DEVICES=2,0,1,3 python3 test.py
...
2019-03-26 15:26:30.090493: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1, 2, 3
2019-03-26 15:26:32.758272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-26 15:26:32.758349: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1 2 3
2019-03-26 15:26:32.758358: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N N N Y
2019-03-26 15:26:32.758364: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: N N Y N
2019-03-26 15:26:32.758389: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 2: N Y N N
2019-03-26 15:26:32.758412: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 3: Y N N N
...
You can get a more detailed explanation of the connections by running nvidia-smi topo -m. For example:
GPU0 GPU1 GPU2 GPU3 CPU Affinity
GPU0 X PHB SYS SYS 0-7,16-23
GPU1 PHB X SYS SYS 0-7,16-23
GPU2 SYS SYS X PHB 8-15,24-31
GPU3 SYS SYS PHB X 8-15,24-31
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe switches (without traversing the PCIe Host Bridge)
PIX = Connection traversing a single PCIe switch
NV# = Connection traversing a bonded set of # NVLinks
I believe the lower you go on the list, the faster the transfer.