To allocate space for a variable in GPU memory, there must be enough space in a contiguous memory region; unlike RAM, you cannot have fragmented memory regions backing a single variable on the GPU. Keeping several shared variables in GPU memory and continuously updating them can therefore cause memory fragmentation: even if there is enough free memory (in bytes) on the GPU, you may not be able to use it, because it is not in one contiguous block.
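To make the contiguity point concrete, here is a toy first-fit-style sketch in plain Python (all names are illustrative, not part of any Theano or CUDA API): the total free bytes can exceed a request while no single gap is large enough.

```python
# Toy model of a fragmented device memory: `free_list` holds the
# free gaps as (offset, size) pairs. A single allocation needs one
# contiguous gap, so total free bytes alone are not sufficient.

def total_free(free_list):
    """Sum of all free bytes, regardless of fragmentation."""
    return sum(size for _, size in free_list)

def largest_free_block(free_list):
    """Size of the biggest single contiguous gap."""
    return max((size for _, size in free_list), default=0)

# Two separate 10-byte gaps: 20 bytes free in total.
free_list = [(40, 10), (80, 10)]

print(total_free(free_list))          # 20
print(largest_free_block(free_list))  # 10

# A 15-byte allocation fails despite 20 free bytes overall:
request = 15
print(largest_free_block(free_list) >= request)  # False
```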
My question is: how does Theano deal with this problem? Does shared_var.set_value([]) release all the memory assigned to that shared variable, so that the next update (shared_var.set_value(newDataPoints)) allocates only the required amount of memory and thereby avoids memory fragmentation?
Here it is explained that updating a shared variable may still cause memory fragmentation. So I wonder whether the problem persists if the parameter borrow, or allow_gc (in theanorc), is set to True.
Also, how can one keep track of the amount of free memory in a contiguous block on the GPU?
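For context on what I have tried: as far as I know, CUDA's cudaMemGetInfo (exposed in Python, e.g., via pycuda.driver.mem_get_info()) reports only the *total* free bytes, not the largest contiguous block. One workaround I can think of is to probe with trial allocations using a binary search. Below is a sketch of that idea; try_alloc is a hypothetical callable (on a real GPU it would attempt an allocation of the given size and free it immediately), demoed here with a fake device.

```python
# Estimate the largest contiguous free block by binary-searching the
# largest size for which a trial allocation succeeds. `try_alloc(size)`
# is a hypothetical hook returning True iff `size` bytes fit in one
# contiguous block (a real version would attempt and free a device
# allocation, which has its own cost and side effects).

def largest_contiguous(try_alloc, upper_bound):
    """Return the largest size <= upper_bound for which try_alloc succeeds."""
    lo, hi = 0, upper_bound
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if try_alloc(mid):
            lo = mid          # mid bytes fit; try bigger
        else:
            hi = mid - 1      # mid bytes do not fit; try smaller
    return lo

# Demo: a fake device whose largest free block is 10 MB,
# even though more memory is free in total.
LARGEST_BLOCK = 10 * 1024 * 1024
fake_try_alloc = lambda size: size <= LARGEST_BLOCK

print(largest_contiguous(fake_try_alloc, 20 * 1024 * 1024))  # 10485760
```

This only gives an estimate at one point in time, and the trial allocations themselves may perturb the allocator, so I am not sure it is a practical way to monitor fragmentation continuously.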