In TensorFlow, you can create custom allocators for various reasons (I am doing it for new hardware). Due to the structure of the device, I need to use a struct of a few elements as my data pointer, which the allocator returns as a void*.
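To make this concrete, here is a rough sketch of the allocator side; DeviceHandle and MyDeviceAllocator are made-up names for illustration, and the Allocator interface is the one from tensorflow/core/framework/allocator.h (exact signatures can vary a little between TensorFlow versions):

    #include <cstdint>
    #include <string>

    #include "tensorflow/core/framework/allocator.h"

    // Made-up handle struct: what the "data pointer" actually points to.
    struct DeviceHandle {
      uint64_t device_offset;  // location of the buffer on the device
      uint32_t bank;           // which memory bank it lives in
      uint32_t flags;
    };

    class MyDeviceAllocator : public tensorflow::Allocator {
     public:
      std::string Name() override { return "my_device_allocator"; }

      void* AllocateRaw(size_t alignment, size_t num_bytes) override {
        // Allocate num_bytes on the device (details omitted) and hand back
        // a pointer to a small host-side handle describing that allocation.
        DeviceHandle* h = new DeviceHandle;
        h->device_offset = 0;  // would come from the device-side allocator
        h->bank = 0;
        h->flags = 0;
        return h;
      }

      void DeallocateRaw(void* ptr) override {
        DeviceHandle* h = static_cast<DeviceHandle*>(ptr);
        // Free the device-side allocation described by *h here.
        delete h;
      }
    };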
In the kernels that I am writing, I am given access to Tensors, but I need to get at the pointer struct that I wrote. Examining the classes, it seemed that I could get this struct by doing tensor_t.buf_->data()
The problem is that I can't find any existing code that does this, and I am worried that it is unsafe (highly likely!) or that there is a more standard way to do it.
Can someone confirm whether this is a good or bad idea, and provide an alternative if one exists?
You may also be able to use Tensor::tensor_data().data() to get access to the raw pointer, without using the weird indirection through DMAHelper.
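Roughly like this inside a kernel (just a sketch; DeviceHandle stands in for whatever handle struct your allocator hands back, and the cast is your responsibility):

    #include "tensorflow/core/framework/op_kernel.h"
    #include "tensorflow/core/framework/tensor.h"

    class MyDeviceOp : public tensorflow::OpKernel {
     public:
      explicit MyDeviceOp(tensorflow::OpKernelConstruction* ctx)
          : tensorflow::OpKernel(ctx) {}

      void Compute(tensorflow::OpKernelContext* ctx) override {
        const tensorflow::Tensor& input = ctx->input(0);

        // tensor_data() returns a StringPiece viewing the tensor's buffer,
        // so data() is the same address the custom allocator returned.
        const void* raw = input.tensor_data().data();

        // Reinterpret it as the handle struct the allocator produced
        // (DeviceHandle is the hypothetical struct from the question).
        const DeviceHandle* handle = reinterpret_cast<const DeviceHandle*>(raw);
        // ... use handle->device_offset etc. to drive the device ...
      }
    };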
Four days later ...
From GPUUtils, I found that DMAHelper::base() is a method of a friend class that is given the ability to use the private Tensor::base() to get at the data pointer. The implementation shows that this is all just a wrapper around what I wanted to do, behind yet another layer of abstraction. I am guessing it is the safer approach to getting the pointer and should be used instead.
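For reference, the usage ends up looking roughly like this (a sketch; DMAHelper comes from tensorflow/core/common_runtime/dma_helper.h, and DeviceHandle is again my hypothetical handle struct):

    #include "tensorflow/core/common_runtime/dma_helper.h"
    #include "tensorflow/core/framework/tensor.h"

    // DMAHelper is declared a friend of Tensor, so base() is allowed to call
    // the private Tensor::base<T>() and return the raw buffer pointer.
    const DeviceHandle* GetHandle(const tensorflow::Tensor& t) {
      const void* raw = tensorflow::DMAHelper::base(&t);
      // For tensors backed by the custom allocator, this is the handle
      // struct returned from AllocateRaw.
      return static_cast<const DeviceHandle*>(raw);
    }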