TensorFlow Custom Allocator and Accessing Data from Tensors

Posted 2019-05-28 19:32

In TensorFlow, you can create custom allocators for various reasons (I am doing it for new hardware). Due to the structure of the device, I need to use a struct of a few elements as my data pointer which the allocator returns as a void*.
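The struct-as-`void*` arrangement can be sketched as follows. This is a self-contained mock, not TensorFlow code: the `DeviceHandle` fields and the `AllocateRaw`/`DeallocateRaw` signatures only mirror the shape of TensorFlow's `Allocator` interface, and everything here is an illustrative assumption.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical handle for device memory: the allocator hands this
// struct back as a void*, as described above. The field names are
// illustrative, not from TensorFlow.
struct DeviceHandle {
  int device_id;       // which accelerator owns the block
  std::size_t offset;  // offset into that device's memory pool
};

// Sketch of the allocator side; TensorFlow's Allocator::AllocateRaw
// has a similar (alignment, num_bytes) -> void* shape.
void* AllocateRaw(std::size_t /*alignment*/, std::size_t num_bytes) {
  (void)num_bytes;  // a real allocator would reserve num_bytes on the device
  auto* h = new DeviceHandle{/*device_id=*/0, /*offset=*/0};
  return h;  // the struct pointer travels through TensorFlow as a void*
}

void DeallocateRaw(void* ptr) {
  delete static_cast<DeviceHandle*>(ptr);
}
```

A kernel that later receives this `void*` must cast it back to `DeviceHandle*` before use, which is exactly the retrieval problem the question is about.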

In the kernels that I am writing, I am given access to Tensors, but I need to get back the pointer struct that I wrote. Examining the classes, it seemed that I could get this struct by doing tensor_t.buf_->data(), via:

Tensor::buf_

TensorBuffer::data()

The problem is that I can't find existing code that does this, and I am worried that it is unsafe (highly likely!) or that there is a more standard way to do it.

Can someone confirm if this is a good/bad idea? And provide an alternative if such exists?

2 Answers
乱世女痞
#2 · 2019-05-28 19:34

You may also be able to use Tensor::tensor_data().data() to get access to the raw pointer, without using the weird indirection through DMAHelper.
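A self-contained sketch of that pattern. Since TensorFlow itself isn't included here, `MockTensor` is an assumed stand-in for the relevant slice of `tensorflow::Tensor`: the real `tensor_data()` returns a StringPiece over the underlying buffer, mocked below with `std::string_view`.

```cpp
#include <cstddef>
#include <string_view>

// Hypothetical struct stored by the custom allocator (see question).
struct DeviceHandle {
  int device_id;
};

// Minimal stand-in for tensorflow::Tensor: tensor_data() returns a
// view over the raw buffer, like the real StringPiece-returning API.
class MockTensor {
 public:
  MockTensor(void* buf, std::size_t bytes) : buf_(buf), bytes_(bytes) {}
  std::string_view tensor_data() const {
    return {static_cast<const char*>(buf_), bytes_};
  }

 private:
  void* buf_;
  std::size_t bytes_;
};

// Recover the struct pointer from the view, as the answer suggests.
// tensor_data() exposes const data, so const_cast is needed to mutate.
DeviceHandle* GetHandle(const MockTensor& t) {
  return reinterpret_cast<DeviceHandle*>(
      const_cast<char*>(t.tensor_data().data()));
}
```

Note the const-strip: `tensor_data()` is a read-only view, so writing through the recovered pointer relies on the buffer actually being mutable.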

\"骚年 ilove
#3 · 2019-05-28 19:41

Four days later ...

void* GetBase(const Tensor* src) {
  return const_cast<void*>(DMAHelper::base(src));
}

from GPUUtils

DMAHelper::base() is a static method of a class that Tensor declares as a friend, which gives it the ability to call the private Tensor::base() and reach the data pointer.

The implementation shows that this is just a wrapper around what I wanted to do, behind yet another layer of abstraction. I am guessing it is the safer approach to getting the pointer and should be used instead.
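The friend-class mechanism behind this can be sketched in a self-contained way. The class and method names below mirror TensorFlow's, but the bodies are illustrative assumptions, not the real implementation.

```cpp
// Sketch of the friend-class arrangement the answer describes:
// Tensor keeps base() private, and only DMAHelper may call it.
class MockTensor {
 public:
  explicit MockTensor(void* data) : data_(data) {}

 private:
  void* base() const { return data_; }  // private: kernels can't call this
  void* data_;
  friend class DMAHelper;  // grants DMAHelper access to base()
};

class DMAHelper {
 public:
  static const void* base(const MockTensor* t) { return t->base(); }
};

// The GPUUtils-style wrapper that strips const off the returned pointer.
void* GetBase(const MockTensor* src) {
  return const_cast<void*>(DMAHelper::base(src));
}
```

The advantage of this indirection is that ordinary kernel code never touches `buf_` directly; only the one friend class, in one place, depends on Tensor's private layout.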
