I followed these examples to write a custom op in TensorFlow:
Adding a New Op
cuda_op_kernel
I changed the function to the operation I need to do.
But all the examples are tested from Python code.
I need to run my op from C++ code. How can I do this?
This simple example shows the construction and the execution of a graph using the C++ API:
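A minimal sketch along the lines of the example in the C++ API guide (the constants are only illustrative):

```cpp
#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  using namespace tensorflow;
  using namespace tensorflow::ops;

  Scope root = Scope::NewRootScope();
  // Matrix A = [3 2; -1 0]
  auto A = Const(root, { {3.f, 2.f}, {-1.f, 0.f} });
  // Vector b = [3 5]
  auto b = Const(root, { {3.f, 5.f} });
  // v = A * b^T
  auto v = MatMul(root.WithOpName("v"), A, b, MatMul::TransposeB(true));

  std::vector<Tensor> outputs;
  ClientSession session(root);
  // Run the graph and fetch the value of v.
  TF_CHECK_OK(session.Run({v}, &outputs));
  // Expect outputs[0] == [19; -3]
  return 0;
}
```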
As in the Python counterpart, you first need to build a computational graph in a scope. In this case the graph contains only a matrix multiplication, whose end point is the output `v`. Then you open a new session (`session`) for the scope and run it on your graph. All the code segments reported here come from the C++ API guide for TensorFlow. In this case there is no feed dictionary, but the guide also shows how to feed values:
If you want to call a custom op, you have to use almost the same code. I have a custom op in this repository that I will use as example code. The op has been registered:
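A minimal sketch of such a registration, assuming an op named `AddOne` with an int32 input and output as in the cuda_op_kernel example (the name and signature are illustrative, not necessarily those of the op in the repository):

```cpp
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"

// Register the op interface: one int32 input, one int32 output of the same shape.
REGISTER_OP("AddOne")
    .Input("input: int32")
    .Output("output: int32")
    .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
      c->set_output(0, c->input(0));
      return ::tensorflow::Status::OK();
    });
```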
and the op is defined as a CUDA kernel in the CUDA file. To launch the op I have to (again) create a new computational graph, register my op, open a session, and run it from my code:
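A sketch of that launch code, still assuming the op is registered as `AddOne`. Since a custom op has no generated C++ wrapper in `tensorflow::ops`, one way is to add the node to the scope's graph by hand with `NodeBuilder`; the op's kernel and registration must be compiled and linked into this binary (or loaded as a library) so the runtime can find it:

```cpp
#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/graph/node_builder.h"

int main() {
  using namespace tensorflow;

  Scope root = Scope::NewRootScope();

  // Input tensor for the custom op.
  auto input = ops::Const(root, {1, 2, 3, 4});

  // Add the custom op node manually, since there is no generated C++ wrapper.
  Node* add_one;
  TF_CHECK_OK(NodeBuilder("add_one", "AddOne")  // "AddOne" is the assumed registered name
                  .Input(input.node())
                  .Finalize(root.graph(), &add_one));

  ClientSession session(root);
  std::vector<Tensor> outputs;
  // Run the graph and fetch the custom op's output.
  TF_CHECK_OK(session.Run({Output(add_one)}, &outputs));
  return 0;
}
```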