Say I use OpenCL to manage memory (so that memory management between GPU/CPU uses the same code), but my calculation uses optimized CUDA and CPU code (not OpenCL). Can I still use the OpenCL device memory pointers and pass them to CUDA functions/kernels?
Since both CUDA and OpenCL can interoperate with OpenGL, you might explore that as a middle ground. I was able to access the same OpenGL texture from both OpenCL (as an image) and CUDA, so you may be able to do the same thing with a buffer of data (I'm not positive what the OpenGL representation would be, though). A rough sketch of the buffer-sharing flow is below.
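The following is only a sketch of that idea, not a tested implementation: it assumes an OpenGL context is already current on the thread, that the OpenCL context was created with GL sharing enabled (via `CL_GL_CONTEXT_KHR` and the platform-specific properties), and that error checking is omitted for brevity. The function name `share_buffer_example` and its parameters are placeholders.

```cpp
// Sketch: share one OpenGL buffer between OpenCL and CUDA.
// Assumes a current GL context and a GL-sharing OpenCL context/queue.
#include <GL/gl.h>
#include <CL/cl.h>
#include <CL/cl_gl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

void share_buffer_example(cl_context clCtx, cl_command_queue clQueue, size_t nbytes)
{
    // 1) Create a plain OpenGL buffer object; this is the shared allocation.
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, nbytes, nullptr, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // 2) Wrap the GL buffer as an OpenCL buffer and use it from OpenCL.
    cl_int err = CL_SUCCESS;
    cl_mem clBuf = clCreateFromGLBuffer(clCtx, CL_MEM_READ_WRITE, vbo, &err);
    clEnqueueAcquireGLObjects(clQueue, 1, &clBuf, 0, nullptr, nullptr);
    //   ... enqueue OpenCL kernels/copies that read or write clBuf ...
    clEnqueueReleaseGLObjects(clQueue, 1, &clBuf, 0, nullptr, nullptr);
    clFinish(clQueue);  // ensure OpenCL work is done before CUDA touches the buffer

    // 3) Register the same GL buffer with CUDA and obtain a raw device pointer.
    cudaGraphicsResource* cuRes = nullptr;
    cudaGraphicsGLRegisterBuffer(&cuRes, vbo, cudaGraphicsRegisterFlagsNone);
    cudaGraphicsMapResources(1, &cuRes, 0);

    void*  devPtr = nullptr;
    size_t mappedSize = 0;
    cudaGraphicsResourceGetMappedPointer(&devPtr, &mappedSize, cuRes);
    //   ... launch CUDA kernels on devPtr ...

    cudaGraphicsUnmapResources(1, &cuRes, 0);

    // Cleanup.
    cudaGraphicsUnregisterResource(cuRes);
    clReleaseMemObject(clBuf);
    glDeleteBuffers(1, &vbo);
}
```

Note the acquire/release (OpenCL) and map/unmap (CUDA) calls: each API only has valid access to the buffer between its own pair, so you would need to synchronize around those boundaries when handing the data back and forth.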
AFAIK this is not possible directly, but there is no fundamental technical reason why it couldn't be.
NVIDIA could build an extension to the OpenCL API to interoperate with CUDA, much like the interoperability provisions for Direct3D and OpenGL.