Since Nov. 2016 it has been possible to compile CUDA code which references Eigen 3.3 - see this answer.
That answer is not what I'm looking for, and it may now be "outdated" in the sense that there might be an easier way, since the following is written in the docs:
> Starting from Eigen 3.3, it is now possible to use Eigen's objects and algorithms within CUDA kernels. However, only a subset of features are supported to make sure that no dynamic allocation is triggered within a CUDA kernel.
See also here. Unfortunately, I was not able to find any example of what this might look like.
My Question
Is it now possible to write a kernel such as the following, which should simply calculate a bunch of dot products?
__global__ void cu_dot(Eigen::Vector3d *v1, Eigen::Vector3d *v2, double *out, size_t N)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if(idx < N)
    {
        out[idx] = v1[idx].dot(v2[idx]);
    }
}
I can compile this, but it does not seem to work: when I try to copy the data back to the host, I get an illegal memory access error. Note that I store the Vector3d's on the host in a `std::vector<Eigen::Vector3d>` and then use
cudaMalloc((void **)&p_device_v1, sizeof(Eigen::Vector3d)*n);
cudaMemcpy(p_device_v1, v1.data(), sizeof(Eigen::Vector3d)*n, cudaMemcpyHostToDevice);
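For context, a minimal sketch of the rest of the host-side flow under the same assumptions (the buffer names `p_device_v2` and `p_device_dot` and the launch configuration are illustrative, not from the original project):

```cuda
// Hedged sketch: in addition to p_device_v1 above, allocate a device buffer
// for v2 and one for the per-pair results, copy v2 over, and launch the
// kernel with one thread per dot product.
Eigen::Vector3d *p_device_v2;
double *p_device_dot;
cudaMalloc((void **)&p_device_v2, sizeof(Eigen::Vector3d) * n);
cudaMalloc((void **)&p_device_dot, sizeof(double) * n);
cudaMemcpy(p_device_v2, v2.data(), sizeof(Eigen::Vector3d) * n, cudaMemcpyHostToDevice);

int threads = 128;
int blocks  = (n + threads - 1) / threads;  // round up so all n pairs are covered
cu_dot<<<blocks, threads>>>(p_device_v1, p_device_v2, p_device_dot, n);
```

The key point is that the pointers passed to `cu_dot<<<...>>>` must be the ones returned by `cudaMalloc`, not host pointers.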
I have set up an MWE project using CMake at https://github.com/GPMueller/eigen-cuda
In the MWE project on GitHub, you wrote:
The `v1.data()` and `v2.data()` pointers are in CPU memory. You need to use the pointers in GPU memory instead, i.e. the ones returned by `cudaMalloc`. Also, the CPU vs GPU results are not identical, but that's an issue with the code itself: you didn't perform a reduction over the multiple dot products.
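A hedged sketch of the missing reduction step, assuming the illustrative names `p_device_dot` for the device result buffer and `n` for the number of vector pairs: the per-pair dot products are copied back and summed on the CPU so the result can be compared against a single CPU-side total.

```cuda
// Copy the n per-pair dot products back to the host...
std::vector<double> dot(n);
cudaMemcpy(dot.data(), p_device_dot, sizeof(double) * n, cudaMemcpyDeviceToHost);

// ...and reduce them to a single scalar on the CPU.
double sum = 0.0;
for(size_t i = 0; i < n; ++i)
    sum += dot[i];
```

For large `n` a device-side reduction (e.g. via Thrust) would avoid the extra transfer, but a host-side loop is the simplest way to make the CPU and GPU results comparable.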