Using Eigen 3.3 in a CUDA kernel

Published 2019-04-27 12:31

Since Nov. 2016 it has been possible to compile CUDA code that references Eigen 3.3 - see this answer

That answer is not what I'm looking for, and it may now be "outdated" in the sense that there might be an easier way, since the docs state:

Starting from Eigen 3.3, it is now possible to use Eigen's objects and algorithms within CUDA kernels. However, only a subset of features are supported to make sure that no dynamic allocation is triggered within a CUDA kernel.

See also here. Unfortunately, I was not able to find any example of what this might look like.

My Question

Is it now possible to write a kernel such as the following, which should simply calculate a bunch of dot products?

__global__ void cu_dot(Eigen::Vector3d *v1, Eigen::Vector3d *v2, double *out, size_t N)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if(idx < N)
    {
        out[idx] = v1[idx].dot(v2[idx]);
    }
    return;
}

I can compile this, but it does not seem to work: when I try to copy the data back to the host, I get an illegal memory access. Note that I originally store the Vector3d's as `std::vector<Eigen::Vector3d>` and then use

cudaMalloc((void **)&p_device_v1, sizeof(Eigen::Vector3d)*n);
cudaMemcpy(p_device_v1, v1.data(), sizeof(Eigen::Vector3d)*n, cudaMemcpyHostToDevice);

I have set up an MWE project using CMake at https://github.com/GPMueller/eigen-cuda

1 Answer

Summer. ? 凉城 · 2019-04-27 13:10

In the MWE project on github, you wrote:

double dot(std::vector<Eigen::Vector3d> v1, std::vector<Eigen::Vector3d> v2)
{   
    ...     
    // Dot product
    cu_dot<<<(n+1023)/1024, 1024>>>(v1.data(), v2.data(), dev_ret, n);

The v1.data() and v2.data() pointers point to host (CPU) memory, but the kernel must be given pointers to GPU memory, i.e.

// Dot product
cu_dot<<<(n+1023)/1024, 1024>>>(dev_v1, dev_v2, dev_ret, n);
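For completeness, the full host-side flow might look like the following sketch (a hedged example, not the exact MWE code; the names dev_v1, dev_v2, dev_ret and the variable n are illustrative and assume n vector pairs already stored in host std::vectors):

```cuda
// Sketch: allocate device buffers, copy the host std::vectors over,
// launch the kernel with *device* pointers, then copy the results back.
std::vector<Eigen::Vector3d> v1(n), v2(n);   // host data, filled elsewhere
Eigen::Vector3d *dev_v1, *dev_v2;
double *dev_ret;
cudaMalloc((void **)&dev_v1,  n * sizeof(Eigen::Vector3d));
cudaMalloc((void **)&dev_v2,  n * sizeof(Eigen::Vector3d));
cudaMalloc((void **)&dev_ret, n * sizeof(double));
cudaMemcpy(dev_v1, v1.data(), n * sizeof(Eigen::Vector3d), cudaMemcpyHostToDevice);
cudaMemcpy(dev_v2, v2.data(), n * sizeof(Eigen::Vector3d), cudaMemcpyHostToDevice);

cu_dot<<<(n + 1023) / 1024, 1024>>>(dev_v1, dev_v2, dev_ret, n);

std::vector<double> ret(n);                  // one dot product per pair
cudaMemcpy(ret.data(), dev_ret, n * sizeof(double), cudaMemcpyDeviceToHost);
cudaFree(dev_v1); cudaFree(dev_v2); cudaFree(dev_ret);
```

This works because Eigen::Vector3d is a fixed-size type with no dynamic allocation, so an array of them can be copied to the device byte-for-byte.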

The CPU and GPU results are not identical, but that's an issue with your code: you didn't perform a reduction over the individual dot products.
