Using constants with CUDA

Posted 2020-02-05 03:41

Which is the best way of using constants in CUDA?

One way is to define constants in constant memory, like:

// CUDA global constants
__constant__ int M;

int main(void)
{
    ...
    int h_M = ...;  // host-side value to copy to the device symbol
    cudaMemcpyToSymbol(M, &h_M, sizeof(h_M));
    ...
}

An alternative way would be to use the C preprocessor:

#define M ...

I would think defining constants with the C preprocessor is much faster. What, then, are the benefits of using constant memory on a CUDA device?

2 Answers
虎瘦雄心在
#2 -- · 2020-02-05 03:59

Regular C/C++ style constants: in CUDA C (itself a modification of C99), constants are absolute compile-time entities. This is hardly surprising given how aggressively NVCC optimizes, owing to the nature of GPU processing.

#define: macros are, as always, inelegant but useful in a pinch.

The __constant__ variable specifier is, however, a completely new animal and something of a misnomer in my opinion. I will quote what Nvidia's documentation says below:

The __constant__ qualifier, optionally used together with __device__, declares a variable that:

  • Resides in constant memory space,
  • Has the lifetime of an application,
  • Is accessible from all the threads within the grid and from the host through the runtime library (cudaGetSymbolAddress() / cudaGetSymbolSize() / cudaMemcpyToSymbol() / cudaMemcpyFromSymbol()).

Nvidia's documentation specifies that __constant__ memory is accessible at register-level speed (near-zero latency), provided the same constant is being accessed by all threads of a warp.

They are declared at global scope in CUDA code. However, based on personal (and currently ongoing) experience, you have to be careful with this specifier when it comes to separate compilation, e.g. separating your CUDA code (.cu and .cuh files) from your C/C++ code by putting wrapper functions in C-style headers.

Unlike traditional const-qualified variables, however, these are initialized at runtime from the host code that allocates device memory and ultimately launches the kernel. I am currently working on code that demonstrates they can be set at runtime using cudaMemcpyToSymbol() before kernel execution.
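As a minimal sketch of that runtime-initialization pattern (variable and kernel names are illustrative, not from any particular codebase): the host sets the __constant__ symbol with cudaMemcpyToSymbol() before the launch, and every thread of a warp then reads the same value, which is the broadcast access pattern the constant cache is built for.

```cuda
#include <cstdio>

// Device-side constant, resident in constant memory space
__constant__ float scale;

// All threads of a warp read the same constant at the same time,
// so the read is a single broadcast from the constant cache.
__global__ void scaleArray(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= scale;
}

int main(void)
{
    const int n = 256;
    float h_data[n];
    for (int i = 0; i < n; ++i) h_data[i] = 1.0f;

    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemcpy(d_data, h_data, n * sizeof(float), cudaMemcpyHostToDevice);

    // Set the constant at runtime, before kernel execution
    float h_scale = 2.0f;
    cudaMemcpyToSymbol(scale, &h_scale, sizeof(h_scale));

    scaleArray<<<(n + 127) / 128, 128>>>(d_data, n);

    cudaMemcpy(h_data, d_data, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("%f\n", h_data[0]);

    cudaFree(d_data);
    return 0;
}
```

Note that the symbol itself (scale), not a string naming it, is passed to cudaMemcpyToSymbol(); the string form was removed in newer CUDA releases.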

They are quite handy, to say the least, given the L1-cache-level access speed that is guaranteed under that access pattern.

叛逆
#3 -- · 2020-02-05 04:15
  1. Constants that are known at compile time should be defined using preprocessor macros (e.g. #define) or via C/C++ const variables at global/file scope.
  2. Usage of __constant__ memory may be beneficial for programs that use certain values that don't change for the duration of the kernel, and for which certain access patterns are present (e.g. all threads access the same value at the same time). This is not better or faster than constants that satisfy the requirements of item 1 above.
  3. If the number of choices to be made by a program is relatively small, and these choices affect kernel execution, one possible approach for additional compile-time optimization would be to use templated code/kernels.
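As a hedged sketch of item 3 (kernel and parameter names are illustrative): a small set of runtime choices can be baked in as template parameters, so the compiler eliminates the untaken branch entirely instead of evaluating it per thread.

```cuda
// The choice is a compile-time template parameter: the 'if' below is
// resolved by the compiler, and the dead branch is removed.
template <bool UseAbs>
__global__ void transform(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        if (UseAbs)
            data[i] = fabsf(data[i]);
        else
            data[i] = -data[i];
    }
}

// Host code picks the instantiation once, outside the hot path.
void launchTransform(float *d_data, int n, bool useAbs)
{
    int blocks = (n + 127) / 128;
    if (useAbs)
        transform<true><<<blocks, 128>>>(d_data, n);
    else
        transform<false><<<blocks, 128>>>(d_data, n);
}
```

The cost is one kernel instantiation per choice, which is why this only pays off when the number of choices is small.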