I am using PyCUDA for CUDA programming, and I need to use random numbers inside a kernel function. The CURAND host library doesn't work inside kernels under PyCUDA. Since there is a lot of work to be done on the GPU, generating the random numbers on the CPU and then transferring them to the GPU won't work; rather, it would defeat the purpose of using the GPU.
Supplementary Questions:
- Is there a way to allocate memory on the GPU using 1 block and 1 thread?
- I am using more than one kernel. Do I need to use multiple SourceModule blocks?
There is one problem I have with the accepted answer: it relies on C++ name mangling, which is sort of nasty (you end up looking the kernels up by mangled names like `_Z10initkerneli` and `_Z14randfillkernelPfi`). To avoid that, we can wrap the kernel definitions in an `extern "C" {...}` clause manually. The code is still compiled with `no_extern_c=True` (so the C++ `#include <curand_kernel.h>` keeps working), and the kernels can then be fetched with `get_function` under their plain names.

Hope that helps.
Despite what you assert in your question, PyCUDA has pretty comprehensive support for CURAND. The GPUArray module has a direct interface for filling device memory driven from the host side (noting that the random generators still run on the GPU in this case).
It is also perfectly possible to use the device-side API from CURAND in PyCUDA kernel code. In this use case the trickiest part is allocating memory for the per-thread generator states. There are three choices: statically in code, dynamically using host-side memory allocation, and dynamically using device-side memory allocation. The following (very lightly tested) example illustrates the last of these, seeing as you asked about it in your question:
Here there is an initialization kernel which needs to be run once to allocate memory for the generator states and initialize them with a seed, and then a kernel which uses those states. You will need to be mindful of the malloc heap size limit if you want to run a lot of threads, but that limit can be manipulated via the PyCUDA driver API interface.