I'm trying to construct a parallel algorithm with CUDA that takes an array of integers and removes all of the 0's, with or without keeping the order.
Example:
Global Memory: {0, 0, 0, 0, 14, 0, 0, 17, 0, 0, 0, 0, 13}
Host Memory Result: {17, 13, 14, 0, 0, ...}
The simplest way is to use the host to remove the 0's in O(n) time. But considering I have around 1000 elements, it probably will be faster to leave everything on the GPU and condense it first, before sending it.
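For reference, that host-side O(n) removal is essentially the erase-remove idiom; a minimal sketch (the function name is just illustrative):

#include <algorithm>
#include <vector>

// Removes all zeros from a host vector in O(n) time, preserving order.
void removeZerosOnHost(std::vector<int> &v)
{
    v.erase(std::remove(v.begin(), v.end(), 0), v.end());
}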
The preferred method would be to create an on-device stack, such that each thread can push and pop values onto and off of the stack in any order. However, I don't think CUDA has an implementation of this.
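For what it's worth, a "push" of that kind can be emulated with a single global atomic counter instead of a real stack; a minimal sketch, with illustrative kernel and variable names:

// Each thread holding a non-zero element "pushes" it by atomically reserving
// the next free slot of the output array (order is not preserved).
__global__ void pushNonZeros(const int *in, int *out, int *outCount, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < n && in[tid] != 0) {
        int slot = atomicAdd(outCount, 1);   // "push": reserve one output slot
        out[slot] = in[tid];
    }
}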
An equivalent (but much slower) method would be to keep attempting to write, until all threads have finished writing:
__global__ void kernelRemoveSpacing(int *array, int *outArray, int arraySize)
{
    // Zero elements do not compete for an output slot.
    int val = array[threadIdx.x];
    if (val == 0)
        return;

    for (int i = 0; i < arraySize; i++) {
        // Attempt to claim output slot i.
        outArray[i] = val;
        __threadfence();

        // If we were the lucky thread we won!
        // Kill the thread and continue re-incarnated in a different thread.
        if (outArray[i] == val)
            return;
    }
}
The only benefit of this method is that it would run in O(f(x)) time, where f(x) is the average number of non-zero values in the array (f(x) ~= ln(n) for my implementation, thus O(ln(n)) time, but with a high constant factor).
Finally, a sort algorithm such as quicksort or mergesort would also solve the problem, and does in fact run in O(ln(n)) relative time. I think there might be an even faster algorithm than this, as we do not need to waste time ordering (swapping) pairs of zero elements or pairs of non-zero elements (the order does not need to be kept).
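As a rough sketch of the sorting idea, assuming Thrust is available and the values are non-negative: sorting the device array in descending order moves every non-zero value to the front, so only the non-zero prefix has to be kept (names are illustrative):

#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/count.h>
#include <thrust/functional.h>

// After the sort, all non-zero values precede the zeros
// (their relative order is lost; assumes non-negative values).
void compactBySort(thrust::device_vector<int> &d_vec)
{
    thrust::sort(d_vec.begin(), d_vec.end(), thrust::greater<int>());
    int nonZeros = (int)(d_vec.size() - thrust::count(d_vec.begin(), d_vec.end(), 0));
    d_vec.resize(nonZeros);   // keep only the non-zero prefix
}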
So I'm not quite sure which method would be the fastest, and I still think there's a better way of handling this. Any suggestions?
With this answer, I'm only trying to provide more details on Davide Spataro's approach.
As you mentioned, stream compaction consists of removing undesired elements from a collection depending on a predicate. For example, considering an array of integers and the predicate p(x) = x > 5, the array A = {6, 3, 2, 11, 4, 5, 3, 7, 5, 77, 94, 0} is compacted to B = {6, 11, 7, 77, 94}.
The general idea of stream compaction approaches is that a different computational thread is assigned to a different element of the array to be compacted. Each such thread must decide whether to write its corresponding element to the output array depending on whether it satisfies the relevant predicate. The main problem of stream compaction is thus letting each thread know in which position its corresponding element must be written in the output array.
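One common way to solve that positioning problem, sketched here with Thrust primitives (variable and functor names are illustrative assumptions), is to compute a 0/1 flag per element and take its exclusive prefix sum; the scan result is exactly the output index of every element that satisfies the predicate:

#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/scan.h>
#include <thrust/scatter.h>

struct is_nonzero_flag
{
    __host__ __device__ int operator()(int x) const { return x != 0 ? 1 : 0; }
};

// Compacts d_in into a new device vector; stable (relative order preserved).
thrust::device_vector<int> compactByScan(const thrust::device_vector<int> &d_in)
{
    int n = d_in.size();
    thrust::device_vector<int> d_flags(n), d_pos(n);

    // 1) flag[i] = 1 if element i satisfies the predicate, else 0
    thrust::transform(d_in.begin(), d_in.end(), d_flags.begin(), is_nonzero_flag());

    // 2) exclusive scan of the flags yields the output index of every kept element
    thrust::exclusive_scan(d_flags.begin(), d_flags.end(), d_pos.begin());

    // total number of kept elements = last scan value + last flag
    int outSize = (n == 0) ? 0 : (int)d_pos.back() + (int)d_flags.back();
    thrust::device_vector<int> d_out(outSize);

    // 3) scatter only the flagged elements to their computed positions
    thrust::scatter_if(d_in.begin(), d_in.end(), d_pos.begin(), d_flags.begin(), d_out.begin());
    return d_out;
}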
The approach in [1,2] is an alternative to Thrust's copy_if mentioned above and consists of three steps:
Step #1. Let P be the number of launched threads and N, with N > P, the size of the vector to be compacted. The input vector is divided into sub-vectors of size S equal to the block size. The __syncthreads_count(pred) block intrinsic is exploited, which counts the number of threads in a block satisfying the predicate pred. As a result of the first step, each element of the array d_BlockCounts, which has size N/P, contains the number of elements meeting the predicate pred in the corresponding block.
Step #2. An exclusive scan operation is performed on the array d_BlockCounts. As a result of the second step, each thread knows how many elements are written by the previous blocks. Accordingly, it knows the position where to write its corresponding element, up to an offset related to its own block (a rough sketch of these first two steps is given after Step #3).
Step #3. Each thread computes the mentioned offset using warp intrinsic functions and eventually writes to the output array. It should be noted that the execution of step #3 is related to warp scheduling. As a consequence, the order of the elements in the output array does not necessarily reflect the order of the elements in the input array.
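This is not the authors' code (which is linked below), but a rough sketch of what Steps #1 and #2 could look like; the kernel and variable names are assumptions:

// Step #1 (sketch): one thread per element; every block counts how many of its
// elements satisfy the predicate (here: element != 0) with __syncthreads_count.
__global__ void blockCountKernel(const int *d_in, int *d_BlockCounts, int N)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    int pred = (tid < N) ? (d_in[tid] != 0) : 0;

    // All threads of the block receive the same per-block count.
    int count = __syncthreads_count(pred);
    if (threadIdx.x == 0)
        d_BlockCounts[blockIdx.x] = count;
}

// Step #2 (sketch): exclusive scan of the per-block counts, e.g. with Thrust,
// so each block knows how many elements are written by the blocks before it:
// thrust::exclusive_scan(thrust::device, d_BlockCounts, d_BlockCounts + numBlocks, d_BlockOffsets);

Step #3 then adds the intra-block offsets, computed with warp intrinsics, to these per-block bases before writing.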
Of the three steps above, the second is performed by CUDA Thrust's exclusive_scan primitive and is computationally significantly less demanding than the other two.
For an array of 2097152 elements, the mentioned approach executed in 0.38 ms on an NVIDIA GTX 960 card, in contrast to 1.0 ms for CUDA Thrust's copy_if. The mentioned approach appears to be faster for two reasons: 1) it is specifically tailored to cards supporting warp intrinsic functions; 2) the approach does not guarantee the output ordering.
It should be noted that we have also tested the approach against the code available at inkc.sourceforge.net. Although the latter code is arranged in a single kernel call (it does not employ any CUDA Thrust primitive), it does not have better performance compared to the three-kernel version.
The full code is available here and is slightly optimized as compared to Davide Spataro's original routine.
Stream compaction is a well-known problem for which a lot of code has been written (Thrust and Chagg, to cite two libraries that implement stream compaction on CUDA).
If you have a relatively new CUDA-capable device which supports intrinsic functions such as __ballot (compute capability >= 3.0), it is worth trying a small CUDA procedure that performs stream compaction much faster than Thrust.
You can find the code and minimal documentation here: https://github.com/knotman90/cuStreamComp
It uses the ballot function in a single-kernel fashion to perform the compaction.
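This is not the cuStreamComp code itself, but a minimal sketch of the ballot idea, assuming a block size that is a multiple of 32, a d_counter initialized to zero, and the *_sync variants of the intrinsics; all names are illustrative:

// Each warp uses a ballot to find which lanes hold non-zero elements and a
// popcount to compute per-lane offsets; one atomic per warp reserves output slots.
__global__ void warpCompactKernel(const int *d_in, int *d_out, int *d_counter, int N)
{
    int tid  = blockIdx.x * blockDim.x + threadIdx.x;
    int lane = threadIdx.x & 31;
    int val  = (tid < N) ? d_in[tid] : 0;
    int pred = (val != 0);

    unsigned mask  = __ballot_sync(0xffffffffu, pred);     // non-zero lanes of this warp
    int warpCount  = __popc(mask);                          // non-zeros in this warp
    int laneOffset = __popc(mask & ((1u << lane) - 1));     // non-zeros in lower lanes

    // Lane 0 reserves warpCount slots with a single atomic, then broadcasts the base.
    int warpBase = 0;
    if (lane == 0 && warpCount > 0)
        warpBase = atomicAdd(d_counter, warpCount);
    warpBase = __shfl_sync(0xffffffffu, warpBase, 0);

    if (pred)
        d_out[warpBase + laneOffset] = val;
}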
What you are asking for is a classic parallel algorithm called stream compaction (1).
If Thrust is an option, you may simply use thrust::copy_if. This is a stable algorithm: it preserves the relative order of all elements.
Rough sketch:
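(A minimal illustrative example, assuming a simple is_nonzero functor as the predicate; not necessarily the answer's original sketch.)

#include <thrust/device_vector.h>
#include <thrust/copy.h>

struct is_nonzero
{
    __host__ __device__ bool operator()(int x) const { return x != 0; }
};

int main()
{
    int h_in[] = {0, 0, 0, 0, 14, 0, 0, 17, 0, 0, 0, 0, 13};
    thrust::device_vector<int> d_in(h_in, h_in + 13);
    thrust::device_vector<int> d_out(d_in.size());

    // copy_if returns an iterator one past the last element written.
    thrust::device_vector<int>::iterator d_end =
        thrust::copy_if(d_in.begin(), d_in.end(), d_out.begin(), is_nonzero());

    d_out.resize(d_end - d_out.begin());   // d_out now holds {14, 17, 13}
    return 0;
}

Because copy_if is stable, d_out keeps the non-zero elements in their original relative order.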
If Thrust is not an option, you may implement stream compaction yourself (there is plenty of literature on the topic). It's a fun and reasonably simple exercise, while also being a basic building block for more complex parallel primitives.
(1) Strictly speaking, it's not exactly stream compaction in the traditional sense, as stream compaction is traditionally a stable algorithm but your requirements do not include stability. This relaxed requirement could perhaps lead to a more efficient implementation?