Given a MATLAB uint32 to be interpreted as a bit string, what is an efficient and concise way of counting how many nonzero bits are in the string?
I have a working, naive approach that loops over the bits, but it's too slow for my needs. (A C++ implementation using std::bitset::count() runs almost instantly.)
I've found a pretty nice page listing various bit counting techniques, but I'm hoping there is an easy MATLAB-esque way.
http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetNaive
Update #1
Just implemented the Brian Kernighan algorithm as follows:
w = 0;
while ( bits > 0 )
    bits = bitand( bits, bits-1 );  % clear the lowest set bit
    w = w + 1;
end
Performance is still poor: over 10 seconds to compute the weights of 4096^2 values. My C++ code using std::bitset::count() does this in subsecond time.
Update #2
Here is a table of run times for the techniques I've tried so far. I will update it as I get additional ideas/suggestions.
Vectorized Scheiner algorithm => 2.243511 sec
Vectorized Naive bitget loop => 7.553345 sec
Kernighan algorithm => 17.154692 sec
length( find( bitget( val, 1:32 ) ) ) => 67.368278 sec
nnz( bitget( val, 1:32 ) ) => 349.620259 sec
Justin Scheiner's algorithm, unrolled loops => 370.846031 sec
Justin Scheiner's algorithm => 398.786320 sec
Naive bitget loop => 456.016731 sec
sum( dec2bin( val ) == '1' ) => 1069.851993 sec
Comment: MATLAB's dec2bin() function seems to be very poorly implemented; it runs extremely slowly.
Comment: The "Naive bitget loop" algorithm is implemented as follows:
w = 0;
for i = 1:32
    if bitget( val, i ) == 1
        w = w + 1;
    end
end
Comment: The loop unrolled version of Scheiner's algorithm looks as follows:
function w = computeWeight( val )
    % Parallel (SWAR) bit count: each step sums adjacent bit fields in place.
    w = val;
    w = bitand(bitshift(w, -1), uint32(1431655765)) + ...    % 0x55555555
        bitand(w, uint32(1431655765));
    w = bitand(bitshift(w, -2), uint32(858993459)) + ...     % 0x33333333
        bitand(w, uint32(858993459));
    w = bitand(bitshift(w, -4), uint32(252645135)) + ...     % 0x0F0F0F0F
        bitand(w, uint32(252645135));
    w = bitand(bitshift(w, -8), uint32(16711935)) + ...      % 0x00FF00FF
        bitand(w, uint32(16711935));
    w = bitand(bitshift(w, -16), uint32(65535)) + ...        % 0x0000FFFF
        bitand(w, uint32(65535));
end
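For what it's worth, the "vectorized" timing above presumably comes from calling this function on the whole uint32 array at once rather than element by element, since bitand and bitshift operate elementwise. A minimal usage sketch (the random test data is only an assumption for illustration):
% Hypothetical test data: 4096^2 random uint32 values, counted in a single call.
vals = uint32(floor(rand(4096*4096, 1) * 2^32));
weights = computeWeight(vals);   % elementwise bit counts for the whole vector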
I implemented the "Best 32 bit Algorithm" from the Stanford link at the top (sketched below). The improved algorithm reduced processing time by about 6%. I also optimized the segment size and found that 32K is stable and improves the time by 15% over 4K. I expect the 4096x4096 time to be 40% of the vectorized Scheiner algorithm's.
I did some timing comparisons on MATLAB Cody and determined that a segmented, modified, vectorized Scheiner approach gives the best performance.
That is a time reduction of more than 50%, based on the Cody change from 1.30 sec to 0.60 sec for a vector of length L = 4096*4096.
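For reference, here is a rough MATLAB sketch of the multiply-based count that the bithacks page describes as the best method for 32-bit integers (assuming that is what "Best 32 bit Algorithm" refers to). The function name popcount32 and the uint64 workaround are assumptions: MATLAB integer arithmetic saturates rather than wrapping, so the final multiply is done in uint64 and masked back to 32 bits to mimic the C overflow behaviour.
function w = popcount32( v )
    % Sketch of the multiply-based 32-bit bit count; works elementwise on arrays.
    v = uint32(v);
    v = v - bitand(bitshift(v, -1), uint32(1431655765));          % 0x55555555
    v = bitand(v, uint32(858993459)) + ...                        % 0x33333333
        bitand(bitshift(v, -2), uint32(858993459));
    v = bitand(v + bitshift(v, -4), uint32(252645135));           % 0x0F0F0F0F
    p = uint64(v) .* uint64(16843009);                            % * 0x01010101 without saturation
    w = uint32(bitshift(bitand(p, uint64(4294967295)), -24));     % emulate 32-bit wrap, take top byte
end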
Try splitting the job into smaller parts. My guess is that if you process all the data at once, MATLAB performs each operation on every integer before moving on to the next step, so the processor's cache is invalidated at each step.
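For example, a minimal sketch of segmented processing, assuming a uint32 input vector vals and reusing the computeWeight function from the update above (the 32768-element segment length is an assumption):
segLen = 32768;                          % assumed segment size (32K elements)
w = zeros(size(vals), 'uint32');
for s = 1:segLen:numel(vals)
    e = min(s + segLen - 1, numel(vals));
    w(s:e) = computeWeight(vals(s:e));   % vectorized bit count, one segment at a time
end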
Unless this is a MATLAB implementation exercise, you might want to just take your fast C++ implementation and compile it as a mex function, once per target platform.
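If you go that route, the MATLAB side is just a one-time compile plus an ordinary call; popcount_mex.cpp is a hypothetical file name for the C++ std::bitset implementation, and its interface here is assumed:
mex popcount_mex.cpp        % builds popcount_mex.<mexext> for the current platform
w = popcount_mex(vals);     % assumed interface: uint32 vector in, per-element counts out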
I'd be interested to see how fast this solution is:
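A rough MATLAB sketch of such a mask-and-shift count (the function name, loop form, and mask/shift tables are a reconstruction, not the original snippet):
function w = bitCountParallel( v )
    % Sketch: each pass sums adjacent bit fields, doubling the field width.
    B = uint32([1431655765, 858993459, 252645135, 16711935, 65535]);  % 0x55.., 0x33.., 0x0F.., 0x00FF.., 0xFFFF
    S = [1, 2, 4, 8, 16];                                             % shift amounts
    w = uint32(v);
    for k = 1:5
        w = bitand(bitshift(w, -S(k)), B(k)) + bitand(w, B(k));
    end
end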
Going back, I see that this is the 'parallel' solution given on the bithacks page.