I have a simple C function as follows:
unsigned char clamp(short value){
    if (value < 0) return 0;
    if (value > 0xff) return 0xff;
    return value;
}
Is it possible to rewrite it without using any if / else branching while being efficient?
EDIT:
I basically wish to see whether a bitwise/arithmetic-based implementation of clamping is possible. The objective is to process images on a GPU (Graphics Processing Unit), where this kind of code runs on every pixel. My guess is that if branches can be avoided, the overall throughput on the GPU would be higher.
A solution like (value < 0 ? 0 : ((value > 255) ? 255 : value)) is simply if/else branching rehashed with syntactic sugar, so I am not looking for that.
EDIT 2:
I can cut it down to a single if as follows, but I cannot think of anything better:
unsigned char clamp(short value){
    int more = value >> 8;           /* nonzero only when value is outside 0..255 */
    if(more){
        int sign = !(more >> 7);     /* 1 when value > 255, 0 when value < 0 */
        return sign * 0xff;
    }
    return value;
}
EDIT 3:
Just saw a very nice implementation of this in FFmpeg code:
/**
 * Clip a signed integer value into the 0-255 range.
 * @param a value to clip
 * @return clipped value
 */
static av_always_inline av_const uint8_t av_clip_uint8_c(int a)
{
    if (a&(~0xFF)) return (-a)>>31;
    else return a;
}
This certainly works and reduces it to one if nicely: a & ~0xFF is nonzero only when a falls outside 0..255, and (-a) >> 31 then yields 0 for negative a and (once truncated to uint8_t) 0xFF for a > 255.
One way to make it efficient is to declare this function as inline to avoid the function-call overhead. You could also turn it into a macro using the ternary operator, but that would remove the compiler's return-type checking.
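For instance, a minimal sketch of both variants (the names CLAMP_UINT8 and clamp_inline are just illustrative):

/* Macro version: no type checking, and the argument may be evaluated twice. */
#define CLAMP_UINT8(v) ((v) < 0 ? 0 : ((v) > 0xff ? 0xff : (v)))

/* Inline version: keeps the compiler's type checking, no call overhead. */
static inline unsigned char clamp_inline(short value)
{
    return value < 0 ? 0 : (value > 0xff ? 0xff : (unsigned char)value);
}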
You can do it without an explicit if by using ?: as shown by another poster, or by using interesting properties of abs(), which lets you compute the maximum or minimum of two values. For example, the expression (a + abs(a))/2 returns a for positive numbers and 0 otherwise (the maximum of a and 0). This gives a clamp built only from additions, subtractions and calls to abs().
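A minimal sketch of such an abs()-based clamp (assuming a 16-bit short promoted to int, and abs() from <stdlib.h>):

#include <stdlib.h>

/* max(v, 0)   = (v + abs(v)) / 2
 * min(v, 255) = (v + 255 - abs(v - 255)) / 2 */
static inline unsigned char clamp_abs(short value)
{
    int v = value;
    v = (v + abs(v)) / 2;              /* clamp below at 0   */
    v = (v + 255 - abs(v - 255)) / 2;  /* clamp above at 255 */
    return (unsigned char)v;
}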
To convince yourself that this works, you can compare it against the branching version for every possible short value and check that both give identical results.
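A small exhaustive test sketch (assuming the clamp_abs name from the sketch above, with the question's branching version recreated as clamp_ref):

#include <stdio.h>
#include <limits.h>

/* Reference implementation from the question. */
static unsigned char clamp_ref(short value)
{
    if (value < 0) return 0;
    if (value > 0xff) return 0xff;
    return value;
}

int main(void)
{
    int v;
    for (v = SHRT_MIN; v <= SHRT_MAX; v++) {
        if (clamp_abs((short)v) != clamp_ref((short)v)) {
            printf("mismatch at %d\n", v);
            return 1;
        }
    }
    printf("all %d values match\n", SHRT_MAX - SHRT_MIN + 1);
    return 0;
}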
Of course, one may argue that there is probably a test hidden inside abs(), but gcc -O3, for example, compiles it to straight-line, branch-free code. Note, however, that this will still be much less efficient than your original function, which the compiler already reduces to a very short instruction sequence. But at least it answers your question :)
You should time this ugly but arithmetic-only version.
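A sketch of what such an arithmetic-only version might look like (assuming 32-bit int and a sign-extending arithmetic right shift on negative values):

#include <stdint.h>

static inline uint8_t clamp_arith(short value)
{
    int v = value;
    v &= ~(v >> 31);                      /* negative values become 0     */
    v = (v | ((0xff - v) >> 31)) & 0xff;  /* values above 255 become 0xff */
    return (uint8_t)v;
}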
You write that you want to avoid branching on the GPU. It is true that branching can be very costly in a parallel environment, because either both branches have to be evaluated or synchronization has to be applied. But if the branches are small enough, the code will be faster than most arithmetic. The CUDA C Best Practices Guide describes why: short conditionals are compiled into predicated instructions, so no real branch (and no thread divergence) occurs.
Branch predication is fast. Bloody fast! If you look at the intermediate PTX code generated by the optimizing compiler, you will see that it is superior to even modest arithmetic. So code like that in davmac's answer is probably as fast as it can get.
I know you did not ask specifically about CUDA, but most of the Best Practices Guide also applies to OpenCL and probably to large parts of AMD's GPU programming.
BTW: in virtually every piece of GPU code I have ever seen, most of the time is spent on memory access, not on arithmetic. Make sure to profile! http://en.wikipedia.org/wiki/Program_optimization
Assuming a two-byte short, and at the cost of readability, you could do a 2D lookup-table:
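A sketch of the idea (assuming a 256x256 table indexed by the high and low bytes of the 16-bit value, filled once at start-up; the names are illustrative):

#include <stdint.h>

static uint8_t clamp_table[256][256];   /* 64 KB: one entry per possible short value */

/* Fill the table once, e.g. at start-up. */
static void init_clamp_table(void)
{
    int hi, lo;
    for (hi = 0; hi < 256; hi++) {
        for (lo = 0; lo < 256; lo++) {
            int raw = (hi << 8) | lo;
            int v = raw < 0x8000 ? raw : raw - 0x10000;   /* reinterpret as signed 16-bit */
            clamp_table[hi][lo] = v < 0 ? 0 : (v > 0xff ? 0xff : (uint8_t)v);
        }
    }
}

static inline uint8_t clamp_lut(short value)
{
    uint16_t u = (uint16_t)value;
    return clamp_table[u >> 8][u & 0xff];
}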
Sure, this looks bizarre (a 64 KB table for this trivial computation). However, considering that you mentioned you wanted to do this on a GPU, I'm thinking the above could be a texture look-up, which I believe is pretty quick on GPUs.
Further, if your GPU uses OpenGL, you could of course just use the clamp builtin directly. This won't type-convert (there is no 8-bit integer type in GLSL, it seems), but still.