I have a float x which should be in the [0, 1] range, but it undergoes several numerical operations, so the result may end up slightly outside [0, 1]. I need to convert this result to a uint y, using the entire range of UInt32. Of course, I need to clamp x to the [0, 1] range and scale it. But which order of operations is better?
y = (uint)round(min(max(x, 0.0F), 1.0F) * UInt32.MaxValue)
or
y = (uint)round(min(max(x * UInt32.MaxValue, 0.0F), UInt32.MaxValue))
In other words, is it better to scale first and then clamp, or to clamp first and then scale? I am not very well versed in IEEE floating-point representation, but I believe there is a difference between the two orders of computation above.
Because the multiplication that takes you from [0.0f .. 1.0f] to [0 .. UInt32.MaxValue] can itself be approximate, the order of operations that most obviously has the property you desire is multiply, then clamp, then round.
The maximum value to clamp to is the largest float strictly below 2^32, that is, 4294967040.0f. Although this number is 255 below UInt32.MaxValue, allowing any larger value would mean overflowing the conversion to UInt32. Either of the lines below should work.
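For instance, assuming C#'s Math.Min, Math.Max and Math.Round (treat the exact formulation as an illustrative sketch rather than the only possible one):

```csharp
// Scale, then clamp to the largest float below 2^32, then round:
uint y = (uint)Math.Round(Math.Min(Math.Max(x * 4294967040.0f, 0.0f), 4294967040.0f));

// Same order of operations, rounding via +0.5f and a truncating cast:
uint y2 = (uint)(Math.Min(Math.Max(x * 4294967040.0f, 0.0f), 4294967040.0f) + 0.5f);
```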
In this first version, you have the option to multiply by UInt32.MaxValue instead. The choice is between having very slightly larger results overall (and thus rounding to 4294967040 a few more values that were close to 1.0f but below it), or only sending to 4294967040 the values 1.0f and above.

You can also clamp to [0.0f .. 1.0f] if you do not multiply by too large a number afterwards, so that there is no risk of making the value larger than the largest float that can be converted.
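Again as a sketch, with the same assumed helpers:

```csharp
// Clamp to [0.0f, 1.0f] first, then scale by a constant small enough
// that the product can never exceed the largest float convertible to uint:
uint y = (uint)Math.Round(Math.Min(Math.Max(x, 0.0f), 1.0f) * 4294967040.0f);
```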
Suggestion for your comment below, about crafting a conversion that goes up to UInt32.MaxValue.
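Something along these lines should work (a sketch; the use of Math.Round and the split at 0.5f are one way to do it, computing the upper half downward from UInt32.MaxValue so that 1.0f lands exactly on it):

```csharp
uint y;
if (x <= 0.0f)
    y = 0;
else if (x >= 1.0f)
    y = UInt32.MaxValue;
else if (x < 0.5f)
    // Lower half: scale up by 2^32.
    y = (uint)Math.Round(x * 4294967296.0f);
else
    // Upper half: measure the distance to 1.0f and count down from
    // UInt32.MaxValue, so that the top of the range is reachable.
    y = UInt32.MaxValue - (uint)Math.Round((1.0f - x) * 4294967296.0f);
```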
This computation, considered as a function from x to y, is increasing (including around 0.5f), and it goes up to UInt32.MaxValue. You can re-order the tests according to what you think will be the most likely distribution of values. In particular, assuming that few values are actually below 0.0f or above 1.0f, you can compare to 0.5f first, and then only compare to the bound that is relevant.
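For instance, reordering the same sketch:

```csharp
uint y;
if (x < 0.5f)
    // Only the lower bound can be violated on this side.
    y = x <= 0.0f ? 0u : (uint)Math.Round(x * 4294967296.0f);
else
    // Only the upper bound can be violated on this side.
    y = x >= 1.0f
        ? UInt32.MaxValue
        : UInt32.MaxValue - (uint)Math.Round((1.0f - x) * 4294967296.0f);
```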
Single can't support enough accuracy to maintain the interim result, so you'll need to scale and then clamp, but you can't clamp to UInt32.MaxValue because it can't be represented by a single. The maximum UInt32 you can safely clamp to is 4294967167; see the test below.
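A sketch of such a test: it walks down from UInt32.MaxValue and reports the largest value whose conversion to float does not exceed UInt32.MaxValue (and which can therefore be converted back to uint without overflowing).

```csharp
// (assumes `using System;` is in scope)
for (uint n = UInt32.MaxValue; n > 0; n--)
{
    float f = n;                         // uint -> float rounds to nearest
    if ((double)f <= UInt32.MaxValue)    // safe to convert back to uint
    {
        Console.WriteLine(n);            // prints 4294967167
        break;
    }
}
```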
Given that x might be slightly outside [0, 1], the second approach is not as easy as the first one, due to clamping issues in the UInt32 value space (every number in UInt32 is valid). The first one is also easier to understand: get a number into an interval, then scale.
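A sketch of that order (note that the multiplication here is done in double, which is my own choice, so that 1.0f can map all the way to UInt32.MaxValue without overflowing the conversion; in pure single precision the clamp-bound issues discussed above apply):

```csharp
// Clamp x into [0, 1] first, then scale. Doing the scaling in double keeps
// UInt32.MaxValue exact, so x == 1.0f maps to UInt32.MaxValue safely.
float clamped = Math.Min(Math.Max(x, 0.0f), 1.0f);
uint y = (uint)Math.Round(clamped * (double)UInt32.MaxValue);
```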
Also, I tested it with a couple of million values and they give the same result, so it doesn't matter which one you use.
The three essential attributes of correct color format conversion are:
A corollary of the second point is that color format conversions that use round are almost always incorrect, because the bins that map to the minimum and maximum results are usually too small by half. This isn’t as critical with high precision formats like uint32, but it’s still good to get right.
You mentioned in a comment that your C# code is being translated to OpenCL. OpenCL has by far the nicest set of conversions of any language I've encountered (seriously, if you're designing a compute-oriented language and you don't copy what OpenCL did here, you're doing it wrong), which makes this pretty easy: its built-in saturating conversions (the convert_uint_sat family) handle the clamping for you.
However, your question is actually about C#; I’m not a C# programmer, but the approach there should look something like this:
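As a sketch (my own reconstruction rather than the answer's original snippet): scale by 2^32, truncate, and saturate at both ends, so that each output value corresponds to an equal-sized bin of inputs and 0.0f and 1.0f land exactly on the extremes. The arithmetic is done in double so the product and the bounds are exact.

```csharp
// Equal-sized bins: floor(x * 2^32), saturated to [0, UInt32.MaxValue].
double scaled = Math.Floor((double)x * 4294967296.0);
uint y = scaled <= 0.0 ? 0u
       : scaled >= 4294967295.0 ? UInt32.MaxValue
       : (uint)scaled;
```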