Without using any external counters or other state, I'm looking for an efficient function which takes an n-bit value (32 bits or thereabouts) and returns the subsequent value in a Gray code.
That is:
int fn(int x)
{
    int y = gray_to_binary(x);
    y = y + 1;
    return binary_to_gray(y);
}
But while the binary_to_gray() function is trivial (x ^ (x >> 1)), the corresponding gray_to_binary() is not so trivial at all (a loop of log(n) iterations).
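For concreteness, that log(n)-iteration decode loop for a 32-bit reflected Gray code can be sketched like this (not code from the question itself):

```c
#include <stdint.h>

// Decode a 32-bit reflected Gray code: each pass folds in a doubling shift,
// so 32 bits take log2(32) = 5 iterations.
uint32_t gray_to_binary(uint32_t x)
{
    for (uint32_t shift = 1; shift < 32; shift <<= 1)
        x ^= x >> shift;
    return x;
}
```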
Perhaps there is a more efficient sequence of operations? Either for the standard reflected Gray code, or for another Gray code chosen to suit this problem.
Aside: I see two possible solution types to this problem -- one is to choose a code that is easier to convert to binary and to use the form given above (or to demonstrate a more efficient conversion to binary for reflected codes), and the other is to defer conversion to binary altogether and to produce a method which walks through a Gray code without the use of a binary increment.
In the latter case, it might turn out to be especially difficult to convert the resulting code to binary. That's likely a down-side in practical terms, but it'd still be an interesting thing to see.
Update: Since it's been pointed out that the Gray decode is only log(n) operations (using either of two different techniques), I spent some time trying to figure out whether that is a strict limit on how far things can be simplified. All bits must be considered when determining the next operation to perform; otherwise the 'considered' bits would fail to change and the function would oscillate between two values. The input must be compressed, in some way, to a manageable scale to determine the next operation to perform.
To make it log(n-k) operations, a 2^k-entry LUT could be used to short-cut the last k operations (a comment suggests k=32).
Another technique which came to mind which can often reduce things very quickly is a combination of multiplication and bitmasks. For example, to compute the parity in order to implement the parity-based algorithm.
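As an illustration of that multiply-and-bitmask idea, here is the classic parity trick (this is the well-known bit-twiddling version, not code from the question):

```c
#include <stdint.h>

// Parity of a 32-bit word via shifts, a mask, and one multiply:
// fold to per-nibble parities, then let the multiply sum the eight
// nibble-parities into the top nibble of the product.
unsigned parity32(uint32_t v)
{
    v ^= v >> 1;                          // bit i: parity of bits i..i+1
    v ^= v >> 2;                          // bit i: parity of bits i..i+3
    v = (v & 0x11111111u) * 0x11111111u;  // sum nibble parities into bits 28..31
    return (v >> 28) & 1;
}
```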
From the multiply-and-bitmask approach, it seems like there might be space to invent a Gray code which simplifies the set of operations even further... but I don't imagine any such code is known.
There are three ways I'd go with this depending on what you are after.
1) one common function: Write a single function that handles the widest Gray-code value you need to support, then follow the method that @harold suggested, using ever-greater shifts and xors:
extend the input data type and the shifts as needed until the next shift amount would equal or exceed the number of data bits. Setting up and testing even one loop will cost more than simply running these instructions straight through. This will be only slightly slower than a lookup method.
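A sketch of option 1 for 32 bits, fully unrolled as described (the function name echoes the naming in option 2 below and is my own choice):

```c
#include <stdint.h>

// Reflected-Gray-code decode, unrolled: shifts double until the next
// shift (32) would equal the number of data bits, so we stop at 16.
uint32_t graycodeToBinary_32(uint32_t x)
{
    x ^= x >> 1;
    x ^= x >> 2;
    x ^= x >> 4;
    x ^= x >> 8;
    x ^= x >> 16;
    return x;
}
```

A 64-bit version would add one more step, x ^= x >> 32.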
2) function per power of two: Same as above, but with graycodeToBinary_8, _16, and _32 versions. This can be of benefit if you do lots of small conversions and the occasional very large one. If you're using C++, overloading can automatically choose the appropriate version for you (and you can turn it up to ridiculous with some template metaprogramming).
3) lookup table: This seems like a good idea until you consider cache behavior. If you're not using the lookup table very often, it's needlessly complex compared with the above method. If you are using it often, it will likely wreck your cache behavior (lots of scattered reads across a large region of memory). There is a small slice of applications where this will turn out to be very slightly faster. Also, you have to create the lookup table in the first place, so you'll likely have a graycode_to_binary function available anyway.
In the end I've rarely found a use for anything but option 1). I've seen one embedded application which hard-coded the lookup table into its ROM; that was fine since the processor didn't have a cache anyway.
I've implemented an algorithm in C# that seems to work:
First you need the parity of the integer. I've implemented it for a ulong (64-bit), but you can easily modify it for any desired width.
Next you need to check whether the parity is even (i.e. whether the number of set bits is even). If it is, you simply flip the last bit. If the parity is odd, you instead flip the bit to the left of the least significant set bit, which can be found with the following method:
There is one border case: if the least significant set bit is the highest bit of your Gray code, you can of course not flip the bit to its left; in that case you simply set your counter back to zero.
To summarize, you can use the following code:
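The answer's C# snippets did not survive extraction; a C sketch of the steps it describes (the function name and the xor-folding parity computation are my own, not the original code) might look like this:

```c
#include <stdint.h>

// Increment a 64-bit reflected Gray code using the parity rule described
// above: even parity flips the last bit; odd parity flips the bit to the
// left of the least significant set bit, wrapping to zero in the border case.
uint64_t next_gray(uint64_t x)
{
    uint64_t p = x;                      // compute parity by xor-folding
    p ^= p >> 32; p ^= p >> 16; p ^= p >> 8;
    p ^= p >> 4;  p ^= p >> 2;  p ^= p >> 1;

    if ((p & 1) == 0)
        return x ^ 1;                    // even parity: flip the last bit

    uint64_t lsb = x & (~x + 1);         // least significant set bit
    uint64_t next = lsb << 1;            // bit to its left
    if (next == 0)                       // border case: lsb was the top bit
        return 0;                        // wrap the counter to zero
    return x ^ next;                     // odd parity: flip that bit
}
```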
From Wikipedia (http://en.wikipedia.org/wiki/Gray_code#Converting_to_and_from_Gray_code), a simple algorithm for incrementing a Gray code:
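The algorithm itself was stripped from the post; based on the surrounding text and the linked Wikipedia section, it can be reconstructed roughly as follows (the function name is mine; the overflow check is deliberately omitted, as noted further down):

```c
#include <stdint.h>

// If the parity of x is even, toggle bit 0; otherwise toggle the bit
// to the left of the rightmost 1.
uint32_t gray_increment(uint32_t x)
{
    uint32_t p = x;              // parity by xor-folding, O(log k)
    p ^= p >> 16; p ^= p >> 8; p ^= p >> 4;
    p ^= p >> 2;  p ^= p >> 1;

    if ((p & 1) == 0)
        return x ^ 1;            // even parity: toggle the last bit

    uint32_t y = x & (~x + 1);   // rightmost set bit
    return x ^ (y << 1);         // odd parity: toggle the bit to its left
}
```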
Finding the parity of x takes O(log(k)), where k is the bitlength of x. However, every step in the above algorithm changes parity, so in a loop you could just alternate the even and odd parity operations. (Of course, that fails the OP requirement that no state be kept; it requires one bit of state. Also, see below.)
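To illustrate that alternation, here is a sketch (names mine) that walks the sequence while carrying only the single bit of parity state, never recomputing the parity:

```c
#include <stdint.h>

// Write the first n codes of the reflected Gray sequence into out[],
// alternating the even-parity and odd-parity steps.
void gray_walk(uint32_t *out, int n)
{
    uint32_t x = 0;              // the all-zero code has even parity
    int odd = 0;                 // one bit of state
    for (int i = 0; i < n; i++) {
        out[i] = x;
        if (!odd)
            x ^= 1;                      // even parity: flip the last bit
        else
            x ^= (x & (~x + 1)) << 1;    // odd: flip bit left of rightmost 1
        odd = !odd;              // every step toggles the parity
    }
}
```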
Finding y is O(1) using a standard bit-hack: y = x & -x, where - is the 2's complement negate operator; you could also write it as y = x and not (x - 1).
You also might be able to use the parity-enhanced Gray code, which is the Gray code suffixed with an inverse parity bit (so that the parity of the enhanced code is always odd). In that case you can use the following O(1) algorithm:
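This second algorithm was also stripped from the post; a plausible reconstruction, consistent with the description and the overflow note below, is this C sketch (function name mine, overflow check omitted):

```c
#include <stdint.h>

// Increment a parity-enhanced Gray code (a Gray code with an inverse-parity
// bit appended, so the whole word always has odd parity). Because the parity
// is always odd, a rightmost 1 always exists, and one step both flips the
// bit to its left and toggles the parity bit -- no branches needed.
uint32_t next_enhanced_gray(uint32_t x)
{
    uint32_t y = x & (~x + 1);   // rightmost set bit
    return x ^ ((y << 1) | 1);   // flip the bit left of y and the parity bit
}
```

For example, for the 2-bit code with its parity bit this steps 001 -> 010 -> 111 -> 100.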
In both the above algorithms, I've left out the overflow check for clarity. To make the code cycle on overflow, replace "y leftshift 1" with "y leftshift 1 if y is not the high-order bit, else y". (On most architectures, the test could be "if y leftshift 1 is not 0".) Alternatively, you could throw an exception or return an error in the event that y is too large to shift left.