I'm looking for a fast way to compute a 3D Morton number. This site has a magic-number based trick for doing it for 2D Morton numbers, but it doesn't seem obvious how to extend it to 3D.
So basically I have three 10-bit numbers that I want to interleave into a single 30-bit number with a minimal number of operations.
I had a similar problem today, but instead of 3 numbers, I had to combine an arbitrary number of values, each of any bit length. I employed my own sort of bit-spreading and masking algorithm and applied it to C# BigIntegers. As a setup step, the code figures out the magic numbers and masks for the given number of dimensions and bit depth; then you can reuse the object for multiple conversions.
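The original code is C#; below is the same spread-and-mask idea as a Python sketch (Python's arbitrary-precision ints stand in for BigInteger, and the class and method names are mine, not the original's):

```python
class Interleaver:
    def __init__(self, dims, bits):
        # Precompute the (shift, mask) steps for `dims` values of `bits`
        # bits each: bit i of an input has to end up at bit i * dims.
        self.dims, self.bits = dims, bits
        self.steps = []
        half = 1
        while half * 2 < bits:              # largest half-block size needed
            half *= 2
        while half >= 1:
            shift = half * (dims - 1)
            mask = 0
            for i in range(bits):
                # After this step, bit i has moved left by
                # (dims - 1) * (i rounded down to a multiple of `half`).
                mask |= 1 << (i + (dims - 1) * (i & ~(half - 1)))
            self.steps.append((shift, mask))
            half >>= 1

    def spread(self, x):
        # Space the bits of x out: bit i goes to bit i * dims.
        x &= (1 << self.bits) - 1
        for shift, mask in self.steps:
            x = (x | (x << shift)) & mask
        return x

    def interleave(self, *values):
        # values[k] occupies bits k, k + dims, k + 2*dims, ...
        out = 0
        for k, v in enumerate(values):
            out |= self.spread(v) << k
        return out

enc = Interleaver(dims=3, bits=10)          # build once, reuse many times
assert enc.interleave(0b11, 0b00, 0b00) == 0b001001
```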
The simplest is probably a lookup table, if you have 4 KiB of free space:
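For three 10-bit inputs, that is one pre-spread entry per 10-bit value; at 32 bits per entry, 1024 entries is the 4 KiB. A sketch (table and function names are mine):

```python
# Entry i has bit j of i moved to bit 3*j, for all 10 bits of i.
TABLE = [sum(((i >> j) & 1) << (3 * j) for j in range(10)) for i in range(1024)]

def morton3(x, y, z):
    # x occupies bits 0, 3, 6, ...; y is offset by 1; z by 2.
    return TABLE[x] | (TABLE[y] << 1) | (TABLE[z] << 2)
```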
The bit hack uses shifts and masks to spread the bits out: each step shifts the value, ORs it with itself to copy some of the bits into the empty spaces, then masks out combinations so only the original bits remain.
For example:
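Here is the 2D version from the linked page, with comments (my annotation) tracking the 16 input bits f…0 through each step:

```python
x &= 0x0000FFFF                  # ---- ---- ---- ---- fedc ba98 7654 3210
x = (x | (x << 8)) & 0x00FF00FF  # ---- ---- fedc ba98 ---- ---- 7654 3210
x = (x | (x << 4)) & 0x0F0F0F0F  # ---- fedc ---- ba98 ---- 7654 ---- 3210
x = (x | (x << 2)) & 0x33333333  # --fe --dc --ba --98 --76 --54 --32 --10
x = (x | (x << 1)) & 0x55555555  # -f-e -d-c -b-a -9-8 -7-6 -5-4 -3-2 -1-0
```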
In each iteration, each block is split in two, the rightmost bit of the leftmost half of the block moved to its final position, and a mask applied so only the required bits remain.
Once you have spaced the inputs out, shifting them so the values of one fall into the zeros of the other is easy.
To extend that technique to three values, with two free bits between the bits of each value in the final result, you have to increase the shifts between where the bits end up. It gets a bit trickier, as the starting block size isn't a power of 2, so you could either split it down the middle or on a power-of-2 boundary.
So an evolution like this might work:
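For instance, for 10-bit inputs, splitting on a power-of-2 boundary (98 | 76543210) first; the comments track input bits 9…0 after each step:

```python
def spread(x):
    x &= 0x000003FF                   # ---- ---- ---- ---- ---- --98 7654 3210
    x = (x | (x << 16)) & 0x030000FF  # ---- --98 ---- ---- ---- ---- 7654 3210
    x = (x | (x <<  8)) & 0x0300F00F  # ---- --98 ---- ---- 7654 ---- ---- 3210
    x = (x | (x <<  4)) & 0x030C30C3  # ---- --98 ---- 76-- --54 ---- 32-- --10
    x = (x | (x <<  2)) & 0x09249249  # ---- 9--8 --7- -6-- 5--4 --3- -2-- 1--0
    return x
```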
Perform the same transformation on each of the inputs, shift one result left by one and another by two, OR them together, and you're done:
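That is, with the `spread` above:

```python
def morton3(x, y, z):
    return spread(x) | (spread(y) << 1) | (spread(z) << 2)
```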
I took the above and modified it to combine three 16-bit numbers into a 48-bit (really 64-bit) number. Perhaps it will save someone the small bit of thinking needed to get there.
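Something like this (my Python rendering of that modification; the masks follow the same split-in-half pattern extended to 64 bits, so treat them as a sketch to verify rather than gospel):

```python
def spread16(x):
    # Move bit i of a 16-bit value to bit 3*i (the result spans 48 bits).
    x &= 0xFFFF
    x = (x | (x << 16)) & 0x00000000FF0000FF
    x = (x | (x <<  8)) & 0x000000F00F00F00F
    x = (x | (x <<  4)) & 0x00000C30C30C30C3
    x = (x | (x <<  2)) & 0x0000249249249249
    return x

def morton3_48(x, y, z):
    return spread16(x) | (spread16(y) << 1) | (spread16(z) << 2)
```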
Following is a code snippet to generate a 64-bit Morton key for a 3-D point.
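In Python (the usual 21-bits-per-coordinate form, since 3 × 21 = 63 fits in 64 bits; in C you would use uint64_t, and the function names here are illustrative):

```python
def split_by_3(a):
    x = a & 0x1FFFFF                  # keep only the low 21 bits
    x = (x | (x << 32)) & 0x001F00000000FFFF
    x = (x | (x << 16)) & 0x001F0000FF0000FF
    x = (x | (x <<  8)) & 0x100F00F00F00F00F
    x = (x | (x <<  4)) & 0x10C30C30C30C30C3
    x = (x | (x <<  2)) & 0x1249249249249249
    return x

def morton_encode_3d(x, y, z):
    return split_by_3(x) | (split_by_3(y) << 1) | (split_by_3(z) << 2)
```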
You can use the same technique. I'm assuming that the variables contain 32-bit integers with the highest 22 bits set to 0 (which is a bit more restrictive than necessary). For each variable `x` containing one of the three 10-bit integers, we do the following:
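A sketch in Python; the masks match the splitting described below (the same lines work in C with a 32-bit unsigned type):

```python
x = (x | (x << 16)) & 0x030000FF
x = (x | (x <<  8)) & 0x0300F00F
x = (x | (x <<  4)) & 0x030C30C3
x = (x | (x <<  2)) & 0x09249249
```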
Then, with `x`, `y` and `z` the three manipulated 10-bit integers, we get the result by taking:
The way this technique works is as follows. Each of the `x = ...` lines above "splits" groups of bits in half such that there is enough space in between for the bits of the other integers. For example, if we consider three 4-bit integers, we split one with bits 1234 into 000012000034, where the zeros are reserved for the other integers. In the next step we split 12 and 34 in the same way to get 001002003004. Even though 10 bits doesn't make for a nice repeated division into two groups, you can just consider it 16 bits where you lose the highest ones in the end. As you can see from the first line, you actually only need that each input integer `x` satisfies `x & 0x03000000 == 0`.
Here is my solution, with a Python script. I took the hint from Fabian “ryg” Giesen's comment on this question.
Read the long comment in the script below! We need to keep track of which bits need to travel how far! Then in each step we select those bits, move them, and apply a bitmask (see the comments on the mask lines) to mask them!
So for a 10-bit number and 2 interleaving bits (for a 32-bit result), you need to do the following:
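Bit i has to end up at position 3*i, i.e. travel 2*i places; each step below moves the bits that still have that power of two left in their journey (comments mine):

```python
x &= 0x000003FF                  # keep only the low 10 bits
x = (x | x << 16) & 0x030000FF   # bits 8-9 take their 16-step
x = (x | x <<  8) & 0x0300F00F   # bits 4-7 take their 8-step
x = (x | x <<  4) & 0x030C30C3   # bits 2-3 and 6-7 take their 4-step
x = (x | x <<  2) & 0x09249249   # every odd bit takes the final 2-step
```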
And for a 21-bit number and 2 interleaving bits (for a 64-bit result), you need to do the following:
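Same pattern with one more splitting step (21 bits per value, since 3 × 21 = 63):

```python
x &= 0x1FFFFF                    # keep only the low 21 bits
x = (x | x << 32) & 0x001F00000000FFFF
x = (x | x << 16) & 0x001F0000FF0000FF
x = (x | x <<  8) & 0x100F00F00F00F00F
x = (x | x <<  4) & 0x10C30C30C30C30C3
x = (x | x <<  2) & 0x1249249249249249
```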
And for a 42-bit number and 2 interleaving bits (for a 128-bit result), you need to do the following (in case you need it ;-)):
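Again one more step. Python's big ints handle the 128-bit values natively; I derived these masks with the script below, so double-check them before relying on them:

```python
x &= 0x3FFFFFFFFFF               # keep only the low 42 bits
x = (x | x << 64) & 0x000003FF0000000000000000FFFFFFFF
x = (x | x << 32) & 0x000003FF00000000FFFF00000000FFFF
x = (x | x << 16) & 0x030000FF0000FF0000FF0000FF0000FF
x = (x | x <<  8) & 0x0300F00F00F00F00F00F00F00F00F00F
x = (x | x <<  4) & 0x030C30C30C30C30C30C30C30C30C30C3
x = (x | x <<  2) & 0x09249249249249249249249249249249
```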
Python script to produce and check the interleaving patterns!
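Something along these lines (a reconstruction of such a script; the helper names are mine): it derives the (shift, mask) steps for a given bit width and gap size, prints them, and checks the result against a naive bit-by-bit interleave:

```python
def pattern(bits, gap):
    """(shift, mask) steps that move bit i of a `bits`-bit value to
    position i * (gap + 1), leaving `gap` zero bits between input bits."""
    steps = []
    half = 1
    while half * 2 < bits:       # largest half-block size needed
        half *= 2
    while half >= 1:
        shift = half * gap
        mask = 0
        for i in range(bits):
            # After this step, bit i has moved left by
            # gap * (i rounded down to a multiple of `half`).
            mask |= 1 << (i + gap * (i & ~(half - 1)))
        steps.append((shift, mask))
        half //= 2
    return steps

def spread(x, bits, gap):
    x &= (1 << bits) - 1
    for shift, mask in pattern(bits, gap):
        x = (x | (x << shift)) & mask
    return x

def spread_naive(x, bits, gap):
    # Reference implementation: place the bits one at a time.
    return sum(((x >> i) & 1) << (i * (gap + 1)) for i in range(bits))

for bits in (10, 21, 42):
    print(f"--- {bits} bits, gap 2 ---")
    for shift, mask in pattern(bits, 2):
        print(f"x = (x | x << {shift:2d}) & {mask:#x}")
    assert all(spread(v, bits, 2) == spread_naive(v, bits, 2)
               for v in range(1 << 14))
```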
I will add the decoding code as well in a while!