Mapping a texture from “1D” to “2D” with OpenGL matrix transformations


Question:

(With this question I'm trying to investigate an idea I had for solving this other one)

If I have a standard 2D array of dimensions width and height in memory, I can turn it into a 1D array of length width * height and index it via index = x + y * width. This mapping is extremely helpful when allocating and freeing memory for the array, because the memory manager does not need to worry about packing the structures in 2D; it only needs to track the overall 1D length of each allocated array.
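
For what it's worth, a minimal C sketch of that 1D layout (the Grid struct and the function names are mine, purely for illustration):

    #include <stdlib.h>

    /* A width x height grid stored as a single 1D allocation of
     * width * height elements; element (x, y) lives at index x + y * width. */
    typedef struct {
        int    width, height;
        float *data;
    } Grid;

    static Grid grid_alloc(int width, int height)
    {
        Grid g = { width, height,
                   malloc((size_t)width * (size_t)height * sizeof(float)) };
        return g;
    }

    static float *grid_at(Grid *g, int x, int y)
    {
        return &g->data[x + y * g->width];   /* index = x + y * width */
    }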

I am trying to see if I can use this same approach for image-memory management for OpenGL textures. The idea (as described in the above linked question) is to combine a whole bunch of needed textures into a single bigger one by bin-packing them (i.e. drawing them next to each other) into the big texture. This helps minimize costly texture-binding operations during rendering.

Let's say my big texture is 8×8 pixels (i.e. 64 pixels total):

8x8 texture:                5x5 image:            4x5 image:

   | 0 1 2 3 4 5 6 7           | 0 1 2 3 4           | 0 1 2 3
---+-----------------       ---+-----------       ---+---------
 0 | . . . . . . . .         0 | A B C D E         0 | a b c d
 1 | . . . . . . . .         1 | F G H I J         1 | e f g h
 2 | . . . . . . . .         2 | K L M N O         2 | i j k l
 3 | . . . . . . . .         3 | P Q R S T         3 | m n o p
 4 | . . . . . . . .         4 | U V W X Y         4 | q r s t
 5 | . . . . . . . .
 6 | . . . . . . . .
 7 | . . . . . . . .

And I would like to store a 5×5 image and a 4×5 image in it (i.e. 25 + 20 = 45 pixels total). Technically, I have plenty of pixels available, but I can't place these images next to each other into the big texture as that would require a minimum dimension of 9 in one direction and 5 in the other.

If I could simply treat my 8×8 texture as 64 contiguous pixels of memory and map the two images into 1D blocks of memory inside it, I could arrange the images as follows:

8x8 texture:

   | 0 1 2 3 4 5 6 7
---+-----------------
 0 | A B C D E F G H
 1 | I J K L M N O P             
 2 | Q R S T U V W X
 3 | Y a b c d e f g             
 4 | h i j k l m n o             
 5 | p q r s t . . .
 6 | . . . . . . . .
 7 | . . . . . . . .

If I draw all my images at a scale of 1:1, i.e. no fractional pixel coordinates anywhere and no need for any linear filtering or other pixel blending, is it possible to come up with a transformation matrix that I can use to draw the 4×5 image using this texture?

With vertex and fragment shaders, this looks like it might be fairly easy (unless I'm forgetting something; I haven't tried this; see the sketch after this list):

  • The vertex shader maps the four corners of the image to be drawn to coordinates in the texture treated as a 64×1 image:

    • a: (0, 0) → (0 + 0*4 + 25, 0) = (25, 0)     where 25 is the offset of the 4×5 image
    • d: (3, 0) → (3 + 0*4 + 25, 0) = (28, 0)
    • q: (0, 4) → (0 + 4*4 + 25, 0) = (41, 0)
    • t: (3, 4) → (3 + 4*4 + 25, 0) = (44, 0)

    The interpolation of the other coordinates inside the texture should (?) then also map to the right offsets along this line for integer coordinates.

  • The fragment shader converts this 64×1-coordinate into the final 8×8 coordinate by simply taking the quotient and remainder of a division by 8, e.g.:
    • a: (25, 0) → (25 % 8, 25 / 8) = (1, 3)
    • d: (28, 0) → (28 % 8, 28 / 8) = (4, 3)
    • k: (35, 0) → (35 % 8, 35 / 8) = (3, 4)
    • q: (41, 0) → (41 % 8, 41 / 8) = (1, 5)
    • t: (44, 0) → (44 % 8, 44 / 8) = (4, 5)
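
For reference, a rough and untested sketch of what such OpenGL ES 2.0 shaders might look like, written as C string constants; the attribute, uniform and varying names are my own and not part of the approach described above:

    /* Vertex shader: computes the 1D texel index x + y*width + offset
     * for each corner and passes it on to be interpolated. */
    static const char *vertex_src =
        "attribute vec4  a_position;\n"
        "attribute vec2  a_imageCoord;   /* (x, y) inside the small image */\n"
        "uniform   float u_width;        /* width of the small image */\n"
        "uniform   float u_offset;       /* 1D start index in the big texture */\n"
        "varying   float v_texelIndex;\n"
        "void main() {\n"
        "    v_texelIndex = a_imageCoord.x + a_imageCoord.y * u_width + u_offset;\n"
        "    gl_Position  = a_position;\n"
        "}\n";

    /* Fragment shader: folds the interpolated 1D index back into 2D
     * coordinates of the big (u_size x u_size) texture via remainder/quotient. */
    static const char *fragment_src =
        "precision mediump float;\n"
        "uniform sampler2D u_atlas;\n"
        "uniform float     u_size;\n"
        "varying float     v_texelIndex;\n"
        "void main() {\n"
        "    float i = floor(v_texelIndex + 0.5);   /* snap to nearest texel index */\n"
        "    float x = mod(i, u_size);              /* remainder -> column */\n"
        "    float y = floor(i / u_size);           /* quotient  -> row */\n"
        "    vec2 uv = (vec2(x, y) + 0.5) / u_size; /* sample the texel centre */\n"
        "    gl_FragColor = texture2D(u_atlas, uv);\n"
        "}\n";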

Unfortunately, custom shaders require OpenGL ES v2.0 or better, which is not available on all devices.

Is it somehow possible to achieve this mapping just via the matrix transformations offered by OpenGL ES 1.1?

Answer 1:

I haven't tried this yet, but I wanted to throw it out there as an idea already:

UPDATE: I tried it now and it works beautifully with one minor change (see comment)!

Let's say my big texture has width size, and the image I want to draw has width width and starts at offset offset inside the big texture, where offset is the 1-D index of the image's first pixel, i.e. x + y * size.

Then, the following 4x4 matrix will almost achieve this mapping:

     _                                           _
    |      1        width        offset      0    |
    |                                             |
    |   1/size   width/size   offset/size    0    |
M = |                                             |
    |      0          0            0         0    |
    |                                             |
    |_     0          0            0         1   _|
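
To see why this works: multiplying M by a coordinate vector (x, y, 1, 1) gives x + width*y + offset (the 1D texel index) in the first component, and that same index divided by size in the second. A tiny C sanity check of those two rows, using the 4×5 image from the example (width 4, offset 25, size 8; the helper function name is mine):

    #include <stdio.h>

    /* Applies the two non-trivial rows of M to (x, y, 1, 1):
     *   s = x + width*y + offset   -- the 1D texel index
     *   t = s / size               -- the row, as a fractional texel coordinate */
    static void map_coord(float x, float y, float width, float offset, float size)
    {
        float s = x + width * y + offset;
        float t = s / size;
        printf("(%g, %g) -> (%g, %g)\n", x, y, s, t);
    }

    int main(void)
    {
        map_coord(2.0f, 2.0f, 4.0f, 25.0f, 8.0f);   /* k at (2, 2): prints (2, 2) -> (35, 4.375) */
        return 0;
    }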

So, in the example above, to draw the 4×5 image, the matrix would be

 _                    _
|   1    4    25    0  |
|  1/8  1/2  25/8   0  |
|   0    0     0    0  |
|_  0    0     0    1 _|

The image coordinates will then need to be specified with a 4-vector containing

( x, y, 1, 1 )

So, for example the coordinates of k (i.e. (2,2)) will map to:

M*( 2, 2, 1, 1 ) => ( 35, 4.375, 0, 1 )

which will be interpreted as texture coordinate (35, 4.375).

If we now turn on nearest neighbor as the interpolation rule and enable texture wrapping in the x-direction, this should correspond to:

( 3, 4 )

(I was using integer coordinates here, whereas in the final implementation the texture coordinates would need to be floats in the range 0 to 1. That might be achievable very easily by replacing the 1 in the bottom-right corner of the matrix with size, since that value will end up in the fourth position of the output vector and thus divide the other three. As @chbaker0 pointed out, though, this only works if the texture coordinates are subject to the usual perspective division. If they are not, the entire matrix M needs to be divided by size instead to achieve the desired result.)
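
For completeness, here is an untested sketch of how this could be set up in OpenGL ES 1.1, using the variant that replaces the bottom-right 1 with size (i.e. assuming the texture coordinates do undergo the usual division by the fourth component; otherwise the whole matrix would be divided by size instead, as noted above). The variable names are mine, and glLoadMatrixf expects the matrix in column-major order:

    #include <GLES/gl.h>

    static const GLfloat size   = 8.0f;   /* big texture is size x size texels */
    static const GLfloat width  = 4.0f;   /* width of the packed image */
    static const GLfloat offset = 25.0f;  /* 1D index of the image's first pixel */

    /* Assumes the atlas texture is already bound to GL_TEXTURE_2D. */
    static void set_atlas_texture_matrix(void)
    {
        /* Matrix M from above with the bottom-right 1 replaced by size,
         * laid out column by column for glLoadMatrixf. */
        const GLfloat m[16] = {
            1.0f,   1.0f / size,   0.0f, 0.0f,   /* column 0 */
            width,  width / size,  0.0f, 0.0f,   /* column 1 */
            offset, offset / size, 0.0f, 0.0f,   /* column 2 */
            0.0f,   0.0f,          0.0f, size,   /* column 3 */
        };

        glMatrixMode(GL_TEXTURE);
        glLoadMatrixf(m);
        glMatrixMode(GL_MODELVIEW);

        /* Nearest-neighbour sampling and wrapping in the x-direction,
         * as discussed above. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S,     GL_REPEAT);
    }

    /* Texture coordinates are then supplied as 4-component (x, y, 1, 1) vectors,
     * e.g. for the corners a, d, q, t of the 4x5 image: */
    static const GLfloat texCoords[] = {
        0.0f, 0.0f, 1.0f, 1.0f,   /* a */
        3.0f, 0.0f, 1.0f, 1.0f,   /* d */
        0.0f, 4.0f, 1.0f, 1.0f,   /* q */
        3.0f, 4.0f, 1.0f, 1.0f,   /* t */
    };
    /* ... glEnableClientState(GL_TEXTURE_COORD_ARRAY);
     *     glTexCoordPointer(4, GL_FLOAT, 0, texCoords); ... */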

Does this sound reasonable at all or can someone see a problem with this before I go ahead and try to implement this? (Might take me a few days, since I have to do a couple other things first to get to a testable app...)