(With this question I'm trying to investigate an idea I had for solving this other one)
If I have a standard 2D array of dimensions width and height in memory, I can turn it into a 1D array of length width * height and index it via index = x + y * width. This mapping is extremely helpful when allocating and freeing memory for the array: the memory manager does not need to worry about packing the structures in 2D, it only needs to track the overall 1D length of every allocated array.
I am trying to see if I can use this same approach for image-memory management for OpenGL textures. The idea (as described in the above linked question) is to combine a whole bunch of needed textures into a single bigger one by bin-packing them (i.e. drawing them next to each other) into the big texture. This helps minimize costly texture-binding operations during rendering.
Let's say my big texture is 8×8 pixels (i.e. 64 pixels total):
8x8 texture: 5x5 image: 4x5 image:
| 0 1 2 3 4 5 6 7 | 0 1 2 3 4 | 0 1 2 3
---+----------------- ---+----------- ---+---------
0 | . . . . . . . . 0 | A B C D E 0 | a b c d
1 | . . . . . . . . 1 | F G H I J 1 | e f g h
2 | . . . . . . . . 2 | K L M N O 2 | i j k l
3 | . . . . . . . . 3 | P Q R S T 3 | m n o p
4 | . . . . . . . . 4 | U V W X Y 4 | q r s t
5 | . . . . . . . .
6 | . . . . . . . .
7 | . . . . . . . .
And I would like to store a 5×5 image and a 4×5 image in it (i.e. 25 + 20 = 45 pixels total). Technically, I have plenty of pixels available, but I can't place these images next to each other into the big texture as that would require a minimum dimension of 9 in one direction and 5 in the other.
If I could simply treat my 8×8 texture as 64 contiguous pixels of memory and map the two images into 1D blocks of memory inside that, I could arrange the images as follows inside the texture:

8x8 texture:
| 0 1 2 3 4 5 6 7
---+-----------------
0 | A B C D E F G H
1 | I J K L M N O P
2 | Q R S T U V W X
3 | Y a b c d e f g
4 | h i j k l m n o
5 | p q r s t . . .
6 | . . . . . . . .
7 | . . . . . . . .
If I draw all my images at a scale of 1:1, i.e. no fractional pixel coordinates anywhere and no need for any linear filtering or other pixel blending, is it possible to come up with a transformation matrix that I can use to draw the 4×5 image using this texture?
With vertex and fragment shaders, this looks like it might be fairly easy (unless I'm forgetting something; I haven't tried this):
- The vertex shader maps the four corners of the image to be drawn into coordinates of the texture treated as a 64×1 image:
  a: (0, 0) → (0 + 0*4 + 25, 0) = (25, 0), where 25 is the offset of the 4×5 image
  d: (3, 0) → (3 + 0*4 + 25, 0) = (28, 0)
  q: (0, 4) → (0 + 4*4 + 25, 0) = (41, 0)
  t: (3, 4) → (3 + 4*4 + 25, 0) = (44, 0)
  The interpolation of the other coordinates inside the texture should (?) then also map to the right offsets along this line for integer coordinates.
- The fragment shader converts this 64×1-coordinate into the final 8×8 coordinate by simply taking the quotient and remainder of a division by 8, e.g.:
  a: (25, 0) → (25 % 8, 25 / 8) = (1, 3)
  d: (28, 0) → (28 % 8, 28 / 8) = (4, 3)
  k: (35, 0) → (35 % 8, 35 / 8) = (3, 4)
  q: (41, 0) → (41 % 8, 41 / 8) = (1, 5)
  t: (44, 0) → (44 % 8, 44 / 8) = (4, 5)
Unfortunately, custom shaders require OpenGL ES 2.0 or later, which is not available on all devices.
Is it somehow possible to achieve this mapping just via the matrix transformations offered by OpenGL ES 1.1?