I have an iPad app I am working on and one possible feature that we are contemplating is to allow the user to touch an image and deform it.
Basically, the image would be like a painting: when the user drags their fingers across it, the image deforms and the touched pixels get "dragged" along with the finger. Sorry if this is hard to understand, but the bottom line is that we want to edit the content of the texture on the fly as the user interacts with it.
Is there an effective technique for something like this? I am trying to get a grasp of what would need to be done and how heavy an operation it would be.
Right now the only thing I can think of is to search through the content of the texture based on where it was touched, copy the pixel data, and do some kind of blend on the existing pixel data as the finger moves, then reload the texture with glTexImage2D periodically to get this effect.
There are at least two fundamentally different approaches:
1. Update pixels (I assume this is what you mean in the question)
The most effective technique to change the pixels in a texture is called Render-to-Texture and can be done in OpenGL/OpenGL ES via FBOs. On desktop OpenGL you can use pixel buffer objects (PBOs) to manipulate pixel data directly on the GPU (but OpenGL ES does not support this yet).
On unextended OpenGL you can change the pixels in system memory and then update the texture with glTexImage2D/glTexSubImage2D - but this is an inefficient last-resort solution and should be avoided if possible. glTexSubImage2D is usually much faster, since it only updates pixels inside an existing texture, while glTexImage2D creates an entirely new texture (as a benefit, you can change the size and pixel format of the texture that way). On the other hand, glTexSubImage2D lets you update just a sub-region of the texture.
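To illustrate the partial update, here is a minimal sketch; the names tex, touchX, touchY and brushPixels are placeholders I made up for this example:

```c
/* Re-upload only a 32x32 region around the touch instead of the whole image.
   tex is an existing RGBA texture; brushPixels holds 32*32*4 bytes of
   modified pixel data in system memory. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D,
                0,                 /* mip level                          */
                touchX, touchY,    /* offset inside the existing texture */
                32, 32,            /* size of the region being replaced  */
                GL_RGBA, GL_UNSIGNED_BYTE,
                brushPixels);
```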
You say that you want it to work with OpenGL ES, so I would propose roughly the following steps:
- create an FBO and attach your texture to it as the color attachment;
- whenever the user touches or drags, bind the FBO and render a small "brush" quad at the touch position, blending it with the existing content;
- unbind the FBO and draw the (now modified) texture to the screen as usual.
For FBOs the code can look like this:
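Here is a minimal OpenGL ES 2.0 sketch, assuming texture already contains the painting, texWidth/texHeight are its size, and defaultFramebuffer is the framebuffer your app normally renders into (on iOS that is the app-created drawable FBO, not 0):

```c
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

/* Attach the existing texture as the color buffer of the FBO. */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, texture, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* This texture format cannot be rendered to on this device. */
}

/* From now on, everything you draw goes into the texture. */
glViewport(0, 0, texWidth, texHeight);
/* ... draw the "brush" quad at the touch position here ... */

/* Switch back to the normal framebuffer to draw the scene as usual. */
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
```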
Keep in mind that not all pixel formats can be rendered to. RGB/RGBA are usually fine.
2. Update geometry
You can also change the geometry of the object your texture is mapped onto. The geometry should be tessellated finely enough to allow smooth interaction and to prevent artifacts from appearing. The deformation of the geometry can be done via different methods: parametric surfaces, NURBS, patches.
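As a very rough sketch of the geometry approach, assuming the image is drawn as a tessellated grid of vertices (the GridVertex type and all parameters here are made up for illustration):

```c
#include <math.h>

typedef struct { float x, y; float u, v; } GridVertex;

/* Drag the vertices of the tessellated grid that lie near the touch point.
   The texture coordinates (u, v) stay fixed, so the image appears to be
   smeared along the drag direction when the grid is re-rendered. */
void deformGrid(GridVertex *grid, int vertexCount,
                float touchX, float touchY,
                float dragDX, float dragDY,
                float radius)
{
    for (int i = 0; i < vertexCount; ++i) {
        float dx = grid[i].x - touchX;
        float dy = grid[i].y - touchY;
        float dist = sqrtf(dx * dx + dy * dy);
        if (dist < radius) {
            /* Smooth falloff: vertices right under the finger move the most. */
            float w = 1.0f - dist / radius;
            grid[i].x += dragDX * w;
            grid[i].y += dragDY * w;
        }
    }
}
```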
Modifying the texture render target using an FBO sounds tricky at first, but is actually pretty straightforward.
So, we have:
- Dest: the texture we want to modify, TW x TH texels;
- Src: the image (the "brush") we want to put into it, IW x IH texels;
- (TX, TY): the position inside Dest where Src should end up.
The trick to "put" Src into Dest is to:
1. attach Dest to an FBO as its color attachment;
2. set the viewport (and projection) to cover the whole of Dest;
3. render a quad of size (IW, IH) at (TX, TY), textured with Src;
4. detach Dest from the FBO and rebind your normal framebuffer, so Dest can be used as a texture again.
For Src to be rendered correctly you have to use an orthographic projection and an identity camera transform.
The (TX, TY) and (IW, IH) coordinates in my 4-step solution must be divided by TW and TH respectively to get mapped correctly to the [0..1, 0..1] framebuffer space. To avoid these divisions in the shader you can simply use an orthographic projection set up for a [0..TW, 0..TH] viewport, as sketched below.
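A small sketch of that setup in plain C; the helper names are mine, and the matrix is column-major as expected by glUniformMatrix4fv:

```c
/* Orthographic projection mapping [0..TW] x [0..TH] to clip space, so the
   Src quad can be specified directly in Dest texel coordinates and no
   division by TW/TH is needed in the shader. */
void ortho2D(float m[16], float TW, float TH)
{
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  =  2.0f / TW;   /* x: [0..TW] -> [-1..1]         */
    m[5]  =  2.0f / TH;   /* y: [0..TH] -> [-1..1]         */
    m[10] = -1.0f;        /* z is irrelevant for a 2D quad */
    m[12] = -1.0f;
    m[13] = -1.0f;
    m[15] =  1.0f;
}

/* Fill quad with a triangle strip covering the Src rectangle,
   directly in Dest texel coordinates. */
void srcQuad(float quad[8], float TX, float TY, float IW, float IH)
{
    quad[0] = TX;      quad[1] = TY;        /* bottom-left  */
    quad[2] = TX + IW; quad[3] = TY;        /* bottom-right */
    quad[4] = TX;      quad[5] = TY + IH;   /* top-left     */
    quad[6] = TX + IW; quad[7] = TY + IH;   /* top-right    */
}
```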
Hope this solves your problems with FBOs.
I have tried two things that might solve your problem. The methods are pretty different, so I suppose your specific use case will determine which one is appropriate (if any).
First, I did image deformation with geometry: I mapped my texture onto a subdivided grid and then overlaid this grid with a coarser grid of Bezier control points. The user then moves those control points, deforming the vertices of the rendered geometry in a smooth manner.
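For reference, evaluating a bicubic Bezier patch from a 4x4 block of control points could look roughly like this (my own helper names, one coordinate at a time):

```c
/* Evaluate a 1D cubic Bezier curve at t in [0..1]. */
static float bezier1D(const float b[4], float t)
{
    float s = 1.0f - t;
    return s * s * s * b[0]
         + 3.0f * s * s * t * b[1]
         + 3.0f * s * t * t * b[2]
         + t * t * t * b[3];
}

/* Evaluate one coordinate (x or y) of a bicubic Bezier patch at (u, v).
   p is a row-major 4x4 grid of that coordinate of the control points.
   Each grid vertex gets its deformed position by evaluating the patch at
   its (u, v) parameter whenever a control point is moved. */
float bezierPatch(const float p[16], float u, float v)
{
    float rows[4];
    for (int i = 0; i < 4; ++i)
        rows[i] = bezier1D(&p[i * 4], u);
    return bezier1D(rows, v);
}
```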
The second method is more similar to what you outline in your question. When creating your texture, keep the pixel data around in system memory so you can manipulate the pixels directly there. Then call something like
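(a minimal sketch, assuming an RGBA texture whose CPU-side pixel buffer is the copy you modify; textureId, textureWidth, textureHeight and pixels are placeholder names)

```c
/* Re-upload the whole modified pixel buffer into the existing texture. */
glBindTexture(GL_TEXTURE_2D, textureId);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                0, 0,                        /* update the full texture     */
                textureWidth, textureHeight,
                GL_RGBA, GL_UNSIGNED_BYTE,
                pixels);                     /* modified system-memory copy */
```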
every time you draw. This might be terribly inefficient, since it likely involves re-uploading all the texture data every frame, and I'd love to hear if there's a better way to do it - like directly manipulating the texture data on the GPU. In my case, though, the performance is adequate.
I hope this helps, even though it's pretty low on detail.