A 3D physics simulation needs access to neighbor vertices' positions and attributes in a shader to calculate a vertex's new position. My 2D version works, but I'm having trouble porting the solution to 3D.

Flip-flopping two 3D textures seems right: inputting sets of x, y, and z coordinates for one texture and getting back vec4s containing the position-velocity-acceleration data of neighboring points, which I then use to calculate a new position and velocity for each vertex. The 2D version uses one draw call with a framebuffer to save all the generated gl_FragColors to a sampler2D, and I want to use a framebuffer to do the same with a sampler3D. But it looks like with a framebuffer in 3D, I need to write one or more layers at a time of a second 3D texture until all layers have been saved.

I'm confused about mapping the vertex grid to the relative x, y, z coordinates of the texture, and about how to save the results to the layers individually. In the 2D version the gl_FragColor written to the framebuffer maps directly to the 2D x-y coordinate system of the canvas, with each pixel being a vertex; but I'm not understanding how to make sure a gl_FragColor containing position-velocity data for a 3D vertex is written to the texture so that it keeps mapping correctly to the 3D vertices.
This works for 2D in a fragment shader:
vec2 onePixel = vec2(1.0, 1.0) / u_textureSize;
vec4 currentState = texture2D(u_image, v_texCoord);
float fTotal = 0.0;
for (int i = -1; i <= 1; i++) {
    for (int j = -1; j <= 1; j++) {
        if (i == 0 && j == 0) continue;  // skip the vertex itself
        vec2 neighborCoord = v_texCoord + vec2(onePixel.x * float(i), onePixel.y * float(j));
        vec4 neighborState;
        if (neighborCoord.x < 0.0 || neighborCoord.y < 0.0 || neighborCoord.x >= 1.0 || neighborCoord.y >= 1.0) {
            neighborState = vec4(0.0, 0.0, 0.0, 1.0);  // outside the grid
        } else {
            neighborState = texture2D(u_image, neighborCoord);
        }
        float deltaP = neighborState.r - currentState.r;
        float deltaV = neighborState.g - currentState.g;
        fTotal += u_kSpring * deltaP + u_dSpring * deltaV;
    }
}
float acceleration = fTotal / u_mass;
float velocity = acceleration * u_dt + currentState.g;
float position = velocity * u_dt + currentState.r;
gl_FragColor = vec4(position, velocity, acceleration, 1.0);
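For context, the JavaScript side of the 2D version flip-flops the two textures roughly like this (a simplified sketch; textureA/textureB, fbA/fbB, and the dimensions are illustrative names, not the actual code):

let src = { tex: textureA, fb: fbA };  // framebuffers with the textures attached
let dst = { tex: textureB, fb: fbB };  // as COLOR_ATTACHMENT0

function step() {
  gl.bindFramebuffer(gl.FRAMEBUFFER, dst.fb);  // write the new state to dst
  gl.viewport(0, 0, texWidth, texHeight);
  gl.bindTexture(gl.TEXTURE_2D, src.tex);      // read the old state from src
  gl.drawArrays(gl.TRIANGLES, 0, 6);           // full-size quad: one fragment per vertex
  [src, dst] = [dst, src];                     // flip-flop for the next iteration
}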
This is what I have attempted in 3D in a fragment shader:

#version 300 es
vec3 onePixel = vec3(1.0, 1.0, 1.0) / u_textureSize;
vec4 currentState = texture(u_image, v_texCoord);
float fTotal = 0.0;
int counter = 0;  // index into the per-neighbor spring uniforms
for (int i = -1; i <= 1; i++) {
    for (int j = -1; j <= 1; j++) {
        for (int k = -1; k <= 1; k++) {
            if (i == 0 && j == 0 && k == 0) continue;  // skip the vertex itself
            vec3 neighborCoord = v_texCoord + vec3(onePixel.x * float(i), onePixel.y * float(j), onePixel.z * float(k));
            vec4 neighborState;
            if (neighborCoord.x < 0.0 || neighborCoord.y < 0.0 || neighborCoord.z < 0.0 || neighborCoord.x >= 1.0 || neighborCoord.y >= 1.0 || neighborCoord.z >= 1.0) {
                neighborState = vec4(0.0, 0.0, 0.0, 1.0);  // outside the grid
            } else {
                neighborState = texture(u_image, neighborCoord);
            }
            float deltaP = neighborState.r - currentState.r;  // distance from neighbor
            float springDeltaLength = deltaP - u_springOrigLength[counter];
            // Add the force on our point of interest from the current neighbor point.
            // We'll be adding up to 26 of these together.
            fTotal += u_kSpring[counter] * springDeltaLength;
            counter++;
        }
    }
}
float acceleration = fTotal / u_mass;
float velocity = acceleration * u_dt + currentState.g;
float position = velocity * u_dt + currentState.r;
gl_FragColor = vec4(position, velocity, acceleration, 1.0);
After I wrote that, I kept reading and found that a framebuffer can't write to all layers of a sampler3D at the same time; I need to somehow process 1 to 4 layers at a time. I'm unsure both how to do that and how to make sure the output color goes to the right pixel on the right layer.
I found this answer on SO: Render to 3D texture webgl2. It demonstrates writing to multiple layers at a time in a framebuffer, but I'm not seeing how to reconcile that with having the fragment shader, from one draw call, automatically run 1,000,000 times (100 x 100 x 100, i.e. length x width x height), each time populating the right pixel in a sampler3D with the position-velocity-acceleration data, which I can then flip-flop to use for the next iteration.
I have no results yet. I'm hoping to make a first sampler3D programmatically, use it to generate new vertex data that is saved in a second sampler3D, and then switch textures and repeat.
WebGL is destination based. That means it does one operation for each result it wants to write to the destination. The only kinds of destinations you can set are points (squares of pixels), lines, and triangles in a 2D plane, so writing to a 3D texture requires handling each plane separately. At best you might be able to do N planes at a time, where N is 4 to 8, by setting up multiple attachments to a framebuffer, up to the maximum allowed attachments.
So I'm assuming you understand how to render to 100 layers one at a time. At init time, either make 100 framebuffers and attach a different layer to each one, or at render time update a single framebuffer with a different attachment. Knowing how much validation WebGL does, I'd choose making 100 framebuffers.
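A minimal sketch of the init-time approach, assuming texture is your 3D texture and numLayers is its depth (both names are illustrative):

const framebuffers = [];
for (let layer = 0; layer < numLayers; ++layer) {
  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  // attach layer `layer` of mip level 0 of the 3D texture as the color attachment
  gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, texture, 0, layer);
  framebuffers.push(fb);
}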
Now, at render time, render to each layer:
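Something along these lines (layerTexCoordLocation is a hypothetical uniform location used to tell the shader which z slice it is computing):

framebuffers.forEach((fb, layer) => {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  // pass the z texture coordinate of the layer being written,
  // sampled at the texel center
  gl.uniform1f(layerTexCoordLocation, (layer + 0.5) / numLayers);
  gl.drawArrays(gl.TRIANGLES, 0, 6);  // full-size quad: one fragment per texel
});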
WebGL1 does not support 3D textures, so we know you're using WebGL2 since you mentioned using sampler3D. In WebGL2 you generally put #version 300 es at the top of your shaders to signify you want to use the more modern GLSL ES 3.00.

Drawing to multiple layers requires first figuring out how many layers you want to render to at once. WebGL2 supports a minimum of 4 simultaneous color attachments, so we could just assume 4 layers. To do that you'd attach 4 layers to each framebuffer.
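A sketch under the same assumptions as above, attaching 4 consecutive layers of the 3D texture to each framebuffer:

const layersPerFramebuffer = 4;
const framebuffers = [];
for (let baseLayer = 0; baseLayer < numLayers; baseLayer += layersPerFramebuffer) {
  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  for (let i = 0; i < layersPerFramebuffer; ++i) {
    // COLOR_ATTACHMENT0 + i works because the attachment enums are consecutive
    gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i, texture, 0, baseLayer + i);
  }
  framebuffers.push(fb);
}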
GLSL ES 3.00 shaders do not use gl_FragColor; they use user-defined outputs, so we'd declare an array output and write to that just like you were previously writing to gl_FragColor, except with an index. Below we're processing 4 layers: we only pass in a vec2 for v_texCoord and compute the 3rd texture coordinate based on baseLayerTexCoord, something we pass in with each draw call. The last thing to do is call gl.drawBuffers to tell WebGL2 where to store the outputs; since we're doing 4 layers at a time we'd pass all 4 color attachments.
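A rough sketch of how those pieces could fit together (fragColors, calcStateAt, and baseLayerTexCoord are illustrative names, and the spring math is elided):

#version 300 es
precision highp float;

in vec2 v_texCoord;               // x and y only; z is derived per output layer
uniform highp sampler3D u_image;  // previous state
uniform float baseLayerTexCoord;  // z texcoord of the first attached layer

out vec4 fragColors[4];           // one output per color attachment / layer

vec4 calcStateAt(vec3 texCoord) {
  // ... the neighbor/spring math from your shader, reading from u_image ...
  return vec4(0);  // placeholder
}

void main() {
  float depth = float(textureSize(u_image, 0).z);
  fragColors[0] = calcStateAt(vec3(v_texCoord, baseLayerTexCoord));
  fragColors[1] = calcStateAt(vec3(v_texCoord, baseLayerTexCoord + 1.0 / depth));
  fragColors[2] = calcStateAt(vec3(v_texCoord, baseLayerTexCoord + 2.0 / depth));
  fragColors[3] = calcStateAt(vec3(v_texCoord, baseLayerTexCoord + 3.0 / depth));
}

and on the JavaScript side, before drawing:

gl.drawBuffers([
  gl.COLOR_ATTACHMENT0,
  gl.COLOR_ATTACHMENT1,
  gl.COLOR_ATTACHMENT2,
  gl.COLOR_ATTACHMENT3,
]);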
Some other things to note: in GLSL ES 3.00 you don't need to pass in a texture size, as you can query it with the function textureSize. It returns an ivec2 or ivec3 depending on the type of texture.

You can also use texelFetch instead of texture. texelFetch takes an integer texel coordinate and a mip level, so for example

vec4 color = texelFetch(some3DTexture, ivec3(12, 23, 45), 0);

gets the texel at x = 12, y = 23, z = 45 from mip level 0. That means you don't need the `onePixel` math you have in your code if you find it easier to work with pixels instead of normalized texture coordinates.
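For example, the boundary check and neighbor lookup inside your triple loop could be rewritten with integer texel coordinates (a sketch; `layer` is an assumed uniform holding the integer z of the layer being written):

ivec3 size = textureSize(u_image, 0);
ivec3 texelCoord = ivec3(ivec2(gl_FragCoord.xy), layer);
ivec3 neighborCoord = texelCoord + ivec3(i, j, k);
vec4 neighborState = vec4(0.0, 0.0, 0.0, 1.0);  // boundary default, as in your code
if (all(greaterThanEqual(neighborCoord, ivec3(0))) &&
    all(lessThan(neighborCoord, size))) {
  neighborState = texelFetch(u_image, neighborCoord, 0);
}
// ... accumulate the spring force as before ...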