Matrix / vector multiplication order

Posted 2019-03-04 00:55

Question:

I've read a dozen articles online about the correct order of rotation, translation, and scale matrix multiplication in OpenGL. However, now that I've started implementing it myself, I've reached a point where I'm really confused.

Let's assume that in my code I'm calculating the transformation matrix and passing it to the shader as a single combined matrix:

shader.SetUniform("u_Matrix", scale * rotation * translation);

And in the vertex shader I'm multiplying the vertices by this matrix:

gl_Position = u_Matrix * vec4(a_Position, 0.0, 1.0);

Now, with this order (scale * rotation * translation) I get exactly what I want: I rotate the object, then I move it to the specific point, and then I scale it. Can someone explain to me why this is the correct order?

I always thought that all the transformations were applied "from the vector side".

For example, if we "expand" the multiplication:

gl_Position = scale * rotation * translation * vec4(a_Position, 0.0, 1.0);

then the translation should be applied first, then the rotation, and the scale after that. Everything would seem fine to me if it weren't for the order of translation and rotation: if we don't want to rotate around a certain point, we should rotate first and translate afterwards, which isn't the case here.
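For example, because matrix multiplication is associative, grouping the product either way gives the same result, so the right-most matrix really is the first one applied to the vector. A quick check with GLM (used here purely for illustration, it isn't part of my actual setup) shows this:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    glm::mat4 S = glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));
    glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(0.0f, 0.0f, 1.0f));
    glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 0.0f, 0.0f));
    glm::vec4 v(1.0f, 0.0f, 0.0f, 1.0f);

    glm::vec4 combined = (S * R * T) * v;   // one pre-multiplied matrix
    glm::vec4 stepwise = S * (R * (T * v)); // translate, then rotate, then scale

    std::printf("combined: (%.1f, %.1f, %.1f)\n", combined.x, combined.y, combined.z);
    std::printf("stepwise: (%.1f, %.1f, %.1f)\n", stepwise.x, stepwise.y, stepwise.z);
}

Both lines print the same vector, so the combined matrix behaves exactly like applying translation, then rotation, then scale one step at a time.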

Why does this transformation work as intended?

Answer 1:

Your C++ matrices are probably stored as row-major under the hood, which means that multiplying them from left to right is an "origin" transformation, while right to left would be a "local", incremental transformation.

OpenGL, however, uses a column-major memory layout (the 13th, 14th and 15th elements of the 16-element array are treated as the translation component).
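As a concrete sketch (the translation values are made up for illustration), this is how a translation by (1, 2, 3) sits in the 16-element array OpenGL reads:

// Column-major 4x4 translation matrix, laid out the way OpenGL expects it in memory.
// The 13th, 14th and 15th elements (indices 12, 13 and 14) hold the translation.
float translation[16] = {
    1.0f, 0.0f, 0.0f, 0.0f,   // column 0
    0.0f, 1.0f, 0.0f, 0.0f,   // column 1
    0.0f, 0.0f, 1.0f, 0.0f,   // column 2
    1.0f, 2.0f, 3.0f, 1.0f    // column 3: tx, ty, tz, 1
};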

To use your row-major matrices in OpenGL, there are two things you can do:

  1. The glUniformMatrix* functions have a transpose parameter that you can set to GL_TRUE (see the sketch below):

void glUniformMatrix4fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat *value);

This will rearrange them into column-major order.

  2. The other option is to reverse the order of operations in the shader:

gl_Position = vec4(a_Position, 0.0, 1.0) * u_Matrix;

But most GLSL literature you will find uses the local, left-to-right, column-major ordering, so it may be better to stick with that.
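Here is a minimal sketch of option 1 (the names program and rowMajorMatrix are placeholders, not taken from the question). Note that desktop OpenGL accepts GL_TRUE for the transpose argument, while OpenGL ES 2.0 only accepts GL_FALSE:

// Upload a row-major matrix and let OpenGL transpose it to column-major on upload.
// "program" and "rowMajorMatrix" are placeholder names for this sketch.
GLint location = glGetUniformLocation(program, "u_Matrix");
glUniformMatrix4fv(location, 1, GL_TRUE, rowMajorMatrix);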

Another alternative is to change the layout on the C++ side to make the matrices column-major (although I personally think row-major is easier to deal with there).
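If you do switch the C++ side to column-major, a library such as GLM (an assumption here; GLM isn't mentioned in the question) already stores mat4 in that layout, so you can compose in the conventional translate * rotate * scale order and upload without transposing:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Build the model matrix in the conventional translate * rotate * scale order.
// GLM's mat4 is column-major, so it can be uploaded as-is:
//   glUniformMatrix4fv(location, 1, GL_FALSE, glm::value_ptr(model));
glm::mat4 MakeModelMatrix(const glm::vec3& position, float angleRadians,
                          const glm::vec3& axis, const glm::vec3& size)
{
    return glm::translate(glm::mat4(1.0f), position) *
           glm::rotate(glm::mat4(1.0f), angleRadians, axis) *
           glm::scale(glm::mat4(1.0f), size);
}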