I'm trying to reconstruct 3D world coordinates from depth values in my deferred renderer, but I'm having a heck of a time. Most of the examples I find online assume a standard perspective projection, but I don't want to bake that assumption in.
In my geometry pass vertex shader, I calculate gl_Position using:
gl_Position = wvpMatrix * vec4(vertexLocation, 1.0f);
and in my lighting pass fragment shader, I try to get the world coordinates using:
vec3 decodeLocation()
{
    vec4 clipSpaceLocation;
    clipSpaceLocation.xy = texcoord * 2.0f - 1.0f;            // remap [0,1] texcoords to [-1,1]
    clipSpaceLocation.z  = texture(depthSampler, texcoord).r; // depth as stored in the depth buffer
    clipSpaceLocation.w  = 1.0f;

    // Unproject, then divide by w to undo the perspective divide.
    vec4 homogeneousLocation = viewProjectionInverseMatrix * clipSpaceLocation;
    return homogeneousLocation.xyz / homogeneousLocation.w;
}
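For reference, here's a small NumPy sketch of the round trip I'm expecting. The make_perspective helper and the test values are just for illustration, and it assumes a standard OpenGL-style projection with clip-space z in [-1, 1], which is exactly the assumption I'd rather not rely on; it projects a world-space point, then unprojects it with the inverse matrix the way my fragment shader does:

```python
import numpy as np

def make_perspective(fov_y, aspect, near, far):
    # Standard OpenGL-style perspective projection (clip-space z in [-1, 1]).
    f = 1.0 / np.tan(fov_y / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = (2.0 * far * near) / (near - far)
    m[3, 2] = -1.0
    return m

proj = make_perspective(np.radians(60.0), 16 / 9, 0.1, 100.0)
world = np.array([1.0, 2.0, -10.0, 1.0])  # a point in front of the camera

# Forward path: what the geometry pass produces.
clip = proj @ world
ndc = clip[:3] / clip[3]            # normalized device coords, each in [-1, 1]
stored_depth = ndc[2] * 0.5 + 0.5   # what the depth buffer stores, in [0, 1]

# Inverse path: what the lighting pass should do.
inv = np.linalg.inv(proj)
reconstructed = inv @ np.array([ndc[0], ndc[1], ndc[2], 1.0])
reconstructed = reconstructed[:3] / reconstructed[3]
print(reconstructed)  # matches world.xyz (1, 2, -10) up to floating point
```

Note that stored_depth (the [0,1] value) and ndc[2] (the [-1,1] value) differ, so which of the two ends up in clipSpaceLocation.z matters to the round trip.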
I thought I had it right, and indeed, objects near the camera appear to be lit correctly. But I recently realized that as I move further away, objects are lit as if they're further from the camera than they actually are. I've played around with my lighting pass and verified that the world coordinates are the only thing being miscalculated.
I can't help but think my clipSpaceLocation.z and clipSpaceLocation.w are the source of the problem, but I've tried every variation I can think of to calculate them, and the above code gives the most correct results.
Any ideas or suggestions?