Phong lighting: add specular lighting separately or not?

Posted 2019-08-14 03:38

Question:

I am trying to implement Phong lighting. In some tutorials the specular term is added to the ambient and diffuse terms, and the total lighting is then multiplied by the texture color. I also saw a tutorial where the specular term was added separately, after the sum of the ambient and diffuse terms had been multiplied by the texture color.

Here is a fragment shader with both options present and with screenshots.

#version 330 core
out vec4 FragColor;

in vec2 TexCoord;
in vec3 normals;
in vec3 fragPosition;


//texture samplers
uniform sampler2D texture1;
uniform vec3 ambientLight;
uniform vec3 lightPosition;
uniform vec3 lightColor;
uniform vec3 viewPosition;
uniform float specularStrength;
uniform float shineDamp;

void main()
{
    vec3 norm = normalize(normals);
    vec3 lightDir = normalize(lightPosition - fragPosition); 

    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor;

    vec3 viewDir = normalize(viewPosition - fragPosition);
    vec3 reflectDir = reflect(-lightDir, norm);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), shineDamp);
    vec3 specular = specularStrength * spec * lightColor;  

    // 1. Specular is added to ambient and diffuse, and the sum is multiplied by the texture
    //FragColor = vec4((ambientLight + diffuse + specular), 1.0) * texture(texture1, TexCoord);

    // 2. Specular is added separately to the result of multiplying ambient + diffuse by the texture
    // (currently active; comment this out and enable option 1 to compare)
    FragColor = vec4((ambientLight + diffuse), 1.0) * texture(texture1, TexCoord) + vec4(specular, 1.0);
}
  • Option 1 (screenshot): specular is added to ambient and diffuse before the texture multiply.

  • Option 2 (screenshot): specular is added separately after the texture multiply.

In these screenshots shineDamp was 32.0f and specularStrength was 0.5f.

Which one looks correct? In my opinion the 2nd option looks correct compared to the 1st one, but a lot of tutorials use the formula from the 1st option.

Answer 1:

I am trying to implement Phong lighting. In some tutorials the specular term is added to the ambient and diffuse terms, and the total lighting is then multiplied by the texture color.

This is a historical artifact from the dark ages when the lighting equations were still hard-wired into GPUs and Gouraud shading was the standard. The light model was evaluated only at the vertices, and the resulting light value was interpolated across the whole primitive. Since a texture is often used to simulate the surface properties of the material, the texture is usually sampled at each fragment (so that we can give structure to our primitives beyond the level specified in the geometry). But with Gouraud shading, we need to modulate the already-calculated light value with the texture data per fragment. The easiest approach is to just multiply the two.

I also saw a tutorial where the specular term was added separately, after the sum of the ambient and diffuse terms had been multiplied by the texture color.

Modulating the whole lighting result by the texture color gives unrealistic results for a lot of materials. To solve this, the specular part was sometimes separated: we calculate an ambient+diffuse part and a specular part per vertex, interpolate both, modulate the ambient+diffuse part with the texture, and add the specular part afterwards, per fragment.
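In symbols (using the question's own terms, with T denoting the texture sample), the two variants are:

\[
\text{Option 1: } C = (A + D + S)\,T
\qquad
\text{Option 2: } C = (A + D)\,T + S
\]

On a dark texel (T close to zero), option 1 multiplies the highlight away together with everything else, while option 2 preserves it; this is exactly the unrealistic darkening the separation is meant to avoid.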

However, nobody uses Gouraud shading nowadays; we calculate the lighting per fragment instead. The light model is no longer evaluated at a different frequency than the textures are sampled, so these issues become meaningless. Since the actual light reflected by a surface depends on the material, and we use textures to simulate that material, we can use the textures directly as inputs to the lighting calculation, i.e. as diffuse albedo, specular reflectiveness, or whatever. This also lets us vary arbitrary material properties across the primitive, not just "colors": shininess/roughness can come from a texture just as well as normal directions and whatever other parameters your lighting model might use.
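As a minimal sketch of that idea, here is the question's shader reworked so the textures feed the lighting terms directly. The second sampler, specularMap, is a hypothetical addition (it is not in the original shader) that replaces the specularStrength uniform; everything else reuses the question's names:

#version 330 core
out vec4 FragColor;

in vec2 TexCoord;
in vec3 normals;
in vec3 fragPosition;

uniform sampler2D texture1;    // diffuse albedo
uniform sampler2D specularMap; // hypothetical: per-texel specular reflectiveness
uniform vec3 ambientLight;
uniform vec3 lightPosition;
uniform vec3 lightColor;
uniform vec3 viewPosition;
uniform float shineDamp;

void main()
{
    vec3 norm       = normalize(normals);
    vec3 lightDir   = normalize(lightPosition - fragPosition);
    vec3 viewDir    = normalize(viewPosition - fragPosition);
    vec3 reflectDir = reflect(-lightDir, norm);

    // The albedo texture scales ambient and diffuse; the specular map
    // scales the highlight, so each texel controls its own material.
    vec3 albedo   = texture(texture1, TexCoord).rgb;
    vec3 ambient  = ambientLight * albedo;
    vec3 diffuse  = max(dot(norm, lightDir), 0.0) * lightColor * albedo;
    float spec    = pow(max(dot(viewDir, reflectDir), 0.0), shineDamp);
    vec3 specular = spec * lightColor * texture(specularMap, TexCoord).rgb;

    FragColor = vec4(ambient + diffuse + specular, 1.0);
}

With this structure the "multiply before or after" question disappears: each lighting term is modulated by the texture that describes the matching material property.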



Answer 2:

In computer graphics, light reflection is generally modeled by the Bidirectional Reflectance Distribution Function (BRDF). The BRDF is a function that relates the light reflected along an outgoing direction to the light incident from an incoming direction.
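In standard notation (a textbook statement, not quoted from this answer), that relation appears in the reflectance equation:

\[
L_o(\mathbf{x}, \omega_o) = \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i
\]

where f_r is the BRDF, L_i is the radiance arriving from direction \omega_i, and the cosine term accounts for the foreshortening of light hitting the surface at an angle.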

In general, BRDF models can be classified into one or two of these categories:

  • Empirical: Their main aim is to provide a simple formulation specifically designed to mimic a kind of reflection. Consequently, we get a fast computational model adjustable by parameters, but without considering the physics behind it.
  • Theoretical: These models try to accurately simulate light scattering using physical laws. They usually lead to complex expressions and high computational cost, so they are not normally employed in rendering systems.
  • Experimental: The BRDF can be acquired with a gonioreflectometer, which mechanically varies the light source and sensor positions. Other techniques use digital cameras to acquire many BRDF samples with a single photograph.

See further:

  • Bidirectional reflectance distribution function
  • A Survey of BRDF Representation for Computer Graphics
  • Rosana Montes and Carlos Urena, An Overview of BRDF Models
  • Simon's Tech Blog - Microfacet BRDF
  • BRDF Explorer


The Phong model is an empirical, isotropic model. Since both empirical and theoretical BRDF models are only approximations of the reflectance properties of real materials, it is completely up to you to decide which BRDF fits your scene and the desired appearance of its surfaces.
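For reference, the classic Phong formulation in its usual textbook form (standard symbols, not quoted from this answer):

\[
I = k_a\, i_a + k_d\, (\hat{L} \cdot \hat{N})\, i_d + k_s\, (\hat{R} \cdot \hat{V})^{\alpha}\, i_s
\]

where k_a, k_d, k_s are the material's ambient, diffuse, and specular reflectivities, \alpha is the shininess exponent (the shader's shineDamp), and \hat{L}, \hat{N}, \hat{R}, \hat{V} are the light, normal, reflection, and view directions, exactly the quantities the question's shader computes per fragment.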