Blinn–Phong shader equation

 https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_reflection_model

In Phong shading, one must continually recalculate the dot product R · V between a viewer (V) and the beam from a light-source (L) reflected (R) on a surface.

If, instead, one calculates a halfway vector between the viewer and light-source vectors,

H = (L + V) / |L + V|,

then R · V can be replaced with N · H, where N is the normalized surface normal. In the above equation, V and L are both normalized vectors, and H is a solution to the equation V = P_h(-L), where P_h is the Householder matrix that reflects a point in the hyperplane that contains the origin and has the normal H.
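As a minimal GLSL sketch of that substitution (illustrative only; it assumes lightDir, viewDir and normal are normalized and expressed in the same space, and shininess is an arbitrary example exponent), the Phong and Blinn–Phong specular terms compare like this:

// Illustrative GLSL sketch, not part of the samples below.
// Assumes lightDir (L), viewDir (V) and normal (N) are normalized unit vectors
// in the same coordinate space; shininess is an arbitrary example exponent.
float phongSpecular(vec3 lightDir, vec3 viewDir, vec3 normal, float shininess) {
    vec3 r = reflect(-lightDir, normal);               // R: light direction mirrored about N
    return pow(max(dot(r, viewDir), 0.0), shininess);  // (R . V)^n
}

float blinnPhongSpecular(vec3 lightDir, vec3 viewDir, vec3 normal, float shininess) {
    vec3 h = normalize(lightDir + viewDir);            // H = (L + V) / |L + V|
    return pow(max(dot(normal, h), 0.0), shininess);   // (N . H)^n
}

For the same perceived highlight size the Blinn–Phong exponent is usually chosen larger than the Phong exponent; the fragment shader further below uses shininess/4.0 in its Phong branch for that reason.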


The code below also needs to clamp certain dot products to zero when they come out negative. Without that, light heading away from the camera is treated the same way as light heading towards it. In the specular calculation, an incorrect "halo" of light glancing off the edges of an object and away from the camera could otherwise appear as bright as light reflected directly towards the camera.

struct Lighting
{
    float3 Diffuse;
    float3 Specular;
};

struct PointLight
{
	float3 position;
	float3 diffuseColor;
	float  diffusePower;
	float3 specularColor;
	float  specularPower;
	float  specularHardness; // specular exponent controlling highlight tightness
};

Lighting GetPointLight(PointLight light, float3 pos3D, float3 viewDir, float3 normal)
{
	Lighting OUT = (Lighting)0; // zero-initialize so both terms are defined even if the light is disabled
	if (light.diffusePower > 0)
	{
		float3 lightDir = light.position - pos3D; //3D position in space of the surface
		float distance = length(lightDir);
		lightDir = lightDir / distance; // = normalize(lightDir);
		distance = distance * distance; //This line may be optimised using Inverse square root

		//Intensity of the diffuse light. Saturate to keep within the 0-1 range.
		float NdotL = dot(normal, lightDir);
		float intensity = saturate(NdotL);

		// Calculate the diffuse light factoring in light color, power and the attenuation
		OUT.Diffuse = intensity * light.diffuseColor * light.diffusePower / distance;

		//Calculate the half vector between the light vector and the view vector.
		//This is typically slower than calculating the actual reflection vector
		// due to the normalize function's reciprocal square root
		float3 H = normalize(lightDir + viewDir);

		//Intensity of the specular light
		float NdotH = dot(normal, H);
		intensity = pow(saturate(NdotH), light.specularHardness);

		//Sum up the specular light factoring
		OUT.Specular = intensity * light.specularColor * light.specularPower / distance; 
	}
	return OUT;
}

OpenGL Shading Language code sample

This sample in the OpenGL Shading Language consists of two code files, or shaders. The first one is a so-called vertex shader and implements Phong shading, which is used to interpolate the surface normal between vertices. The second shader is a so-called fragment shader and implements the Blinn–Phong shading model in order to determine the diffuse and specular light from a point light source.

Vertex shader

This vertex shader implements Phong shading:

attribute vec3 inputPosition;
attribute vec2 inputTexCoord;
attribute vec3 inputNormal;

uniform mat4 projection, modelview, normalMat;

varying vec3 normalInterp;
varying vec3 vertPos;

void main() {
    gl_Position = projection * modelview * vec4(inputPosition, 1.0);
    vec4 vertPos4 = modelview * vec4(inputPosition, 1.0);
    vertPos = vec3(vertPos4) / vertPos4.w;
    normalInterp = vec3(normalMat * vec4(inputNormal, 0.0));
}
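The normalMat uniform is not set up in this sample; it is normally the transpose of the inverse of the modelview matrix, computed by the application and uploaded per draw call. As a rough sketch (it requires GLSL 1.40 / GLSL ES 3.0 or newer for inverse(), so it does not match the attribute/varying syntax used above), the same matrix could also be derived in the shader itself:

    // Sketch only: deriving the normal matrix in the vertex shader instead of
    // passing it in as the normalMat uniform. Requires inverse(), i.e. GLSL 1.40+.
    mat3 normalMatrix = transpose(inverse(mat3(modelview)));
    normalInterp = normalMatrix * inputNormal;

When the modelview matrix contains only rotations and uniform scaling, mat3(modelview) alone would suffice, since the fragment shader re-normalizes the interpolated normal anyway.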

Fragment shader

This fragment shader implements the Blinn–Phong shading model[5] and gamma correction:

precision mediump float;

varying vec3 normalInterp;
varying vec3 vertPos;

uniform int mode;

const vec3 lightPos = vec3(1.0, 1.0, 1.0);
const vec3 lightColor = vec3(1.0, 1.0, 1.0);
const float lightPower = 40.0;
const vec3 ambientColor = vec3(0.1, 0.0, 0.0);
const vec3 diffuseColor = vec3(0.5, 0.0, 0.0);
const vec3 specColor = vec3(1.0, 1.0, 1.0);
const float shininess = 16.0;
const float screenGamma = 2.2; // Assume the monitor is calibrated to the sRGB color space

void main() {

  vec3 normal = normalize(normalInterp);
  vec3 lightDir = lightPos - vertPos;
  float distance = length(lightDir);
  distance = distance * distance;
  lightDir = normalize(lightDir);

  float lambertian = max(dot(lightDir, normal), 0.0);
  float specular = 0.0;

  if (lambertian > 0.0) {

    vec3 viewDir = normalize(-vertPos);

    // this is blinn phong
    vec3 halfDir = normalize(lightDir + viewDir);
    float specAngle = max(dot(halfDir, normal), 0.0);
    specular = pow(specAngle, shininess);
       
    // this is phong (for comparison)
    if (mode == 2) {
      vec3 reflectDir = reflect(-lightDir, normal);
      specAngle = max(dot(reflectDir, viewDir), 0.0);
      // note that the exponent is different here
      specular = pow(specAngle, shininess/4.0);
    }
  }
  vec3 colorLinear = ambientColor +
                     diffuseColor * lambertian * lightColor * lightPower / distance +
                     specColor * specular * lightColor * lightPower / distance;
  // apply gamma correction (assume ambientColor, diffuseColor and specColor
  // have been linearized, i.e. have no gamma correction in them)
  vec3 colorGammaCorrected = pow(colorLinear, vec3(1.0 / screenGamma));
  // use the gamma corrected color in the fragment
  gl_FragColor = vec4(colorGammaCorrected, 1.0);
}

The colors ambientColor, diffuseColor and specColor are not supposed to be gamma corrected. If they are colors obtained from gamma-corrected image files (JPEG, PNG, etc.), they need to be linearized before working with them, which is done by scaling the channel values to the range [0, 1] and raising them to the gamma value of the image, which for images in the sRGB color space can be assumed to be about 2.2 (even though for this specific color space, a simple power relation is just an approximation of the actual transformation).
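For instance, a material color sampled from an sRGB-encoded texture could be linearized with the same approximation before it is used as diffuseColor (a sketch only; albedoTex and texCoord are illustrative names that do not appear in the samples above):

  // Sketch: approximate sRGB decode of a texture color before lighting.
  // albedoTex and texCoord are hypothetical names; pow(..., 2.2) only
  // approximates the exact sRGB transfer function.
  vec3 encoded = texture2D(albedoTex, texCoord).rgb;    // value as stored in the image
  vec3 diffuseLinear = pow(encoded, vec3(screenGamma)); // decode into linear space
  // use diffuseLinear in place of diffuseColor, then re-encode the final result
  // with pow(colorLinear, vec3(1.0 / screenGamma)) exactly as in the shader above.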
