GLSL - Multiple lights make cubes too bright

Started by TheBoneJarmer, September 28, 2015, 10:25:44


TheBoneJarmer

Hey guys!

For a while now I've been working on lighting, which works fine so far except for one thing I can't seem to figure out. When I have one point light, my cube looks fine, but when I add another, my cube becomes brighter than it should be. Below are two images that show what I mean, and here is the fragment shader code that handles my lighting:

#version 120

//Matrices
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

//Material
uniform vec4 matDiffuse = vec4(1.0, 1.0, 1.0, 1.0);
uniform vec4 matAmbient = vec4(0, 0, 0, 0);
uniform vec4 matSpecular = vec4(1.0, 1.0, 1.0, 1.0);
uniform vec4 matEmission = vec4(1.0, 1.0, 0.0, 0.0);
uniform float matShininess = 0;
uniform sampler2D matTexture;
uniform int useTexture = 0;

//Light
uniform vec3 lightPosition[100];
uniform vec4 lightColor[100];
uniform float lightPower[100];
uniform float lightRadius[100];
uniform int lightEnabled[100];
uniform int lightDirectional[100];

varying vec4 vertex;
varying vec3 normal;
varying vec2 uv;

void main()
{
	//The final fragment color
	vec4 finalColor = vec4(0.0, 0.0, 0.0, 0.0);
	
	//Update the materials
	if (useTexture == 1) {
		finalColor = texture2D(matTexture, uv);	
	}
		
	//Apply the material based on all the lights in the scene	
	for (int i=0; i<100; i++) {
		if (lightEnabled[i] == 1) {
			vec4 finalMaterial = vec4(0.0, 0.0, 0.0, 0.0);
			
			if (lightDirectional[i] == 0) {
				vec3 vertexPosition = (model * vertex).xyz;
				vec3 vertexNormal = (view * model * vec4(normal, 0)).xyz;

				vec3 eyeDirection = -(view * model * vec4(vertexPosition, 1)).xyz;
				vec3 lightDirection = (view * vec4(lightPosition[i], 1)).xyz + eyeDirection;
	
				float lightDistance = length(lightPosition[i] - vertexPosition);
				float lightAttenuation = 1 / (1.0 + (0 * lightDistance) + ((1 / (lightRadius[i] * lightRadius[i])) * (lightDistance * lightDistance)));

				vec3 n = normalize(vertexNormal);
				vec3 l = normalize(lightDirection);

				vec3 e = normalize(eyeDirection);
				vec3 r = reflect(-l, n);

				float theta = clamp(dot(l, n), 0, 1);
				float alpha = clamp(dot(e, r), 0, 1);
		
				finalMaterial += matAmbient;
				finalMaterial += matDiffuse * lightColor[i] * lightPower[i] * theta / (lightDistance * lightDistance);
				finalMaterial += matSpecular * lightColor[i] * lightPower[i] * pow(alpha, 5) / (lightDistance * lightDistance);
				finalMaterial *= lightAttenuation;
			} else {
				vec3 vertexPosition = (model * vertex).xyz;
				vec3 vertexNormal = (view * model * vec4(normal, 0)).xyz;

				vec3 eyeDirection = -(view * model * vec4(vertexPosition, 1)).xyz;
				vec3 lightDirection = (view * vec4(lightPosition[i], 1)).xyz + eyeDirection;
	
				vec3 n = normalize(vertexNormal);
				vec3 l = normalize(lightPosition[i]);

				vec3 e = normalize(eyeDirection);
				vec3 r = reflect(-l, n);

				float theta = clamp(dot(l, n), 0, 1);
				float alpha = clamp(dot(e, r), 0, 1);
		
				finalMaterial += matAmbient;
				finalMaterial += matDiffuse * lightColor[i] * theta;
				finalMaterial += matSpecular * lightColor[i] * pow(alpha, 5);
			}
			
			finalColor += finalMaterial;
		}
	}
	
	finalColor += matEmission;
	
	//Set the result
	gl_FragColor = finalColor;
}
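
For context, the Java side that feeds those uniform arrays looks roughly like this (a simplified LWJGL 3 style sketch, not my exact code; the Light class, its fields, and the program handle are placeholders):

import static org.lwjgl.opengl.GL20.*;
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;

// Pack the per-light data into flat buffers matching the uniform arrays
FloatBuffer positions = BufferUtils.createFloatBuffer(100 * 3);
FloatBuffer colors = BufferUtils.createFloatBuffer(100 * 4);
for (Light light : lights) {
	positions.put(light.x).put(light.y).put(light.z);
	colors.put(light.r).put(light.g).put(light.b).put(light.a);
}
positions.flip();
colors.flip();

// The location of element 0 addresses the whole array
glUniform3fv(glGetUniformLocation(program, "lightPosition"), positions);
glUniform4fv(glGetUniformLocation(program, "lightColor"), colors);
// ...and glUniform1fv / glUniform1iv likewise for power, radius, enabled and directional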


Thanks in advance!
TBJ

Kai

It's hard to tell from your images whether it is wrong or right.

However, there are a few things with your shader:

First, you cannot transform your "normal" simply by your model and view matrices. This is probably the biggest issue.
A normal is not a position but a direction.
You have to compute a "normal" matrix instead.
This can be done with JOML using Matrix4f.normal() based on 'this' being your view-matrix.
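
For example, with JOML and LWJGL 3 that could look like the sketch below (viewMatrix, modelMatrix and normalMatrixLocation are assumed names). Note that to get view-space normals you build the normal matrix from the combined view-model matrix:

import java.nio.FloatBuffer;
import org.joml.Matrix4f;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL20.glUniformMatrix4fv;

// normal matrix = transpose(inverse(upper 3x3)) of view * model
Matrix4f normalMatrix = new Matrix4f();
new Matrix4f(viewMatrix).mul(modelMatrix).normal(normalMatrix);

// upload it to the shader
FloatBuffer fb = BufferUtils.createFloatBuffer(16);
glUniformMatrix4fv(normalMatrixLocation, false, normalMatrix.get(fb));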

Second, please use view-space coordinates consistently within your fragment shader. Transform your vertex positions and normals into view space once in your vertex shader and hand them as varyings to your fragment shader, so you do not redo that work for every fragment.

Third, you seem to compute the light attenuation twice: once with "lightAttenuation" and then again by dividing by lightDistance squared. Combined, your brightness falls off with roughly the fourth power of distance. Do it once.

Fourth: Be clear about the lighting model you want to implement: Phong, Blinn-Phong, or something more physically based. Then read the theory about it, and then try to implement it in a shader. Currently it seems you do both: traditional Phong plus a bit of physically based lighting with your "lightPower". Giving your lights an emission power will eventually make "tone mapping" necessary, which you probably don't want to deal with yet.

And lastly: if you really have hundreds of small point lights, you might want to implement deferred lighting instead, to speed things up a bit.

TheBoneJarmer

Hey Kai

Thanks to your notes I managed to solve my problem. Below are my new shaders in case someone else is interested. Still, I have some feedback that might be nice to discuss. I also added some screenshots to show the results. As you can see, the colors work great. At least, I think so haha

Quote from: Kai on September 28, 2015, 10:45:22
It's hard to tell from your images whether it is wrong or right.

I thought this was clear enough. The correct image clearly is as red as it should be while the wrong image is not.

Quote from: Kai on September 28, 2015, 10:45:22
First, you cannot transform your "normal" simply by your model and view matrices. This is probably the biggest issue.
A normal is not a position but a direction.
You have to compute a "normal" matrix instead.
This can be done with JOML using Matrix4f.normal() based on 'this' being your view-matrix.

Done that. It didn't make much difference but I guess it is more correct this way. Thanks for the tip!

Quote from: Kai on September 28, 2015, 10:45:22
Second, please use view-space coordinates consistently within your fragment shader. Transform your vertex positions and normals into view space once in your vertex shader and hand them as varyings to your fragment shader, so you do not redo that work for every fragment.

Done that as well.

Quote from: Kai on September 28, 2015, 10:45:22
Third, you seem to compute the light attenuation twice: once with "lightAttenuation" and then again by dividing by lightDistance squared. Combined, your brightness falls off with roughly the fourth power of distance. Do it once.

This is the one I've been busy with today and yesterday. The new vertex shader below shows what I changed and what finally made the difference.

Quote from: Kai on September 28, 2015, 10:45:22
Fourth: Be clear about the lighting model you want to implement: Phong, Blinn-Phong, or something more physically based. Then read the theory about it, and then try to implement it in a shader. Currently it seems you do both: traditional Phong plus a bit of physically based lighting with your "lightPower". Giving your lights an emission power will eventually make "tone mapping" necessary, which you probably don't want to deal with yet.

The lighting model I use is my own invention. I didn't really care what the model should be. Thing is, it works, and all I wanted it to do was act like a light bulb. Turns out Phong came close to that result, although the lightPower uniform remained an important part of my formula.

Quote from: Kai on September 28, 2015, 10:45:22
And lastly: if you really have hundreds of small point lights, you might want to implement deferred lighting instead, to speed things up a bit.

I'll take a look at that later. Thanks for the tip, but I've got other stuff to finish as well. Anyway, here are my shaders. Feel free to use them however you like, and any feedback for improvement is appreciated.

Fragment
#version 120

varying vec4 finalColor;

void main()
{	
	//Set the result
	gl_FragColor = finalColor;
}


Vertex
#version 120

uniform mat4 MVP;
uniform mat4 ModelMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 NormalMatrix;

//Material
uniform vec4 MaterialDiffuse = vec4(1.0, 1.0, 1.0, 1.0);
uniform vec4 MaterialAmbient = vec4(0, 0, 0, 0);
uniform vec4 MaterialSpecular = vec4(1.0, 1.0, 1.0, 1.0);
uniform vec4 MaterialEmission = vec4(1.0, 1.0, 0.0, 0.0);
uniform float MaterialShininess = 0;
uniform sampler2D MaterialTexture;
uniform int useTexture = 0;

//Light
uniform vec3 lightPosition[100];
uniform vec4 lightColor[100];
uniform float lightPower[100];
uniform float lightRadius[100];
uniform int lightEnabled[100];
uniform int lightDirectional[100];

varying vec4 finalColor;

void main()
{
	gl_TexCoord[0] = gl_MultiTexCoord0;
	
	//Set the vertex position
	gl_Position = MVP * gl_Vertex;
	
	//The final fragment color
	finalColor = vec4(0.0, 0.0, 0.0, 0.0);
	
	//Update the final color if there is a texture
	if (useTexture == 1) {
		finalColor = texture2D(MaterialTexture, gl_TexCoord[0].xy);	
	}
	
	//Apply the material based on all the lights in the scene	
	for (int i=0; i<100; i++) {
		if (lightEnabled[i] == 1) {
			vec4 finalMaterial = vec4(0.0, 0.0, 0.0, 0.0);
			
			if (lightDirectional[i] == 0) {
				vec3 vertexPosition = (ModelMatrix * gl_Vertex).xyz;
				vec3 vertexNormal = (NormalMatrix * vec4(gl_Normal, 0)).xyz;

				vec3 eyeDirection = -(ViewMatrix * ModelMatrix * vec4(vertexPosition, 1)).xyz;
				vec3 lightDirection = (ViewMatrix * vec4(lightPosition[i], 1)).xyz + eyeDirection;
	
				float lightDistance = length(lightPosition[i] - vertexPosition);
				float lightAttenuation = 1 / (1.0 + (0 * lightRadius[i]) + ((1 / (lightRadius[i] * lightRadius[i])) * (lightDistance * lightDistance)));

				vec3 n = normalize(vertexNormal);
				vec3 l = normalize(lightDirection);

				vec3 e = normalize(eyeDirection);
				vec3 r = reflect(-l, n);

				float theta = clamp(dot(l, n), 0, 1);
				float alpha = clamp(dot(e, r), 0, 1);
		
				finalMaterial += MaterialAmbient;
				finalMaterial += MaterialDiffuse * lightColor[i] * lightPower[i] * theta;
				finalMaterial += MaterialSpecular * lightColor[i] * lightPower[i] * pow(alpha, 5);
				finalMaterial *= lightAttenuation;
				
				finalColor += finalMaterial / (lightDistance * lightDistance);
			} else {
				vec3 vertexPosition = (ModelMatrix * gl_Vertex).xyz;
				vec3 vertexNormal = (ViewMatrix * ModelMatrix * vec4(gl_Normal, 0)).xyz;

				vec3 eyeDirection = -(ViewMatrix * ModelMatrix * vec4(vertexPosition, 1)).xyz;
				vec3 lightDirection = (ViewMatrix * vec4(lightPosition[i], 1)).xyz + eyeDirection;
	
				vec3 n = normalize(vertexNormal);
				vec3 l = normalize(lightPosition[i]);

				vec3 e = normalize(eyeDirection);
				vec3 r = reflect(-l, n);

				float theta = clamp(dot(l, n), 0, 1);
				float alpha = clamp(dot(e, r), 0, 1);
		
				finalMaterial += MaterialAmbient;
				finalMaterial += MaterialDiffuse * lightColor[i] * theta;
				finalMaterial += MaterialSpecular * lightColor[i] * pow(alpha, 5);
				
				finalColor += finalMaterial;
			}
		}
	}
	
	finalColor += MaterialEmission;
}

Kai

Quote from: TheBoneJarmer
I thought this was clear enough. The correct image clearly is as red as it should be while the wrong image is not.
Yeah, of course this totally depends on what you mean by "right" and "wrong." With a local shading model, as is used by OpenGL, nothing is really right or wrong, because it is so very far from reality, and everyone therefore has a different opinion about what looks wrong and what looks right.

Quote from: TheBoneJarmer
The lighting model I use is my own invention. I didn't really care what the model should be. Thing is, it works...
Using a non-standard lighting model is of course another reason why only you can really tell/evaluate whether a render looks "right" or "wrong."

Another feedback about your new shader:

With your current shader you now have "Gouraud" shading, because you compute the final color in the vertex shader instead of in the fragment shader. This makes low-tessellated models look unrealistic, especially with specular highlights, because the lighting calculations are only performed per vertex.
What I meant in my previous comment was: if you want to do Phong, you interpolate the view-space vertex positions and the view-space normals and do the lighting calculations (Lambert factor, attenuation, ...) in the fragment shader, based on the interpolated positions/normals output by the vertex shader.

TheBoneJarmer

Quote from: Kai on September 29, 2015, 10:30:49
Yeah, of course this totally depends on what you mean by "right" and "wrong." With a local shading model, as is used by OpenGL, nothing is really right or wrong, because it is so very far from reality, and everyone therefore has a different opinion about what looks wrong and what looks right.
Using a non-standard lighting model is of course another reason why only you can really tell/evaluate whether a render looks "right" or "wrong."

That makes sense haha. Yea you're right about that.

Quote from: Kai on September 29, 2015, 10:30:49
With your current shader you now have "Gouraud" shading, because you compute the final color in the vertex shader instead of in the fragment shader. This makes low-tessellated models look unrealistic, especially with specular highlights, because the lighting calculations are only performed per vertex.
What I meant in my previous comment was: if you want to do Phong, you interpolate the view-space vertex positions and the view-space normals and do the lighting calculations (Lambert factor, attenuation, ...) in the fragment shader, based on the interpolated positions/normals output by the vertex shader.

Aah I see. I thought you meant all calculations had to be done in the vertex shader, but only the vertex-related ones have to be done that way. I'll change it and post my result. Thanks for the feedback!

TheBoneJarmer

Okay, I changed my fragment and vertex shaders to the code below. I noticed the specular highlight was gone with the previous shader (shame on me), but it seems to have returned now. Anyway, here is the result:

Fragment
#version 120

uniform mat4 MVP;
uniform mat4 ModelMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 NormalMatrix;

//Material
uniform vec4 MaterialDiffuse = vec4(1.0, 1.0, 1.0, 1.0);
uniform vec4 MaterialAmbient = vec4(0, 0, 0, 0);
uniform vec4 MaterialSpecular = vec4(1.0, 1.0, 1.0, 1.0);
uniform vec4 MaterialEmission = vec4(1.0, 1.0, 0.0, 0.0);
uniform float MaterialShininess = 0;
uniform sampler2D MaterialTexture;
uniform int useTexture = 0;

//Light
uniform vec3 lightPosition[100];
uniform vec4 lightColor[100];
uniform float lightPower[100];
uniform float lightRadius[100];
uniform int lightEnabled[100];
uniform int lightDirectional[100];

varying vec3 vertexPosition;
varying vec3 vertexNormal;
varying vec3 eyeDirection;

void main()
{	
	//The final fragment color
	vec4 finalColor = vec4(0.0, 0.0, 0.0, 0.0);
	
	//Update the final color if there is a texture
	if (useTexture == 1) {
		finalColor = texture2D(MaterialTexture, gl_TexCoord[0].xy);	
	}
	
	//Apply the material based on all the lights in the scene	
	for (int i=0; i<100; i++) {
		if (lightEnabled[i] == 1) {
			vec4 finalMaterial = vec4(0.0, 0.0, 0.0, 0.0);
			vec3 lightDirection = (ViewMatrix * vec4(lightPosition[i], 1)).xyz + eyeDirection;
			
			if (lightDirectional[i] == 0) {	
				float lightDistance = length(lightPosition[i] - vertexPosition);
				float lightAttenuation = 1 / (1.0 + (0 * lightRadius[i]) + ((1 / (lightRadius[i] * lightRadius[i])) * (lightDistance * lightDistance)));
				
				vec3 n = normalize(vertexNormal);
				vec3 l = normalize(lightDirection);

				vec3 e = normalize(eyeDirection);
				vec3 r = reflect(-l, n);

				float theta = clamp(dot(l, n), 0, 1);
				float alpha = clamp(dot(e, r), 0, 1);
		
				finalMaterial += MaterialAmbient;
				finalMaterial += MaterialDiffuse * lightColor[i] * lightPower[i] * theta;
				finalMaterial += MaterialSpecular * lightColor[i] * lightPower[i] * pow(alpha, 5);
				finalMaterial *= lightAttenuation;
				
				finalColor += finalMaterial / (lightDistance * lightDistance);
			} else {	
				vec3 n = normalize(vertexNormal);
				vec3 l = normalize(lightPosition[i]);

				vec3 e = normalize(eyeDirection);
				vec3 r = reflect(-l, n);

				float theta = clamp(dot(l, n), 0, 1);
				float alpha = clamp(dot(e, r), 0, 1);
		
				finalMaterial += MaterialAmbient;
				finalMaterial += MaterialDiffuse * lightColor[i] * theta;
				finalMaterial += MaterialSpecular * lightColor[i] * pow(alpha, 5);
				
				finalColor += finalMaterial;
			}
		}
	}

	finalColor += MaterialEmission;

	//Set the result
	gl_FragColor = finalColor;
}


Vertex
#version 120

uniform mat4 MVP;
uniform mat4 ModelMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 NormalMatrix;

//Light
uniform vec3 lightPosition[100];
uniform vec4 lightColor[100];
uniform float lightPower[100];
uniform float lightRadius[100];
uniform int lightEnabled[100];
uniform int lightDirectional[100];

varying vec3 vertexPosition;
varying vec3 vertexNormal;
varying vec3 eyeDirection;

void main()
{
	gl_TexCoord[0] = gl_MultiTexCoord0;
	
	//Set the vertex position
	gl_Position = MVP * gl_Vertex;
		
	//Calculate the lighting stuff
	vertexPosition = (ModelMatrix * gl_Vertex).xyz;
	vertexNormal = (NormalMatrix * vec4(gl_Normal, 0)).xyz;

	eyeDirection = -(ViewMatrix * ModelMatrix * vec4(vertexPosition, 1)).xyz;
}


So, what do you think? Is this what you meant by doing all the normal and vertex calculations in the vertex shader and the rest in the fragment shader?

Kai

Yes, that's better. :)
Now, what you might want to use are Uniform Buffer Objects, if you are willing to bump your minimum required GL version to 3.1.
I see you have six 100-element arrays, one per property of your 100 lights.
There is now a fantastic opportunity to improve performance as well as code quality of your shader by using UBOs.
You would then declare your lights in the following way:

#define NUM_LIGHTS 100
struct Light {
   vec3 position;
   vec3 color;
   float power;
   float radius;
   bool enabled;
   bool directional;
};
layout (std140) uniform Lights {
   Light lights[NUM_LIGHTS];
};


And access the individual lights via:
for (int i = 0; i < NUM_LIGHTS; i++) {
  Light light = lights[i];
  if (light.enabled) {
    ...
  }
}


You can then use a buffer object as the source for all light properties.
So you would create a buffer object, store the properties of all 100 lights in a ByteBuffer (with FloatBuffer and IntBuffer views on it), and upload that to the OpenGL buffer object via glBufferData.
The std140 layout is a bit twiddly. Each array element will be:
<vec3>position, <float>padding, <vec3>color, <float>power, <float>radius, <int>enabled, <int>directional, <int>padding
(a float packs directly into the slot after a vec3 in std140, and each array element is padded up to a multiple of 16 bytes, so the stride is 48 bytes per light)

Now you don't have to update six 100-element uniform arrays; instead you update all lights at once, or even partially via glBufferSubData.
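
For illustration, filling and uploading that buffer with LWJGL 3 could look roughly like this (a sketch; the lights array, its fields, the program handle, and binding point 0 are assumed names):

import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL30.*;
import static org.lwjgl.opengl.GL31.*;
import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;

int NUM_LIGHTS = 100;
int STRIDE = 48; // std140 size of one Light, as derived above

ByteBuffer data = BufferUtils.createByteBuffer(NUM_LIGHTS * STRIDE);
for (int i = 0; i < NUM_LIGHTS; i++) {
	int base = i * STRIDE;
	data.putFloat(base, lights[i].x);           // vec3 position (4 bytes padding follow)
	data.putFloat(base + 4, lights[i].y);
	data.putFloat(base + 8, lights[i].z);
	data.putFloat(base + 16, lights[i].r);      // vec3 color
	data.putFloat(base + 20, lights[i].g);
	data.putFloat(base + 24, lights[i].b);
	data.putFloat(base + 28, lights[i].power);  // float packs right after the vec3
	data.putFloat(base + 32, lights[i].radius);
	data.putInt(base + 36, lights[i].enabled ? 1 : 0);
	data.putInt(base + 40, lights[i].directional ? 1 : 0);
}

// Create the buffer object and attach it to the "Lights" uniform block
int ubo = glGenBuffers();
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, data, GL_DYNAMIC_DRAW);
glUniformBlockBinding(program, glGetUniformBlockIndex(program, "Lights"), 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);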

TheBoneJarmer

Hey Kai! Thanks for your reply! Unfortunately, my OpenGL version is only 3.0, so I'm afraid I can't use OpenGL 3.1 features. Thanks for the advice though!