[SOLVED] Tangent space normal mapping

Started by plummew, July 20, 2010, 12:20:13


plummew

I'm having issues implementing robust normal (bump) mapping using GLSL shaders and wondered if this was the best forum to seek advice?

Without going into excessive detail at this point and potentially wasting time, I thought I'd check first to see if anyone out there had successfully implemented normal mapping on arbitrary (triangular) meshes where the tangents are calculated in code rather than supplied as imported model data.

My problem is this: I can normal map a quad made of 2 triangles, where the quad normal points along the positive z-axis. Moving viewpoint and camera both render correctly. This is the simple "lighting and normal map in same coord space" example most web tutorials use.

I can then rotate the quad in object space (via the modelling tool) so the quad normal points along the positive y-axis (i.e. the quad forms the floor). My calculated TBN is correct (according to the Nvidia bump mapping tutorial) and lighting/normal mapping also renders correctly - great.

The problem comes when I rotate the original quad about the y-axis so the quad normal points along the positive x-axis. Lighting on this quad is plainly wrong and I can't figure out what the issue is.

So the short question is: if I start posting shader code and tangent/normal/binormal data, are there folk out there with an understanding of coordinate spaces to help me out, or should I post on a different site (e.g. gamedev)?

I'll take no responses to indicate that this isn't the place to resolve this.


Some background to discount more obvious suggestions:

1. My shaders are the standard ones you see on the web implementing per-pixel Phong/Blinn lighting, with a normal map look-up and the binormal calculated in the vertex shader as normal cross tangent.
2. Strip out the normal mapping and the per-pixel lighting is fine.
3. Normal map is in TBN space with red pointing to the right and green pointing up.
4. Texture mapping has no mirroring and TBN handedness is same for all verts.
5. Vertex tangents are calculated using a couple of examples found on the web and verified against Nvidia's bump mapping tutorial, which uses plane equations and inverse gradients.
6. As the approach is to take the normal, light direction and half vector from eye space to tangent space, the incoming tangent is pre-multiplied by the normalMatrix, as is gl_Normal. The cross product then gives the binormal. The transpose of this eye-space TBN is then used as its inverse, taking the lighting vectors from eye space to tangent space (via the classic three dot products). So I'm assuming orthogonality of the TBN - which is fine for a flat quad.
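For reference, the per-triangle tangent those web examples derive boils down to the inverse-gradient computation below. This is a minimal Java sketch with plain float[3] arrays standing in for vectors; the class and method names are made up for illustration, not taken from anyone's engine code.

```java
// Per-triangle tangent from positions and UVs via the inverse-gradient
// (plane equation) derivation. Vectors are plain float[3] arrays and the
// class/method names are made up for this example.
final class TangentCalc {

    // Returns the (unnormalized) tangent of one triangle; assumes the
    // triangle's UVs are not degenerate (non-zero area in texture space).
    static float[] triangleTangent(float[] p0, float[] p1, float[] p2,
                                   float[] uv0, float[] uv1, float[] uv2) {
        // Position edges
        float[] e1 = { p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2] };
        float[] e2 = { p2[0]-p0[0], p2[1]-p0[1], p2[2]-p0[2] };
        // Texture-space edges
        float du1 = uv1[0]-uv0[0], dv1 = uv1[1]-uv0[1];
        float du2 = uv2[0]-uv0[0], dv2 = uv2[1]-uv0[1];
        // Inverse of the 2x2 texture-space determinant
        float r = 1.0f / (du1*dv2 - du2*dv1);
        // Tangent = direction of increasing u across the triangle
        return new float[] {
            r * (dv2*e1[0] - dv1*e2[0]),
            r * (dv2*e1[1] - dv1*e2[1]),
            r * (dv2*e1[2] - dv1*e2[2])
        };
    }
}
```

Per-vertex tangents are then typically the normalized average over the triangles sharing each vertex, re-orthogonalized against the vertex normal.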

spasi

I think you're doing something wrong in step 6; you shouldn't need to multiply the tangent and normal with the normal matrix. If you pass the light position to the shader in object space, then you can simply multiply it with a mat3(tangent, binormal, normal) to go from object space to tangent space. The same goes for the camera position: you calculate the object-space eye vector per vertex, then rotate it with the TBN. You get the half-angle vector from the light and eye vectors that are already in tangent space.
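For illustration, that object-space-to-tangent-space rotation (v * mat3(tangent, binormal, normal) in GLSL) reduces to three dot products when the basis is orthonormal, since a rotation's inverse is its transpose. A small Java sketch of the same math, with made-up names and float[3] arrays for vectors:

```java
// Rotating a vector from object space into tangent space with an
// orthonormal TBN basis: because the inverse of a rotation is its
// transpose, v * mat3(T, B, N) reduces to three dot products.
// float[3] arrays stand in for vectors; names are illustrative.
final class TbnRotate {

    static float dot(float[] a, float[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // Object-space vector v expressed in the (t, b, n) basis.
    static float[] toTangentSpace(float[] v, float[] t, float[] b, float[] n) {
        return new float[] { dot(v, t), dot(v, b), dot(v, n) };
    }
}
```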

plummew

Thanks for responding.

I've posted my vertex shader at the bottom, but the problem with not involving the normalMatrix is that it's needed to apply any rotation the vertex is undergoing. This goes alongside pre-multiplying gl_Vertex by the modelview matrix, to get the light position, vertex normal and vertex into the same coord space (eye space) so lighting can be performed.

The transpose TBN transforms from object space to tangent space, so (as you point out) we need to get the light direction, normal and eye vector into object space first.

So the light position should be passed in as a uniform, or via glLight with the modelview set to identity - that gives the light in object space?
And gl_Normal and the tangent should not be transformed by the normalMatrix?

The problem is that for moving objects we have to involve the modelview and normalMatrix at some point, otherwise we don't take into account object movement and the effect it has on lighting.

I'll think on this some more and find somewhere to post some images to illustrate the problem.

For now though, the shader below (using normalMatrix * gl_Normal but not normalMatrix * tangent) actually works for a static shape with a moving light.

attribute vec3 tangent;

varying vec3 tbnDirToLight; // Direction to light in tangent space
varying vec3 tbnHalfVector; // Half vector in tangent space
varying vec2 texCoord;

void main(void) {
   
    // Set vertex location in projection space
    gl_Position = ftransform();
    //gl_TexCoord[0] = gl_MultiTexCoord0;
    texCoord = vec2(gl_MultiTexCoord0);
   
    // Calculate tbn that transforms from eye space to tangent space
//    vec3 t = normalize(gl_NormalMatrix * tangent);
    vec3 t = normalize(tangent);
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vec3 b = cross(n, t);

    // Eye direction from vertex (kept unnormalized here; it is also the
    // vertex-to-light offset for the positional light below)
    vec3 vertToEye = -vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 dirToEye = normalize(vertToEye);
   
    // Transform vertex-to-eye direction from eye to tangent space
    vec3 tbnDirToEye;
    tbnDirToEye.x = dot(dirToEye, t);
    tbnDirToEye.y = dot(dirToEye, b);
    tbnDirToEye.z = dot(dirToEye, n);

    // Point light direction for positional light (lightPos - vertexPos)
    vec3 dirToLight = normalize(gl_LightSource[0].position.xyz + vertToEye);

    // Transform vertex-to-light direction from eye to tangent space
    tbnDirToLight.x = dot(dirToLight, t);
    tbnDirToLight.y = dot(dirToLight, b);
    tbnDirToLight.z = dot(dirToLight, n);
   
    // Tangent space half-vector is normalised average of tangent space
    // eye and light directions
    tbnHalfVector = normalize(tbnDirToEye + tbnDirToLight);
}

spasi

Again, for a non-deformable object, you don't have to do anything to gl_Normal. It's already in object space by definition. The only thing you need to do is transform the eye and light vectors from object space to tangent space.

IIRC gl_LightSource[0].position.xyz will be the light direction in world-space. For performance reasons, you should use a custom uniform that defines the light direction in object space. You do that by transforming the light direction at the Java level and passing it to the shader, per object of course. You could also do it for the camera position, but you can derive that from the modelview matrix (like you do).
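A rough sketch of that Java-level step, assuming the object's model transform is a pure rotation stored as a row-major float[9]; the names and layout are illustrative, not LWJGL API:

```java
// Per-object CPU-side step: bring a world-space light direction into
// object space by applying the inverse of the object's 3x3 model
// rotation. For a pure rotation the inverse is the transpose, so no
// matrix inversion is needed. Row-major float[9] layout is assumed;
// all names here are illustrative, not LWJGL API.
final class ObjectSpaceLight {

    static float[] worldToObject(float[] modelRot, float[] lightDirWorld) {
        float[] out = new float[3];
        for (int i = 0; i < 3; i++) {
            // Row i of the transpose = column i of the original matrix
            out[i] = modelRot[i]     * lightDirWorld[0]
                   + modelRot[3 + i] * lightDirWorld[1]
                   + modelRot[6 + i] * lightDirWorld[2];
        }
        return out;
    }
}
```

The result is then uploaded once per object as a uniform; note that a model matrix with non-uniform scale would need the full inverse rather than the transpose.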

My vertex shader code looks like this:

// Pseudo-instancing
vec3 position = vec3
(
	dot(gl_MultiTexCoord5, gl_Vertex),
	dot(gl_MultiTexCoord6, gl_Vertex),
	dot(gl_MultiTexCoord7, gl_Vertex)
);

gl_Position = gl_ModelViewProjectionMatrix * vec4(position, 1.0);

#if CALC_EYE_VECTOR
	vec3 eyePos = CAM_POS.xyz - gl_Vertex.xyz;
	vec4 eyeVector = vec4(eyePos, dot(eyePos, eyePos));
	eyeVector.xyz *= inversesqrt(eyeVector.w); // Normalized vector & distance^2

	VAR_fogCoord = getFogCoord(eyeVector.w);
	lightingPassVaryings(TANGENT, BINORMAL, gl_Normal, eyeVector.xyz);
#else
	VAR_fogCoord = getFogCoord(lengthSQ3f(gl_Vertex, CAM_POS));
	lightingPassVaryings(TANGENT, BINORMAL, gl_Normal);
#endif


and the lightingPassVaryings function:

mat3 tangentBasis = mat3(tangent, binormal, normal);

vec4 lightVector;
lightVector.xyz = LIGHT_POS * tangentBasis;
lightVector.w = clamp(lightVector.z * 8.0, 0.0, 1.0);

VAR_lightVector = lightVector;

#if CALC_EYE_VECTOR
	eyeVector.xyz = eyeVector.xyz * tangentBasis;
	#if PARALLAX_MAPPING
		VAR_eyeVector = half3(eyeVector.xyz);
	#endif
	#if SPECULAR_MAPPING
		VAR_lightHalfAngle = half3(normalize(eyeVector.xyz + lightVector.xyz));
	#endif
#endif


Other notes:

- You could also use vertex attributes for passing light/camera positions, just like pseudo-instancing. Assuming there are enough available of course.

- You don't need to normalize the incoming normal/tangent in the vertex shader.

plummew

Many thanks for your help.
Just a couple more queries if you can spare the time:

1) I'm always confused by OpenGL lighting - I fully understand object space (being the space you build your model in). If you want a stationary light (like the sun or a street light), you set the camera view then the glLight position; this gives you a gl_LightSource[0].position in world space (i.e. multiplied by the modelview). So how do you specify a light position (or direction) in object space, per object, but have the scene lit consistently when each object is transformed by a different modelView matrix?

2) When you say "gl_LightSource[0].position.xyz will be the light direction in world-space", is this just a terminology thing and world space is the same as eye space? (The OpenGL docs seem to state that world space is a non-OpenGL concept, being the model transforms without the view.)

3) "For performance reasons, you should use a custom uniform that defines the light direction in object space" - can you elaborate? Is it the use of a custom uniform over a built-in GL uniform like gl_LightSource[0].position?

spasi

Quote from: plummew
1) I'm always confused by OpenGL lighting - I fully understand object space (being the space you build your model in). If you want a stationary light (like the sun or a street light), you set the camera view then the glLight position; this gives you a gl_LightSource[0].position in world space (i.e. multiplied by the modelview). So how do you specify a light position (or direction) in object space, per object, but have the scene lit consistently when each object is transformed by a different modelView matrix?

You transform the light position/direction using the inverse of the model matrix. Objects with different transforms will get a different light vector in the vertex shader, but the overall lighting of the scene will be correct.

Quote from: plummew
2) When you say "gl_LightSource[0].position.xyz will be the light direction in world-space", is this just a terminology thing and world space is the same as eye space? (The OpenGL docs seem to state that world space is a non-OpenGL concept, being the model transforms without the view.)

Yes, it will be in eye space. As in, if you only specify the light position once after setting up the camera view, then you won't get the correct object-space light direction in the shader (unless the object has no transformation).

Well, I really meant world space because in my engine I only care about the "sun" direction in world coordinates and I don't use gl_LightSource.

Quote from: plummew
3) "For performance reasons, you should use a custom uniform that defines the light direction in object space" - can you elaborate? Is it the use of a custom uniform over a built-in GL uniform like gl_LightSource[0].position?

Yes, instead of using gl_LightSource[0].position and the inverse MV matrix in the shader, simply pass the object-space direction using a uniform. It's a simple vector that's constant over the whole object, so you don't need to calculate it per-vertex. You could also use gl_LightSource[0].position if you update the light's GL_POSITION per object, but I would advise against using fixed functionality in shader code.

plummew

Thanks for the additional response.

I've altered my shader and passed in the light pos and camera/eye pos in object space (per object) but results still aren't correct.

I'll need to look over the changes more thoroughly as the algorithm seems sound to me, so I'm unsure where the problem lies. (I bet it's some trivial error, these things usually are!)

I assume you work out the camera pos in object space by applying the current inverse modelview matrix to the vector (0,0,0)?

Do you calculate your tangents/binormals in code or take them from model files?

I wonder if it's my tangents that are causing a problem. You don't have access to a model with tangents that you know render correctly do you? (That way I could eliminate tangent creation as an issue)

spasi

Quote from: plummew
I assume you work out the camera pos in object space by applying the current inverse modelview matrix to the vector (0,0,0)?

No, each object has a transformation matrix (its position/orientation in the world); I apply that to the camera position (in world coordinates again). The math is really simple, iirc I don't even use matrices. I think it's the vector cam_pos - obj_pos, rotated by the inverse object orientation (quaternion rotation).
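That cam_pos - obj_pos plus inverse-quaternion rotation might look like the Java sketch below. The {x, y, z, w} quaternion layout, class and method names are assumptions for the example, not spasi's actual engine code:

```java
// Camera position in object space: take (camPos - objPos) in world
// coordinates and rotate it by the conjugate of the object's unit
// orientation quaternion. Quaternions are {x, y, z, w} float[4] arrays;
// the layout and names are made up for this sketch.
final class ObjectSpaceCamera {

    static float[] cross(float[] a, float[] b) {
        return new float[] {
            a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]
        };
    }

    // Rotates v by the unit quaternion q using v' = v + w*t + qv x t,
    // where t = 2 * (qv x v).
    static float[] rotate(float[] q, float[] v) {
        float[] qv = { q[0], q[1], q[2] };
        float[] t = cross(qv, v);
        t[0] *= 2; t[1] *= 2; t[2] *= 2;
        float[] c = cross(qv, t);
        return new float[] {
            v[0] + q[3]*t[0] + c[0],
            v[1] + q[3]*t[1] + c[1],
            v[2] + q[3]*t[2] + c[2]
        };
    }

    // For a unit quaternion the conjugate is the inverse, so this undoes
    // the object's orientation.
    static float[] cameraInObjectSpace(float[] camPos, float[] objPos, float[] q) {
        float[] conj = { -q[0], -q[1], -q[2], q[3] };
        float[] d = { camPos[0]-objPos[0], camPos[1]-objPos[1], camPos[2]-objPos[2] };
        return rotate(conj, d);
    }
}
```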

Quote from: plummew
Do you calculate your tangents/binormals in code or take them from model files?

Calculated by the engine for terrain, taken from the DCC app for all other models. I also use NVMeshMender in my toolchain.

Quote from: plummew
I wonder if it's my tangents that are causing a problem. You don't have access to a model with tangents that you know render correctly do you? (That way I could eliminate tangent creation as an issue)

A bad tangent basis would result in a wrong, but static result. Since you're seeing different results based on the object orientation, it's more likely that the problem lies elsewhere.

plummew

What did I say previously?

Quote
I bet it's some trivial error, these things usually are!

Well it does help if you enable your tangent attributes...

One line of code was missing:
"glEnableVertexAttribArray(ResourceFactory.TANGENT_ATTRLOC);"
That's all it was.

The problem was that the shader didn't complain that I was accessing an attribute that wasn't enabled, and it even gave the correct results when tangent space coincided with object space.

In fact there was nothing wrong with my original approach of transforming the TBN matrix to eye space in the vertex shader. However, Spasi's advice to transform the light and eye vectors to object space in Java is more efficient, so I'm sticking with that approach.

Thanks for the help Spasi.