GL 3 Projection and ModelView Matrices

Started by TeamworkGuy2, October 06, 2012, 04:29:14

TeamworkGuy2

Java JDK 1.6.0_25 - LWJGL 2.8.4
nVidia GTS 450 - Driver: 8.17.12.9610
Subject: Reimplementing glFrustum and gluLookAt

I have been trying to re-implement a project in GL 3.3 (moving from GL 1.1) and can't get the camera translation/rotation working. 

Currently the projection matrix appears correct (before moving the mouse everything on the screen looks relatively normal).
But once the mouse is used to rotate the camera (which updates the modelView matrix), the geometry starts morphing: triangles turn into squares and back into triangles, primitives rotate around some central point (probably 0,0,0), and all kinds of weird graphical glitches occur.

Currently I have a shader program with one 4x4 MVP matrix, which is the result of multiplying the modelView matrix for the current frame (camera transform and rotation) with a constant projection/frustum matrix created at the beginning of the program.

The following code is what I outlined above. The two methods getFrustumMatrix() and getLookAtMatrix() are simple helpers that set the GL matrix mode, load the identity matrix, call glFrustum or gluLookAt, and then use glGetFloat to read the resulting matrix back; finally the matrix is transposed so that it is in row-major form rather than the column-major form GL returns.
I know this is a bad way of creating frustum and lookAt matrices, but I needed some kind of working matrices to test before trying to implement my own frustum and lookAt matrices.
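
To make that concrete, a helper like getFrustumMatrix() boils down to something like this (a minimal sketch against LWJGL 2's GL11 binding; Matrix is the same class used in the code below, with a public float[16] field m):

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;

	public static Matrix getFrustumMatrix(float left, float right, float bottom,
			float top, float zNear, float zFar) {
		// Let the fixed pipeline build the matrix for us
		// (note: this needs a compatibility context, these calls are gone in core GL 3)
		GL11.glMatrixMode(GL11.GL_PROJECTION);
		GL11.glLoadIdentity();
		GL11.glFrustum(left, right, bottom, top, zNear, zFar);

		// Read the column-major result back from GL
		FloatBuffer buf = BufferUtils.createFloatBuffer(16);
		GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX, buf);

		// Transpose into the row-major order used by the Matrix class
		Matrix result = new Matrix();
		for (int row = 0; row < 4; row++) {
			for (int col = 0; col < 4; col++) {
				result.m[row * 4 + col] = buf.get(col * 4 + row);
			}
		}
		return result;
	}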

Here's the initial setup for the projection matrix
projectionMatrix = Matrix.getFrustumMatrix(left, right, bottom, top, zNear, zFar);
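
(The left/right/bottom/top values can be derived from a field of view if that is easier to reason about; a small sketch, with fovY in degrees and aspect = width / height as hypothetical inputs:)

// Derive symmetric frustum bounds from a vertical field of view
float top    = zNear * (float) Math.tan(Math.toRadians(fovY) / 2.0);
float bottom = -top;
float right  = top * aspect;
float left   = -right;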


Here's the main update loop:
public void update() {
		GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

		// Get the camera modelView matrix as a row-major set of 16 floats
		modelViewMatrix = Matrix.getLookAtMatrix(camera.getXPos(), camera.getYPos(), camera.getZPos(), camera.getHorzRotation(), camera.getVertRotation());

		// Multiply the model view matrix by the constant projection matrix and store the result in the MVP matrix
		modelViewProjectionMatrix = Matrix.multiply(modelViewMatrix, projectionMatrix, modelViewProjectionMatrix);

		// Load matrix into Float Buffer
		// get() simply returns a row-major array of floats representing the 4x4 matrix
		tempBuffer.put(modelViewProjectionMatrix.get());
		tempBuffer.flip();

		// Upload MVP matrix to shader with 'true' for row-major matrix
		GL20.glUniformMatrix4(modelViewProjectionMatrixUniform, true, tempBuffer);

		// Render objects
		for(Renderable overlay : overlays) {
			overlay.render();
		}

		// Update frames
		Display.update();
		Display.sync(maxFps);
	}
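
For completeness, the one-time setup that loop relies on is roughly the following (shaderProgram is a placeholder for whatever program handle you linked; the uniform name matches the shader line quoted further down):

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL20;

	// Allocate the reusable upload buffer and look up the uniform once, e.g. in init()
	FloatBuffer tempBuffer = BufferUtils.createFloatBuffer(16);
	int modelViewProjectionMatrixUniform =
			GL20.glGetUniformLocation(shaderProgram, "mvpMatrix");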


I think I got the row-major matrix multiplication right, but I'm not sure:
/** Multiply two 4x4 row-major matrices together.<br/>
	 * It is permissible for the result matrix to be one of the input matrices; the
	 * multiplication is carried out in an intermediate scratch matrix and the result is
	 * copied into the result matrix afterward.
	 * @param a - the first matrix to multiply
	 * @param b - the second matrix to multiply
	 * @param result - the matrix to store the result in
	 * @return the 'result' matrix holding the multiplication of the two input matrices
	 */
	public static Matrix multiply(Matrix a, Matrix b, Matrix result) {
		// Multiply a * b and store the result in a temporary 'scratch' matrix
		for(int row = 0; row < 4; row++) {
			for(int column = 0; column < 4; column++) {
				scratchMatrix.m[row*4 + column] =
					a.m[row*4 + 0] * b.m[0 + column] +
					a.m[row*4 + 1] * b.m[4 + column] +
					a.m[row*4 + 2] * b.m[8 + column] +
					a.m[row*4 + 3] * b.m[12 + column];
			}
		}
		// Copy the result into the result matrix
		System.arraycopy(scratchMatrix.m, 0, result.m, 0, scratchMatrix.m.length);
		// Reset the scratch matrix to the identity matrix
		System.arraycopy(identity, 0, scratchMatrix.m, 0, identity.length);
		return result;
	}
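
As a quick sanity check for multiply (this assumes, hypothetically, that the Matrix constructor initializes to the identity):

Matrix a = new Matrix();              // assumed to start as the identity matrix
a.m[3] = 5f;                          // put a translation-like value at row 0, column 3
Matrix out = Matrix.multiply(a, new Matrix(), new Matrix());
// Multiplying by the identity should change nothing
System.out.println(java.util.Arrays.equals(a.m, out.m)); // expect: true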


I am wondering what I did wrong. The code was converted directly from the fixed-function pipeline version of my program: GL and GLU are still being used to generate the projection and modelView matrices just like before, and the resulting matrix is sent to the video card where it is multiplied by the input vertex like so: "gl_Position = mvpMatrix * vertexPosition;".
Other than the multiplication of the two matrices and that one line of shader code, I have no idea why my result would be so corrupted compared to the original fixed-pipeline version..?
Your insight would be greatly appreciated.

TeamworkGuy2

Ok, after nearly coming to the point of tears over this, I went to bed.
In the morning I thought: I'm using the identical matrices that I used in OpenGL 1.1, so the matrices must be correct. The only difference between my original program and this one is the multiplication of the modelView and projection matrices before sending the result to the shader (because the shader only has one modelViewProjectionMatrix).
And the only thing you can change in a multiplication is the order, so I flipped the order of multiplication:
modelViewProjectionMatrix = Matrix.multiply(projectionMatrix, modelViewMatrix, modelViewProjectionMatrix);


And it worked! I've got a working camera :)
In hindsight the order makes sense: matrix multiplication is not commutative, and since each vertex is transformed as projection * modelView * vertex, the combined MVP matrix has to be built as projection * modelView, not the reverse.
I hope this saves someone else all the trouble I went through.
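
And for anyone who wants to skip the GL readback entirely, here is a sketch of building the same frustum matrix directly in Java (the glFrustum man-page formula, transposed into the row-major layout used above):

	public static Matrix getFrustumMatrix(float l, float r, float b, float t, float n, float f) {
		Matrix result = new Matrix();
		float[] m = result.m;            // row-major: m[row * 4 + column]
		java.util.Arrays.fill(m, 0f);    // start from all zeros

		m[0]  = (2 * n) / (r - l);       // x scale
		m[2]  = (r + l) / (r - l);       // x offset for asymmetric frustums
		m[5]  = (2 * n) / (t - b);       // y scale
		m[6]  = (t + b) / (t - b);       // y offset for asymmetric frustums
		m[10] = -(f + n) / (f - n);      // remap z into clip space
		m[11] = -(2 * f * n) / (f - n);
		m[14] = -1f;                     // w = -z drives the perspective divide
		return result;
	}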