Hi all,
I've been trying to get interleaved arrays to work with VBOs (I have been successful), but the stride parameter in glVertexPointer takes the number of bytes between consecutive chunks of data.
Can I just assume the stride in bytes is numOfFloatsBetweenChunks * 8? (8 bytes per float.) That does work, but is it guaranteed to be valid on all supported platforms?
Quote from: "darkprophet"
Hi all,
I've been trying to get interleaved arrays to work with VBOs (I have been successful), but the stride parameter in glVertexPointer takes the number of bytes between consecutive chunks of data.
Can I just assume the stride in bytes is numOfFloatsBetweenChunks * 8? (8 bytes per float.) That does work, but is it guaranteed to be valid on all supported platforms?
Floats are 32 bits / 4 bytes, not 8. In Java that's guaranteed by the language spec, so it holds on 64-bit machines too; Java primitive sizes don't change with the platform's word size.
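Since the JVM pins down primitive sizes, this is easy to check directly (Float.BYTES needs Java 8+):

```java
public class FloatSize {
    public static void main(String[] args) {
        // Java primitives have fixed sizes regardless of the host platform:
        // a float is always 32 bits / 4 bytes, on 32-bit and 64-bit JVMs alike.
        System.out.println("Float.SIZE = " + Float.SIZE);   // prints 32 (bits)
        System.out.println("Float.BYTES = " + Float.BYTES); // prints 4 (bytes)
    }
}
```

So the byte stride for N floats is always N * 4, on every supported platform.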
In that case, does anybody know why the code below works?
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
GL11.glLoadIdentity(); // Reset The Current Modelview Matrix
GL11.glTranslatef(0f, 0.0f, -6.0f); // Move Into The Screen "6.0"
GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
ARBVertexBufferObject.glBindBufferARB(ARBVertexBufferObject.GL_ARRAY_BUFFER_ARB, vboID);
GL11.glVertexPointer(3, GL11.GL_FLOAT, 3 * 8, 0); // stride = 3 * 8 = 24 bytes
GL11.glEnableClientState(GL11.GL_NORMAL_ARRAY);
ARBVertexBufferObject.glBindBufferARB(ARBVertexBufferObject.GL_ARRAY_BUFFER_ARB, vboID);
GL11.glNormalPointer(GL11.GL_FLOAT, 3 * 8, 3); // stride 24 bytes, offset 3
ARBVertexBufferObject.glBindBufferARB(ARBVertexBufferObject.GL_ELEMENT_ARRAY_BUFFER_ARB, vboID2);
GL12.glDrawRangeElements(GL11.GL_TRIANGLES, 0, 2, 3, GL11.GL_UNSIGNED_INT, 0);
And this doesn't?
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
GL11.glLoadIdentity(); // Reset The Current Modelview Matrix
GL11.glTranslatef(0f, 0.0f, -6.0f); // Move Into The Screen "6.0"
GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
ARBVertexBufferObject.glBindBufferARB(ARBVertexBufferObject.GL_ARRAY_BUFFER_ARB, vboID);
GL11.glVertexPointer(3, GL11.GL_FLOAT, 3 * 4, 0); // stride = 3 * 4 = 12 bytes
GL11.glEnableClientState(GL11.GL_NORMAL_ARRAY);
ARBVertexBufferObject.glBindBufferARB(ARBVertexBufferObject.GL_ARRAY_BUFFER_ARB, vboID);
GL11.glNormalPointer(GL11.GL_FLOAT, 3 * 4, 3); // stride 12 bytes, offset 3
ARBVertexBufferObject.glBindBufferARB(ARBVertexBufferObject.GL_ELEMENT_ARRAY_BUFFER_ARB, vboID2);
GL12.glDrawRangeElements(GL11.GL_TRIANGLES, 0, 2, 3, GL11.GL_UNSIGNED_INT, 0);
Both render, but with 3*8 as the stride parameter the triangle is rendered fine, while with 3*4 the interleaved array attributes get mixed up (e.g. the x value from a normal gets used for a vertex), even though a 3-float stride * 4 bytes per float should be the correct conversion to bytes...
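For reference, the thread never shows how the interleaved buffer is filled, so here is a minimal sketch of one plausible layout, [vx vy vz nx ny nz] per vertex (the layout and the triangle data are assumptions, not taken from the original code). With that layout, one full record is 6 floats = 24 bytes and the normal starts 12 bytes into it:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class InterleavedLayout {
    public static void main(String[] args) {
        // Assumed layout: position (3 floats) then normal (3 floats) per vertex.
        float[][] positions = { {0, 1, 0}, {-1, -1, 0}, {1, -1, 0} };
        float[][] normals   = { {0, 0, 1}, {0, 0, 1},  {0, 0, 1} };

        // Direct, native-order buffer, as GL uploads usually want.
        FloatBuffer buf = ByteBuffer
                .allocateDirect(3 * 6 * Float.BYTES) // 3 vertices * 6 floats
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        for (int i = 0; i < 3; i++) {
            buf.put(positions[i]).put(normals[i]); // interleave per vertex
        }
        buf.flip();

        // One record is 6 floats = 24 bytes; the normal starts 12 bytes in.
        System.out.println("recordBytes = " + 6 * Float.BYTES);
        System.out.println("normalOffsetBytes = " + 3 * Float.BYTES);
    }
}
```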
Thanks to elias and the awesome LWJGL IRC channel, we've cracked it:
GL11.glVertexPointer(3, GL11.GL_FLOAT, 3*4, 0); // stride in bytes
GL11.glNormalPointer(GL11.GL_FLOAT, 3*4, 3*4); // stride AND offset both in bytes
That works beautifully...
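One caveat worth spelling out: glVertexPointer and glNormalPointer take both stride and offset as byte counts, and with VBOs the "pointer" argument is a byte offset into the bound buffer. If the data really were fully interleaved as [vx vy vz nx ny nz] per vertex (an assumption, since the buffer-filling code isn't visible here), the stride would be the whole record size, (3 + 3) * 4 = 24 bytes. A sketch of the arithmetic, with the GL calls shown only as comments:

```java
public class StrideMath {
    // Assumed interleaved layout per vertex: 3 position floats + 3 normal floats.
    static final int FLOATS_PER_POSITION = 3;
    static final int FLOATS_PER_NORMAL = 3;

    public static void main(String[] args) {
        // Stride is the byte distance from one vertex's attribute to the next
        // vertex's same attribute, i.e. the size of one whole record.
        int stride = (FLOATS_PER_POSITION + FLOATS_PER_NORMAL) * Float.BYTES;
        int vertexOffset = 0;                              // positions start the record
        int normalOffset = FLOATS_PER_POSITION * Float.BYTES; // normals follow them

        System.out.println("stride = " + stride);             // prints 24
        System.out.println("vertexOffset = " + vertexOffset); // prints 0
        System.out.println("normalOffset = " + normalOffset); // prints 12

        // These values would then be passed straight through, e.g.:
        // GL11.glVertexPointer(3, GL11.GL_FLOAT, stride, vertexOffset);
        // GL11.glNormalPointer(GL11.GL_FLOAT, stride, normalOffset);
    }
}
```

Keeping all the byte arithmetic in named constants like this makes it much harder to accidentally mix float counts and byte counts, which is exactly the trap this thread fell into.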