Unsigned Int and Element Array

Started by jon291, October 05, 2010, 08:03:14


jon291

Hi, I'm in the process of making the jump from "old" to "new" OpenGL, and I'm working on vertex buffer objects at the moment. I wasn't able to find a straight answer to this in the existing forum threads, so I'd appreciate your help.

Let's assume I successfully create a buffer full of vertex attribute data and then try to make a buffer full of indices (an element array). In doing so, I would write the indices of the vertices to the buffer (something like 0, 1, 2, ...). Those are regular Java signed ints. One of the arguments to glDrawElements() is the "type" of the values in the index array, and GL_UNSIGNED_INT is the only int option (let's ignore bytes and shorts for this example). So do I run into a problem here, given that OpenGL expects unsigned ints and I just supplied a bunch of signed ints through my element array buffer? None of the VBO posts I've read mention anything like this, so I'm wondering whether I'm thinking about this correctly.

If my concerns are valid, I would need to subtract an offset from my signed index so that its bit pattern matches what the unsigned index would be, right?
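
Here's roughly what I have in mind (just a sketch, assuming LWJGL's BufferUtils and static imports from GL11/GL15; the index values are made up):

// Fill an element array buffer with plain Java ints.
IntBuffer indices = BufferUtils.createIntBuffer(6);
indices.put(new int[] { 0, 1, 2, 2, 1, 3 }); // two triangles sharing an edge
indices.flip();

int ibo = glGenBuffers();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices, GL_STATIC_DRAW);

// The type argument says GL_UNSIGNED_INT, but the buffer was filled with signed ints.
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);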

Thanks

spasi

For index values up to Integer.MAX_VALUE, you can simply use the int itself. For higher values, compute the index as a long and cast it to int. In both cases the bit pattern matches what GL_UNSIGNED_INT expects:

long a = Integer.MAX_VALUE; // 2147483647
long b = a + 1;             // 2147483648, one more than a signed int can hold

int c = (int)a;             // still 2147483647
int d = (int)b;             // prints as -2147483648, but the bit pattern is 0x80000000

long e = d & 0xFFFFFFFFL;   // reinterpret those bits as unsigned: 2147483648 again

System.out.println("a = " + a);
System.out.println("b = " + b);
System.out.println("c = " + c);
System.out.println("d = " + d);
System.out.println("e = " + e);


Also, try to use GL_UNSIGNED_SHORT when possible (i.e. when the model has fewer than 64k vertices); all GPUs handle shorts just fine and you save bandwidth. The same principle applies to shorts: for index values above Short.MAX_VALUE, compute the index as an int and cast it to short.
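
For example, something like this (a rough sketch, again assuming LWJGL's BufferUtils and static imports from GL11/GL15; the index values are made up):

// Indices above Short.MAX_VALUE (32767) but below 65536 still fit in GL_UNSIGNED_SHORT;
// the (short) cast keeps the low 16 bits, even though the Java short prints as negative.
int[] indices = { 0, 1, 2, 40000, 40001, 40002 };
ShortBuffer elementBuffer = BufferUtils.createShortBuffer(indices.length);
for ( int i : indices )
	elementBuffer.put((short)i); // narrowing cast, the bit pattern is what the GPU reads
elementBuffer.flip();

int ibo = glGenBuffers();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, elementBuffer, GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLES, indices.length, GL_UNSIGNED_SHORT, 0);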