Not quite understanding how to use VBOs.

Started by Epicbo, April 16, 2012, 12:00:17

Previous topic - Next topic

Epicbo

I've finally decided to stop using glBegin/glEnd and start using modern OpenGL. Right now I'm trying to implement VBOs, but I've run into some problems, mainly because I don't understand how to use them correctly. Most tutorials only show how to draw a single polygon with them, which is definitely not enough for my program.

Here's how I use them right now. I initialize the VBO at startup by calling glGenBuffers(). I can't render my whole world at once because it's too big, so I made a method called addData() which stores vertex, color, normal and texcoord data in a list. Then I have another method, render(), which copies all the data from the list into a FloatBuffer and then clears the list. Finally I upload the data in the FloatBuffer to the VBO and render it by calling glDrawArrays().
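For what it's worth, the pattern you describe could be sketched roughly like this. The class and method names here are mine (loosely following your addData()/render() description), not from any library, and the actual GL upload/draw calls are only shown as comments since they need a live context:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.util.ArrayList;
import java.util.List;

// Accumulates interleaved vertex data on the CPU, then hands it all
// off in one buffer per frame (hypothetical sketch, names are mine).
class VertexBatch {
    private final List<float[]> vertices = new ArrayList<>();
    private final int floatsPerVertex;

    VertexBatch(int floatsPerVertex) {
        this.floatsPerVertex = floatsPerVertex;
    }

    // Store one vertex (e.g. position + color + normal + texcoord, interleaved).
    void addData(float... vertex) {
        if (vertex.length != floatsPerVertex)
            throw new IllegalArgumentException("expected " + floatsPerVertex + " floats");
        vertices.add(vertex);
    }

    // Copy everything into a direct FloatBuffer and clear the list.
    FloatBuffer flush() {
        FloatBuffer buf = ByteBuffer
                .allocateDirect(vertices.size() * floatsPerVertex * Float.BYTES)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        for (float[] v : vertices) buf.put(v);
        buf.flip();
        vertices.clear();
        // The GL side would then be roughly:
        // glBindBuffer(GL_ARRAY_BUFFER, vboId);
        // glBufferData(GL_ARRAY_BUFFER, buf, GL_STREAM_DRAW);
        // glDrawArrays(GL_TRIANGLES, 0, vertexCount);
        return buf;
    }
}
```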

Is this a good way to do it? Either way, I did something wrong. I tried rendering a simple triangle with this method, and it worked. However, something is definitely wrong, because the frame rate is about 8000, which is fine I guess, but I get a few lag spikes every second. The time between frames looks something like this:

0.1 ms
0.1 ms
0.1 ms
25 ms
0.1 ms
0.1 ms
200 ms
0.1 ms
...

and so on. Any idea what could be causing this?

Another question I have is about transformations. Right now I'm using a lot of glPushMatrix(), glTranslate(), glRotate(), glPopMatrix(), etc.,
and this works because I use immediate mode rendering. But when I switch to VBOs, I won't be rendering in between glPushMatrix() and glPopMatrix(); I'll only be storing the vertex data in a list. This will cause everything to be incorrectly transformed when I finally render everything at once. Is it possible to manually multiply the vertices by the current transform matrix before storing the data?
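(For reference, multiplying a vertex by the current 4×4 matrix on the CPU is just a matrix-vector multiply. A minimal sketch, assuming the column-major layout OpenGL uses; the class name is made up, and reading the matrix back via glGetFloat(GL_MODELVIEW_MATRIX, ...) is only mentioned in a comment since it needs a GL context:)

```java
// Hypothetical helper; the name is mine, not from any library.
class Transform {
    // Multiply the point (x, y, z, 1) by a 4x4 matrix stored
    // column-major, the same layout OpenGL uses. You could fill m
    // with glGetFloat(GL_MODELVIEW_MATRIX, buffer) before storing
    // each vertex in the list.
    static float[] apply(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[4] * y + m[8]  * z + m[12],
            m[1] * x + m[5] * y + m[9]  * z + m[13],
            m[2] * x + m[6] * y + m[10] * z + m[14],
        };
    }
}
```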

That's all I want for now. Sorry for the huge post. Any help would be greatly appreciated.  :)

Fool Running

There are a couple of possibilities for the lag spikes. First, I don't know what timer you are using to get those values, but there aren't very many timer implementations that are accurate to the sub-millisecond level, so some of that might just be inaccuracy in the timer. Second, the larger spikes are probably garbage collection caused by the creation of objects in your rendering loop that you throw away the next rendering pass.
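One common fix along those lines is to allocate the direct FloatBuffer once at startup (sized for the worst case) and reuse it every frame, instead of creating a new one in render(). A sketch, with a made-up class name and an arbitrary capacity:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Hypothetical sketch: allocate the direct buffer once; direct buffers
// are expensive to create, and discarded ones linger until the
// garbage collector runs, which is a classic source of GC spikes.
class ScratchBuffer {
    static final FloatBuffer scratch = ByteBuffer
            .allocateDirect(1024 * Float.BYTES)   // pick a worst-case size
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();

    // Each frame: clear, refill, flip. No new objects are created.
    static FloatBuffer fill(float[] frameData) {
        scratch.clear();        // resets position/limit, keeps the memory
        scratch.put(frameData);
        scratch.flip();
        return scratch;
    }
}
```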
Programmers will, one day, rule the world... and the world won't notice until its too late.Just testing the marquee option ;D

Epicbo

Actually, those values aren't the real ones; they're just approximations I made. I can only measure the length of the spikes, which seem to range from 20 to 150 ms. Anyway, you're probably right. The spikes seem to disappear if I use Display.sync(), even with a high number such as 5000.

RiverC

Epicbo: Then I think you need to do the transformation in the vertex shader. The best way I can think of to do that is to compute the quaternion for the transformation you're doing and pass it to the shaders as a uniform every frame. Right now I have it basically do all that math for every vertex, but (pretty much) only the final multiplication is needed.

Epicbo

I understand, thanks!

I've been reading some more tutorials, and there seem to be two ways to use VBOs. What's the difference between

glEnableVertexAttribArray()
glVertexAttribPointer()

and

glEnableClientState()
glVertexPointer()

Some tutorials teach the first one and some teach the second one, but is there any difference? If so, which one should I use?

RiverC

The first call turns on the vertex arrays themselves; the second call actually sets the GL state machine to read at a given point in the bound buffer.