Weird behaviors when using the JRE

Started by nbilyk, April 01, 2016, 15:40:02


nbilyk

Hmm, yeah, I was hoping it was a clue...

I made a plain Java version just to make sure it wasn't Kotlin messing with threading or anything like that.

https://gist.github.com/nbilyk/4d0af9fd885bca5f87e5aa1379d3706e

This example is very similar to the lwjgl example http://wiki.lwjgl.org/wiki/The_Quad_with_DrawElements

The main difference is the shader (initShader()). Without initShader() the example works; with it, only the first drawElements succeeds on the Intel card.

I'm pretty new to OpenGL, but I made the simplest shader I could, so I'm not sure what I'm doing wrong :(
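Roughly, initShader() does nothing more exotic than this kind of thing (a simplified sketch, not the exact gist code; the attribute name and GLSL version are illustrative):

```java
// Minimal pass-through shader program: one vertex attribute at location 0, flat color output.
String vertSrc =
    "#version 150 core\n" +
    "in vec3 position;\n" +
    "void main() { gl_Position = vec4(position, 1.0); }\n";
String fragSrc =
    "#version 150 core\n" +
    "out vec4 color;\n" +
    "void main() { color = vec4(1.0); }\n";

int vert = GL20.glCreateShader(GL20.GL_VERTEX_SHADER);
GL20.glShaderSource(vert, vertSrc);
GL20.glCompileShader(vert);

int frag = GL20.glCreateShader(GL20.GL_FRAGMENT_SHADER);
GL20.glShaderSource(frag, fragSrc);
GL20.glCompileShader(frag);

int program = GL20.glCreateProgram();
GL20.glAttachShader(program, vert);
GL20.glAttachShader(program, frag);
GL20.glBindAttribLocation(program, 0, "position"); // matches the index used in glVertexAttribPointer
GL20.glLinkProgram(program);
GL20.glUseProgram(program);
```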


Kai

And yet another bug in Intel drivers, which I could reproduce.
To fix it, do not call `GL20.glVertexAttribPointer(0, 3, GL11.GL_FLOAT, false, 0, 0);` in setupQuad(); instead, call it after `glBufferData(GL_ARRAY_BUFFER, ...)` in draw().
Usually, to update the data inside a buffer object you would use glBufferSubData(). glBufferData() is used to completely initialize the buffer object. However, it should not lead to the shader's generic vertex attribute losing the binding to that buffer object.
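In other words, something like this in draw() (a sketch; vboId and vertexBuffer stand in for whatever names your gist uses, and attribute index 0 is assumed):

```java
// Re-upload the vertex data, then re-specify the attribute binding so the
// shader's generic vertex attribute 0 points at this buffer object again.
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertexBuffer, GL15.GL_DYNAMIC_DRAW);
GL20.glVertexAttribPointer(0, 3, GL11.GL_FLOAT, false, 0, 0); // moved here from setupQuad()
GL20.glEnableVertexAttribArray(0);
```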

nbilyk

Kai, you can't see it, but I am bowing down to you right now. That solved it!

You've certainly earned the Nerdus imperius badge; I wouldn't have figured that one out in a million years...

Thank you. I think I spent about 30 hours this week troubleshooting this one... I can finally sleep at night once more.

nbilyk

I switched to glBufferSubData, and it works for the vertices, but not the indices (on the Intel GPU). It's as if glBufferSubData for GL_ELEMENT_ARRAY_BUFFER doesn't actually update the data on Intel.

https://gist.github.com/nbilyk/e41e1f5532bde4c6428db5a48681f6e9
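The update pattern is roughly this (a sketch with placeholder names, not the exact gist code):

```java
// Update both buffer objects in place with glBufferSubData, then draw.
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, 0, vertexBuffer);        // takes effect on Intel

GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, iboId);
GL15.glBufferSubData(GL15.GL_ELEMENT_ARRAY_BUFFER, 0, indexBuffer); // new indices seem to be ignored on Intel

GL11.glDrawElements(GL11.GL_TRIANGLES, indexCount, GL11.GL_UNSIGNED_SHORT, 0);
```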


Kai

How do you know it's the updating of the indices that is not working? After all, the indices should be more or less the same for each batch. And really, you should avoid batching single quads like that, because each batch update will flush the rendering pipeline so that the buffer object becomes available to the client for data submission. It is also quite hard to comprehend. :)
Simply allocate one buffer of exactly the right size, put all the vertices and indices in it, and submit that buffer to OpenGL once.
It will be ridiculously faster.
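Something along these lines (a sketch, assuming 3-component positions, unsigned short indices and quadCount quads):

```java
// Accumulate all quads on the CPU, then upload once and draw once.
FloatBuffer allVertices = BufferUtils.createFloatBuffer(quadCount * 4 * 3);
ShortBuffer allIndices  = BufferUtils.createShortBuffer(quadCount * 6);
for (int i = 0; i < quadCount; i++) {
    float x = i * 1.1f; // lay the quads out side by side (illustrative values)
    allVertices.put(new float[] {
        x,      0f, 0f,
        x + 1f, 0f, 0f,
        x + 1f, 1f, 0f,
        x,      1f, 0f });
    int b = i * 4; // first vertex of quad i
    allIndices.put(new short[] {
        (short) b, (short) (b + 1), (short) (b + 2),
        (short) (b + 2), (short) (b + 3), (short) b });
}
allVertices.flip();
allIndices.flip();

GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, allVertices, GL15.GL_STATIC_DRAW);
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, iboId);
GL15.glBufferData(GL15.GL_ELEMENT_ARRAY_BUFFER, allIndices, GL15.GL_STATIC_DRAW);

GL11.glDrawElements(GL11.GL_TRIANGLES, quadCount * 6, GL11.GL_UNSIGNED_SHORT, 0);
```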

nbilyk

Well, the example was just the smallest program I could create that reproduces the problem, not representative of what I'm trying to do.
I think it's the updating of the indices that isn't working because if I use the same indices every time, there's no problem; if I change the indices every flush using glBufferSubData, it fails on the Intel card. I'm not certain what about the update isn't working, but glBufferSubData works fine for the vertex buffer, just not the index buffer...

Kai

Hm. Can you maybe try to reproduce it with calls of glBufferSubData on some static, constant arrays?
Like glBufferSubData(constantVertices={...}), glBufferSubData(constantIndices={0, 1, 2}), render(),
and so forth?
Without any loops or array-to-buffer conversion utilities and such?
That would be very easy to reproduce then. And maybe we find some error while doing so, too.
And can you do it in Java, please? :)

nbilyk

Hehe, it's amazing how fast muscle memory switched from Java to Kotlin ;)

Java!
https://gist.github.com/nbilyk/189cd36f22cbb8582cff5853edcec6b1

Drawing two quads, where the second has its vertices/indices in a different order, but on the Intel GPU the updated indices from the second glBufferSubData don't take effect.

glBufferSubData for the vertices seems to work fine.


Kai

Seriously... what on earth is going on there. I can reproduce, too, on Intel.
I mean, I know Intel has lots of bugs, but I would not have expected to see so many of them in a row. :)
You can work around this by "orphaning" the element/index buffer, i.e. allocating it again with glBufferData (like you did in your earlier example with the vertex buffer, except that there it made the shader lose the attribute binding).
Before you do glBufferSubData(), just size the buffer again using glBufferData(GL_ELEMENT_ARRAY_BUFFER, ...) with the exact same size you used when you initially created it.
Normally, orphaning is a way to avoid flushing the render pipeline so that a new buffer can be filled and used immediately, as described in that article. But in your case you actually need it just to avoid the misbehaviour.
"Normally", a sane driver would make sure the buffer has finished rendering before a call to glBufferSubData() on that same buffer takes effect. Apparently, not so with Intel. You can see that this is the cause, and that the buffer is likely still in use or queued for rendering (it is only rendered at the buffer swap, so Thread.sleep() would not help), by simply inserting a GL11.glFinish() between your two draws (without orphaning the buffer via glBufferData).
Intel... Jesus...

So to summarize:
- Updating a GL_ARRAY_BUFFER with glBufferData does not work when using a shader, because it makes the shader lose the attribute binding to that buffer. So here, either use glBufferSubData or re-specify the binding via glVertexAttribPointer() after the upload.
- For a GL_ELEMENT_ARRAY_BUFFER the opposite is the case: here glBufferSubData does not work, and we must orphan the buffer with glBufferData, or flush and wait for all rendering to finish using glFinish().
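Concretely, the index-buffer workaround would look something like this (a sketch; iboId, newIndices and indexBufferSizeInBytes are placeholder names, and the size must match what the buffer was created with):

```java
// "Orphan" the element buffer: re-allocate it with glBufferData at the same size,
// then upload the new indices. This sidesteps the Intel glBufferSubData issue.
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, iboId);
GL15.glBufferData(GL15.GL_ELEMENT_ARRAY_BUFFER, indexBufferSizeInBytes, GL15.GL_DYNAMIC_DRAW);
GL15.glBufferSubData(GL15.GL_ELEMENT_ARRAY_BUFFER, 0, newIndices);

// Alternative (much slower, mainly a diagnostic): skip the orphaning and instead
// force the previous draw to complete before touching the buffer.
// GL11.glFinish();
```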

It will be interesting to see how mapped buffers (glMapBuffer/glUnmapBuffer) behave; they are also covered in that article.
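For reference, a mapped-buffer update would look roughly like this (an untested sketch; newIndices is a placeholder for the replacement index data):

```java
// Map the element buffer, write the new indices directly into it, then unmap.
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, iboId);
ByteBuffer mapped = GL15.glMapBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, GL15.GL_WRITE_ONLY, null);
if (mapped != null) {
    mapped.asShortBuffer().put(newIndices); // newIndices: short[] with the same element count as the buffer
    GL15.glUnmapBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER);
}
```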