anti threading..

Started by dronus, January 25, 2008, 20:31:36


dronus

Umm, it's obvious GL doesn't support multithreading in a single state machine.
But why is it impossible to cleanly call OpenGL commands in order from different threads? If I try, I get a mix of Java and also native exceptions... too bad. This is a pain with JMF, for example, as many JMF components use different threads. It's also impossible to keep things clean with finalizers. Currently I destruct all GL textures, shaders etc. by hand in a very medieval C manner. I think it should be the job of the programmer to keep GL things in order, not of exceptions :-).

Matzon

afaik, it has to do with thread local storage being used, which means that depending on which thread calls the GL methods, different (and wrong) values will be in the thread's associated context and TLS.

dronus

Ok, guess I was wrong... there IS a possibility to do threads; GLContext enables multithreading... almost. It seems buggy. While most GL ops work perfectly, creating a renderable texture results in this:

Exception in thread "Thread-3" org.lwjgl.opengl.OpenGLException: Cannot use Buffers when Pixel Unpack Buffer Object is enabled
	at org.lwjgl.opengl.GLChecks.ensureUnpackPBOdisabled(GLChecks.java:120)
	at org.lwjgl.opengl.GL11.glTexImage2D(GL11.java:2699)
	at org.dronus.gl.RenderTexture.<init>(RenderTexture.java:44)
	at org.dronus.gl.ShaderStage.<init>(ShaderStage.java:33)
	at Tests$1.run(Tests.java:79)


Threading was just used for testing purposes; the GL commands were executed in strict order.

Any workarounds?

ndhb

Implement a queue in your main OpenGL thread that accepts GL commands from other threads and empties (executes) the commands when convenient (e.g. in the render method). It is robust and not hard to implement.
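The pattern ndhb describes can be sketched in plain Java. This is a minimal illustration, not LWJGL API: the class name GLCommandQueue and method names submit/drainPending are made up for the example, and the Runnables would wrap actual GL calls in a real program.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

/** Illustrative sketch: worker threads enqueue GL work as Runnables;
    only the thread that owns the GL context ever executes them. */
class GLCommandQueue {
    private final Queue<Runnable> pending = new ConcurrentLinkedQueue<Runnable>();

    /** May be called from any thread. */
    public void submit(Runnable glCommand) {
        pending.add(glCommand);
    }

    /** Called once per frame from the render (GL) thread,
        e.g. at the start of the render method. */
    public void drainPending() {
        Runnable cmd;
        while ((cmd = pending.poll()) != null) {
            cmd.run(); // e.g. a texture upload wrapped in a Runnable
        }
    }
}
```

Since ConcurrentLinkedQueue is lock-free, submitting threads never block the render thread; the trade-off is that each command becomes an object, which is the allocation cost Fool Running raises later in the thread.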

Fool Running

QuoteException in thread "Thread-3" org.lwjgl.opengl.OpenGLException: Cannot use Buffers when Pixel Unpack Buffer Object is enabled
That error doesn't appear to have anything to do with threading. Does the program run correctly if you aren't using multiple threads?
Programmers will, one day, rule the world... and the world won't notice until it's too late. Just testing the marquee option ;D

dronus

Quote from: Fool Running on February 04, 2008, 14:27:04
QuoteException in thread "Thread-3" org.lwjgl.opengl.OpenGLException: Cannot use Buffers when Pixel Unpack Buffer Object is enabled
That error doesn't appear to have anything to do with threading. Does the program run correctly if you aren't using multiple threads?
Yes, exactly. It occurs every time glTexImage is used in a thread other than the one that called Display.create. With one thread only, everything runs fine. With multiple threads and careful GLContext.useContext(...) calls, it runs fine up to the glTexImage call in any thread that didn't create the display.

the2bears

I ran into odd texture stuff too, while dealing with multiple threads.  I'm crafting a bit of an OSGi game framework, and I'd created a thread to do all my OpenGL (well, especially Display) related calls.  However, I had one object that got created in a different thread and it just happened to create a texture.  No errors thrown, but the texture was rather empty.

I learned to make sure I put all the calls into one thread, something I had missed despite originally intending to do so.

Bill
the2bears - the indie shmup blog

wolf_m

On Windows, you would use WGL for sharing textures and the like across different contexts; on Mac OSX, there's AGL and on Linux and similar, GLX. The method for sharing textures in WGL is called wglShareLists() because it historically was used for sharing DisplayLists.

As long as WGL, AGL and/or GLX aren't implemented, preferably transparently, you will run into issues like not being able to share textures across different threads. More importantly, the outcomes of your multithreaded GL code will vary on different platforms and drivers. So don't do it.

dronus

Quote from: wolf_m on February 05, 2008, 23:26:08
More importantly, the outcomes of your multithreaded GL code will vary on different platforms and drivers. So don't do it.
Is there any strong specification on this? I assumed the only valid handle to OpenGL data is the "context", and that handing this over would allow safe access within one application. But I was wrong... why?

Fool Running

QuoteI assumed the only valid handle to OpenGL data is the "context", and that handing this over would allow safe access within one application.
That is correct, but context switches are usually time consuming, so it's generally best not to do it that way anyway.
QuoteBut I was wrong... why?
Best guess: Driver writers know context switches are expensive and don't program good handling for it ;D

Quote from: ndhbImplement a queue in your main OpenGL thread, that accepts GL commands from other threads and empties (executes) the commands when convenient (e.g. render method).
How do you do that without inducing massive copies of game data (like transferring particle information from one thread to the rendering thread) or massive object creation (from creating a new object for each command in the queue)?

dronus

Quote from: Fool Running on February 08, 2008, 15:15:15
That is correct, but context switches are usually time consuming, so it's generally best not to do it that way anyway.

Sorry... I don't get it. As threads share the same heap memory, why does it matter at all which thread tries to invoke an OpenGL function? I don't even know why different threads have to set the context, as the one static OpenGL instance shouldn't even notice who calls its commands. On the Java side, threads can only be distinguished by explicitly getting the current runtime Thread object...

VeAr

Quote from: Matzon on January 25, 2008, 21:52:41
afaik, it has to do with thread local storage being used, which means that depending on which thread calls the GL methods, different (and wrong) values will be in the thread's associated context and TLS.

Quote from: dronus on February 08, 2008, 21:25:31
Quote from: Fool Running on February 08, 2008, 15:15:15
That is correct, but context switches are usually time consuming, so it's generally best not to do it that way anyway.

Sorry... I don't get it. As threads share the same heap memory, why does it matter at all which thread tries to invoke an OpenGL function? I don't even know why different threads have to set the context, as the one static OpenGL instance shouldn't even notice who calls its commands. On the Java side, threads can only be distinguished by explicitly getting the current runtime Thread object...


Sorry for necroing, but this question was not answered, and the issue has bugged me for a long time too.

Why is it necessary for LWJGL to control thread access to OpenGL? Why can't this control be delegated to the application? If I am sure in my application that a block of code has exclusive access to GL, and other threads will not call GL commands at that time, why is GL access still restricted by LWJGL? As dronus said, and this is what I know too, Java threads share the same address space, and the GL function pointers should be the same for all of them. Is this not the case, or is it perhaps platform dependent?

The problem with queues is that Java likes to keep thread-local copies of data, and transferring data from one thread to the rendering thread is problematic. Any chance of adding the option to build LWJGL with a static context?

Not related to threading and this topic, but another build option could be to omit the code that checks whether ByteBuffers are direct. I don't know how much it would affect performance (probably not much), but it's again something the application can do more efficiently.

Matthias

As a GL context can only be used by one thread at a time, you need to transfer the context to the thread that should do the rendering. This requires that the original thread release the context and that the new thread make it current.

LWJGL provides functions for this - see the Display class for details.

So LWJGL does not enforce threading onto your application - OpenGL does this.

There is also the possibility of creating a shared context, which shares "heavy" objects like textures, VBOs and display lists, but not container objects like FBOs. The shared context can then be used in another thread concurrently with the main context. But be aware that you might discover driver bugs much more easily then :) This is done using the Drawable class and methods in the Display and Pbuffer classes.
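The hand-off Matthias describes boils down to single-owner discipline: exactly one thread may hold the context at a time. A minimal sketch of that discipline in plain Java follows; the ContextToken class is illustrative (not LWJGL API), and the places where LWJGL's Display.releaseContext() and Display.makeCurrent() would be called are marked in comments, since those need a live Display.

```java
/** Illustrative sketch of single-owner context hand-off:
    a thread must acquire the token before issuing GL calls
    and release it before any other thread may acquire it. */
class ContextToken {
    private Thread owner; // null while no thread holds the context

    public synchronized void acquire() throws InterruptedException {
        while (owner != null) {
            wait(); // block until the current owner releases
        }
        owner = Thread.currentThread();
        // With LWJGL, this is where Display.makeCurrent() would go.
    }

    public synchronized void release() {
        if (owner != Thread.currentThread()) {
            throw new IllegalStateException("not the context owner");
        }
        // With LWJGL, Display.releaseContext() would go here,
        // before the context is handed to the next thread.
        owner = null;
        notifyAll(); // wake waiting threads so one can acquire
    }

    public synchronized boolean isHeldByCurrentThread() {
        return owner == Thread.currentThread();
    }
}
```

The point of the sketch is the ordering: release must complete in the old thread before acquire succeeds in the new one, which is exactly the release/makeCurrent sequence Matthias describes.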

Ciao Matthias

VeAr

Quote from: Matthias on September 25, 2008, 21:40:39
As a GL context can only be used by one thread at a time, you need to transfer the context to the thread that should do the rendering. This requires that the original thread release the context and that the new thread make it current.

LWJGL provides functions for this - see the Display class for details.

So LWJGL does not enforce threading onto your application - OpenGL does this.

There is also the possibility of creating a shared context, which shares "heavy" objects like textures, VBOs and display lists, but not container objects like FBOs. The shared context can then be used in another thread concurrently with the main context. But be aware that you might discover driver bugs much more easily then :) This is done using the Drawable class and methods in the Display and Pbuffer classes.

Ciao Matthias


Thanks for the reply. I searched the net some more and found on devmasters that access can be transferred between threads, and someone even described the same scenario I want. I now have a guess why single-thread access is enforced by OpenGL: it probably uses thread-local access similar to LWJGL itself.

The only question left is: why isn't this working properly in my application? Gonna check it again.