Window crashes on GL11.glClear()

Started by irongiant, January 27, 2007, 02:17:22

Previous topic - Next topic

irongiant

I'm making a Java game engine, for personal use, using Java2D. Now I need some more performance, so I'm adding an LWJGL implementation. So far I've set up the basics, but when it comes to rendering in the main loop I get this exception:

Exception in thread "GraphicsEngine" java.lang.NullPointerException
	at org.lwjgl.opengl.GL11.glClear(GL11.java:576)
	at APE.engines.graphics.OpenGLGraphicsEngine.render(OpenGLGraphicsEngine.java:37)
	at APE.engines.graphics.GraphicsEngine.run(GraphicsEngine.java:52)
	at java.lang.Thread.run(Thread.java:595)


The OpenGL graphics engine has these relevant methods:
public void start() {
    super.start();
    GL11.glEnable(GL11.GL_TEXTURE_2D);
    GL11.glDisable(GL11.GL_DEPTH_TEST);
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glLoadIdentity();
    GLU.gluOrtho2D(0, getGraphicsMode().getWidth(), getGraphicsMode().getHeight(), 0);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    GL11.glLoadIdentity();
}

public void render() {
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    GL11.glLoadIdentity();
    ((OpenGLGameContext) getGameContext()).render();
    Display.update();
}


The problem is in "GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);". The Display is set up, and everything else seems right.

I'm new to LWJGL and I've been reading tutorials and documentation. I also searched the forum for related issues, but without success, so I apologize if this has already been answered.

darkprophet

Two things:

1) Have you successfully created a Display?

2) The thread that created the Display (i.e. called Display.create()) owns what's called a "rendering context". Only that thread can make GL calls. Are you sure you are issuing your GL11 calls from the same thread that created the Display?
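To make the second point concrete, here is a minimal plain-Java sketch of the ownership rule (the class ContextGuard and its method names are hypothetical, not LWJGL API): the context belongs to the thread that created it, and calls from any other thread fail.

```java
// Hypothetical sketch of the "rendering context" ownership rule.
// LWJGL binds the GL context to the thread that called Display.create();
// GL calls from any other thread find no current context, which in
// LWJGL 1.x can surface as a NullPointerException inside GL11.
class ContextGuard {
    // The thread that "created the context".
    private final Thread owner = Thread.currentThread();

    // Stand-in for any GL call: legal only on the owning thread.
    void check() {
        if (Thread.currentThread() != owner) {
            throw new IllegalStateException("GL call from a non-context thread");
        }
    }
}
```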

DP :)

irongiant

The display is successfully created (so it seems), as a window with a black background shows up. It disappears after 1-2 seconds or so...
Actually, my OpenGL calls are made from a thread other than the one where the action begins; notice the call to super.start() in the example code of my previous post. That's where I create the animation thread responsible for the subsequent OpenGL calls.
Guess I'll have to work around this...

By the way, since I only want 2D rendering, do I really need the depth buffer?
At the moment I'm reading the Red Book; it's quite a load...

ndhb

No, you shouldn't need the depth buffer. Disable the depth test and depth mask with:

GL11.glDisable(GL11.GL_DEPTH_TEST);
GL11.glDepthMask(false);


... and obviously you don't need to clear the depth buffer either:

GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);


As DarkProphet said, you only want one thread running OpenGL code (call it the RenderThread). If you really need to change, add, or remove things from other threads, you can implement a "message queue" in the RenderThread that empties any pending rendering requests at convenient intervals (every 10 frames, perhaps, or something similar).
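A minimal sketch of that message queue idea (class and method names are mine, not from any library; only java.util.concurrent is assumed):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: other threads enqueue rendering requests as
// Runnables; only the render thread (which owns the GL context)
// ever executes them.
class RenderQueue {
    private final ConcurrentLinkedQueue<Runnable> pending = new ConcurrentLinkedQueue<>();

    // Safe to call from any thread.
    void submit(Runnable glWork) {
        pending.add(glWork);
    }

    // Called from the render thread's main loop, e.g. once per frame
    // or every N frames.
    void drain() {
        Runnable task;
        while ((task = pending.poll()) != null) {
            task.run(); // runs on the context-owning thread
        }
    }
}
```

Inside the submitted Runnables you would put the GL work (creating textures, deleting display lists, etc.) that the other threads must not do themselves.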

Nicolai de Haan Brøgger

irongiant

Thanks for helping, guys! Everything works fine now.
ndhb, I don't understand that RenderThread, add, and remove thing you wrote in your last post. Can you point me to some documentation to learn about it?

ndhb

I don't know where to find documentation for it; it's mostly an idea/idiom for delegating responsibility to another thread.

Say you have your "animation thread that makes subsequent calls to OpenGL". In your animation thread you might want to add new objects to be rendered by OpenGL, remove others at certain times, or change colors, vertex data, textures, etc., depending on states in your game engine. The problem is that LWJGL is fundamentally single threaded, working with a single graphics context, so you have this issue to solve. One solution is to release and acquire the rendering context every time you want to change what's being rendered: release it from your main rendering loop, then acquire it in another thread that wants to perform OpenGL calls. Another solution, which I suggest, is to create an interface to your rendering thread that allows arbitrary threads to submit updates to what is being rendered.

Imagine you have a "RenderThread" which contains state for everything being rendered. We can store this state information in an object and call it a "RenderBin". The RenderThread works by iterating over this RenderBin in its main loop. You write a Java interface, call it "Renderable", which specifies that everything that can be rendered has methods like "initGL", "renderGL" and "destroyGL".

The RenderBin object has some internal data structure, like a list of renderables (e.g. List<Renderable>), and methods for adding and removing Renderable objects from that data structure (e.g. RenderBin.add(Renderable r) and RenderBin.remove(Renderable r)). In the code for RenderBin.add, you can assume responsibility for calling "initGL" on the argument r (allocating necessary texture names or buffer data, loading textures, etc.). Likewise, in RenderBin.remove you can make sure that "destroyGL" gets called (cleaning up any allocated resources like texture names or buffer data).

In the main loop of RenderThread you don't have any specific OpenGL code (except maybe clearing the screen with glClear). All you do is run through the RenderBin object, which contains state for everything that is to be drawn. That can be as simple as retrieving the Iterator<Renderable> from its internal data structure; knowing that each object from the iterator complies with the Renderable interface, you can call "renderGL" on each.
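The design above can be sketched in a few lines (a minimal version; the interface and class names follow the post, but every detail here is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Everything that can be drawn implements this lifecycle.
interface Renderable {
    void initGL();    // allocate GL resources (textures, buffers, ...)
    void renderGL();  // issue the draw calls
    void destroyGL(); // free the GL resources again
}

// Holds the state for everything currently being rendered.
class RenderBin {
    private final List<Renderable> items = new ArrayList<>();

    // add() takes responsibility for GL setup of the new object.
    synchronized void add(Renderable r) {
        r.initGL();
        items.add(r);
    }

    // remove() takes responsibility for GL cleanup.
    synchronized void remove(Renderable r) {
        if (items.remove(r)) {
            r.destroyGL();
        }
    }

    // Called from the render thread's main loop each frame.
    synchronized void renderAll() {
        for (Renderable r : items) {
            r.renderGL();
        }
    }
}
```

In the real engine the initGL/renderGL/destroyGL bodies would contain GL11 calls, which is why the RenderBin should only ever be driven from the context-owning thread.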

To use this construction you might have a "SpriteAnimationThread" that runs in parallel with the RenderThread. You probably need to synchronize access to the RenderThread's state, e.g. "synchronized RenderBin.add(Renderable r)" and "synchronized RenderBin.remove(Renderable r)"; that depends on what internal data structure you choose in RenderBin. When the SpriteAnimationThread in your game engine wants to draw a new sprite, it can create an instance of a Sprite class which implements Renderable (the sprite can be rendered). Once constructed, it submits this instance via RenderThread.getRenderBin().add(newSprite), and when RenderThread iterates through the RenderBin on the next pass of its main loop, it will call the renderGL method of your new sprite object and the sprite will be drawn. The Sprite object probably contains state for its location on screen, whether it's visible, its colour and so on, which you can change in your game engine and which its renderGL method addresses.

The net result is that you move the responsibility of performing OpenGL calls from "SpriteAnimationThread" to "RenderThread". Your engine's rendering runs in a single thread, while the engine itself runs in a number of other threads; some of these might do audio, heavy computation, or god knows what. This design might be faster but will probably be slightly slower; either way, it's more flexible than putting your entire game engine into one gigantic thread and maintaining all the code there.

When this is done, you can start extending the rendering interface. For instance, you probably want an ordering on everything that is rendered. Maybe the Renderable interface gets a member for its rendering priority, say "int renderPriority", which you use when iterating. Perhaps a background image is a Renderable with renderPriority == 0 and should be drawn before the sprites that have renderPriority == 5; you can sort your RenderBin on this value. Maybe you decide you need transparent objects and want to draw opaque objects first so the transparency looks correct; renderPriority works for that too. Maybe you decide some objects should be drawn less frequently for performance reasons, and you construct another RenderBin for those objects. There are various ways to extend this design. Hope this clears up the idea; otherwise, make another post.
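The priority ordering can be sketched like this (a self-contained toy version; Drawable and PriorityOrdering are illustrative names, not from the post or any library):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Toy stand-in for a Renderable with a rendering priority:
// lower values draw first (background 0, sprites 5, overlays later).
class Drawable {
    final String name;
    final int renderPriority;

    Drawable(String name, int renderPriority) {
        this.name = name;
        this.renderPriority = renderPriority;
    }
}

class PriorityOrdering {
    // Return a copy of the bin contents sorted by renderPriority,
    // i.e. the order in which the render loop would draw them.
    static List<Drawable> sorted(List<Drawable> bin) {
        List<Drawable> out = new ArrayList<>(bin);
        out.sort(Comparator.comparingInt((Drawable d) -> d.renderPriority));
        return out;
    }
}
```

In a real RenderBin you could keep the list sorted on insertion instead of re-sorting every frame; the effect is the same.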

Nicolai de Haan Brøgger

irongiant

I'm quite impressed by your post!
I'm not sure I understood everything, but I'll only know for sure when I implement the solution you gave me. I'll give it a try; it's going to take me a while, since I'll probably need to change the current engine implementation.
Thank you, I may need your help again some time!

spasi

Good post by Nicolai, I only have a tiny correction to make.

Quote from ndhb: "The problem is that LWJGL is fundamentally single threaded, working with a single graphics context."

It isn't LWJGL's design that forces single threaded rendering. It's the way OpenGL drivers work in general. For any GL context there can be only one thread that renders to it.

Modern drivers do have support for multithreading, but it's currently something internal, used for sending commands/data to the GPU.