Hi everyone,
While I was profiling my application, I noticed that a large part of the CPU time was spent checking for GL errors. Here's an image of the top of the list:
(http://www.xyzw.nl/lwjglforum/profiler.png)
The stack traces show that checkGLError(), which calls glGetError(), is invoked after every OpenGL command.
Together, glGetError() and checkGLError() take more than 10% of the CPU time in my test application. That's fine during development, but it would be nice if I could disable the error checking in a distribution build: it would mean a significant performance gain.
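For context, every wrapper method in LWJGL effectively does something like the sketch below (a simplified illustration of the pattern, not the actual generated code; glBindTextureChecked is a made-up name). The extra glGetError() after each call is what shows up in the profiler, since it forces a synchronous query into the driver:

import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.OpenGLException;

// Simplified sketch of the per-call pattern LWJGL's checked wrappers follow.
public final class GLWrapperSketch {

    // Hypothetical wrapper: the real methods are generated by LWJGL.
    public static void glBindTextureChecked(int target, int texture) {
        GL11.glBindTexture(target, texture); // the actual GL call
        checkGLError();                      // the extra check after every call
    }

    private static void checkGLError() {
        int err = GL11.glGetError(); // synchronous round-trip to the driver
        if (err != GL11.GL_NO_ERROR) {
            throw new OpenGLException(err);
        }
    }
}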
When searching the forum, I found the following (old) thread, which describes plans for such functionality:
http://lwjgl.org/forum/index.php/topic,369.msg2870.html#msg2870
However, the functionality discussed in that thread is not available in the current LWJGL version, and I couldn't find anything similar elsewhere in the library. Does anybody know if it is possible to disable error checking?
Rene
Now that was a quick reply :)
I've set up two different configurations now, one release and one debug. Everything works perfectly, and the release config gains about 5% in performance. Thanks for your help.
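In case anyone else runs into this: the release configuration just sets a JVM system property before any LWJGL class loads (a minimal sketch, assuming the org.lwjgl.util.NoChecks property; Game is a stand-in for your own entry point):

// Release launcher: disables LWJGL's per-call glGetError() checks.
// Equivalent to starting the JVM with -Dorg.lwjgl.util.NoChecks=true.
public final class ReleaseLauncher {
    public static void main(String[] args) throws Exception {
        // Must run before any LWJGL class is loaded, because LWJGL reads
        // the property once in a static initializer.
        System.setProperty("org.lwjgl.util.NoChecks", "true");
        Game.main(args); // hypothetical: your real main class
    }
}

The debug configuration simply omits the property, so all error checks stay enabled during development.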
Rene