GL_MAX_TEXTURE_SIZE

Started by renanse, September 10, 2004, 19:06:10

Previous topic - Next topic

renanse

Hey folks,

I'm getting an odd error in my application, which I just updated to lwjgl .92.  When building mipmaps in a certain part of the game, it crashes with an ArrayIndexOutOfBoundsException.  Obviously this wasn't the case before...  I did a lot of investigating and ended up discovering that when scaling for mipmaps, you check the size against the card's max texture size (you pull out this max size every time, and maybe that's not necessary more than once, but that's another topic...)

For some reason, in that part of the game, the result coming back from that query is 0.  Normally it is 2048.  According to the textbooks, this call should never return a value below 64.  If I code in a fallback (if the result is 0, use 64), the game runs well enough that I can see lwjgl's call to get the max texture size start returning 2048 again later on.
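A rough sketch of that fallback as a standalone helper (the class and method names here are made up for illustration; the real value would come from lwjgl's max-texture-size query):

```java
// Sketch of the workaround described above: if the driver query comes back
// with a bogus value, fall back to the minimum the spec guarantees (64).
public class MaxTextureSize {
    static final int SPEC_MINIMUM = 64;

    // reported: whatever the GL_MAX_TEXTURE_SIZE query returned
    static int sanitize(int reported) {
        return (reported < SPEC_MINIMUM) ? SPEC_MINIMUM : reported;
    }

    public static void main(String[] args) {
        System.out.println(sanitize(0));    // bogus query result -> prints 64
        System.out.println(sanitize(2048)); // normal result -> prints 2048
    }
}
```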

Any ideas why this might be happening?  I notice that in the .9 code there was no check against max texture size, so that's probably why this just now started occurring.

edit: clarification

renanse

In case it matters:  Radeon 9800XT, latest drivers as of this post.  win2k sp5.

elias

Hmm, hard to say for sure. Are there any special circumstances in which you're creating the texture? Like in a Pbuffer context, between begin()/end() (should raise error though) or something like that? You mention that later on, the calls return 2048 again. Could you investigate that further?

- elias

renanse

No special circumstances.  The first part is some menu code, and the second part that has the issue is also a menu (or the game itself; both have the issue).  If I skip straight to running that code from the start, it works ok...

The flip back to 2048 seems to occur after the scene is in place and it's had a chance to render a frame...

renanse

Ok, turns out there are special circumstances after all...  When you move from the menu into the game, a loading state is shown while the game information loads in a background thread.  As textures are created, the texture manager applies them to get valid OpenGL ids for cache purposes.  It appears that having this run on that other thread is what causes the return value of 0.  It goes back to 2048 later because execution is back in the main thread.

I understand the issue, but is there a way to still keep my whole background thread architecture and avoid this issue?

edit:  The annoying thing is that if I take out the max size check, *everything else* works fine despite running in another thread...  all the other gl calls return fine and the textures appear in the game once it's done.

renanse

To rephrase: are there GL calls that won't succeed when called from a new Thread, and if so, why?

cfmdobbie

Ah, bingo!  That'll be it - all OpenGL calls must be made from the single thread that owns the GL context.  Note that this is also why you can't free OpenGL resources directly from a finaliser - finalisers run in a different thread from the main application.
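One way to keep a background loading thread while honouring that rule is to do the slow work (decoding image files, etc.) off-thread, and hand the GL-touching work to the render thread through a queue that it drains once per frame.  A rough sketch of that pattern (class and method names are illustrative, not lwjgl API):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Background threads enqueue GL work; the thread that owns the GL context
// drains the queue each frame, so every GL call runs on the right thread.
public class GLTaskQueue {
    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();

    // Safe to call from any thread (e.g. the loading thread).
    public void enqueue(Runnable glWork) {
        tasks.add(glWork);
    }

    // Called once per frame on the thread that owns the GL context.
    public void drain() {
        Runnable task;
        while ((task = tasks.poll()) != null) {
            task.run(); // runs on the GL thread
        }
    }

    public static void main(String[] args) throws InterruptedException {
        GLTaskQueue queue = new GLTaskQueue();
        StringBuilder log = new StringBuilder();

        Thread loader = new Thread(() -> {
            // pretend to decode a texture here, then hand off the GL upload
            queue.enqueue(() -> log.append("upload"));
        });
        loader.start();
        loader.join();

        queue.drain(); // on the "render thread"
        System.out.println(log); // prints "upload"
    }
}
```

The loading thread never touches GL directly; it only builds the data and schedules the upload, which is why the queries and texture creation all see a valid context.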
ellomynameis Charlie Dobbie.

renanse

Yeah, that sort of makes sense (elias told me as much on IM) but what doesn't make sense is that out of all the opengl calls I make on that other thread, only the call to get the max texture size doesn't work.

the2bears

Aaahh... putting things into an "unknown state" is a bit of a catchall for when you have no idea what's going on in the background.  Don't worry too much about what makes sense ;) and don't try to rely on behaviour exhibited by one driver.

Personally I enjoy consolidating into one thread, as during my day job we do distributed Jini and there's just a whole mess of synchronization and latency issues :)

Bill
the2bears - the indie shmup blog