LWJGL Forum

Programming => Lightweight Java Gaming Library => Topic started by: renanse on March 05, 2004, 19:42:09

Title: Buffer checks
Post by: renanse on March 05, 2004, 19:42:09
Have a couple of concerns about how buffer checks are being done...  I noticed in the latest CVS that glGetInteger asks for a size-16 IntBuffer (due to your checks...).  I assume you are doing this because you expect to get back a 4-int result?

Question 1:  Why?  :)  Will we get back 4 ints from GL for a certain pname?

Question 2:  Your BufferChecks class uses buf.remaining() to determine if the buffer has enough room in it.  For an IntBuffer this will return the number of 4-byte elements, i.e., to pass the size-16 check, the IntBuffer would have to hold 64 bytes.  Am I assuming wrong, and you really do want to get back 16 ints?


Your checkBuffer(Buffer buf) method in BufferChecks also uses the magic number 4 for doing checks...  Again, this would force any Buffer other than a ByteBuffer to require 4 ints, 4 floats, etc.
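For concreteness, the element-vs-byte behaviour of remaining() can be checked with plain java.nio, nothing LWJGL-specific (class name is just for the demo):

```java
import java.nio.ByteBuffer;
import java.nio.IntBuffer;

public class RemainingDemo {
    public static void main(String[] args) {
        // A 64-byte direct buffer, and the same storage viewed as ints:
        ByteBuffer bytes = ByteBuffer.allocateDirect(64);
        IntBuffer ints = bytes.asIntBuffer();

        // remaining() counts elements of the buffer's own type, not bytes.
        System.out.println(bytes.remaining()); // 64 (byte elements)
        System.out.println(ints.remaining());  // 16 (int elements)
    }
}
```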

Thanks for your attention to this.
Title: Buffer checks
Post by: princec on March 06, 2004, 11:43:08
1. glGet* methods can return anything from 1 to 16 "values" (the largest set of values returned is a matrix, which is 16 elements). I started out by mapping each enum to a returned size, then realised this was largely pointless bloat and just mandated that the buffer had to be big enough to cope with the biggest possible returned size, which is 16. The code is secure, fast, simple, and bug-free now.

2. That's right. If you call glGetDoublev with GL_PROJECTION_MATRIX, then you'll be needing 16 doubles to put it in. If you call glGetFloatv, then you need 16 floats to put it in. Therefore we check the remaining elements, not the bytes remaining. The generic Buffer argument in checkBuffer checks the number of elements, not the bytes remaining, so it's working just right.
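A minimal sketch of that rule in plain java.nio (hypothetical names and messages; not the actual BufferChecks source):

```java
import java.nio.Buffer;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;

public class GlGetCheckSketch {
    // Largest possible glGet* result: a 4x4 matrix, i.e. 16 elements.
    static final int MAX_GLGET_SIZE = 16;

    // Checks remaining *elements* of whatever buffer type is passed,
    // never raw bytes.
    static void checkBuffer(Buffer buf) {
        if (buf.remaining() < MAX_GLGET_SIZE) {
            throw new IllegalArgumentException(
                "Buffer must have at least " + MAX_GLGET_SIZE
                + " elements remaining, has " + buf.remaining());
        }
    }

    public static void main(String[] args) {
        // 16 ints (64 bytes): passes the check.
        IntBuffer params = ByteBuffer.allocateDirect(16 * 4)
                .order(ByteOrder.nativeOrder()).asIntBuffer();
        checkBuffer(params);

        // A single-int buffer fails, even though it holds 4 bytes.
        try {
            checkBuffer(ByteBuffer.allocateDirect(4).asIntBuffer());
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```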

Don't worry, it's all correct :) Still haven't done the glGetTexture and glReadPixels code properly though, and these are the most serious ones, because there's proper scope for actually doing a buffer overrun and executing arbitrary code embedded in an image!

Cas :)
Title: Buffer checks
Post by: tomb on March 06, 2004, 15:29:51
Please, for the love of god, document this in the javadoc and give a description of it in the thrown exception.
Title: Buffer checks
Post by: renanse on March 06, 2004, 16:20:16
Verbose explanations would be helpful, especially in the exceptions.  I'd been using a buffer with a single element for glGetInteger for a long time and had to sort through the LWJGL source to figure out what it wanted me to do.

Out of curiosity, doesn't glGetInteger return a single int, whereas glGetIntegerv is the one that could return up to 16?  Why force these together, thus requiring a larger buffer allocation than necessary?
Title: Buffer checks
Post by: princec on March 06, 2004, 17:25:24
Now you know why we don't like releasing works-in-progress!

There is no glGetInteger function.

Cas :)
Title: Buffer checks
Post by: renanse on March 06, 2004, 17:30:49
Heh, my mistake, one of my resources on this was using a different programming language than C (Python).  :lol:

http://pyopengl.sourceforge.net/documentation/ref/gl/get.html
Title: Buffer checks
Post by: cfmdobbie on March 06, 2004, 20:40:29
JOOI, is the extra range check all that heavy?  Doesn't it basically amount to a single switch block before your check?

int minSize;
switch (pname) {
    case GL_CURRENT_INDEX:
    case GL_CURRENT_RASTER_INDEX:
    case GL_MODELVIEW_STACK_DEPTH:
    case GL_PROJECTION_STACK_DEPTH:
    case GL_TEXTURE_STACK_DEPTH:
    [...]
        minSize = 1;
        break;
    case GL_CURRENT_COLOR:
    case GL_CURRENT_TEXTURE_COORDS:
    [...]
        minSize = 4;
        break;
    [...]
}
if (buffer.remaining() < minSize)
    throw new Exception("...");


Wouldn't that inline pretty nicely?  I expect it can't be more than a couple of instructions.

Bit of a nightmare to write and maintain, though... ;)
Title: Buffer checks
Post by: elias on March 07, 2004, 08:06:01
That was it, basically. There were simply too many enums to make it worthwhile. glGet* is not a performance hotspot anyway...

- elias
Title: Buffer checks
Post by: princec on March 07, 2004, 11:38:54
WHS. We decided that size and simplicity were more important than optimising a super-low-performance function anyway.

There was a much more complex issue with extensions as well. Every new extension added more enums. So it was just not worth the complexity.

Cas :)
Title: Buffer checks
Post by: Orangy Tang on March 08, 2004, 00:40:51
Forcing a buffer to be at least 16 elements long seems slightly wasteful (and excessive) to me. Getting matrices may be common, but I don't like the idea of creating a 16-element buffer just so I can grab a single value (like the depth buffer size or colour depth).

Couldn't you enforce a length of a single float for all calls, and have special-case checking for the common "big" calls (like the projection and modelview matrices)? Or is this not safe enough?
Title: Buffer checks
Post by: Mojomonkey on March 08, 2004, 02:37:50
In our case, the biggest problem with this new scheme is how counter-intuitive it is from the user's standpoint. We fought with this for a while... why is it complaining when we only want a single int? We gave it 4 bytes! It wasn't until the source was investigated that the whole thing was figured out. I know you say this is because it's a work in progress, but with your history of documentation, it's quite possibly going to bite people in the ass.
Title: Buffer checks
Post by: princec on March 08, 2004, 09:19:11
Wasteful - no. Look carefully at the use cases.

Quote: "but with your history of documentation, it's quite possibly going to bite people in the ass"
:P We assure you this will be documented properly!

Cas :)