Buffer checks

Started by renanse, March 05, 2004, 19:42:09


renanse

Have a couple of concerns about how buffer checks are being done...  I noticed in the latest CVS that glGetInteger asks for a size-16 IntBuffer (due to your checks...). I assume you are doing this because you expect to get back a 4-int result?

Question 1:  Why?  :)  Will we get back 4 ints from GL for a certain pname?

Question 2:  Your BufferChecks class uses buf.remaining() to determine whether the buffer has enough room in it.  For an IntBuffer this will return the number of 4-byte elements, i.e., to pass the 16 check, the IntBuffer would have to hold 64 bytes.  Am I assuming wrong, and you really do want to get back 16 ints?
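For anyone following along, a minimal plain-JDK sketch of the remaining() semantics in question (no LWJGL involved): a typed view counts elements, not bytes.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;

public class RemainingDemo {
    public static void main(String[] args) {
        // 64 bytes of direct memory, viewed as 16 ints
        ByteBuffer bytes = ByteBuffer.allocateDirect(64).order(ByteOrder.nativeOrder());
        IntBuffer ints = bytes.asIntBuffer();
        System.out.println(bytes.remaining()); // prints 64 -- counts bytes
        System.out.println(ints.remaining());  // prints 16 -- counts 4-byte int elements
    }
}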


Your checkBuffer(Buffer buf) method in BufferChecks also uses the magic number 4 for doing checks...  Again, this would force any Buffer other than a ByteBuffer to hold at least 4 ints, 4 floats, etc.

Thanks for your attention to this.

princec

1. glGet* methods can return anything from 1 to 16 "values" (the largest set of values returned is a matrix, which is 16 elements). I started out by mapping each enum to a returned size, then realised this was largely pointless bloat and just mandated that the buffer had to be big enough to cope with the biggest possible returned size, which is 16. The code is secure, fast, simple, and bug-free now.
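The check itself is then trivial; a hypothetical sketch of its shape (the real BufferChecks code may well differ):

import java.nio.Buffer;

// Hypothetical blanket check: every glGet* buffer must be able to
// receive the largest possible result, 16 elements.
static void checkBuffer(Buffer buf) {
    if (buf.remaining() < 16)
        throw new IllegalArgumentException(
            "glGet* needs a buffer with at least 16 elements remaining, has " + buf.remaining());
}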

2. That's right. If you call glGetDoublev with GL_PROJECTION_MATRIX, you'll need 16 doubles to put it in. If you call glGetFloatv, you need 16 floats. Therefore we check the remaining elements, not the remaining bytes; the generic Buffer argument in checkBuffer does the same, so it's working just right.
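Concretely, a caller under this contract looks something like the following sketch (the GL11.glGetFloat spelling is assumed from later LWJGL versions and may differ in the CVS code under discussion):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// 16 floats = 64 bytes; remaining() reports 16 elements, satisfying the check
FloatBuffer matrix = ByteBuffer.allocateDirect(16 * 4)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();
GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX, matrix); // fills all 16 elements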

Don't worry, it's all correct :) Still haven't done the glGetTexImage and glReadPixels code properly though, and those are the most serious ones, because there's real scope there for actually doing a buffer overrun and executing arbitrary code embedded in an image!
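For what it's worth, a hypothetical sketch of the kind of size computation such a glReadPixels check needs; none of these names are the actual LWJGL code:

import java.nio.ByteBuffer;

// Hypothetical: required bytes = width * height * components(format)
//               * bytes-per-component(type)
static void checkReadPixelsBuffer(ByteBuffer buf, int width, int height,
                                  int components, int bytesPerComponent) {
    long needed = (long) width * height * components * bytesPerComponent;
    if (buf.remaining() < needed)
        throw new IllegalArgumentException(
            "glReadPixels needs " + needed + " bytes, buffer has " + buf.remaining());
}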

Cas :)

tomb

Please, for the love of god, document this in the Javadoc and give a description of it in the thrown exception.

renanse

Verbose explanations would be helpful, especially in the exceptions.  I'd been using a buffer with a single element for GetInteger for a long time and had to sort through the LWJGL source to figure out what it wanted me to do.

Out of curiosity, doesn't glGetInteger return a single int, whereas glGetIntegerv is the one that could return up to 16?  Why force these together, thus requiring a larger buffer allocation than necessary?

princec

Now you know why we don't like releasing works-in-progress!

There is no glGetInteger function.

Cas :)

renanse

Heh, my mistake, one of my resources on this was using a different programming language than C (Python).  :lol:

http://pyopengl.sourceforge.net/documentation/ref/gl/get.html

cfmdobbie

JOOI, is the extra range check all that heavy?  Doesn't it basically amount to a single switch block before your check?

int minSize;
switch (pname) {
  case GL_CURRENT_INDEX:
  case GL_CURRENT_RASTER_INDEX:
  case GL_MODELVIEW_STACK_DEPTH:
  case GL_PROJECTION_STACK_DEPTH:
  case GL_TEXTURE_STACK_DEPTH:
  [...]
    minSize = 1;
    break;
  case GL_CURRENT_COLOR:
  case GL_CURRENT_TEXTURE_COORDS:
  [...]
    minSize = 4;
    break;
  [...]
}
if (buffer.remaining() < minSize)
  throw new IllegalArgumentException("...");


Wouldn't that inline pretty nicely?  I expect it can't be more than a couple of instructions.

Bit of a nightmare to write and maintain, though... ;)
ellomynameis Charlie Dobbie.

elias

That was it, basically. There were simply too many enums to make it worthwhile. glGet* is not a performance hotspot anyway...

- elias

princec

WHS. We decided that size and simplicity were more important than optimising a super-low-performance function anyway.

There was a much more complex issue with extensions as well: every new extension adds more enums, so it was just not worth the complexity.

Cas :)

Orangy Tang

Forcing a buffer to be at least 16 elements long seems slightly wasteful (and excessive) to me; getting matrices may be common, but I don't like the idea of creating a 16-element buffer just so I can grab a single value (like the depth buffer size or colour depth).

Couldn't you enforce a length of a single element for all calls, and have special-case checking for the common "big" calls (like the projection and modelview matrices)? Or is this not safe enough?
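A hypothetical sketch of that alternative, which also shows the wrinkle: several non-matrix queries return more than one element (GL_VIEWPORT returns 4 ints, for example):

// Hypothetical special-case table; a real one would need every
// multi-element enum listed, which is where the bloat creeps back in.
static int minElements(int pname) {
    switch (pname) {
        case GL11.GL_MODELVIEW_MATRIX:
        case GL11.GL_PROJECTION_MATRIX:
        case GL11.GL_TEXTURE_MATRIX:
            return 16;
        case GL11.GL_VIEWPORT:      // 4 ints
        case GL11.GL_CURRENT_COLOR: // 4 values
            return 4;
        default:
            return 1; // only safe if every unlisted enum is truly single-valued
    }
}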

Mojomonkey

In our case, the biggest problem with this new scheme is how counter-intuitive it is from the user's standpoint. We fought with this for a while... why is it complaining when we only want a single int back? We gave it 4 bytes?!?! It wasn't until the source was investigated that the whole thing was figured out. I know you say this is because it's a work in progress, but with your history of documentation, it's quite possibly going to bite people in the ass.
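For reference, what the scheme demands just to read one value looks something like this sketch (again assuming the later GL11.glGetInteger(int, IntBuffer) spelling, which may differ in the CVS version discussed):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;

// A 16-int (64-byte) buffer just to read one value
IntBuffer depthBits = ByteBuffer.allocateDirect(16 * 4)
        .order(ByteOrder.nativeOrder())
        .asIntBuffer();
GL11.glGetInteger(GL11.GL_DEPTH_BITS, depthBits);
int bits = depthBits.get(0); // only element 0 is meaningful for this pname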
Never send a man to do a monkey's work.

princec

Wasteful - no. Look carefully at the use cases.

Quote: "but with your history of documentation, it's quite possibly going to bite people in the ass"
:P We assure you this will be documented properly!

Cas :)