Prerelease 2 of LWJGL 0.7

Started by Matzon, August 17, 2003, 21:22:15


Morgrog

So what you're saying is to hardcode the 16, and if higher resolutions are available, it'll get set to that automagically?  :shock: <-- that sums up my current look

princec

What you're actually doing when you specify those parameters is stating the minimum you'll be happy with. Therefore you should always set them to the minimum you'll be happy with :) It's then up to the drivers to come up with the best possible match.

One thing you can do is try to create an optimised Window with 32-bit this-and-that, catch the Exception when it fails, and fall back to creating one with the minimum requirements.
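Something along these lines (a sketch only - the Window class and the exact Window.create overload are assumed from memory here, so check them against your LWJGL version):

import org.lwjgl.Window;

try {
    // Ask for the optimised mode first: 32bpp colour, 24-bit depth, 8-bit stencil.
    Window.create("Game", 32, 0, 24, 8);
} catch (Exception e) {
    // No luck - fall back to the minimum we can live with.
    Window.create("Game", 16, 0, 16, 0);
}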

Cas :)

nala-naj

I get choppy triangles with any depth buffer minimum set less than 24... do you think this is because I am using bitmaps rather than something like a GIF or a JPEG for my textures? Or could it be a problem with my hardware using 16 bpp depth buffers (I am running an NVIDIA GeForce4 card, but it is in a laptop)?

thanks

princec

Your problem is that you've set your viewing frustum with a far wider range than 16-bit depth precision can represent accurately enough to avoid visual artifacts - this caught me out a few years ago when I did that terrain demo.

Try setting near to 100 and far to 20000 or something like that, and make sure it's fogged out by 20000 or things will just pop into existence.
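In GL terms, something like this (a sketch - it assumes the GLU helper and the GL11-style static imports of later LWJGL releases; in 0.7 the same calls live on the GL class):

import static org.lwjgl.opengl.GL11.*;
import org.lwjgl.util.glu.GLU;

// Depth precision is concentrated near the near plane, so pushing near
// out to 100 keeps the far/near ratio small enough for a 16-bit buffer.
// "aspect" is your width/height ratio (made up for the example).
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
GLU.gluPerspective(60.0f, aspect, 100.0f, 20000.0f);
glMatrixMode(GL_MODELVIEW);

// Fog everything out by the far plane so distant geometry fades away
// instead of popping into existence.
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_LINEAR);
glFogf(GL_FOG_START, 10000.0f);
glFogf(GL_FOG_END, 20000.0f);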

Cas :)

nala-naj

Quote from: "princec"Try setting near to 100 and far to 20000 or something like that, and make sure it's fogged out by 20000 or things will just pop into existence.
Cas :)

Thanks... that worked great! I need to brush up on my OpenGL - it's been a long time.

I can now set my depth buffer minimum back down to 8 bpp and it works perfectly!

thanks again!  :D

spasi

After porting ~50,000 lines of code (Yeah! We are progressing really well!  :D) to 0.7 pre2, I have some things to note:

A) I really liked the way buffer.position is used now. It saves a lot of slices, although it requires you to be careful with flips and the like (see the sketch after this list).

B) I really liked the way some functions were modified to accept different types of Buffers, thus not requiring you to specify the type explicitly. Very Java-styled. But I think you missed quite a few others that could be handled the same way. A couple of examples:

glMultMatrix
glGet: glGetInteger(int param, IntBuffer buffer) => glGet(int param, IntBuffer buffer)

C) glTexImage2D and glTexSubImage2D should also take a FloatBuffer as a pixel source. This has uses with dynamically generated textures. Also, glMultMatrixd is missing.

D) The VBO case was handled quite elegantly. Liked that too.
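
To illustrate (A), a quick sketch of the position-based style (the buffer name is made up, and it assumes the GL11-style glVertexPointer overload that takes a FloatBuffer directly):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import static org.lwjgl.opengl.GL11.*;

// Build a direct FloatBuffer, fill it, then flip() so position is 0
// and limit marks the end of the data.
FloatBuffer verts = ByteBuffer.allocateDirect(9 * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
verts.put(new float[] { 0,0,0,  1,0,0,  0,1,0 });
verts.flip(); // forget this and GL reads from the wrong position

// LWJGL now honours buffer.position(), so no slice() is needed.
glVertexPointer(3, 0, verts);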

cfmdobbie

(Still getting used to the new buffer position stuff!)

I would guess glGetInteger et al are like that because there are other get methods that accept the same arguments, like glGetPixelMap(int, IntBuffer).  If there were a glGet(int, IntBuffer), you can guarantee people would try to glGet(GL_PIXEL_MAP_R_TO_R, intBuf) with it.  (But hey, maybe they should be able to?)

glMultMatrixd: Most of the double producing and consuming methods have been culled, for performance reasons.  Can you use glMultMatrixf instead?  (Or "glMultMatrix" as it should be called!)
'ello, my name is Charlie Dobbie.

spasi

Quote from: "cfmdobbie"
Can you use glMultMatrixf instead? (Or "glMultMatrix" as it should be called!)

That's what I did. I just converted the buffer from doubles to floats. I only used it in one place, anyway. But why was it culled? It makes sense to remove the color*v functions, but why this one? Higher precision is sometimes needed in matrix operations (e.g. projection matrices). I'd like it back in LWJGL, since it doesn't have any "hidden" performance implications like color*v does. If you want higher precision, you should be able to have it. What do you guys think?
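
The conversion is trivial anyway - something like this (a sketch; the helper name is made up):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.DoubleBuffer;
import java.nio.FloatBuffer;

// Copy a DoubleBuffer into a fresh direct FloatBuffer, narrowing each value.
static FloatBuffer toFloats(DoubleBuffer src) {
    FloatBuffer dst = ByteBuffer.allocateDirect(src.remaining() * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    while (src.hasRemaining())
        dst.put((float) src.get());
    dst.flip();
    return dst;
}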

elias

No current 3D hardware uses double precision internally anyway, so why bother keeping the *d methods? I'm also a little curious as to why glTexImage2D needs to take FloatBuffers. What did you mean?

- elias

spasi

glTexImage2D (and glTexSubImage2D) can be used with GL_FLOAT as the pixel data type. That's why they need to take a FloatBuffer. You can still do it with a ByteBuffer (that holds floats), but doing it with a FloatBuffer would be cleaner.
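For example (a sketch - it assumes the FloatBuffer overload of glTexImage2D being discussed here, plus GL11-style static imports; the sizes are made up):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import static org.lwjgl.opengl.GL11.*;

int width = 128, height = 128; // minimap dimensions, for the example
FloatBuffer pixels = ByteBuffer.allocateDirect(width * height * 3 * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
for (int i = 0; i < width * height; i++)
    pixels.put(0.2f).put(0.6f).put(0.3f); // RGB in [0,1], straight from float maths
pixels.flip();

// The driver converts float -> unsigned byte for us on upload.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
        GL_RGB, GL_FLOAT, pixels);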

OT: Is it possible to add query methods to Keyboard that would return the state of Caps and Num Lock? I'm asking because I have trouble with the current Keyboard translation (especially with languages other than English) and I want to implement my own, but not knowing the Caps and Num Lock state is the only thing missing to do it correctly.

Matzon

Also OT:
public static final int KEY_CAPITAL = 0x3A; // Caps Lock
public static final int KEY_NUMLOCK = 0x45; // Num Lock

should do?

elias

There, FloatBuffer method versions added.

- elias

princec

Didn't realise you could use floating point textures :)
What kinds of groovy uses might I put them to I wonder...

Cas :)

spasi

Quote from: "Matzon"
public static final int KEY_CAPITAL = 0x3A;
public static final int KEY_NUMLOCK = 0x45;

should do?

I know I can track their state AFTER I'm in the game, but what if Caps Lock was ON BEFORE running it? That would mess it up. At least I'd need to know their initial state.
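
The tracking itself is easy - something like this sketch (it uses the buffered-event Keyboard API of later LWJGL builds; 0.7's read()-based API differs). It's the seed value that's unknowable:

import org.lwjgl.input.Keyboard;

boolean capsOn = false; // wrong if Caps Lock was already on before launch!

// Somewhere in the game loop:
while (Keyboard.next()) {
    if (Keyboard.getEventKey() == Keyboard.KEY_CAPITAL
            && Keyboard.getEventKeyState())
        capsOn = !capsOn; // toggle on press, not on release
}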

spasi

Quote from: "princec"Didn't realise you could use floating point textures :)
What kinds of groovy uses might I put them to I wonder...

:) Helpful, indeed. I use them to dynamically generate the minimap texture. I do all the lighting calculations (based on the sun's position, terrain geometry and texture colors) using floats, and by using a float texture I don't have to convert float->(unsigned)byte. The card converts for me  8).