Input lag

Started by W3bbo, March 12, 2010, 03:30:38


W3bbo

I'm an LWJGL newbie, just investigating it for emergency/contingency use in a coursework project that went tits-up after half the team went AWOL (and the deadline's 12 days away!).

Anyway, back to the topic:

I've been toying with various LWJGL samples and FOSS projects, including Space Invaders and Starship2D, and one thing I kept noticing was the input lag between my cursor and the on-screen results. I'm running XP x64 (through the 32-bit JVM) on an Intel Core 2 Quad Q9450, and Task Manager reports that none of the javaw instances consume more than 10% of each core, so it isn't my system slowing things down.

Is the lag inherent in LWJGL? I couldn't find anything in the Space Invaders or Starship2D code to suggest delays were intentionally added.

Can anyone reproduce or otherwise explain the issue?

Matzon

Unless you're using the hardware cursor, you will be drawing it yourself - which means it's tied to your game's frame rate. For the best experience you should use the hardware cursor, but it doesn't let you do things quite as fancy as when you render it yourself.

check http://lwjgl.org/jnlp/lwjgl-demo.php/test.input.HWCursorTest
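
For reference, here's a minimal sketch of what creating and setting a native (hardware) cursor can look like in LWJGL 2. This assumes the Display (and with it the Mouse) has already been created; the solid white 16x16 image is just a placeholder for your own ARGB pixel data:

import java.nio.IntBuffer;

import org.lwjgl.BufferUtils;
import org.lwjgl.LWJGLException;
import org.lwjgl.input.Cursor;
import org.lwjgl.input.Mouse;

public class HardwareCursorSketch {

    // Builds a 16x16 opaque white cursor and hands it to the OS.
    // Real code would load proper ARGB image data instead of the placeholder.
    public static void installCursor() throws LWJGLException {
        // bail out if the platform doesn't support native cursors at all
        if ((Cursor.getCapabilities() & Cursor.CURSOR_ONE_BIT_TRANSPARENCY) == 0) {
            return;
        }

        int size = 16;
        IntBuffer pixels = BufferUtils.createIntBuffer(size * size);
        for (int i = 0; i < size * size; i++) {
            pixels.put(0xFFFFFFFF); // placeholder: fully opaque white
        }
        pixels.flip();

        // one image, hotspot in a corner, no animation (so no delays buffer)
        Cursor cursor = new Cursor(size, size, 0, 0, 1, pixels, null);
        Mouse.setNativeCursor(cursor);
    }
}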

Fool Running

If you are using the hardware cursor and you are still getting lag between when your cursor moves over the game view and when the game responds, then you should make sure that you don't have a render-ahead option enabled in your video card settings.
I don't remember exactly what the option is called, but I know that NVidia drivers default to rendering 3 frames ahead (which means that after you make an action, there will be at least another 3 frames before it is shown on screen).

princec

I've seen that option, but when you hover over it in the Nvidia control panel it states that this will only work for D3D games?

Cas :)

Matthias

I found a way to reduce the input lag:
            Display.setVSyncEnabled(true);

            while(!Display.isCloseRequested() && !chat.quit) {
                GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);

                gui.update();
                Display.update();

                // reduce input lag by polling input devices after waiting for vsync
                GL11.glGetError();          // this call will burn the time between vsyncs
                Display.processMessages();  // process new native messages since Display.update();
                Mouse.poll();               // now update Mouse events
                Keyboard.poll();            // and Keyboard too
            }


This works on Windows 7 with an nVidia GTX 275.
It would be good to get results from other systems as well.

Ciao Matthias

Fool Running

I'm curious as to the symptoms of your input lag. I would think that at 60fps a lag of one frame would not be noticeable to a user. What do you see that makes you so sure you have input lag? Do you have a demo app that we could run to see it?

I can't see that adding in the explicit polling (Mouse, Keyboard) helps at all since that is what Display.update() does as its last step.

Matthias

The issue is when the OpenGL driver does not block on swapBuffers but instead renders the next frame into another buffer. Forcing a synchronization prevents that.
You can test it with my demo: http://twl.l33tlabs.org/demo/twldemo.jnlp - press CTRL-SHIFT-L to toggle the lag reduction algorithm on/off (default is on). It is visible when dragging one of the smaller windows around the screen - use the Windows lookalike theme; the blue theme uses GL-rendered mouse cursors.
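
For anyone who wants to experiment: a cruder way to force that synchronization - just a sketch of an alternative, not what the demo above does - is to drain the driver's queue with glFinish() right after the swap and only then poll input (render() below is a placeholder for your own drawing code):

Display.setVSyncEnabled(true);

while (!Display.isCloseRequested()) {
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);

    render();                   // placeholder for scene/GUI drawing
    Display.update();           // swap buffers

    GL11.glFinish();            // blocks until all queued GL work has completed
    Display.processMessages();  // pick up native messages that arrived during the wait
    Mouse.poll();
    Keyboard.poll();
}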

princec

Aha yes this is indeed because OpenGL can render up to two frames ahead (I don't know of any implementations using more than 2 backbuffers). At 60Hz that's a potential 35ms of lag behind the mouse cursor being read - which is updated onscreen immediately - and your rendered GUI actually reflecting the input. When vsync's off you notice that the lag vanishes as well.

Cas :)

Fool Running

I tried it on my work machine which has a pathetic Intel graphics chip (I'll try it on my home machine when I get home) and didn't notice any lag (besides the one frame I would expect).
Quote from: princec on March 16, 2010, 10:35:05
Aha yes this is indeed because OpenGL can render up to two frames ahead (I don't know of any implementations using more than 2 backbuffers). At 60Hz that's a potential 35ms of lag behind the mouse cursor being read - which is updated onscreen immediately - and your rendered GUI actually reflecting the input. When vsync's off you notice that the lag vanishes as well.

Cas :)
Did you somehow confirm that is the issue? It is what I would expect to be the problem. I'm not sure you can do much about it without changing your driver settings (if that will even help).

spasi

I've committed the change proposed by Matthias. Display.processMessages() will now poll the input devices, and when vsync is enabled it will be called both before and after swapBuffers() in Display.update().

basil

mouse lag almost went away when i used

org.lwjgl.opengl.Display.swapBuffers ();


instead of update() at the bottom of the render loop, and

org.lwjgl.opengl.Display.processMessages ();
org.lwjgl.input.Mouse.poll ();

while ( org.lwjgl.input.Mouse.next () ) { ... }


somewhere in between.

tho' javadoc says, 'should not be used that way' ;)
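
roughly, the whole loop then looks like this (just a sketch - render() and the event handling are placeholders for my own code, and the enclosing method declares LWJGLException for swapBuffers()):

while (!org.lwjgl.opengl.Display.isCloseRequested()) {
    org.lwjgl.opengl.Display.processMessages();  // fetch fresh native messages
    org.lwjgl.input.Mouse.poll();                // turn them into mouse events

    while (org.lwjgl.input.Mouse.next()) {
        // ... handle the event (placeholder)
    }

    render();                                    // placeholder for the actual drawing

    org.lwjgl.opengl.Display.swapBuffers();      // swap at the very bottom, no update()
}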

one thing that is actually bad about this is the

if ( parent_resized ) {
  reshape();
  parent_resized = false;
}


part from the update() method.

would be really nice if you could move that little block into a new public method or something like that.

ofc it works with my own 'resized-check' and update() call. just a little overhead.
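
something like this is what i mean by the own resized-check (sketch only - parentCanvas stands for whatever awt canvas was handed to Display.setParent()):

// cached once before the loop
int lastWidth  = parentCanvas.getWidth();
int lastHeight = parentCanvas.getHeight();

// once per frame inside the loop
int w = parentCanvas.getWidth();
int h = parentCanvas.getHeight();
if (w != lastWidth || h != lastHeight) {
    GL11.glViewport(0, 0, w, h);  // or whatever your reshape() would do
    lastWidth  = w;
    lastHeight = h;
}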

spasi

Could you please try the new build first and see if you're happy with the current solution? If I read your post right, the new Display.update() should do exactly what you're doing now manually.

basil

oh, yes I'll do that. using the stable 2.3 version atm.

basil

mhmmm, i'm not too sure about it.

i see the point, putting pollDevices() into processMessages().
update() is more useful now for sure.

still, my point was: when we do pollDevices(), the least input lag comes from taking the shortest path from 'capture device message' -> 'do something with the messages'.

that's why i call processMessages() + Mouse.poll() etc. just before i loop through the events, and swapBuffers() without update() later.

now I'm wondering, actually: does swapBuffers() behave 'correctly' when I call processMessages() way before it and not immediately before?

spasi

Assuming a render loop that goes like:

1) Handle input
2) Render
3) Swap buffers

we want to call processMessages either before 1) or after 3). In the first case, you may call processMessages manually and then do your input handling. In the second case, you can depend on update to call processMessages for you, after swap buffers, which has basically the same effect as the first case (processMessages immediately before handling input).

With vsync off, when you actually call processMessages was never a problem. The issue was that LWJGL was calling processMessages before swap buffers, which was a problem with vsync on: you'd get almost 2 frames of lag. Now processMessages will be called after swap buffers, so there's no problem with extra lag. And you can't really avoid vsync lag itself, no matter when/how you do input handling.

Let's say we add the option for Display.update() to never call processMessages. A render loop could then be written in 2 ways:

Method A
while ( run ) {
    Display.processMessages();
    handleInput();

    render();
    Display.update(false); // swap buffers without input processing
}


Method B
while ( run ) {
    handleInput();

    render();
    Display.update(); // swap buffers with input processing
}


The effect would be the same (with vsync either on or off), but you write 1 extra line with the first method. The question is, would anyone gain something from doing their own input handling?