LWJGL's Timer Not What It Claims To Be?

Started by jakj, November 01, 2012, 01:53:59


jakj

Looking at the 2.8.4 source, LWJGL's supposed high-resolution timer seems to be nothing of the sort, unless I'm vastly misunderstanding the code. The Javadoc claims this for getTimerResolution:

Quote: Obtains the number of ticks that the hires timer does in a second. This method is fast;
it should be called as frequently as possible, as it recalibrates the timer.

@return timer resolution in ticks per second or 0 if no timer is present.

But it does nothing of the kind: Everywhere I look, it's just hardcoded to return the value 1000 with no actual logic.

For getTime, it claims:

Quote: Gets the current value of the hires timer, in ticks. When the Sys class is first loaded
the hires timer is reset to 0. If no hires timer is present then this method will always
return 0.

But the Windows version is the only one that actually calls a native function; the others just use Java's System.currentTimeMillis().

What's going on? There's even an entire tutorial on the front page of the wiki talking about using this method and how accurate it is, but other than on Windows, it doesn't even do anything different, and the bit about calibration seems to be a flat-out lie or mistake. Have I missed something crucial here?
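
For reference, the conversion pattern that tutorial recommends is something along these lines (the wrapper method name here is mine, not the wiki's):

import org.lwjgl.Sys;

public class TimerExample {

    // Convert the hires timer's ticks into milliseconds.
    // Multiply before dividing so integer division doesn't discard precision.
    public static long getTimeMillis() {
        return (Sys.getTime() * 1000) / Sys.getTimerResolution();
    }

    public static void main(String[] args) throws InterruptedException {
        long start = getTimeMillis();
        Thread.sleep(100); // stand-in for a frame's worth of work
        System.out.println("elapsed ~" + (getTimeMillis() - start) + " ms");
    }
}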

kappa

Windows is the only OS (out of the big 3) that can't return proper 1ms timer accuracy from the standard Java calls (it even varies between different versions of Windows), hence it needs special native code to handle the timer. Linux and OS X both have 1ms timer accuracy by default, so you can get away with the hardcoded value there.

jakj

Well, that makes the actual code make sense, but it still leaves the misleading/false comments. The doc tells you to call getTimerResolution frequently to "recalibrate" the timer, when in actuality, since every implementation is millisecond granularity, it would be faster to just call getTime() instead of getTime()*1000/getTimerResolution(), getTimerResolution() would never need to be called at all, and Sync.sync() could even be sped up a bit by removing that unnecessary call.

kappa

The comments aren't false or misleading; it's how the API is designed to be used. The API is designed so that it works with whatever resolution the hardware can provide, and it is not tied (nor should it be) to any platform-specific implementation or to millisecond accuracy. Some hardware/OS combinations can't provide millisecond accuracy, and in the future an implementation could change to some other resolution such as microseconds, nanoseconds or picoseconds, but code using getTime() together with getTimerResolution() will continue to function correctly without change. You are correct that the current 3 implementations all aim for millisecond accuracy, but that is not a rule strictly set for the implementations.
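
For example, a sketch like this (class and method names made up) keeps working no matter what resolution an implementation reports, because it only converts ticks using the reported resolution:

import org.lwjgl.Sys;

public class DeltaTimer {
    private long lastTicks = Sys.getTime();

    // Seconds elapsed since the previous call, regardless of whether the
    // underlying timer counts milliseconds, microseconds or anything else.
    public double deltaSeconds() {
        long now = Sys.getTime();
        long elapsed = now - lastTicks;
        lastTicks = now;
        return (double) elapsed / Sys.getTimerResolution();
    }
}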

As for speeding up the code, I'm not sure how much performance impact (if any) a single multiplication and division will have on a modern application; it certainly won't be a bottleneck.

jakj

That is a good point: I hadn't considered the future-proofing aspect, where getTimerResolution at some future time could actually recalibrate a timer.

In terms of performance, I suppose we can only hope that the JIT compiler is smart enough to fold the division into a constant, but I'm worried it won't, because you can't manually write "getTime()*(1000/getTimerResolution())" for milliseconds without destroying any resolution below a millisecond. But yes, I'm falling into the trap of trying to optimize before profiling, and it may very well be that I'm worrying over nothing at all.
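
For example, if some future implementation reported microsecond resolution (a made-up value, just to illustrate the integer-division problem), hoisting the division would break:

public class DivisionOrder {
    public static void main(String[] args) {
        long resolution = 1000000L; // hypothetical microsecond timer, not LWJGL's actual value
        long ticks = 1234567L;      // pretend result of Sys.getTime()

        long multiplyFirst = (ticks * 1000) / resolution; // 1234 ms, precision preserved
        long divideFirst = ticks * (1000 / resolution);   // 1000 / 1000000 == 0, so always 0

        System.out.println(multiplyFirst + " ms vs " + divideFirst + " ms");
    }
}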