[BUG] (maybe) - large number of ByteBuffers created when setting a shader variable


Offline abcdef

Hi

I could be doing something wrong here. In my render method, I capture the latest camera transformation matrix and update the shaders that need it so they can render the frame. To do this I use the following generic code to pass the matrix to the shader:

Code: [Select]
int location = GL20.glGetUniformLocation(shaderId, name);
GL20.glUniformMatrix4(location, transpose, matrix4);

When running this, I noticed that the ByteBuffer count keeps going up and up. The line responsible is:

Code: [Select]
int location = GL20.glGetUniformLocation(shaderId, name);

It seems to create a new ByteBuffer per call, so the number of ByteBuffers only goes up.

I'm trying to avoid all object creation during the render cycle so that no garbage collection can kick in, and this is one allocation I can't remove myself. Is this something I will just have to put up with, or is it something LWJGL can fix?


Online spasi

It sounds like you're using the glGetUniformLocation version that accepts a CharSequence. This is a convenience method generated by LWJGL, and it does indeed allocate a new ByteBuffer on each call. You have two options:

- Reuse a ByteBuffer into which you serialize the uniform name manually.
- Cache the uniform location per shader (a sketch follows below). Highly recommended.
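
A minimal sketch of the caching approach (the UniformCache class and its names are just for illustration), assuming the GL20.glGetUniformLocation(int, CharSequence) overload from above:

Code: [Select]
import java.util.HashMap;
import java.util.Map;

import org.lwjgl.opengl.GL20;

// Caches uniform locations per shader program, keyed by uniform name.
public final class UniformCache {

    private final int shaderId;
    private final Map<String, Integer> locations = new HashMap<String, Integer>();

    public UniformCache(int shaderId) {
        this.shaderId = shaderId;
    }

    // Hits glGetUniformLocation only the first time a name is seen, so the
    // render loop never triggers the CharSequence-to-ByteBuffer conversion.
    public int location(String name) {
        Integer cached = locations.get(name);
        if (cached == null) {
            cached = GL20.glGetUniformLocation(shaderId, name);
            locations.put(name, cached);
        }
        return cached;
    }
}

In the render loop you would then do something like GL20.glUniformMatrix4(cache.location(name), transpose, matrix4), so each uniform name is resolved exactly once per shader.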


Offline abcdef

Thanks, good to know. I'll go with your recommended solution (it's what I was planning to do anyway if there wasn't a fix, or if this was intended behaviour).


Offline SHC

Caching the locations made a large difference in memory usage in my case, thanks a lot! With the per-frame lookup, memory usage went as high as 1300 MB, but caching the locations kept it at a constant 30-31 MB. I thought I was messing up somewhere, but I didn't realise it was this. Thanks a lot.


Kai

Quote from: SHC
I thought I was messing up somewhere, but I didn't realise it was this. Thanks a lot.
For situations like these I always recommend using a real memory profiler, like YourKit.
Especially when building real-time applications, a profiler (memory and CPU) is an invaluable tool that takes all the guessing about what the cause might be out of the way. ;)
There is currently a fully functional free version of YourKit available until March 16.
No, I'm not affiliated with YourKit or anything like that ;D, I just think that it and JProfiler are excellent tools for the job.
Of course there are also free versions of other profiling tools available.


Online spasi

I was worried that this would affect more users, so I did some work that will be available in the next nightly build:

- Added support for CharSequence parameters to APIBuffer. This is the class LWJGL uses as thread-local storage for all the "convenient" methods, e.g. for passing or returning single values (instead of arrays/pointers of values).

- Implemented custom UTF-8 encoding/decoding. ASCII & UTF-16 were already custom, but for UTF-8 the data went through NIO's CharsetEncoder/Decoder. The new encoder is zero-allocation, but requires two passes over the input text (the first computes the encoded byte count; a sketch of that pass follows this list). The decoder does the minimum possible allocation (a char[] and the returned String); unfortunately the JDK does not yet provide a more efficient alternative.

- The above are used almost everywhere, in LWJGL internals and in bindings, except functions that accept string arrays (e.g. glShaderSource). This is not a problem because we don't want big shaders/kernels to "stretch" the thread-local buffers. Also, shaders/kernels are usually read from files, so the source is already in ByteBuffers.
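
To illustrate the counting pass mentioned above (this is only a sketch, not LWJGL's actual implementation), the idea is to walk the CharSequence once to find out how many bytes its UTF-8 encoding needs, so the second pass can write into a buffer of exactly that size without any intermediate allocation:

Code: [Select]
// Sketch: counts the bytes a CharSequence would occupy when encoded as UTF-8,
// without allocating anything. Unpaired surrogates are counted as 3 bytes.
static int utf8Length(CharSequence text) {
    int bytes = 0;
    for (int i = 0, len = text.length(); i < len; i++) {
        char c = text.charAt(i);
        if (c < 0x80) {
            bytes += 1; // ASCII
        } else if (c < 0x800) {
            bytes += 2; // 2-byte sequence
        } else if (Character.isHighSurrogate(c)
                && i + 1 < len && Character.isLowSurrogate(text.charAt(i + 1))) {
            bytes += 4; // supplementary code point (surrogate pair)
            i++;
        } else {
            bytes += 3; // 3-byte sequence
        }
    }
    return bytes;
}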

These changes eliminate the OP's problem* and significantly reduce the number of ByteBuffers allocated during LWJGL startup.

* though you should still cache uniform locations etc.; always aim for the render loop to be free of glGet* calls.