LWJGL Forum

Programming => OpenGL => Topic started by: Max9403 on November 28, 2015, 21:48:37

Title: glTexSubImage2D skipping a pixel
Post by: Max9403 on November 28, 2015, 21:48:37
I'm trying to create a texture using glTexImage2D with no pixel data, which appears to work correctly, and then update it using glTexSubImage2D. I'm testing with a 4x4 image, but for some reason glTexSubImage2D only wants to use a width of 3. Is there a way to solve this?

Here is an example of what is happening:
(http://puu.sh/lC5QA/f8b1728999.png)

Using this code to set the colours:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
IntBuffer bb = BufferUtils.createIntBuffer(16);
bb.put(new int[]{
        0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF,
        0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF,
        0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF,
        0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF
}).rewind();

GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, 1, 4, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, bb);
GLUtil.checkGLError();


and this is how the original texture was created:
int textureId = GL11.glGenTextures();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);

GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);

GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);

// allocate 4x4 RGB8 storage without uploading any pixel data
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB8, 4, 4, 0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (IntBuffer)null);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);
Title: Re: glTexSubImage2D skipping a pixel
Post by: Kai on November 28, 2015, 23:28:29
The problem is that you specify the texel values via an IntBuffer, where each int holds one 32-bit RGBA8-encoded texel.
You then upload that 32-bit data via <format>=GL_RGB with glTexSubImage2D, which tells the driver that every 24 bits is one texel (not every 32 bits).
This effectively means that every three 32-bit integers in your IntBuffer correspond to 4 texels in your texture, so for the 16 texels you want to update the driver will only read 12 ints from your IntBuffer and not 16.
You can fix this by telling the driver via <format>=GL_RGBA of glTexSubImage2D that the texel values inside your IntBuffer are actually 32-bit RGBA8-encoded. The driver will then convert this 32-bit RGBA8 format to RGB8, which effectively ignores the most significant 8 bits (the alpha channel) of each int in your IntBuffer.
Instead of using the GL_RGBA format, you could also use a ByteBuffer instead of the IntBuffer and put each R, G and B color component as a separate byte.
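
A minimal sketch of both options, assuming the same 4x4 GL_RGB8 texture from the original post and updating its first column:

// Option 1: keep the IntBuffer but pass <format>=GL_RGBA, so the driver
// reads one 32-bit int per texel and drops the alpha byte on upload.
// (With GL_UNSIGNED_BYTE the byte order inside each int is platform-endian;
// all-0xFF values sidestep that question here.)
IntBuffer ints = BufferUtils.createIntBuffer(4);
for (int i = 0; i < 4; i++) {
        ints.put(0xFFFFFFFF); // one RGBA8 texel per int
}
ints.rewind();
GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, 1, 4, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, ints);

// Option 2: use a tightly packed ByteBuffer with 3 bytes per texel to match GL_RGB.
// Tightly packed 3-byte rows are not 4-byte aligned, so the default
// GL_UNPACK_ALIGNMENT of 4 must be lowered to 1 first.
GL11.glPixelStorei(GL11.GL_UNPACK_ALIGNMENT, 1);
ByteBuffer bytes = BufferUtils.createByteBuffer(4 * 3);
for (int i = 0; i < 4 * 3; i++) {
        bytes.put((byte) 0xFF); // R, G, B as separate bytes
}
bytes.rewind();
GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, 1, 4, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, bytes);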
Title: Re: glTexSubImage2D skipping a pixel
Post by: Max9403 on November 29, 2015, 08:14:32
GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, 1, 4, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, bb);

That gives me the same result. I'll try using a ByteBuffer, though the last time I did (creating a ByteBuffer, calling asIntBuffer, putting the values) it also gave me the same issue; I'll try it with some bit shifting instead.

ByteBuffer bb = BufferUtils.createByteBuffer(16 * 16 * 3);
for (int i = 0; i < 16 * 16 * 3; i++) {
        bb.put((byte) 0xFF);
}
bb.rewind();


This also gives the same result.

I guess I should have mentioned that creating the texture with GL_RGBA/GL_RGBA8 also gives the same issue:

GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA8, width, height, 0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, (IntBuffer)null);
GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, 1, 4, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, bb);


I have also tried this with a simple image in which each pixel is coloured differently, pushed that to OpenGL, and got the same result:

The source image:
(http://puu.sh/lCDs0/2897fb7d5d.png)
The result:
(http://puu.sh/lCDzD/9758387069.png)
Title: Re: glTexSubImage2D skipping a pixel
Post by: Kai on November 29, 2015, 11:50:43
I did a small demo program: SimpleProceduralTextureDemo (https://github.com/LWJGL/lwjgl3-demos/blob/master/src/org/lwjgl/demo/opengl/textures/SimpleProceduralTextureDemo.java)
and I cannot reproduce this.
It correctly updates the first column of the texture with white texels.
Could you provide the source of a minimal working example that demonstrates your problem?
Maybe there are other parameters/functions involved beyond what you showed.
Title: Re: glTexSubImage2D skipping a pixel
Post by: Max9403 on November 29, 2015, 16:32:00
I figured it out, partly thanks to the point you made earlier about the 24-bit issue. While copying the code over, I noticed that where I use glGetTexImage (to check whether I had modified the texture correctly) I was passing GL_RGB, because I'm using a BufferedImage with TYPE_INT_RGB. Changing it to GL_RGBA fixed it. Do you know how I can use something like CodeXL to view the textures in memory?
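
For reference, a minimal sketch of a readback along those lines (the 4x4 dimensions and the textureId variable are taken from the earlier posts; the channel reordering assumes a little-endian machine):

// Read the texture back as RGBA8: one 32-bit texel per int, so the
// 24-bit row-alignment problem from before cannot occur.
IntBuffer pixels = BufferUtils.createIntBuffer(4 * 4);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureId);
GL11.glGetTexImage(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, pixels);

// Repack into a TYPE_INT_RGB BufferedImage (0xRRGGBB per pixel).
// With GL_UNSIGNED_BYTE on a little-endian machine each int holds ABGR,
// so the channels are reordered here. (Note: GL rows start at the bottom,
// so you may also need a vertical flip.)
BufferedImage img = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
for (int y = 0; y < 4; y++) {
        for (int x = 0; x < 4; x++) {
                int abgr = pixels.get(y * 4 + x);
                int r = abgr & 0xFF;
                int g = (abgr >> 8) & 0xFF;
                int b = (abgr >> 16) & 0xFF;
                img.setRGB(x, y, (r << 16) | (g << 8) | b);
        }
}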
Title: Re: glTexSubImage2D skipping a pixel
Post by: Kai on November 29, 2015, 16:36:50
You're welcome! I'm glad it works for you now.

I do not know how CodeXL works exactly - meaning how it interfaces with OpenGL.
But if it works like gDEBugger, by providing a shim OpenGL shared library against which the program being debugged links, then any LWJGL application will just work as-is under the profiler/debugger.

For gDEBugger you just specify java.exe/javaw.exe (or equivalent) with the command line arguments to start your Java application.
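
For example (a hypothetical sketch; the JDK path, classpath entries and main class below are placeholders):

C:\Java\jdk8\bin\java.exe -cp lwjgl.jar;lwjgl-natives.jar;myapp.jar com.example.MyGame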
Title: Re: glTexSubImage2D skipping a pixel
Post by: Max9403 on November 29, 2015, 16:48:36
Thanks for the help ^^ gDEBugger errors out when trying to view textures. I've gotten it running in CodeXL, GLInject and APITraces, but with none of them have I been able to debug the textures.