Loading texture arrays crashes the JVM

Started by Waffles, May 30, 2020, 16:47:23


Waffles

Hello everybody!

I've recently decided to replace my single-texture sprite sheets with texture arrays, but I can't get this to work: the JVM crashes in the middle of loading the images. I have 12 images of size 128x128 that I loop over and load with the glTextureSubImage3D command; the relevant code is given below. My code calls the load method 12 times, and as you can see I print some text to the console. The end result is that the JVM crashes after finishing a handful of these calls (between 6 and 8 images) and fails on the next one. Am I actually using the TextureSubImage method right? Is this perhaps not a method you can call repeatedly in quick succession? I've also attached a crash log, see here. This was run with LWJGL 3.2.3. Thanks in advance for any help!

private void initialize(int lod, int count, int... size)
{
	switch(size.length)
	{
	case 1:
		// 1D texture array: width = size[0], layers = count
		GL45.glTextureStorage2D(ID(), lod, GL11.GL_RGBA8, size[0], count); break;
	case 2:
		// 2D texture array: width = size[0], height = size[1], layers = count
		GL45.glTextureStorage3D(ID(), lod, GL11.GL_RGBA8, size[0], size[1], count); break;
	default:
		break;
	}
}
	
@Override
public void load(ByteBuffer data, int lod, int index, int... size)
{
	System.out.print("Loading " + lod + ":" + index + ":" + size[0] + ":" + size[1] + ":" + data.remaining() + ":");
	// upload the sprite data at the given array index
	switch(size.length)
	{
	case 1:
		GL45.glTextureSubImage2D(ID(), lod, 0, 0, size[0], index, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, data); break;
	case 2:
		GL45.glTextureSubImage3D(ID(), lod, 0, 0, 0, size[0], size[1], index, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, data); break;
	default:
		break;
	}
	System.out.println("Done");
}
		
@Override
public void load(Image image, int lod, int index)
{
	int w = image.Width(); int h = image.Height();
	// copy the image's raster bytes into a direct buffer for LWJGL
	byte[] data = ((DataBufferByte) image.Raster().getDataBuffer()).getData();
	ByteBuffer buffer = BufferUtils.createByteBuffer(data.length);
	load(buffer.put(data).flip(), lod, index, w, h);
}
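
For reference, this is roughly how the methods are driven (simplified; the images array is just my loaded sprite images):

// allocate 12 layers of 128x128, then upload each image in a loop
initialize(1, 12, 128, 128);
for(int i = 0; i < images.length; i++)
	load(images[i], 0, i);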

KaiHH

This:
glTextureSubImage3D(ID(), lod, 0, 0, 0, size[0], size[1], index, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, data);

makes no sense. Please look at the definition of this function again: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexSubImage3D.xhtml
You are using width=size[0] and height=size[1], which is probably fine, but then you are telling OpenGL that depth=index. So later indices will upload an ever larger sub-texture, which is wrong. Your slices all have the same size 128x128x1 (w x h x d). You should use index as the zoffset parameter instead.
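
For reference, the parameters map like this (zoffset selects the first array layer to write, depth counts the layers in this upload):

// glTextureSubImage3D(texture, level,
//                     xoffset, yoffset, zoffset,  <- zoffset = first array layer
//                     width,   height,  depth,    <- depth   = number of layers
//                     format,  type,    pixels);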

Waffles

Quote from: KaiHH on May 30, 2020, 18:51:33
You are using width=size[0] and height=size[1] which is probably fine, but then you are telling OpenGL that the depth=index.

Ah, so I'm misinterpreting that third size component? The way I understood it, it refers to the index of the given sprite in the texture array. But reading the documentation again, it seems to refer to the number of slices you are uploading in the call, is that correct? So to be fully clear, my line would have to look as follows:

GL45.glTextureSubImage3D(ID(), lod, 0, 0, index, size[0], size[1], 1, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, data);


As an aside, I'm assuming this also means you can load multiple sprites through one call by changing that '1' to however many sprites are stored in your data buffer? How would that data then have to be laid out? I assume the sprites cannot be interleaved; they would have to be stored in the buffer sequentially?
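
Something like this is what I have in mind, assuming tightly packed RGBA8 data with the layers stored back-to-back (the sprites list here is hypothetical):

// Sketch: upload all 12 sprites in one call. Each layer is 128*128*4 bytes,
// and the layers sit one after another in the buffer (not interleaved).
int layers = 12;
ByteBuffer batch = BufferUtils.createByteBuffer(128 * 128 * 4 * layers);
for(ByteBuffer sprite : sprites)  // one 128x128 RGBA image per entry
	batch.put(sprite);
batch.flip();
GL45.glTextureSubImage3D(ID(), 0, 0, 0, 0, 128, 128, layers,
	GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, batch);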