How to pad Non-Power-of-Two (NPOT) Textures in OpenGL?

Started by karlchendeath, June 10, 2021, 20:35:40



I'm working on implementing a game framework in LWJGL. Currently I'm trying to get the basic classes like shaders, framebuffers, etc. to work reliably, so that I can hopefully still reuse them after exam time.
For texture loading I want to be able to load any image and not have to worry about OpenGL. I managed to convert JPG images to RGBA images by padding a (byte) 0xFF after every RGB value so that OpenGL can interpret the ByteBuffer as an RGBA image: (r, g, b) -> (r, g, b, 255)
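For reference, my (r, g, b) -> (r, g, b, 255) padding step looks roughly like this (a simplified standalone sketch; the class and method names are just for illustration, not my actual framework code):

```java
import java.nio.ByteBuffer;

public class RgbToRgba {
    // Expand tightly packed RGB8 data to RGBA8 by appending an opaque
    // alpha byte (0xFF) after every RGB triplet.
    public static ByteBuffer rgbToRgba(ByteBuffer rgb, int width, int height) {
        ByteBuffer rgba = ByteBuffer.allocateDirect(width * height * 4);
        for (int i = 0; i < width * height; i++) {
            rgba.put(rgb.get());   // r
            rgba.put(rgb.get());   // g
            rgba.put(rgb.get());   // b
            rgba.put((byte) 0xFF); // a = 255, fully opaque
        }
        rgba.flip(); // rewind so the upload reads from position 0
        return rgba;
    }
}
```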

Now I want to get NPOT textures to work.
My question is how to pad the ByteBuffer so that OpenGL interprets it as a normal image (if that is even possible).
I know that when we give OpenGL a 16x17 image it pads it to 32x32, at least I believe that's how it works, but how is the pixel data padded? I tried padding each line to 32 pixels, but that doesn't seem to be working.
By the way, I'm loading my ByteBuffers with stbi_load_from_memory, and POT textures work perfectly fine.

If there are any more questions for clarification, feel free to ask =^).


1. you don't need to do any (manual) padding of RGB8 to RGBA8 data. You can let the OpenGL driver do that automatically by specifying a `format` of GL_RGB while using an `internalformat` of GL_RGBA8 in a call to glTexImage2D. The driver will do the necessary conversion automatically. Just note that GL_UNPACK_ALIGNMENT must be 1 in that case (the *unpack* alignment is what applies when uploading pixel data, and tightly packed RGB rows are not generally 4-byte-aligned)!

2. non-power-of-two textures work without you doing anything. NPOT textures have been core since OpenGL 2.0, and even way before that (which is, before 2004) they worked on hardware that supported them, which was advertised by an OpenGL extension. So, you can simply upload e.g. a 37x12 pixel texture without problems. No padding or anything involved.
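To illustrate the alignment point from (1): OpenGL derives the byte stride of each source row from GL_UNPACK_ALIGNMENT, so with 3-byte RGB pixels and the default alignment of 4 it can read past the tightly packed rows that stb_image produces. A small sketch of that stride calculation (the helper name is mine, just for illustration):

```java
public class RowStride {
    // The byte offset between consecutive pixel rows that OpenGL assumes
    // when reading client data: width * bytesPerPixel rounded up to the
    // next multiple of GL_UNPACK_ALIGNMENT.
    public static int rowStride(int width, int bytesPerPixel, int alignment) {
        int tight = width * bytesPerPixel;
        return ((tight + alignment - 1) / alignment) * alignment;
    }
}
```

For a 17-pixel-wide RGB image, rowStride(17, 3, 4) is 52 while the tightly packed row is only 51 bytes, so every row after the first is read one byte off; with an alignment of 1 the assumed stride matches the data exactly.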


Oh okay, well then it has to be a bug on my end. I'm working with LWJGL 3.2.3.
Here is how I load textures:
internal_format = GL_RGB
format = GL_RGB
type = GL_UNSIGNED_BYTE
min_filter = mag_filter = GL_NEAREST
wrap_s = wrap_t = GL_CLAMP_TO_EDGE
samples = 1
target = GL_TEXTURE_2D
        ID = glGenTextures();
        gfx.bind(this); // -> glBindTexture(target, ID)
        glTexParameteri(target, GL_TEXTURE_MIN_FILTER, min_filter);
        glTexParameteri(target, GL_TEXTURE_MAG_FILTER, mag_filter);
        glTexParameteri(target, GL_TEXTURE_WRAP_S, wrap_s);
        glTexParameteri(target, GL_TEXTURE_WRAP_T, wrap_t);

        // unpack alignment is what applies to uploads; must be 1 for
        // tightly packed RGB rows
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

        if (target == GL_TEXTURE_2D_MULTISAMPLE) {
            if (pixels == null) {
                glTexImage2DMultisample(target, samples, internal_format,
                        width, height, true);
            } else {
                throw new UnsupportedOperationException("can't create multisampled textures based on pixel data");
            }
        } else {
            glTexImage2D(target, 0, internal_format, width, height,
                    0, format, type, pixels);
        }
        gfx.unbind(this); // -> glBindTexture(target, 0)

And here is the code for how I load the ByteBuffer pixels.
This is probably a bit much because I copied it mostly from an LWJGL demo.

    public static PixelBuffer loadTexture(Class accessClass, String resourcePath) {
        ByteBuffer imageBuffer;
        try {
            imageBuffer = ioResourceToByteBuffer(accessClass, resourcePath);
        } catch (IOException e) {
            throw new RuntimeException("could not read file: " + resourcePath);
        }
        ByteBuffer image;
        int width;
        int height;
        int channels;
        try (MemoryStack stack = stackPush()) {
            IntBuffer w = stack.mallocInt(1);
            IntBuffer h = stack.mallocInt(1);
            IntBuffer c = stack.mallocInt(1);

            image = stbi_load_from_memory(imageBuffer, w, h, c, 0);
            if (image == null) {
                throw new RuntimeException("Failed to load image: " + stbi_failure_reason());
            }
            width = w.get(0);
            height = h.get(0);
            channels = c.get(0);
        }
        return new PixelBuffer(image, width, height, channels);
    }

    private static ByteBuffer ioResourceToByteBuffer(Class accessClass, String resource) throws IOException {
        ByteBuffer buffer;
        Path path = Paths.get(resource);
        if (Files.isReadable(path)) {
            try (SeekableByteChannel fc = Files.newByteChannel(path)) {
                buffer = createByteBuffer((int) fc.size() + 1);
                // read the whole file into the buffer
                while (fc.read(buffer) != -1) ;
            }
        } else {
            try (InputStream source = accessClass.getClassLoader().getResourceAsStream(resource);
                 ReadableByteChannel rbc = Channels.newChannel(source)) {
                buffer = createByteBuffer(8 * 1024);
                while (true) {
                    int bytes = rbc.read(buffer);
                    if (bytes == -1) break;
                    // grow the buffer when it fills up
                    if (buffer.remaining() == 0) buffer = resizeBuffer(buffer, buffer.capacity() * 3 / 2);
                }
            }
        }
        buffer.flip();
        return memSlice(buffer);
    }

    private static ByteBuffer resizeBuffer(ByteBuffer buffer, int newCapacity) {
        ByteBuffer newBuffer = BufferUtils.createByteBuffer(newCapacity);
        // copy the old contents into the larger buffer
        buffer.flip();
        newBuffer.put(buffer);
        return newBuffer;
    }

Attached is a screenshot of what I get when I run it. The image that I load is a 16x19 PNG with 3 channels that is just plain yellow.
Nope, for some reason I can't attach screenshots [Server Error]. Anyway, it's a glitched yellow quad in the middle of the window: the first 16x19 pixels from the bottom right are yellow, and the rest is random RGB. I'm currently working on an older laptop with integrated graphics; I hadn't thought of that, maybe that's the problem.

One more question about the RGB -> RGBA conversion: how does it work?
Is `format` the format of the ByteBuffer we put in, or is `internal_format` that format? Which is which?


I can't answer about the specifics, but LWJGL has STB bindings, and STB has methods for stuff like this. If you want to reinvent the wheel, I have full respect for that though.

            IntBuffer w = stack.mallocInt(1);
            IntBuffer h = stack.mallocInt(1);
            IntBuffer comp = stack.mallocInt(1);
            // requesting 4 channels makes stb_image expand RGB to RGBA for you
            ByteBuffer image = STBImage.stbi_load("someimage.png", w, h, comp, 4);