Old fixed pipeline textures won't render under LWJGL 3

I'm trying to set up my first textured quad under LWJGL 3 using the old fixed-function pipeline. The new shader-based pipeline isn't a good fit for what I'm trying to do. Here's the code:

Code:

public void repaint() {
    glClearColor(0.2f, 0, 0.2f, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity(); // resets any previous projection matrices
    glOrtho(0, 640, 480, 0, 1, -1);
    glMatrixMode(GL_MODELVIEW);

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, (Integer)texture.get());
    glBegin(GL_TRIANGLES);

    glTexCoord2f(1, 0);
    glVertex2i(450, 10);
    glTexCoord2f(0, 0);
    glVertex2i(10, 10);
    glTexCoord2f(0, 1);
    glVertex2i(10, 450);

    glTexCoord2f(0, 1);
    glVertex2i(10, 450);
    glTexCoord2f(1, 1);
    glVertex2i(450, 450);
    glTexCoord2f(1, 0);
    glVertex2i(450, 10);

    glEnd();

    glfwSwapBuffers(window);
}


This seems like pretty straightforward code to me. The result is a plain white quad where the image is supposed to be rendered. The texture is 400x300 (but any other POT or NPOT texture gives the same result, before you ask). I can't spot anything weird here. For reference, this is how I create the texture (it appears to work: I get all the IDs correctly and glGetError returns 0):

Code:
public TextureHandler getTexture(String textureName) throws IOException
{
    IntBuffer width = BufferUtils.createIntBuffer(1), height = BufferUtils.createIntBuffer(1), comp = BufferUtils.createIntBuffer(1);

    ByteBuffer dataBuffer = utils.Utils.ioResourceToByteBuffer(textureName.substring(1), 8192);
    if (STBImage.stbi_info_from_memory(dataBuffer, width, height, comp) == 0) {
        String reason = STBImage.stbi_failure_reason();
        log.error("Image couldn't be loaded from path = " + textureName, new Exception(reason));
        return null;
    }
    ByteBuffer imageBuffer = STBImage.stbi_load_from_memory(dataBuffer, width, height, comp, 4);
    if (imageBuffer == null) {
        String reason = STBImage.stbi_failure_reason();
        log.error("Image couldn't be loaded from path = " + textureName, new Exception(reason));
        return null;
    }

    int w = width.get(0), h = height.get(0), c = comp.get(0);

    int texId = GL11.glGenTextures();
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, texId);
    if (c == 3) {
        if ((w & 3) != 0)
            GL11.glPixelStorei(GL11.GL_UNPACK_ALIGNMENT, 2 - (w & 1));
        GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, w, h, 0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, imageBuffer);
    } else {
        GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA, w, h, 0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, imageBuffer);

        int error;
        if ((error = GL11.glGetError()) != 0) {
            log.debug("Error loading texture = " + error);
        }

        GL11.glEnable(GL11.GL_BLEND);
        GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
    }

    textureH = new TextureHandler(texId, w, h);
    _textureNameMap.put(textureName, textureH);

    return textureH;
}

It's safe to assume this works correctly. Just for reference, here is the code for ioResourceToByteBuffer:

Code:

    public static ByteBuffer ioResourceToByteBuffer( String resource, int bufferSize )
            throws IOException
    {
        ByteBuffer buffer;

        File file = new File( resource );
        if ( file.isFile() )
        {
            FileInputStream fis = new FileInputStream( file );
            FileChannel fc = fis.getChannel();
            buffer = BufferUtils.createByteBuffer( ( int )fc.size() + 1 );

            while ( fc.read( buffer ) != -1 ) ;

            fc.close();
            fis.close();
        }
        else
        {
            buffer = BufferUtils.createByteBuffer( bufferSize );

            InputStream source = Thread.currentThread()
                        .getContextClassLoader().getResourceAsStream( resource );
            if ( source == null )
                throw new FileNotFoundException( resource );

            try
            {
                ReadableByteChannel rbc = Channels.newChannel( source );
                try
                {
                    while ( true )
                    {
                        int bytes = rbc.read( buffer );
                        if ( bytes == -1 )
                            break;
                        if ( buffer.remaining() == 0 )
                            buffer = resizeBuffer( buffer, buffer.capacity() * 2 );
                    }
                }
                finally
                {
                    rbc.close();
                }
            }
            finally
            {
                source.close();
            }
        }

        buffer.flip();
        return buffer;
    }


    /**
     * Resize the buffer to a new capacity, preserving its contents.
     * @param buffer      the buffer to copy from
     * @param newCapacity the capacity of the new buffer
     * @return the new buffer, containing the old buffer's data and ready for further writes
     */
    private static ByteBuffer resizeBuffer( ByteBuffer buffer, int newCapacity )
    {
        ByteBuffer newBuffer = BufferUtils.createByteBuffer( newCapacity );
        buffer.flip();
        newBuffer.put( buffer );
        return newBuffer;
    }


Is there anything really obvious that I'm missing?

Kai

Re: Old fixed pipeline textures won't render under LWJGL 3
« Reply #1 on: May 20, 2016, 17:42:08 »
Your texture is most likely incomplete, because you never specify a minification filter. The default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, which requires a complete mipmap chain; since you only upload level 0, the texture is incomplete, and sampling an incomplete texture in the fixed-function pipeline behaves as if texturing were disabled, which is why you get a plain white quad. Please try the following after binding your texture:
Code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
See the first paragraph of this article: https://www.opengl.org/wiki/Common_Mistakes#Creating_a_complete_texture
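If you don't need mipmaps at all, a minimal sketch of a complete setup (reusing the texId from your getTexture) is to set both filters right after binding:
Code:
// Right after GL11.glBindTexture(GL11.GL_TEXTURE_2D, texId):
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
Note that GL_TEXTURE_MAG_FILTER already defaults to GL_LINEAR, so strictly only the MIN filter line is required; setting both just makes the intent explicit.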
« Last Edit: May 20, 2016, 17:46:26 by Kai »

Re: Old fixed pipeline textures won't render under LWJGL 3
« Reply #2 on: May 23, 2016, 08:46:36 »
That was it. Thank you!
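For anyone finding this thread later, the missing piece was exactly that one line, placed right after the glBindTexture call in getTexture:
Code:
int texId = GL11.glGenTextures();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, texId);
// Without this, the default MIN filter expects mipmaps and the texture stays incomplete:
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
The quad now renders the image as expected.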