GL15.glBufferData: FloatBuffer not rendering but float[] works

Started by RDM, June 22, 2017, 09:13:40



Hi all,

I've recently started dabbling in OpenGL and was following one of the tutorials on the wiki. The following happened to me using the OpenGL context 4.0 core profile on a macOS Sierra 10.12.5 MBP with an integrated graphics card capable of running up to OpenGL 4.1, with JDK 1.8.0_121 and LWJGL 3.1.2.

The triangle tutorial is very simple and basically does the following:
- use glfw to create a window
- create a vertex shader and fragment shader
- create a VBO describing the triangle's position and color
- draw the triangle

The tutorial code ran without issues (though it took me a while to figure out I had to move GL.createCapabilities() from the render loop to my init method, before trying to create the shaders etc.!), but the screen remained black. After hours I found that the cause was the GL15.glBufferData method. I was using the overloaded version that takes a FloatBuffer as its second parameter. Replacing this FloatBuffer with the backing float[] solved my issue and suddenly my triangle was visible on screen.

I tried creating the FloatBuffer in numerous ways:
- via MemoryStack.stackPush().mallocFloat(...).put(..).put(..)....flip()
- via FloatBuffer.allocate(..).put(..).put(..)....flip()
- via FloatBuffer.wrap(new float[]{...})
but in none of these cases did the triangle appear on screen. However, when using the overloaded version of GL15.glBufferData which directly takes the float[] as second parameter, the triangle is shown.

Below you can find a minimal, verifiable code example (2 Java files and 2 shader files) to reproduce this issue. I say issue, but I'm not sure it even is one; I might just be doing something wrong with the buffer, but I don't see what.

import org.lwjgl.Version;
import org.lwjgl.glfw.GLFWErrorCallback;
import org.lwjgl.glfw.GLFWVidMode;
import org.lwjgl.opengl.GL;
import org.lwjgl.system.MemoryStack;

import java.nio.IntBuffer;

import static org.lwjgl.glfw.Callbacks.glfwFreeCallbacks;
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL30.*;
import static org.lwjgl.system.MemoryStack.stackPush;
import static org.lwjgl.system.MemoryUtil.NULL;

/**
 * Created by RDM on 21/06/2017.
 */
public class Test3 {
    // The window handle
    private long window;
    // the vertex array
    private int vao;
    // a vertex buffer object
    private int vbo;
    // a vertex shader
    private int vertexShader;
    // a fragment shader
    private int fragmentShader;
    // a shader program
    private int shaderProgram;

    public static void main(String[] args) {
        new Test3().run();
    }

    public void run() {
        System.out.println("Hello LWJGL " + Version.getVersion() + "!");

        init();
        loop();

        System.out.println("Starting clean up routine!");

        // Free the window callbacks and destroy the window
        glfwFreeCallbacks(window);
        glfwDestroyWindow(window);

        // Terminate GLFW and free the error callback
        glfwTerminate();
        glfwSetErrorCallback(null).free();
    }

    private void init() {
        // Setup an error callback. The default implementation
        // will print the error message in System.err.
        GLFWErrorCallback.createPrint(System.err).set();

        // Initialize GLFW. Most GLFW functions will not work before doing this.
        if (!glfwInit())
            throw new IllegalStateException("Unable to initialize GLFW");

        // Configure GLFW
        glfwDefaultWindowHints(); // optional, the current window hints are already the default
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE); // the window will stay hidden after creation
        glfwWindowHint(GLFW_RESIZABLE, GLFW_TRUE); // the window will be resizable
//        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
//        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
//        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);

        // Create the window
        window = glfwCreateWindow(300, 300, "Hello World!", NULL, NULL);
        if (window == NULL)
            throw new RuntimeException("Failed to create the GLFW window");

        // Setup a key callback. It will be called every time a key is pressed, repeated or released.
        glfwSetKeyCallback(window, (window, key, scancode, action, mods) -> {
            if (key == GLFW_KEY_ESCAPE && action == GLFW_RELEASE)
                glfwSetWindowShouldClose(window, true); // We will detect this in the rendering loop
        });

        // Get the thread stack and push a new frame
        try (MemoryStack stack = stackPush()) {
            IntBuffer pWidth = stack.mallocInt(1); // int*
            IntBuffer pHeight = stack.mallocInt(1); // int*

            // Get the window size passed to glfwCreateWindow
            glfwGetWindowSize(window, pWidth, pHeight);

            // Get the resolution of the primary monitor
            GLFWVidMode vidmode = glfwGetVideoMode(glfwGetPrimaryMonitor());

            // Center the window
            glfwSetWindowPos(
                    window,
                    (vidmode.width() - pWidth.get(0)) / 2,
                    (vidmode.height() - pHeight.get(0)) / 2
            );
        } // the stack frame is popped automatically

        // Make the OpenGL context current
        glfwMakeContextCurrent(window);
        // Enable v-sync
        glfwSwapInterval(1);

        // This line is critical for LWJGL's interoperation with GLFW's
        // OpenGL context, or any context that is managed externally.
        // LWJGL detects the context that is current in the current thread,
        // creates the GLCapabilities instance and makes the OpenGL
        // bindings available for use.
        GL.createCapabilities();

        vao = glGenVertexArrays();
        glBindVertexArray(vao);

        glViewport(0, 0, 300, 300);

        createShaders();
        createVBO();

        // Make the window visible
        glfwShowWindow(window);
    }

    private void loop() {
        // Set the clear color
        glClearColor(0.6f, 0.6f, 0.6f, 0.0f);

        // Run the rendering loop until the user has attempted to close
        // the window or has pressed the ESCAPE key.
        while (!glfwWindowShouldClose(window)) {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the framebuffer

//            System.out.println("drawing VAO");
            glDrawArrays(GL_TRIANGLES, 0, 3);

            glfwSwapBuffers(window); // swap the color buffers

            // Poll for window events. The key callback above will only be
            // invoked during this call.
//            glfwPollEvents();
            // "If instead you only need to update your rendering once you have received new input, glfwWaitEvents is a better choice. It waits until at least one event has been received, putting the thread to sleep in the meantime, and then processes all received events. This saves a great deal of CPU cycles and is useful for, for example, many kinds of editing tools."
            glfwWaitEvents();
        }
    }

    private void createVBO() {

        try (MemoryStack stack = MemoryStack.stackPush()) {
            float[] vertexData = {-.8f, -.8f, 0, 1, 1, 0, 0, 1,
                    0, .8f, 0, 1, 0, 1, 0, 1,
                    .8f, -.8f, 0, 1, 0, 0, 1, 1};
            vbo = glGenBuffers();
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
//            glBufferData(GL_ARRAY_BUFFER, FloatBuffer.wrap(vertexData), GL_STATIC_DRAW);
            glBufferData(GL_ARRAY_BUFFER, vertexData, GL_STATIC_DRAW);
            int positionLocation = glGetAttribLocation(shaderProgram, "position");
            int colorLocation = glGetAttribLocation(shaderProgram, "color");
            int floatSize = 4;
            glVertexAttribPointer(positionLocation, 4, GL_FLOAT, false, 8 * floatSize, 0);
            glVertexAttribPointer(colorLocation, 4, GL_FLOAT, false, 8 * floatSize, 4 * floatSize);
            glEnableVertexAttribArray(positionLocation);
            glEnableVertexAttribArray(colorLocation);

            int e = glGetError();
            if (e != GL_NO_ERROR) {
                throw new RuntimeException("Error creating VBO");
            }

            System.out.println("Initialized VBO (pos=" + positionLocation + ", col=" + colorLocation + ")");
        }
    }

    private void createShaders() {
        Shader vertexShader = Shader.loadShader(GL_VERTEX_SHADER, "triangle.vs");
        this.vertexShader = vertexShader.getID();
        System.out.println("Created vertex shader");
        Shader fragmentShader = Shader.loadShader(GL_FRAGMENT_SHADER, "triangle.fs");
        this.fragmentShader = fragmentShader.getID();
        System.out.println("Created fragment shader");

        shaderProgram = glCreateProgram();
        System.out.println("Created shader program");
        glAttachShader(shaderProgram, this.vertexShader);
        glAttachShader(shaderProgram, this.fragmentShader);
        glLinkProgram(shaderProgram);
        System.out.println("Linked shader program");
        if (glGetProgrami(shaderProgram, GL_LINK_STATUS) != GL_TRUE) {
            throw new RuntimeException(glGetProgramInfoLog(shaderProgram));
        }
        glUseProgram(shaderProgram);
        System.out.println("Using shader program");
    }

    private void destroyShaders() {
        glDetachShader(shaderProgram, vertexShader);
        glDetachShader(shaderProgram, fragmentShader);
    }

    private void destroyVBO() {
        // position
        glDisableVertexAttribArray(glGetAttribLocation(shaderProgram, "position"));
        // color
        glDisableVertexAttribArray(glGetAttribLocation(shaderProgram, "color"));

        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glDeleteBuffers(vbo);
    }
}


I use the Shader utility class (copied directly from the SilverTiger LWJGL3 tutorial on the wiki) to load 2 shaders:


#version 400 core

in vec4 position;
in vec4 color;

out vec4 vertexColor;

void main() {
    vertexColor = color;
    gl_Position = position;
}


#version 400

in vec4 vertexColor;

out vec4 fragColor;

void main() {
    fragColor = vertexColor;
}

The Shader utility class file containing Shader.loadShader:


import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

import static org.lwjgl.opengl.GL11.GL_TRUE;
import static org.lwjgl.opengl.GL20.*;

/**
 * This class represents a shader.
 *
 * @author Heiko Brumme
 */
public class Shader {

    /** Stores the handle of the shader. */
    private final int id;

    /**
     * Creates a shader with specified type. The type in the tutorial should be
     * either <code>GL_VERTEX_SHADER</code> or <code>GL_FRAGMENT_SHADER</code>.
     *
     * @param type Type of the shader
     */
    public Shader(int type) {
        id = glCreateShader(type);
    }

    /**
     * Creates a shader with specified type and source and compiles it. The type
     * in the tutorial should be either <code>GL_VERTEX_SHADER</code> or
     * <code>GL_FRAGMENT_SHADER</code>.
     *
     * @param type   Type of the shader
     * @param source Source of the shader
     * @return Compiled Shader from the specified source
     */
    public static Shader createShader(int type, CharSequence source) {
        Shader shader = new Shader(type);
        shader.source(source);
        shader.compile();

        return shader;
    }

    /**
     * Loads a shader from a file.
     *
     * @param type Type of the shader
     * @param path File path of the shader
     * @return Compiled Shader from specified file
     */
    public static Shader loadShader(int type, String path) {
        StringBuilder builder = new StringBuilder();

        File file = new File(path);
        try (InputStream in = new FileInputStream(file);
//        try (InputStream in = Shader.class.getResourceAsStream(path);
             BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = reader.readLine()) != null) {
                builder.append(line).append(System.lineSeparator());
            }
        } catch (IOException ex) {
            throw new RuntimeException("Failed to load a shader file: <" + file.getAbsolutePath() + ">"
                                               + System.lineSeparator() + ex.getMessage());
        }
        CharSequence source = builder.toString();

        return createShader(type, source);
    }

    /**
     * Sets the source code of this shader.
     *
     * @param source GLSL Source Code for the shader
     */
    public void source(CharSequence source) {
        glShaderSource(id, source);
    }

    /** Compiles the shader and checks its status afterwards. */
    public void compile() {
        glCompileShader(id);

        checkStatus();
    }

    /** Checks if the shader was compiled successfully. */
    private void checkStatus() {
        int status = glGetShaderi(id, GL_COMPILE_STATUS);
        if (status != GL_TRUE) {
            throw new RuntimeException(glGetShaderInfoLog(id));
        }
    }

    /** Deletes the shader. */
    public void delete() {
        glDeleteShader(id);
    }

    /**
     * Getter for the shader ID.
     *
     * @return Handle of this shader
     */
    public int getID() {
        return id;
    }
}


Quote from: Evan407 on June 22, 2017, 09:22:45
mapped buffers and glsl

I'm sorry, could you elaborate a little bit? I'd like to understand what I was doing wrong but I'm afraid this is a tad too little to go on ^^


NIO buffers are either backed by Java arrays on the heap or direct (backed by off-heap memory). LWJGL supports direct buffers exclusively. Read the Memory FAQ for more information.

Of the ways you mentioned, FloatBuffer.allocate and FloatBuffer.wrap produce heap buffers and therefore do not work. Using the MemoryStack should have worked though, unless you had another bug in your code.
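
For anyone hitting the same wall, the difference is easy to check with isDirect(). A minimal sketch using only the JDK's java.nio classes (no LWJGL types involved, and the variable names are just for illustration):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class DirectVsHeap {
    public static void main(String[] args) {
        // Heap buffers: backed by a Java float[] on the garbage-collected heap,
        // so native code has no stable address to read from
        FloatBuffer heap = FloatBuffer.allocate(3);
        FloatBuffer wrapped = FloatBuffer.wrap(new float[]{1f, 2f, 3f});

        // Direct buffer: off-heap memory with a real native address
        FloatBuffer direct = ByteBuffer.allocateDirect(3 * Float.BYTES)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();

        System.out.println(heap.isDirect());    // false
        System.out.println(wrapped.isDirect()); // false
        System.out.println(direct.isDirect());  // true, which is what LWJGL expects
    }
}
```

MemoryStack.mallocFloat and BufferUtils.createFloatBuffer both produce direct buffers, which is why they are the ones that work with glBufferData.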


Quote from: spasi on June 22, 2017, 09:29:50
NIO buffers are either backed by Java arrays on the heap or direct (backed by off-heap memory). LWJGL supports direct buffers exclusively. Read the Memory FAQ for more information.

Of the ways you mentioned, FloatBuffer.allocate and FloatBuffer.wrap produce heap buffers and therefore do not work. Using the MemoryStack should have worked though, unless you had another bug in your code.

Thanks. I can confirm that the MemoryStack-allocated FloatBuffer does work; I must have had a different bug. Going forward, would you suggest I use float[] when available, or use the MemoryStack to allocate a FloatBuffer? The FAQ makes no mention of directly using primitive arrays.


Quote from: RDM on June 22, 2017, 09:41:44The FAQ makes no mention of directly using primitive arrays.

That's because primitive array overloads were an afterthought, added to take advantage of Hotspot Critical Natives. I have since regretted that decision. Critical Natives are an undocumented and unsupported feature, and the implementation has bugs. I have submitted fixes for these bugs to OpenJDK, but they won't incorporate them until after Java 9 is complete.

In general, using primitive arrays is recommended when you're doing some kind of work on the array data, other than passing it to native code. Also keep in mind that Critical Natives are only used after JIT compilation. The native call has to happen frequently to trigger JIT. Otherwise you'd be using standard JNI array handling, which usually means an extra copy of the data. So, NOT recommended for glBufferData.


I would use ByteBuffer.allocateDirect(numberOfFloats*4).asFloatBuffer() to get a float buffer that works if you want to stick with nio


Quote from: darkyellow on June 22, 2017, 12:54:09I would use ByteBuffer.allocateDirect(numberOfFloats*4).asFloatBuffer() to get a float buffer that works if you want to stick with nio

Which would not work properly, because NIO buffers are instantiated with a big endian order by default. The correct sequence is ByteBuffer.allocateDirect(numberOfFloats*4).order(ByteOrder.nativeOrder()).asFloatBuffer(). Since this is error-prone, LWJGL has a shortcut: BufferUtils.createFloatBuffer(numberOfFloats).
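
To make the byte-order pitfall concrete, here is a small JDK-only sketch (class name is just illustrative) that writes 1.0f through both a default-order view and a native-order view and dumps the raw bytes:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class OrderPitfall {
    public static void main(String[] args) {
        // allocateDirect() returns a BIG_ENDIAN buffer by default...
        ByteBuffer bigEndian = ByteBuffer.allocateDirect(4);
        // ...while order(nativeOrder()) matches the hardware (little-endian on x86)
        ByteBuffer nativeOrder = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());

        // A FloatBuffer view inherits the byte order of its backing ByteBuffer
        bigEndian.asFloatBuffer().put(0, 1.0f);
        nativeOrder.asFloatBuffer().put(0, 1.0f);

        // 1.0f is 0x3F800000; on little-endian hardware the two byte layouts differ,
        // and the GPU would read the big-endian bytes as garbage
        System.out.printf("default order: %02X %02X %02X %02X%n",
                bigEndian.get(0), bigEndian.get(1), bigEndian.get(2), bigEndian.get(3));
        System.out.printf("native order:  %02X %02X %02X %02X%n",
                nativeOrder.get(0), nativeOrder.get(1), nativeOrder.get(2), nativeOrder.get(3));
    }
}
```

On a little-endian machine the first line prints 3F 80 00 00 and the second 00 00 80 3F, which is why the missing order(...) call silently breaks rendering.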


I'm fully aware of the order clause, but to keep things basic I just wanted to show you can create FloatBuffers which are compatible with opengl usage. I would also not recommend using the BufferUtils helper class though. Firstly it's good to know how you are actually allocating memory, and secondly when using UBO's it is useful to populate the buffers using a FloatBuffer but to use the underlying ByteBuffer when passing data to the graphics card because certain graphics cards don't like FloatBuffer when using UBO's.


QuoteI would also not recommend using the BufferUtils helper class though. Firstly it's good to know how you are actually allocating memory
Everyone can just jump into the implementation of BufferUtils and immediately see what's going on and then leave it at that. There is a difference between 'knowing about it' and 'efficiently using it'. When you start working, you want to have the simplest and least-error-prone solution. And that's what BufferUtils is.
Yes, there are other ways to allocate memory more efficiently and geared towards the actual usecase/lifecycle. However, BufferUtils is in many many cases a very good solution.

Quote...because certain graphics cards don't like FloatBuffer when using UBO's
This is like saying: Certain graphics cards don't like Java.
The graphics card and the OpenGL driver has absolutely no knowledge about what kind of memory you give to it. Memory is just: a contiguous sequence of bytes. Nothing more.
FloatBuffer and all other typed buffer views just allow you to write/read IEEE 754 floats/doubles and two's complement integer values into/from a bunch of bytes.
The UBO specification in the shader is the only way for OpenGL to tell "how it should _interpret_ the byte stream". When you use floating-point types like vec3, vec4, mat4 etc. then you _are_ essentially working with floats and thusly a FloatBuffer is the most convenient way to specify/format the values appropriately.

You are right in the sense that: Whenever you are _not_ working with floating point types, such as int, ivec2, ivec3 or even use a mixture of integral and floating point values then yes, a ByteBuffer would be appropriate, then.
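
To illustrate that last point: a single ByteBuffer can interleave integer and floating-point values, which is exactly where a pure FloatBuffer stops being convenient. A JDK-only sketch of packing a hypothetical uniform block (the block layout, offsets, and names are made up for illustration, loosely following std140-style alignment):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class MixedUboPacking {
    public static void main(String[] args) {
        // Hypothetical uniform block: int lightCount; vec4 lightColor;
        // with the vec4 aligned to a 16-byte boundary
        ByteBuffer ubo = ByteBuffer.allocateDirect(32).order(ByteOrder.nativeOrder());

        ubo.putInt(0, 3);        // lightCount at byte offset 0
        ubo.putFloat(16, 1.0f);  // lightColor.r at byte offset 16
        ubo.putFloat(20, 0.5f);  // lightColor.g
        ubo.putFloat(24, 0.25f); // lightColor.b
        ubo.putFloat(28, 1.0f);  // lightColor.a

        // A FloatBuffer view could not have written the int without bit tricks,
        // but either view reads the same underlying bytes:
        System.out.println(ubo.getInt(0));              // 3
        System.out.println(ubo.asFloatBuffer().get(4)); // 1.0 (byte offset 16 = float index 4)
    }
}
```

Whichever buffer type you hand to OpenGL, the driver only ever sees the same contiguous bytes; the typed views just decide how you write them.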