
Recent Posts

Pages: 1 2 3 [4] 5 6 ... 10
OpenGL / Re: glMapBufferRange is limited to 2GB because of LWJGL
« Last post by KaiHH on March 16, 2021, 11:57:26 »
Accessing native/off-heap memory bigger than this limit is indeed a big issue in Java and has been known for quite some time in the Java world, because (currently) the NIO Buffer API is the only standard way to do that; everything else is hacked together, mostly using the sun.misc.Unsafe API. So it's not so much a limitation of LWJGL as it is one of the Java platform and its API. LWJGL just provides methods returning the "standard" way of working with native/off-heap memory, by returning java.nio.ByteBuffer instances.

However, as a work-around for the lacking NIO Buffer API, LWJGL provides "unsafe" methods (with a lowercase 'n' prefix): https://javadoc.lwjgl.org/org/lwjgl/opengl/GL30C.html#nglMapBufferRange(int,long,long,int)
It returns the direct virtual memory address of the mapped memory region.
Obviously, you also cannot use a single NIO Buffer to read from or write to such a memory region, so you have to either use org.lwjgl.system.MemoryUtil.memPut/Get*(address + offset, value) or create a NIO Buffer for a particular region using MemoryUtil.memByteBuffer(address, size).
Another option is to "page" the mappings: Map the first 2GB, write/read it, then map the next 2GB, and so forth.
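For illustration, the paging approach boils down to simple offset arithmetic. The sketch below (class and method names are made up for illustration) only shows the slicing into NIO-sized windows; the base `address` would come from nglMapBufferRange, and each window could then be wrapped with MemoryUtil.memByteBuffer(address + offset, (int) size):

```java
import java.util.ArrayList;
import java.util.List;

public class MappedRegionSlicer {
    // A single NIO Buffer can address at most Integer.MAX_VALUE bytes.
    public static final long MAX_CHUNK = Integer.MAX_VALUE;

    /** Returns the (offset, size) pairs covering totalSize bytes of a mapped region. */
    public static List<long[]> slices(long totalSize) {
        List<long[]> result = new ArrayList<>();
        for (long offset = 0; offset < totalSize; offset += MAX_CHUNK) {
            long size = Math.min(MAX_CHUNK, totalSize - offset);
            result.add(new long[]{offset, size});
            // With LWJGL, each slice could then be viewed as:
            // ByteBuffer view = MemoryUtil.memByteBuffer(address + offset, (int) size);
        }
        return result;
    }
}
```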
OpenGL / glMapBufferRange is limited to 2GB because of LWJGL
« Last post by freduni on March 16, 2021, 11:53:14 »

According to the OpenGL spec, glMapBufferRange can access the contents of any buffer, including buffers larger than 2GB.

(GLintptr and GLsizeiptr are 64 bits wide on a 64-bit system, which LWJGL appropriately takes care of through 'long' parameters.)

In LWJGL, however, glMapBufferRange returns a java.nio.ByteBuffer, which is a big issue.

A ByteBuffer has attributes such as 'position', 'limit' and 'capacity' that are defined in byte units. Unfortunately, ByteBuffer uses 'int's (32-bit signed integers) for these values, capping them at 2^31 - 1.

Because of this, ByteBuffer access won't work with buffers larger than 2GB.

I tested the following: if you call glMapBufferRange with a very large range (much larger than 2GB), the call succeeds (as per the OpenGL spec). However, the returned ByteBuffer is useless, as it cannot be accessed past its 'limit'.

I cannot use .asIntBuffer() as a workaround either, as the returned ByteBuffer's capacity is negative and therefore invalid.

What can I do to use glMapBufferRange with large buffers?

Lightweight Java Gaming Library / Re: mouse speed depends on fps
« Last post by Jakes on March 14, 2021, 04:47:17 »
Well, first off, it depends on what you're using to retrieve the input data, and from what I can see you're not using any kind of event handler within your own application.

When are you calling this method? In the rendering loop?
If so, then it will obviously depend on how many times your renderer calls it per second (and thus be FPS-dependent).

My suggestion is to use (or take a look at) the glfwSetCursorPosCallback() method in order to retrieve the event when the mouse is moved.
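To make the idea concrete, here is a minimal sketch (the class and method names are hypothetical) of accumulating mouse movement in an event handler instead of sampling once per rendered frame. In LWJGL you would feed `onCursorPos` from the glfwSetCursorPosCallback handler, so the measured movement no longer depends on the frame rate:

```java
public class MouseTracker {
    private double lastX, lastY;
    private double deltaX, deltaY;
    private boolean first = true;

    /** Called for every cursor-position event, independent of FPS. */
    public void onCursorPos(double x, double y) {
        if (!first) {
            deltaX += x - lastX;
            deltaY += y - lastY;
        }
        first = false;
        lastX = x;
        lastY = y;
    }

    /** Called once per frame; returns and resets the accumulated delta. */
    public double[] consumeDelta() {
        double[] d = {deltaX, deltaY};
        deltaX = 0;
        deltaY = 0;
        return d;
    }
}
```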

I'm fairly new to LWJGL (3) and right now I'm working with the inputs (keyboard and mouse) as well as the cursor pointer image (custom and hidden), but so far I'm having some issues using the canvas with custom cursors.

This is what I have:

An initiator code for Canvas java component:
Code: [Select]
void init() {
    ds = JAWT_GetDrawingSurface(this, awt.GetDrawingSurface());
    int lock = JAWT_DrawingSurface_Lock(ds, ds.Lock());
    if ((lock & JAWT_LOCK_ERROR) != 0) throw new IllegalStateException("ds->Lock() failed");

    try {
        dsi = JAWT_DrawingSurface_GetDrawingSurfaceInfo(ds, ds.GetDrawingSurfaceInfo());
        if (dsi == null) throw new IllegalStateException("ds->GetDrawingSurfaceInfo() failed");

        dsi_win = JAWTWin32DrawingSurfaceInfo.create(dsi.platformInfo());

        context = createContextGLFW(dsi_win);
    } finally {
        JAWT_DrawingSurface_Unlock(ds, ds.Unlock());

        fb = new FrameBuffer(getWidth(), getHeight());
        initiated = true;
    }
}

And when I set the cursor, either via the Java Component.setCursor() or via GLFW.glfwSetCursor(), I get a cursor conflict while rendering the canvas, as if the canvas cursor and the GLFW cursor were fighting for dominance.

Is there any way to overcome this?

Best regards,
OpenGL / Integer Flags for Geometry shader
« Last post by SinTh0r4s on March 11, 2021, 19:32:35 »

I have a project in progress where I do some point-to-triangle generation in a geometry shader. To do this properly I need to pass some flag-like values to the shader. For example, I need to pass the lighting information of the surrounding 6 points. Lighting information is split into two 4-bit integers, so I can pack the lighting information for 4 blocks into a single 32-bit integer. I tried to transfer integers to the shaders, but never got it to work. I am not sure if there was a bit-format problem or whatever mistake(s) I made :-(
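The packing described above boils down to plain bit arithmetic. A small pure-Java sketch (helper names are made up for illustration), mirroring the `(tmp >> shift) & 0xFF` unpacking done in the shader:

```java
public class LightPacking {
    /** Packs a block's two 4-bit light values into one 8-bit slot. */
    public static int packBlock(int sky, int block) {
        return ((sky & 0xF) << 4) | (block & 0xF);
    }

    /** Packs four such 8-bit slots into a single 32-bit int. */
    public static int packFour(int b0, int b1, int b2, int b3) {
        return (b0 & 0xFF) << 24 | (b1 & 0xFF) << 16 | (b2 & 0xFF) << 8 | (b3 & 0xFF);
    }

    /** Extracts slot i (0 = highest byte), the GLSL side being (tmp >> shift) & 0xFF. */
    public static int unpack(int packed, int i) {
        return (packed >> (24 - 8 * i)) & 0xFF;
    }
}
```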

Currently I use floats and it works just fine, but I can't help feeling it's hacky. This is my current vertex shader (shortened):
Code: [Select]
layout (location = 1) in float light0;

out VS_OUT {
    vec2 lightXMinus;
    vec2 lightXPlus;
    vec2 lightYMinus;
    // ...
} vs_out;

void main(void) {
    int tmp = int(round(light0));
    vs_out.lightXMinus = toLightLUTCoord((tmp >> 16) & 0xFF);
    vs_out.lightXPlus = toLightLUTCoord((tmp >> 8) & 0xFF);
    vs_out.lightYMinus = toLightLUTCoord(tmp & 0xFF);
    // ...
}
int(round(float))... 'nuff said  ::)

... and the obvious setup for the vao:
Code: [Select]
GL20.glVertexAttribPointer(1, 1, GL11.GL_FLOAT, false, STRIDE, OFFSET);

When I went for
Code: [Select]
layout (location = 1) in int light0;
and a VAO setup of
Code: [Select]
GL20.glVertexAttribPointer(1, 1, GL11.GL_INT, false, STRIDE, OFFSET);
I never got it to work. Is there any example code available? I tried it with IntBuffer, FloatBuffer, ByteBuffer and Float.intBitsToFloat() and never had any success.
I am not even sure that GPUs are required to support 32-bit integers. Would be glad if somebody knows something!
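One general OpenGL fact worth keeping in mind here: for `in int` attributes, core GL provides glVertexAttribIPointer (note the 'I'), which passes the data through unconverted, whereas glVertexAttribPointer with GL_INT converts the values to float. As for the Float.intBitsToFloat() route, a pure-Java check (class name hypothetical) illustrates why it is fragile: the round trip is only guaranteed bit-exact for patterns that don't decode to NaN, and a float attribute path can canonicalize NaN bit patterns in transit:

```java
public class IntBitsRoundTrip {
    /** True if the int survives a trip through float representation bit-exactly. */
    public static boolean survives(int bits) {
        float f = Float.intBitsToFloat(bits);
        return Float.floatToRawIntBits(f) == bits;
    }
}
```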


Project Code: https://github.com/SinTh0r4s/HydroEnergy/
OpenGL / Mapping texture over half of the quad.
« Last post by Aisaaax on March 09, 2021, 10:10:00 »
I'm trying to create a simple overlay that is basically just a fullscreen quad occupying the entire screen space.
I want to map some textures on it.

Now, can I map a texture over only some part of the quad, leaving the rest of the quad unrendered?

I know how to TILE a texture several times over a surface. But can I set it up so that instead of repeating it - it just draws nothing in that space?
Or would the only way to do this be to create a smaller mesh, or subdivide the mesh into smaller quads?
OpenGL / openGL not rendering VAOs
« Last post by Rotten_potato on March 08, 2021, 04:55:55 »
I'm attempting to write my first game engine with LWJGL, and I have a class to easily create new VAOs for different shapes. I tried to implement index buffer rendering and it stopped working. I decided to temporarily delete the index buffer code, but still nothing is being rendered to the screen. Everything else is working fine.

Code: [Select]
public class GraphicsObject {
    private int vaoID, vboID, stride, offset;
    public int vertexCount;

    public GraphicsObject() {
        vaoID = glGenVertexArrays();
        vboID = glGenBuffers();
        // ...
    }

    public void bind() {
        // ...
    }

    public void unbind() {
        // ...
    }

    public void setVertexPositions(float[] vertexPositions) {
        vertexCount = vertexPositions.length / 3;
        stride = vertexPositions.length * 4;
        offset = glGetBufferParameteri(GL_ARRAY_BUFFER, GL_BUFFER_SIZE);

        glBufferData(GL_ARRAY_BUFFER, ArrayUtils.convertToBuffer(vertexPositions), GL_STATIC_DRAW);
        // ...
    }
}
That's the code to create new VAOs, here is the code to render them:
Code: [Select]
public class Renderer {

    // Temporary code, just for testing purposes.

    private static GraphicsObject polygon;

    public static void init() {
        polygon = new GraphicsObject();
        // ...
    }

    public static void setColor(float r, float g, float b) {
        // ...
    }

    public static void drawPolygon(float[] vertices) {
        // ...
        glDrawArrays(GL_POLYGON, 0, polygon.vertexCount);
        // ...
    }
}
And lastly, here is the code for making the window:
Code: [Select]
public class Window {

    private long window;
    private int width, height;

    float[] vertices;
    String title;

    public Window(int width, int height, String title) { // creates the window and displays it
        this.height = height;
        this.width = width;
        this.title = title;

        vertices = new float[]{
                -0.5f, 0.5f, 1.0f,
                // Top triangle
                // ...
                // Bottom triangle
                // ...
        };

        window = glfwCreateWindow(this.width, this.height, this.title, 0, 0);
        // ...
    }

    private void update() { // updates the window
        // ...
    }

    private void destroy() { // destroys the window and terminates glfw
        // ...
    }

    public void createInputListeners() {
        glfwSetKeyCallback(window, new Input());
        // ...
    }
}
I don't believe the problem is in the GraphicsObject class, as I rolled it back to how it was before it stopped working, thanks to a screenshot I had of it. However, I'm not so sure about the renderer code.
Lightweight Java Gaming Library / Optimization suggestions?
« Last post by SinTh0r4s on March 07, 2021, 09:45:38 »
Hello LWJGL forum,

First time here! I've been modding around in Minecraft for some time now, and while the performance I got is decent, I was wondering if I could get a bit more. Any hint would be greatly appreciated!

In tech mods one can store energy in single blocks, and that gets dull quickly, so I want to be able to do hydro-energy storage, aka a dam. But to get this to a solid performance level I need to go around the render pipeline. And I did: classic Minecraft tessellation (for my special fake water) happens in my geometry shader, and I can render the water level wherever I want it with a uniform.

For this, I collect water blocks in a FloatBuffer separately for each 16x16x16 chunk, and if the chunk contains any water, the FloatBuffer is dumped into a VBO with a corresponding VAO. Whenever a chunk is no longer displayed, I reassign the VAO+VBO so I don't need to reallocate the memory. Also, each VAO is set up once per VBO and never touched again. VBO updates are done with glBufferSubData.
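The reuse scheme described above can be sketched as a small free list of VAO+VBO pairs (a minimal sketch with hypothetical names; the suppliers stand in for the GL calls that generate the objects, e.g. glGenVertexArrays/glGenBuffers):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.IntSupplier;

public class BufferPool {
    private final Deque<int[]> free = new ArrayDeque<>();
    private final IntSupplier genVao, genVbo;

    public BufferPool(IntSupplier genVao, IntSupplier genVbo) {
        this.genVao = genVao;
        this.genVbo = genVbo;
    }

    /** Returns {vaoId, vboId}, reusing a parked pair when one is available. */
    public int[] acquire() {
        int[] pair = free.poll();
        return pair != null ? pair : new int[]{genVao.getAsInt(), genVbo.getAsInt()};
    }

    /** Parks a pair for reuse instead of deleting it. */
    public void release(int[] pair) {
        free.push(pair);
    }
}
```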

At render time I sort the chunks, bind the program and set its uniforms once for all draw calls. Then I execute the following code for each chunk:
(First the back sides and then the front sides for proper alpha blending)
Code: [Select]
GL11.glDrawArrays(GL11.GL_POINTS, 0, numWaterBlocks); // back sides

GL11.glDrawArrays(GL11.GL_POINTS, 0, numWaterBlocks); // front sides

Does anyone know how to speed up this render process? I've picked up the term "VAO decoupling" once; might that help here? I also couldn't find any way to batch draw calls across different VAOs, or to draw different VBOs through the same VAO at identical attribute positions.

Again: Happy for every suggestion.


Recently, I've been getting into LWJGL, so I am very much a beginner at LWJGL and OpenGL; please pardon me. I am trying to make stars that rotate (or a rotation system, of course), and I am having trouble with the rotation. The first sprite rotates counter-clockwise, but the second sprite does the same when it needs to rotate clockwise. This happens for all sprites I am drawing on screen.

Here is my code:

Code: [Select]
GL11.glTranslatef(0.0f, 0.0f, -300f); // Sprite 1
GL11.glRotatef(i+=0.5f, 0.0f, 0.0f, 1.0f );
drawStar2(0, 0, 0);


GL11.glPushMatrix(); // Sprite 2
GL11.glTranslatef(0.0f, 50.0f, -300f);
GL11.glRotatef(i-=0.5f, 0.0f, 0.0f, 1.0f);
drawStar0(0, 0, 0);

GL11.glTranslatef(0.0f, 50.0f, -300f);
GL11.glRotatef(i+=0.5f, 0.0f, 0.0f, 1.0f);
drawTitle(0, 0, 0);

Thanks in advance for any help,

