LWJGL Forum

Programming => OpenGL => Topic started by: SinTh0r4s on March 11, 2021, 19:32:35

Title: Integer Flags for Geometry shader
Post by: SinTh0r4s on March 11, 2021, 19:32:35
Hi,

I have a project in progress where I do some point-to-triangle generation in a geometry shader. To do this properly I need to pass some flag-like values to the shader. For example, I need to pass the lighting information of the surrounding 6 points. Lighting information is split into two 4-bit integers per block, so I can pack the lighting information for 4 blocks into a single 32-bit integer. I tried to transfer integers to the shaders, but never got it to work. I am not sure if there was a bit-format problem or whatever mistake(s) I made :-(
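
To make the packing concrete, it looks roughly like this on the Java side (simplified sketch; the method names are just for illustration, not the actual project code):
Code: [Select]
// Two 4-bit light values (e.g. block light and sky light) form one 8-bit entry.
static int packLight(int blockLight, int skyLight) {
    return ((skyLight & 0xF) << 4) | (blockLight & 0xF);
}

// Four 8-bit entries fit into a single 32-bit int.
static int packFourBlocks(int entry0, int entry1, int entry2, int entry3) {
    return ((entry3 & 0xFF) << 24) | ((entry2 & 0xFF) << 16) | ((entry1 & 0xFF) << 8) | (entry0 & 0xFF);
}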

Currently I use floats and it works just fine, but I can't help feeling it is hacky. This is my current vertex shader (shortened):
Code: [Select]
layout (location = 1) in float light0;

out VS_OUT {
    vec2 lightXMinus;
    vec2 lightXPlus;
    vec2 lightYMinus;
} vs_out;

void main(void) {
    int tmp = int(round(light0));
    vs_out.lightXMinus = toLightLUTCoord((tmp >> 16) & 0xFF);
    vs_out.lightXPlus = toLightLUTCoord((tmp >> 8) & 0xFF);
    vs_out.lightYMinus = toLightLUTCoord(tmp & 0xFF);
}
int(round(float)). 'nuff said  ::)

... and the obvious setup for the VAO:
Code: [Select]
GL20.glVertexAttribPointer(1, 1, GL11.GL_FLOAT, false, STRIDE, OFFSET);
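
For completeness, this is roughly how the packed value ends up in the vertex buffer on the Java side (simplified; the method and variable names are placeholders, not the actual project code):
Code: [Select]
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;

static FloatBuffer buildVertex(float x, float y, float z, int packedLight) {
    FloatBuffer vertexData = BufferUtils.createFloatBuffer(4);
    vertexData.put(x).put(y).put(z);
    // Stored as a float *value*, so int(round(light0)) recovers it in the shader.
    // This stays exact as long as the packed value fits into the 24-bit mantissa
    // of a 32-bit float (i.e. stays below 2^24).
    vertexData.put((float) packedLight);
    vertexData.flip();
    return vertexData;
}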

When I went for
Code: [Select]
layout (location = 1) in int light0;
and a VAO setup of
Code: [Select]
GL20.glVertexAttribPointer(1, 1, GL11.GL_INT, false, STRIDE, OFFSET);
I never got it to work. Is there any example code available? I tried it with IntBuffer, FloatBuffer, ByteBuffer, and Float.intBitsToFloat(), and never had any success.
I am not even sure that GPUs are required to support 32-bit integers. I would be glad if somebody knows something!
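
For reference, the kind of setup I was aiming for with the integer path: as far as I understand, integer attributes need GL30.glVertexAttribIPointer rather than glVertexAttribPointer, though I have not gotten this variant working yet. Sketch only:
Code: [Select]
// The "I" variant passes the attribute through as an integer, whereas
// glVertexAttribPointer with GL_INT converts the value to float.
GL30.glVertexAttribIPointer(1, 1, GL11.GL_INT, STRIDE, OFFSET);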

Cheers,
Sin

Project Code: https://github.com/SinTh0r4s/HydroEnergy/