Most/all drivers, when linking a GLSL program with a single fragment shader output variable, will assign that output variable to framebuffer color attachment 0.
Okay, that answers 90% of my question: basically everyone assumes that it works this way, and paranoid people like me just call

glBindFragDataLocation(programId, 0, "fragColor");

before linking the program with the fragment shader from the original post. Thanks.
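For completeness, here is how that defensive binding might look in LWJGL 3 (a sketch; `programId` and the rest of the shader setup are assumed to exist, and a current OpenGL 3.0+ context is required):

```java
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL30.*;

// Bind the fragment shader output "fragColor" to color attachment 0
// explicitly, instead of relying on the driver's default assignment.
// This must happen BEFORE glLinkProgram to take effect.
glBindFragDataLocation(programId, 0, "fragColor");
glLinkProgram(programId);

// Optional sanity check: query the assigned location back after linking;
// it should now be 0.
int location = glGetFragDataLocation(programId, "fragColor");
```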
As for the GLFW hints regarding color depth: The graphics card supports a fixed list of render buffer configurations, most commonly 8 bits per color channel, optionally with a 24-bit depth buffer and an 8-bit stencil buffer. The hint you give to GLFW only specifies that the actual framebuffer configuration you get when GLFW creates the window will have at least that many bits per color channel (not fewer).
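For reference, those hints are set before window creation; a sketch with LWJGL 3's GLFW bindings (the values shown happen to be GLFW's defaults, and note that GLFW's documentation describes framebuffer hints as *desired* values, not hard constraints — GLFW picks the closest available config):

```java
import static org.lwjgl.glfw.GLFW.*;

// Desired bit depths for the default framebuffer.
glfwWindowHint(GLFW_RED_BITS, 8);
glfwWindowHint(GLFW_GREEN_BITS, 8);
glfwWindowHint(GLFW_BLUE_BITS, 8);
glfwWindowHint(GLFW_ALPHA_BITS, 8);
glfwWindowHint(GLFW_DEPTH_BITS, 24);
glfwWindowHint(GLFW_STENCIL_BITS, 8);

long window = glfwCreateWindow(800, 600, "Demo", 0L, 0L);
```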
Okay, but how exactly do I then control the format of my render framebuffer in an LWJGL setup?
The whole GLFWWindow obviously does a ton of magic, probably including calls to glRenderbufferStorage(), especially since resizable windows are supported.
So you are saying that the following code will probably produce a GL_RGBA8 + GL_DEPTH_COMPONENT32F buffer, but I cannot be sure? For example, it could also be a GL_SRGB8_ALPHA8 + GL_DEPTH32F_STENCIL8 one?
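You cannot be sure up front, but you can at least find out what you actually got: with a current context, the default framebuffer's bit depths, color encoding, and depth component type can be queried. A sketch, assuming an OpenGL 3.0+ context:

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL21.*;
import static org.lwjgl.opengl.GL30.*;

int[] v = new int[1];

// Bits actually allocated for the depth buffer (e.g. 24 or 32).
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_DEPTH,
        GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE, v);
System.out.println("depth bits: " + v[0]);

// Stencil bits actually allocated (0 if there is no stencil buffer).
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_STENCIL,
        GL_FRAMEBUFFER_ATTACHMENT_STENCIL_SIZE, v);
System.out.println("stencil bits: " + v[0]);

// Color encoding of the back buffer: GL_LINEAR or GL_SRGB.
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
        GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, v);
System.out.println("sRGB back buffer: " + (v[0] == GL_SRGB));

// Whether the depth buffer is floating-point or fixed-point.
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_DEPTH,
        GL_FRAMEBUFFER_ATTACHMENT_COMPONENT_TYPE, v);
System.out.println("float depth: " + (v[0] == GL_FLOAT));
```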
Basically, I would like to ensure that, in the example above, the stencil bits I do not care about are put to good use, instead of ending up with an "inferior" depth resolution and 8 dead bits.
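If exact formats really matter, the usual way out is to not render into the default framebuffer at all, but into your own FBO, where every internal format is explicit; the result is then blitted or drawn to the window. A sketch, assuming `width`/`height` and an OpenGL 3.0+ context:

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL30.*;

int fbo = glGenFramebuffers();
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Color: exactly GL_RGBA8, no driver guesswork.
int colorRbo = glGenRenderbuffers();
glBindRenderbuffer(GL_RENDERBUFFER, colorRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
        GL_RENDERBUFFER, colorRbo);

// Depth: exactly GL_DEPTH_COMPONENT32F, no stencil bits wasted.
int depthRbo = glGenRenderbuffers();
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
        GL_RENDERBUFFER, depthRbo);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    throw new IllegalStateException("FBO incomplete");
}
// On window resize, recreate the renderbuffers with the new dimensions.
```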