Why do fragment shaders actually work?
« on: September 13, 2020, 16:49:50 »
I got a semi-philosophical question about LWJGL and GLFW, related to fragment shaders.  ;)

Now, OpenGL bundles all of its computed color values together with metadata, and we call the result a fragment (yes, oversimplified). Fragments are written into framebuffers so that we can e.g. display them.

Each framebuffer can have a completely different structure, i.e. different color and z-buffer depths, stencil bits, and so on...

Let's assume the following fragment shader is in effect:

Code: [Select]
#version 440

//results of the previous vertex-shader
in vec2 textureCoord;

//we will sample a texture unit with a bound texture
uniform sampler2DArray textureSampler;

//will hold the final result of this shader
out vec4 fragColor;

//very simple shader code, load color from texture given the coordinates:
void main() { fragColor = texture(textureSampler, textureCoord); }

I recently got interested in why this actually works, since fragColor is not a built-in variable. Rather, to quote khronos.org:

  > User-defined outputs from a fragment shader represent a series of "colors". These color values are directed into specific buffers based on the glDrawBuffers state.

Now, my shader above (and half the internet) never explicitly links any output variable to a buffer, let alone a specific part of that buffer; OpenGL auto-assigns this in a nice do-what-I-meant sort of way, despite khronos.org claiming:

  > The fragment color assigned is completely arbitrary and may be different for different programs that are linked, even if they use the exact same fragment shader code.

Now, obviously this is a spec-writer's point of view, and in practice this works well for most standard use cases, otherwise all these tutorials out there would never work. But what happens if we start modifying the structure of the buffer?
In our LWJGL program, that means providing different hints to GLFW. By default, we use something like:

Code: [Select]
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_API);
glfwWindowHint(GLFW_CONTEXT_CREATION_API, GLFW_NATIVE_CONTEXT_API);

glfwWindowHint(GLFW_RED_BITS, 8);
glfwWindowHint(GLFW_GREEN_BITS, 8);
glfwWindowHint(GLFW_BLUE_BITS, 8);
glfwWindowHint(GLFW_ALPHA_BITS, 8);
glfwWindowHint(GLFW_DEPTH_BITS, 24);
glfwWindowHint(GLFW_STENCIL_BITS, 8);

Let's say I do not want any stencil usage, but want to put those bits to other good use:

Code: [Select]
glfwWindowHint(GLFW_DEPTH_BITS, 32);
glfwWindowHint(GLFW_STENCIL_BITS, 0);

Still works, at least for all my test scenes. If I halve my color precision like this...

Code: [Select]
glfwWindowHint(GLFW_RED_BITS, 4);
glfwWindowHint(GLFW_GREEN_BITS, 4);
glfwWindowHint(GLFW_BLUE_BITS, 4);
glfwWindowHint(GLFW_ALPHA_BITS, 4);

...it still works nicely. And I found that pretty surprising  ???

QUESTIONS:
  • do my hints actually have an effect or is this nonsense?
  • does everybody just trust the driver/device to assign fragment-shader output results in standard use cases? (I find that somewhat dangerous at the moment, just a feeling...)
  • what exactly is the best practice with LWJGL and GLFW to make such output-result assignments?

Offline KaiHH

Re: Why do fragment shaders actually work?
« Reply #1 on: September 13, 2020, 18:54:41 »
What is relevant here is not a particular draw buffer's internal color format (1, 2, 4 or 8 bits per color channel) but how OpenGL assigns GLSL fragment shader output variables to framebuffer color attachments. A framebuffer simply has N color attachments and optionally a depth-stencil attachment. Most/all drivers, when linking a GLSL program with a single fragment shader output variable, will assign that output variable to framebuffer color attachment 0.
And with the default framebuffer for a window, color attachment 0 is bound to a backbuffer swapchain color image, which eventually gets displayed when you call glfwSwapBuffers() (depending on the number of backbuffer swapchain images). Depth-stencil attachments are orthogonal to color attachments. You can have a color attachment (being a swapchain backbuffer image) while not having any depth-stencil buffer. That's fine. That will not change the fact that framebuffer attachment 0 is still bound to a backbuffer swapchain image in the default framebuffer for the GLFW window.
What you can control with layout qualifiers on fragment shader output variables (or with a call to glBindFragDataLocation) is which framebuffer color attachment a particular fragment shader output variable binds to. And with glDrawBuffers() you can control which color attachment (0...N) is bound to which draw buffer (e.g. the backbuffer for the default framebuffer, or something like GL_COLOR_ATTACHMENT0 for custom framebuffers).
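
A rough sketch of that mapping in LWJGL terms (the framebuffer handle fbo and the second color attachment here are made up purely for illustration):

Code: [Select]
// Hypothetical custom framebuffer with two color attachments:
// fragment output color number 0 is written to GL_COLOR_ATTACHMENT0,
// color number 1 to GL_COLOR_ATTACHMENT1.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glDrawBuffers(new int[] { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 });

// Default framebuffer: color number 0 simply goes to the back buffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDrawBuffer(GL_BACK);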

As for the GLFW hints regarding color depth: The graphics card supports a list of fixed render buffer configurations, most commonly 8 bits per color channel with optionally a 24-bit depth buffer and an 8-bit stencil buffer. The hint you give to GLFW only specifies that the actual framebuffer configuration you get when GLFW creates the window will have at least that many bits per color channel (but not fewer).
On X-based systems you can run glxinfo to see the framebuffer configurations supported by the graphics card.
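
You can also simply ask OpenGL what you actually got; a quick sketch (assuming a current GL 3.0+ context, the usual LWJGL static imports, and a window that was created with depth and stencil buffers):

Code: [Select]
// Query the default framebuffer; GL_BACK_LEFT addresses the back buffer in core profile.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
int redBits     = glGetFramebufferAttachmentParameteri(GL_FRAMEBUFFER, GL_BACK_LEFT, GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE);
int depthBits   = glGetFramebufferAttachmentParameteri(GL_FRAMEBUFFER, GL_DEPTH,     GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE);
int stencilBits = glGetFramebufferAttachmentParameteri(GL_FRAMEBUFFER, GL_STENCIL,   GL_FRAMEBUFFER_ATTACHMENT_STENCIL_SIZE);
System.out.println("red: " + redBits + ", depth: " + depthBits + ", stencil: " + stencilBits);
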
« Last Edit: September 13, 2020, 18:59:09 by KaiHH »

Re: Why do fragment shaders actually work?
« Reply #2 on: September 17, 2020, 12:48:34 »
  > Most/all drivers, when linking a GLSL program with a single fragment shader output variable, will assign that output variable to framebuffer color attachment 0.

Okay, that answers 90% of my question: basically everyone assumes that it works this way, and paranoid people like me just do a

Code: [Select]
glBindFragDataLocation(programId, 0, "fragColor");

before linking the program with the fragment shader in the original post. Thanks.
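
For my own notes, spelled out the order looks roughly like this (a sketch; programId, vertexShader and fragmentShader are assumed to exist already):

Code: [Select]
glAttachShader(programId, vertexShader);
glAttachShader(programId, fragmentShader);

// Bind the output variable to color number 0 *before* linking,
// otherwise the call has no effect on the linked program.
glBindFragDataLocation(programId, 0, "fragColor");

glLinkProgram(programId);
if (glGetProgrami(programId, GL_LINK_STATUS) != GL_TRUE)
    throw new IllegalStateException(glGetProgramInfoLog(programId));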

  > As for the GLFW hints regarding color depth: The graphics card supports a list of fixed render buffer configurations, most commonly 8 bits per color channel with optionally a 24-bit depth buffer and an 8-bit stencil buffer. The hint you give to GLFW only specifies that the actual framebuffer configuration you get when GLFW creates the window will have at least that many bits per color channel (but not fewer).

Okay, but how do I then control the exact format of my render framebuffer in an LWJGL setup?

The whole GLFWwindow obviously does a ton of magic behind the scenes, presumably including calls to glRenderbufferStorage(), especially since resizable windows are supported.

So you are saying that the following code

Code: [Select]
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_API);
glfwWindowHint(GLFW_CONTEXT_CREATION_API, GLFW_NATIVE_CONTEXT_API);

glfwWindowHint(GLFW_RED_BITS, 8);
glfwWindowHint(GLFW_GREEN_BITS, 8);
glfwWindowHint(GLFW_BLUE_BITS, 8);
glfwWindowHint(GLFW_ALPHA_BITS, 8);
glfwWindowHint(GLFW_DEPTH_BITS, 32);
glfwWindowHint(GLFW_STENCIL_BITS, 0);

will probably produce a GL_RGBA8 + GL_DEPTH_COMPONENT32F buffer, but I cannot be sure? It could just as well be a GL_SRGB8_ALPHA8 + GL_DEPTH32F_STENCIL8 one?
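
At least the sRGB part I could probably check myself with a query like this (a sketch, assuming a GL 3.0+ context):

Code: [Select]
// GL_LINEAR vs. GL_SRGB distinguishes a GL_RGBA8-style from a GL_SRGB8_ALPHA8-style back buffer.
int encoding = glGetFramebufferAttachmentParameteri(GL_FRAMEBUFFER, GL_BACK_LEFT, GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING);
System.out.println(encoding == GL_SRGB ? "sRGB back buffer" : "linear back buffer");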

Basically, I would like to ensure that, in the example above, the stencil bits I do not care about are put to good use, instead of ending up with an "inferior" depth resolution and 8 dead bits.

Offline KaiHH

Re: Why do fragment shaders actually work?
« Reply #3 on: September 17, 2020, 13:55:52 »
You can query the actual depth bits via:
Code: [Select]
glGetFramebufferAttachmentParameteri(GL_FRAMEBUFFER, GL_DEPTH, GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE)
on any (including the default) framebuffer.
I don't think you will get anything better than a 24-bit depth buffer for the default framebuffer on any hardware. Just look at the output of glxinfo.
If you explicitly want a 32-bit depth buffer, you are going to have to create a custom framebuffer object.
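
Roughly like this (a sketch; width and height are placeholders, and it assumes a GL 3.0+ LWJGL context):

Code: [Select]
int fbo = glGenFramebuffers();
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Color attachment 0: an RGBA8 texture that can later be sampled or blitted to the window.
int colorTex = glGenTextures();
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, (ByteBuffer) null);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

// Depth attachment: an explicit 32-bit float depth renderbuffer, no stencil.
int depthRbo = glGenRenderbuffers();
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRbo);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    throw new IllegalStateException("framebuffer incomplete");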