NVIDIA Update Breaks Shader

Started by Waffles, April 04, 2016, 12:08:35


Waffles

Greetings!

I recently updated my NVIDIA GeForce driver to version 364.72, and a shader program that worked perfectly fine before no longer compiles. I realize this is not technically an LWJGL problem (at least I don't think so), but I am completely stuck on this issue. You guys have already been a tremendous help in the past, so I figured it was worth a shot. I haven't changed a thing in the shader itself since the update, yet now I get the following error:

error: Type mismatch between variable "i_Color" matched by location "1"


The vertex shader has a variable 'out vec4 o_Data1' bound to location 1, while the fragment shader has a variable 'in vec4 i_Color' bound there as well. I'll post my shaders below, but as you'll see, they're quite straightforward. I'm using them for font rendering from a single texture, allowing the font to change color in the fragment shader. I'm not sure what could possibly be going wrong; I've got other programs with a similar structure that still work after the update. Note that these shaders were code-generated. That's why I use layout locations on all of my data: it made the generated shaders much easier to mix and match.

My vertex shader:

#version 450 core
 
layout(location = 0)
uniform mat4 u_Matrix;
 
layout(location = 0)
in vec4 i_Vertex;
 
layout(location = 3)
in mat4 i_Matrix;
 
layout(location = 2)
in vec4 i_Data1;
 
layout(location = 1)
out vec4 o_Data1;
 
layout(location = 1)
in vec2 i_Data0;
 
layout(location = 0)
out vec2 o_Data0;
 
void main()
{
    o_Data1 = i_Data1;
    o_Data0 = i_Data0;
 
    gl_Position = u_Matrix * i_Matrix * vec4(i_Vertex.x + 0.0, i_Vertex.y - 0.0, i_Vertex.z, 1.0);
}


My fragment shader:

#version 450 core
 
layout(location = 1)
uniform sampler2D u_Texture;
 
layout(location = 1)
in vec4 i_Color;
 
layout(location = 0)
in vec2 i_Coord;
 
out vec4 o_Color;
 
void main()
{
    o_Color = texelFetch(u_Texture, ivec2(i_Coord), 0);
    if(o_Color.a > 0.0)
    {
        o_Color.rgb = i_Color.rgb * (1.0 - o_Color.rgb);
    }
 
    if(o_Color.a == 0.0)
    {
        discard;
    }
}


Does anyone have an idea what could be going wrong here? Thanks in advance!

Kai

This is obvious: the interface between your vertex and fragment shaders does not define `i_Color` in both of them.
Only the fragment shader has that definition, so the compiler, or rather the linker, does not know where that value is coming from, since the vertex shader is missing an identical out variable definition.
You cannot use layout locations to bind vertex-to-fragment interface variables.
Only matching by name is done there.
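
A minimal sketch of what that matching-by-name fix would look like for the shaders above (assuming the varying can simply be renamed; `v_Color` is a name I made up for illustration):

```glsl
// Vertex shader: the out variable carries the same name as the
// fragment shader's in variable, so the linker matches them by name.
layout(location = 2) in vec4 i_Data1;
out vec4 v_Color;

void main()
{
    v_Color = i_Data1;
    // ... rest of the vertex shader unchanged ...
}
```

```glsl
// Fragment shader: identical name, identical type.
in vec4 v_Color;
out vec4 o_Color;

void main()
{
    o_Color = v_Color;
}
```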

CoDi

In addition to what Kai said: in my experience, NVidia drivers are a lot more "tolerant" of programmer mistakes, or laziness, e.g. by falling back to reasonable default values in case you forget to set some states explicitly. It's probably something they just made less fault-tolerant with the driver update.

Other drivers (esp. Intel) are not nearly as forgiving. It's a good idea to test your GL code on different GPUs as often as possible.

Waffles

That is incredibly surprising. So you mean to say I've been using the layout qualifier wrong all along? I thought the whole point of it was to make sure the linker knows which values to link to what? I actually got this idea from the OpenGL wiki; here, it says you can use this qualifier to allow different names in your shaders.

Quote
For example, given a vertex shader that provides these outputs:

layout(location = 0) out vec4 color;
layout(location = 1) out vec2 texCoord;
layout(location = 2) out vec3 normal;


This allows the consuming shader to use different names and even types:

layout(location = 0) in vec4 diffuseAlbedo;
layout(location = 1) in vec2 texCoord;
layout(location = 2) in vec3 cameraSpaceNormal;


This still results in an interface match.


Am I perhaps misunderstanding what the wiki is explaining here?

EDIT: I just realized I likely missed a key element in the explanation, being

Quote
...between the outputs of one program and the inputs of the next...

So the layout qualifier I am using would only work between programs, and not between shaders within one program? However, here it says I can in fact create a program containing only my vertex shader, and another containing only my fragment shader, and then link them dynamically. So theoretically, if I did that, my layout qualifiers would work as I'd expect? That seems quite counterintuitive... Or perhaps I really have a fundamental misunderstanding of how this whole linking process works.
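
For reference, the setup described there (one program per stage, mixed on a pipeline) looks roughly like this. This is a hedged C sketch against the GL 4.1+ separate shader objects API; it assumes a current GL context, source strings named `vertexSrc`/`fragmentSrc`, and omits all error checking:

```c
/* Build one separable program per stage directly from its source string. */
GLuint vs = glCreateShaderProgramv(GL_VERTEX_SHADER,   1, &vertexSrc);
GLuint fs = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fragmentSrc);

/* Mix and match the stages on a program pipeline object. */
GLuint pipeline;
glGenProgramPipelines(1, &pipeline);
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT,   vs);
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fs);

/* Bind the pipeline instead of calling glUseProgram before drawing. */
glBindProgramPipeline(pipeline);
```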

Kai

In summary: The "location" layout qualifier is _only_ used to specify the binding locations of the interface between vertex specification and vertex shader as well as between the fragment shader and the framebuffer attachments. Nothing else.
It is therefore explicitly _not_ used to specify the locations of variables in the interface between successive shader stages, such as between vertex and fragment stages.


EDIT: The above is wrong. See below statement from Zeno.

Waffles

I'm sorry, but I think you're not quite right here. It says so explicitly in the official specification (section 4.4.1, bottom of page 57):

Quote
For example,

layout(location = 3) in vec4 normal;
const int start = 6;
layout(location = start + 2) in vec4 v;


will establish that the shader input normal is assigned to vector location number 3 and v is assigned location number 8. For vertex shader inputs, the location specifies the number of the generic vertex attribute from which input values are taken. For inputs of all other shader types, the location specifies a vector number that can be used to match against outputs from a previous shader stage, even if that shader is in a different program object.

Kai

Oh, yeah. Sorry. :) Seems like it. In that case I deny everything I've said and assert the opposite. :)
I will check that with a demo program and the latest Nvidia driver. Thanks for the correction!

Waffles

That's not an issue at all; I know you're trying to help, and thank you for that! I actually posted the same issue on the OpenGL and NVidia forums, but unsurprisingly the people who immediately jump to the rescue are you guys. :P I realise that, technically speaking, fixing this one shader is just a matter of dropping my layout locations and matching by name instead... But in the big picture they were helping a lot in abstracting away some of the OpenGL stuff into neat little OOP classes, so it would bug me immensely if I had to give that up.

Kai

Lol, this has got to be a bug introduced in the latest Nvidia drivers.
Try redeclaring your o_Data0 and the corresponding i_Coord as vec4 (and changing the assignment to o_Data0 = vec4(i_Data0, 0.0, 0.0);). Then it works.
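
Applied to the shaders above, that workaround would look something like this (a sketch; only the vec2 varying pair changes, everything else stays as posted):

```glsl
// Vertex shader: widen the vec2 varying to vec4.
layout(location = 0)
out vec4 o_Data0;

// ...in main():
//     o_Data0 = vec4(i_Data0, 0.0, 0.0);
```

```glsl
// Fragment shader: widen the matching input the same way,
// and take only the xy components when sampling.
layout(location = 0)
in vec4 i_Coord;

// ...in main():
//     o_Color = texelFetch(u_Texture, ivec2(i_Coord.xy), 0);
```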

Waffles

Wow. Just wow. That took you all of 10 minutes. :P You're right, it works like a charm. The little bit of iffy code is going to bug the hell out of me, but at least it works and we can continue on from here. ;D You sir, are amazing; if I had a hat I would tip it. Thank you. I guess I should file an NVidia bug report of some sort? :-\

Kai

Yes, I'm getting old and slow. :D And also yes, definitely file a bug with Nvidia. Your small vertex and fragment shaders should be more than enough to reproduce.

Waffles