LWJGL Shader misunderstanding

Started by david37370, January 24, 2015, 12:34:53


david37370

Hello! This is my first post here. :)

I'm new to modern OpenGL and LWJGL 3, and I have a problem. First I'll describe what I know, and in case I'm wrong, please correct me:
There are the vertex shader, the fragment shader and the geometry shader, which I won't use. The vertex shader's input comes directly from the program's output: vertices, which can come as separate attribute lists (position, color etc.) or interleaved. The vertex shader runs on each vertex; it performs its operations (translation, zooming and other stuff that I do) and passes the correct vertices to the geometry shader, which creates the shapes - triangles, lines; you can specify the type in the glDrawElements function, GL_LINES or GL_TRIANGLES etc. Then the shapes are rasterized into pixels. The fragment shader will run on each PIXEL. Now there is a conflict: each vertex has a color, the vertex shader gets vertices, and the fragment shader gets pixels, so what is done with the vertex's color? From what I've understood, the color is interpolated from varying vertices, and that can be changed using flat or smooth before defining a variable.

Here we come to my question: if I want to make custom lighting, do I have control over that per-pixel color generation process? For example, take this game: http://www.java-gaming.org/topics/iconified/34997/view.html. How would I make lighting similar to that?

thanks!

Kai

Hello,

Quotethe vertex shader [...] passes the correct vertices to the geometry shader, which creates the shapes [...]

Almost. The primitive type (point, line, triangle, ...) is specified via the corresponding argument to an OpenGL draw call, as you said. But that is not what the geometry shader outputs; instead, it is what the geometry shader is given as input.
The geometry shader can then just pass it through to the next pipeline stages or alter the primitive type to something new (for example make triangle lists out of points).
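
For illustration, a minimal geometry shader that turns each input point into a small quad (emitted as a triangle strip) could look like the sketch below. It is written here as a Java source string, the way you would hand it to glShaderSource with LWJGL, and the offsets are just made-up values:

String geometryShaderSource =
	"#version 150 core\n" +
	"layout(points) in;\n" +
	"layout(triangle_strip, max_vertices = 4) out;\n" +
	"void main() {\n" +
	"    vec4 p = gl_in[0].gl_Position;\n" +
	"    // emit four corners around the input point\n" +
	"    gl_Position = p + vec4(-0.1, -0.1, 0.0, 0.0); EmitVertex();\n" +
	"    gl_Position = p + vec4( 0.1, -0.1, 0.0, 0.0); EmitVertex();\n" +
	"    gl_Position = p + vec4(-0.1,  0.1, 0.0, 0.0); EmitVertex();\n" +
	"    gl_Position = p + vec4( 0.1,  0.1, 0.0, 0.0); EmitVertex();\n" +
	"    EndPrimitive();\n" +
	"}\n";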

But you won't be using the geometry shader, as you said, so essentially, after being transformed by the vertex shader, your vertices are built into primitives (that argument to the GL draw call) by the built-in "primitive assembly" stage.
Those primitives are then rasterized into "fragments."

QuoteThe fragment shader will run on each PIXEL.

Essentially, yes. But there is a slight difference between a pixel and a fragment when it comes to multisampling.
When using multisampling, your fragment shader is potentially run on many fragments within the same pixel.
But without using multisampling, a fragment corresponds to the center position of a single pixel, yes.

Quotethe color is interpolated from varying vertices, and that can be changed using flat or smooth before defining a variable.

Yes, that is correct. Every attribute that can be expressed as a scalar or a 2D, 3D or 4D vector will be interpolated for a fragment between the primitive's vertices. With the "flat" qualifier, the "provoking vertex" provides the value for all fragments of the primitive. By default the provoking vertex is the last vertex of the primitive, but that convention can be altered using this OpenGL extension: https://www.opengl.org/registry/specs/EXT/provoking_vertex.txt
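
In GLSL the two qualifiers look roughly like this (sketched as a Java source string like you would feed to glShaderSource; the variable names are just placeholders):

String fragmentShaderSource =
	"#version 150 core\n" +
	"smooth in vec3 vColor; // interpolated between the primitive's vertices (the default)\n" +
	"flat   in vec3 vFlat;  // taken unchanged from the provoking vertex\n" +
	"out vec4 fragColor;\n" +
	"void main() {\n" +
	"    fragColor = vec4(vColor, 1.0);\n" +
	"}\n";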

Quoteif I want to make custom lighting, do I have control over that per-pixel color generation process?

Yes. Absolutely. You do not have to rely on a vertex color attribute to define your fragment's/pixel's color. Your fragment shader can calculate the color by any means. It's just that you can feed the fragment shader with interpolated attributes from your vertices to be able to compute the final color. That computation can include things such as "lighting" computations. For 3D-scenes this would most importantly be the "normal."
But you can compute the final fragment/pixel color however you like.
For example, when you have a point light source somewhere, you usually compute the distance between the surface point at the current fragment and the light source and attenuate your final color intensity using that distance. Additionally, you would multiply the color of the light source with the color of your vertex.
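
As a rough sketch (again as a Java source string; the uniform names, input names and the attenuation constant are made up for this example), such a fragment shader could look like this:

String lightingFragmentSource =
	"#version 150 core\n" +
	"uniform vec3 lightPos;   // position of the point light\n" +
	"uniform vec3 lightColor;\n" +
	"in vec3 worldPos;        // interpolated surface position of this fragment\n" +
	"in vec3 vertexColor;     // interpolated vertex color\n" +
	"out vec4 fragColor;\n" +
	"void main() {\n" +
	"    float dist = distance(worldPos, lightPos);\n" +
	"    float attenuation = 1.0 / (1.0 + 0.1 * dist * dist); // simple distance falloff\n" +
	"    fragColor = vec4(vertexColor * lightColor * attenuation, 1.0);\n" +
	"}\n";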

Quotehow would I make lighting similar to that?

I don't know what technique that particular game is using. Might be light maps baked within textures or might be computing the effect of a light using a shader. Dunno.

david37370

Thank you! Now I know a little bit more and can ask a slightly more specific question about the structure of my 2D game: is it recommended to make the fragment shader "universal", meaning that the same computation applies to every fragment, or is using many uniforms, like lightSourceLocation or CharacterLocation, favorable? The first approach seems better, but then the shader still can't know which pixels should be affected by interpolation, etc... I certainly still don't know enough xD

Kai

Hm... I don't understand the intent of your separation between "make the fragment shader universal (same computation for every fragment)" and "using many uniforms".

As the name "uniform" implies, the value within a uniform stays..., well... uniform (i.e. equal) over each shader invocation (i.e. fragment).

So in general your fragment shader will always perform the same computation on each fragment. There is no way around that. :)

But what you probably meant is whether you should parameterize your shader using uniforms so that it knows where the light sources are.
And that is an absolute yes. :)
If you want to perform lighting calculations within your fragment shader, that shader would want to know about the light locations.
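
In LWJGL that parameterization is just a matter of looking up the uniform locations once and updating them whenever the lights change. A small sketch, assuming a linked shader program and a light position; shaderProgram, lightX and lightY are placeholder names here:

// assuming: import static org.lwjgl.opengl.GL20.*;
int lightPosLoc   = glGetUniformLocation(shaderProgram, "lightPos");
int lightColorLoc = glGetUniformLocation(shaderProgram, "lightColor");

glUseProgram(shaderProgram);
glUniform3f(lightPosLoc, lightX, lightY, 0.0f); // update each frame if the light moves
glUniform3f(lightColorLoc, 1.0f, 0.9f, 0.7f);   // a warm light color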

Maybe do a Google search for "OpenGL GLSL tutorial" or "OpenGL shader tutorial" and you will find good resources on shaders with OpenGL. For example, the one from Lighthouse3D is pretty good.

Oh, and it is always good not to search for LWJGL-specific tutorials on the web, but instead just to search for "OpenGL" and apply what you find (which is likely C/C++ or Object Pascal code, but the OpenGL functions and even more so the GLSL code stay the same) to Java/LWJGL.

quew8

Yes to what @Kai said.

But if you are talking about having a uniform that, for example, decides whether or not the shader should apply lighting, i.e. you have an if statement in the shader to light or not to light, then that is a big no-no.

david37370

Kai, thank you, and of course I don't search for LWJGL on Google; there's not much on LWJGL 3 yet :D
The correct way for lighting, you say, is giving uniforms for the light sources, probably location, color and magnitude. But light is affected by walls that can block it, or by other effects, and the fragment shader only knows about the location and color of the specific pixel it is running on at the moment, apart from the uniforms. So it seems like I have to upload a lot of data through uniforms... that doesn't feel clean. Is that the right way?
I'm just too used to making projects only to realize that I have been using the wrong techniques, and with OpenGL I'd prefer to know the right way from the beginning. Thank you for all the help! :)

And quew8, I can define the magnitude and set it to 0 if I want, but I guess it depends on the case.

quew8

Quote from: david37370 on January 28, 2015, 16:09:33
I can define the magnitude and set it to 0

And that would be the right way to do it. It's just that it's quite different from the way we programmers would normally go about things, and most people don't think of it, because why should they.

Quote from: david37370 on January 28, 2015, 16:09:33
The correct way for lighting, you say, is giving uniforms for the light sources, probably location, color and magnitude. But light is affected by walls that can block it, or by other effects, and the fragment shader only knows about the location and color of the specific pixel it is running on at the moment, apart from the uniforms.

If you wanted to render lighting in an exactly physically correct way, you would have to go with a completely different technique called raytracing, where you literally trace the path of a photon from a light source as it bounces around different surfaces until it reaches the viewer's "eye". While some top-end games use aspects of this, to my knowledge no one has managed to simulate a scene with full physical accuracy in real time, so it has only limited uses in games.

In games we cheat to get something that looks mostly like the same effect. As an example: shadows. One of the ways this is done (there are many) is to render the entire scene from the perspective of the light source into a texture attached to the depth attachment of a framebuffer (as a pre-render). Then in the actual render, you pass this shadow texture to the fragment shader, and for each fragment you work out where in the shadow texture this fragment would be and how far away from the light source it is. You sample the shadow texture at that position, and if the fragment's distance from the light source is less than or equal to the value you sampled (plus a small bias), then you know that this fragment is lit by that light source.

This is a process you have to repeat for each and every light, and as you can imagine it won't work very well for a non-directional light source without some more cheating (spherical UV mapping, for example). So for every problem there is a solution, but I wouldn't worry too much, since most of these techniques tend to be additive rather than replacing huge amounts of code. What you're doing now is right.
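
Coming back to that shadow test: in the fragment shader the comparison typically boils down to a helper like this (a sketch only, again as a Java source string to drop into a fragment shader; the names and the bias value are illustrative):

String shadowTestSnippet =
	"uniform sampler2D shadowMap;  // depth texture rendered from the light's point of view\n" +
	"float shadowFactor(vec4 lightSpacePos) {\n" +
	"    vec3 proj = lightSpacePos.xyz / lightSpacePos.w;\n" +
	"    proj = proj * 0.5 + 0.5;                     // map to the [0,1] texture/depth range\n" +
	"    float closestDepth = texture(shadowMap, proj.xy).r;\n" +
	"    return proj.z <= closestDepth + 0.005 ? 1.0 : 0.0; // small bias against shadow acne\n" +
	"}\n";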

david37370

I have another question: is it possible to store both vertices with texture coordinates and vertices without? I want to use images, but I don't want all of my vertices to have texture coords, and I have no idea how to do it... maybe by using texture coordinates that won't show the image?
By the way, should I post new questions here or in new threads, if I don't find the answer on Google?

quew8

Normally you would start a new topic but for a little thing like this it's fine.

So you want to draw some geometry with textures and some without? Well, in the fixed-function pipeline you would disable GL_TEXTURE_2D. In the programmable pipeline the equivalent is to have another shader program which doesn't use textures and to switch to that one when you don't want textures.
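
A sketch of what that switching looks like in LWJGL (texturedProgram, plainProgram, texID and the draw methods are made-up names here):

// assuming: import static org.lwjgl.opengl.GL11.*;
//           import static org.lwjgl.opengl.GL20.*;
glUseProgram(texturedProgram);   // fragment shader samples a texture
glBindTexture(GL_TEXTURE_2D, texID);
drawTexturedShapes();            // your own draw calls

glUseProgram(plainProgram);      // fragment shader just uses the vertex color
drawUntexturedShapes();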

david37370

OK, last off-topic question: :)
In my game I have many kinds of shapes that all consist of vertices, and when I call glDrawElements I need to tell it what to draw - triangles, lines, quads. Also, each shape has its own texture. So I can think of three ways to implement this:

First way: make one VBO of shapes, and for drawing call each shape's render method. glDrawElements will be called n times when I have n shapes. Example for a line:
public int render(int offset) {
	// bind this shape's texture and draw its indices, starting at the given
	// byte offset into the bound index buffer (with GL_UNSIGNED_BYTE indices
	// the byte offset equals the index offset)
	glBindTexture(GL_TEXTURE_2D, texID);
	glDrawElements(GL11.GL_LINES, 2, GL_UNSIGNED_BYTE, offset);
	return offset + 2;
}

With triangles it will return offset+3, for example.

Second way: make one VBO, with the first x vertices assigned to lines, then y vertices assigned to triangles, and so on. That way I won't have to call the drawing method on each object. Drawing will look like this:
glDrawElements(GL11.GL_LINES, x, GL_UNSIGNED_BYTE, 0);
glDrawElements(GL11.GL_TRIANGLES, y, GL_UNSIGNED_BYTE, x);

Not sure about textures here.

Third way: same as the second, but with a separate VBO for each shape type... this will make it a little more complex and I don't think I'm going to do that now.

What I'm asking is: are those three ways legitimate and good? Is calling glDrawElements multiple times bad, or is it OK? The first way works well for me for now, but I'm not doing intensive drawing yet. Also, if I use the second way, how can I use a different texture for each shape?
I'm afraid of the first way though, because if I delete shapes, the space in the VBO will stay empty forever, since not all shapes are the same size, and if I delete a line I couldn't insert a triangle there later.
Long question... sorry, I'm just trying to learn. Should I post this in a new thread? I think this could be important to other beginners in VBO programming.
thanks :)

quew8

Do what works. You have something that works so use that. If it is too slow for your needs then move to an optimized method. Do not optimize prematurely.