GLSL Shaders

Started by CodeBunny, September 19, 2011, 03:03:59

CodeBunny

So, I'm trying to learn how to use shaders, but the tutorial on the wiki (http://lwjgl.org/wiki/index.php?title=GLSL_Shaders_with_LWJGL) is not working for me.

First off, the code, when copy-pasted, was wrong. :-\ Pretty badly so. Aside from a typo that meant the code had compile errors, the main class never even used the class that ran the shaders. Did whoever wrote this tutorial even test-run the code? It really should be fixed.

Secondly, when I fixed those problems (had the main class create an instance of the Box class and use it, fixed the typo), the shaders I created by copy and paste still didn't work. Neither the vertex shader nor the fragment shader got past printLogInfo(); the returned log length was 0 in both cases. I also checked that the files were being read correctly; they were.

Since I really don't have a very good idea of how to work with shaders, the fact that this tutorial is broken is very detrimental to my ability to figure out what's going on.

Here's the code I'm using (it has been very slightly modified from the code on the wiki):

Main.java:
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.util.glu.GLU;

/**
 * Sets up the Display, the GL context, and runs the main game loop.
 *
 * @author Stephen Jones
 */
public class Main {

	private boolean done = false; // game runs until done is set to true
	
	private Box box;

	public Main() {
		init();
		while (!done) {
			if (Display.isCloseRequested())
				done = true;
			render();
			Display.update();
		}

		Display.destroy();
	}

	private void render() {
		GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
		GL11.glLoadIdentity();
		box.draw();
	}

	private void init() {
		int w = 1024;
		int h = 768;

		try {
			Display.setDisplayMode(new DisplayMode(w, h));
			Display.setVSyncEnabled(true);
			Display.setTitle("Shader Setup");
			Display.create();
		} catch (Exception e) {
			System.out.println("Error setting up display");
			e.printStackTrace();
			System.exit(0);
		}

		GL11.glViewport(0, 0, w, h);
		GL11.glMatrixMode(GL11.GL_PROJECTION);
		GL11.glLoadIdentity();
		GLU.gluPerspective(45.0f, ((float) w / (float) h), 0.1f, 100.0f);
		GL11.glMatrixMode(GL11.GL_MODELVIEW);
		GL11.glLoadIdentity();
		GL11.glShadeModel(GL11.GL_SMOOTH);
		GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
		GL11.glClearDepth(1.0f);
		GL11.glEnable(GL11.GL_DEPTH_TEST);
		GL11.glDepthFunc(GL11.GL_LEQUAL);
		GL11.glHint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);
		box = new Box();
	}

	public static void main(String[] args) {
		EngineDirectory.init();
		LWJGLNatives.load();
		new Main();
	}
}


Box.java:
import org.lwjgl.opengl.GL11;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.nio.ByteBuffer;
import java.nio.IntBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.ARBShaderObjects;
import org.lwjgl.opengl.ARBVertexShader;
import org.lwjgl.opengl.ARBFragmentShader;

/**
 * The vertex and fragment shaders are set up when the box object is
 * constructed. They are applied to the GL state prior to the box being drawn,
 * and released from that state after drawing.
 * 
 * @author Stephen Jones
 */
public class Box {

	/*
	 * if the shaders are set up OK we can use them; otherwise we fall back to
	 * the default fixed-function settings
	 */
	private boolean useShader = true;

	/*
	 * the shader program, to which the vertex and fragment shaders are
	 * attached. All are initialized to 0 so we can check that GL has assigned
	 * each a unique non-zero handle
	 */
	private int shader = 0;
	private int vertShader = 0;
	private int fragShader = 0;

	public Box() {

		/*
		 * create the shader program. If OK, create vertex and fragment shaders
		 */
		shader = ARBShaderObjects.glCreateProgramObjectARB();

		if (shader != 0) {
			vertShader = createVertShader("shaders/screen.vert");
			fragShader = createFragShader("shaders/screen.frag");
		} else
			useShader = false;

		/*
		 * if the vertex and fragment shaders were set up successfully, attach
		 * them to the shader program, link the shader program, and validate
		 */
		if (vertShader != 0 && fragShader != 0) {
			ARBShaderObjects.glAttachObjectARB(shader, vertShader);
			ARBShaderObjects.glAttachObjectARB(shader, fragShader);
			ARBShaderObjects.glLinkProgramARB(shader);
			ARBShaderObjects.glValidateProgramARB(shader);
			useShader = printLogInfo(shader);
		} else
			useShader = false;
	}

	/*
	 * If the shaders were set up successfully, we use them. Otherwise we run
	 * the normal fixed-function drawing code.
	 */
	public void draw() {
		if (useShader) {
			ARBShaderObjects.glUseProgramObjectARB(shader);
		}
		GL11.glLoadIdentity();
		GL11.glTranslatef(0.0f, 0.0f, -10.0f);
		GL11.glColor3f(1.0f, 1.0f, 1.0f);// white

		GL11.glBegin(GL11.GL_QUADS);
		GL11.glVertex3f(-1.0f, 1.0f, 0.0f);
		GL11.glVertex3f(1.0f, 1.0f, 0.0f);
		GL11.glVertex3f(1.0f, -1.0f, 0.0f);
		GL11.glVertex3f(-1.0f, -1.0f, 0.0f);
		GL11.glEnd();

		// release the shader
		ARBShaderObjects.glUseProgramObjectARB(0);
	}

	/*
	 * With the exception of syntax, setting up vertex and fragment shaders is
	 * the same.
	 * 
	 * @param filename the path and name of the vertex shader file
	 */
	private int createVertShader(String filename) {
		// vertShader will be non-zero if successfully created

		vertShader = ARBShaderObjects
				.glCreateShaderObjectARB(ARBVertexShader.GL_VERTEX_SHADER_ARB);
		// if created, convert the vertex shader code to a String
		if (vertShader == 0) {
			return 0;
		}
		String vertexCode = "";
		String line;
		try {
			BufferedReader reader = new BufferedReader(new FileReader(new File(
					Box.class.getClassLoader().getResource(filename).toURI())));
			while ((line = reader.readLine()) != null) {
				vertexCode += line + "\n";
			}
			reader.close();
		} catch (Exception e) {
			System.out.println("Fail reading vertex shading code");
			return 0;
		}
		/*
		 * associate the vertex code String with the created vertex shader and
		 * compile
		 */
		ARBShaderObjects.glShaderSourceARB(vertShader, vertexCode);
		ARBShaderObjects.glCompileShaderARB(vertShader);
		// if there was a problem compiling, reset vertShader to zero
		if (!printLogInfo(vertShader)) {
			vertShader = 0;
		}
		// if zero we won't be using the shader
		return vertShader;
	}

	// same as per the vertex shader except for method syntax
	private int createFragShader(String filename) {

		fragShader = ARBShaderObjects
				.glCreateShaderObjectARB(ARBFragmentShader.GL_FRAGMENT_SHADER_ARB);
		if (fragShader == 0) {
			return 0;
		}
		String fragCode = "";
		String line;
		try {
			BufferedReader reader = new BufferedReader(new FileReader(new File(
					Box.class.getClassLoader().getResource(filename).toURI())));
			while ((line = reader.readLine()) != null) {
				fragCode += line + "\n";
			}
			reader.close();
		} catch (Exception e) {
			System.out.println("Fail reading fragment shading code");
			return 0;
		}
		ARBShaderObjects.glShaderSourceARB(fragShader, fragCode);
		ARBShaderObjects.glCompileShaderARB(fragShader);
		if (!printLogInfo(fragShader)) {
			fragShader = 0;
		}

		return fragShader;
	}

	/*
	 * Oddly enough, this success check is based on the info log: if the log
	 * length returned in iVal is greater than 1, the setup being examined
	 * (obj) is treated as successful, the log is printed to System.out, and
	 * true is returned. Otherwise false is returned.
	 */
	private static boolean printLogInfo(int obj) {
		IntBuffer iVal = BufferUtils.createIntBuffer(1);
		ARBShaderObjects.glGetObjectParameterARB(obj,
				ARBShaderObjects.GL_OBJECT_INFO_LOG_LENGTH_ARB, iVal);

		int length = iVal.get();
		if (length > 1) {
			// We have some info we need to output.
			ByteBuffer infoLog = BufferUtils.createByteBuffer(length);
			iVal.flip();
			ARBShaderObjects.glGetInfoLogARB(obj, iVal, infoLog);
			byte[] infoBytes = new byte[length];
			infoLog.get(infoBytes);
			String out = new String(infoBytes);
			System.out.println("Info log:\n" + out);
			return true;
		}
		System.err.println("Error accessing shader log!");
		return false;
	}

}


screen.vert
varying vec4 vertColor;

void main(){
    gl_Position = gl_ModelViewProjectionMatrix*gl_Vertex;
    vertColor = vec4(0.6, 0.3, 0.4, 1.0);
}


screen.frag
varying vec4 vertColor;

void main(){
    gl_FragColor = vertColor;
}


So, why aren't the shaders working? The syntax seems correct when I look at other examples, and I seem to be sending it to the graphics card appropriately.

And yes, my graphics card definitely supports shaders.

PS: Sorry if my tone is a little irritated; I've been banging my head against the wall on this one for a while.

spasi

It works for me. You may be doing something in here:

EngineDirectory.init();
LWJGLNatives.load();


that changes the GL state in a way that breaks the tutorial. Try commenting out those two lines.

A couple of recommendations:

- You're not going to learn shaders by reading tutorials. Start with the GLSL spec and the corresponding GL API, understand the basics first, then start experimenting and checking out tutorials.
- Try the org.lwjgl.test.opengl.shaders.ShadersTest sample in the LWJGL test package. It's old code (based on ARB_shader_objects etc), but it has proper error checking and should be easier for you to experiment with.

CodeBunny

All those lines of code do is ensure that the LWJGL native files are placed in a known directory, and then set the "org.lwjgl.librarypath" system property to point to that directory. That way I can manage the files dynamically at runtime and have significantly more flexibility.
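
In other words, before Display.create() it boils down to something like this (a sketch, not my exact code; the "natives" directory name is illustrative):

import java.io.File;

// Point LWJGL at the directory holding the extracted native libraries.
File nativesDir = new File("natives"); // wherever the natives were unpacked
System.setProperty("org.lwjgl.librarypath", nativesDir.getAbsolutePath());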

I commented those lines out, set the property manually, and no change occurred.

Chuck

While I agree the existing tutorials aren't adequate, it helps to start from a minimal working example. I fixed the compile problem on the wiki, plus a logic issue that was causing it to fail. I'm still not a fan of the way the code is organized, but at least it works now.

I have a single-file version of that tutorial here, as well as a conversion that uses GL20 instead of the ARB extensions.

https://bitbucket.org/chuck/lwjgl-sandbox/src/tip/src/main/java/tutorials/wiki
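
For reference, the logic issue was that printLogInfo() inferred success from the info log length, so a shader that compiled cleanly with an empty log was reported as a failure. The fix is essentially to query the compile/link status directly and treat the log purely as diagnostics. A sketch of that approach (a drop-in for Box.java above, using the same imports; not necessarily the exact code now on the wiki):

private static boolean checkStatus(int obj, int statusType) {
	// statusType is GL_OBJECT_COMPILE_STATUS_ARB for shaders, or
	// GL_OBJECT_LINK_STATUS_ARB for the program object
	IntBuffer status = BufferUtils.createIntBuffer(1);
	ARBShaderObjects.glGetObjectParameterARB(obj, statusType, status);
	boolean ok = status.get(0) != GL11.GL_FALSE;

	// the log may legitimately be empty on success; print it only if present
	IntBuffer length = BufferUtils.createIntBuffer(1);
	ARBShaderObjects.glGetObjectParameterARB(obj,
			ARBShaderObjects.GL_OBJECT_INFO_LOG_LENGTH_ARB, length);
	if (length.get(0) > 1) {
		ByteBuffer infoLog = BufferUtils.createByteBuffer(length.get(0));
		ARBShaderObjects.glGetInfoLogARB(obj, length, infoLog);
		byte[] infoBytes = new byte[infoLog.remaining()];
		infoLog.get(infoBytes);
		System.out.println("Info log:\n" + new String(infoBytes));
	}
	return ok;
}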


CodeBunny

AWESOME. Shaders work for me now. Thank you so very much; that was really bothering me.

Now, I still have to learn how to program with them. But it's surmountable now! :D

CodeBunny

So, I'm working on a shader that will function very much like the built-in blending modes, but needs some more complex functionality.

Right now I have the following vertex shader:
void main()
{
	gl_FrontColor = gl_Color;
	gl_TexCoord[0] = gl_MultiTexCoord0;
	gl_Position = ftransform();
}


and fragment shader:
uniform sampler2D tex;

void main()
{
	vec4 srcColor = texture2D(tex, gl_TexCoord[0].st) * gl_Color;
	vec4 dstColor = gl_FragColor;
	gl_FragColor = dstColor + srcColor;
}


However, in the fragment shader I want to be able to know what the destination color (i.e., the color I'm overwriting) is. How do I do that?

spasi

The short answer is: you can't. Read this for the long answer. The closest you can have to programmable blending on current hardware is NV_texture_barrier.
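
Roughly, the idea is to render into an FBO whose color attachment is the same texture the fragment shader samples, inserting a barrier between draws. A sketch only (Sprite and drawWithCustomBlend() are hypothetical):

import org.lwjgl.opengl.NVTextureBarrier;

// Each draw reads the destination texture in the fragment shader; the
// barrier makes the previous draw's framebuffer writes visible to the
// texture reads that follow.
for (Sprite sprite : sprites) {            // hypothetical scene objects
	drawWithCustomBlend(sprite);           // hypothetical: shader samples the FBO's texture
	NVTextureBarrier.glTextureBarrierNV(); // flush writes before the next read
}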

CodeBunny

There's no way to know what color I'll be overwriting? I just want to know the color of the pixel I'll be rendering over.

Chuck

Blending is still a big blind spot in the programmable pipeline, and something you have to rely on the built-in blend modes to handle for the moment. A uniform buffer might be a way to go about it now, though I can't speak from any actual experience there.

spasi

Quote from: CodeBunny on October 05, 2011, 18:08:13
There's no way to know what color I'll be overwriting? I just want to know the color of the pixel I'll be rendering over.

As I said, you can't know that color. If you had access to that information, the synchronization implications would totally kill GPU performance. It's all explained in the blog post I linked above.

CodeBunny

*sigh* Alrighty.

Hmm... the shader effect I need is suddenly going to take a lot more thought...

Kova

I created an FBO and rendered the whole scene to a texture in the FBO... then used that texture in the shader. Works.
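
Roughly like this, using EXT_framebuffer_object (renderScene() is a placeholder, and the texture is assumed to be allocated and sized elsewhere):

import org.lwjgl.opengl.EXTFramebufferObject;
import org.lwjgl.opengl.GL11;

// Attach a texture as the FBO's color buffer, render into it, then bind
// the texture for the shader pass.
int sceneTexture = GL11.glGenTextures(); // assume glTexImage2D sizing elsewhere
int fbo = EXTFramebufferObject.glGenFramebuffersEXT();
EXTFramebufferObject.glBindFramebufferEXT(
		EXTFramebufferObject.GL_FRAMEBUFFER_EXT, fbo);
EXTFramebufferObject.glFramebufferTexture2DEXT(
		EXTFramebufferObject.GL_FRAMEBUFFER_EXT,
		EXTFramebufferObject.GL_COLOR_ATTACHMENT0_EXT,
		GL11.GL_TEXTURE_2D, sceneTexture, 0);

renderScene(); // placeholder: draw the scene into the FBO
EXTFramebufferObject.glBindFramebufferEXT(
		EXTFramebufferObject.GL_FRAMEBUFFER_EXT, 0);

GL11.glBindTexture(GL11.GL_TEXTURE_2D, sceneTexture); // sample it in the shader pass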

CodeBunny

Quote from: Kova on October 10, 2011, 16:07:17
I created an FBO and rendered the whole scene to a texture in the FBO... then used that texture in the shader. Works.

I am already doing this to achieve a different effect. The problem is that I need this shader to be active while I am rendering to the FBO.

My overall goal here is to create a 2D lighting/motion blur system. Basically, I have four framebuffers which handle various aspects of the scene being rendered. These are:

  • The foreground buffer, which holds all objects affected by lighting.
  • The lighting mask buffer, which has splotches of color rendered onto it with additive blending to represent areas of light. After both this and the foreground buffer are rendered, the mask is drawn over the foreground with multiplicative blending (changes to alpha are disabled, so it affects hue only; see the sketch after this list).
  • The frame buffer, which has the background (whatever is not affected by lighting) rendered onto it, and then the foreground (which has now been affected by the lighting mask) rendered on top of that. Finally, the GUI is rendered on top, resulting in a finished frame.
  • The accumulation buffer, which is used to allow for motion blur. Basically, once I have finished rendering the frame, I draw it at a varying level of transparency over this buffer. This smooths changes over time.
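
As a sketch, that multiplicative pass looks something like this (drawLightingMask() stands in for my actual draw call):

// Multiply the lighting mask over the foreground, hue only: DST_COLOR/ZERO
// gives dst = src * dst for the color channels, and the color mask disables
// writes to the alpha channel entirely.
GL11.glColorMask(true, true, true, false);
GL11.glBlendFunc(GL11.GL_DST_COLOR, GL11.GL_ZERO);
drawLightingMask(); // placeholder for my actual draw call
GL11.glColorMask(true, true, true, true); // restore alpha writes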

All of this is working, and is wonderfully fast (I've had an (admittedly static) scene with 1000 lights render in under a millisecond - since I manage lighting this way, rendering a light is exactly as fast as rendering a 2D sprite). There are also a lot of other things I can do (rendering shadows is just as easy; I just don't use additive blending, etc.).

However, the standard glBlendFunc options are not working optimally when I am rendering the foreground.

My basic blending function for this is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). For the color channels, this is exactly what I want. However, it causes issues when I render pixels with an alpha that is neither zero nor one.

Consider a simple case where I render a ghost in front of a brick wall. The ghost has an alpha of 0.5. The wall has an alpha of 1.0. So, if I plug these alpha values into my blending function, the output alpha is:
SRC_ALPHA * SRC_ALPHA + DST_ALPHA * ONE_MINUS_SRC_ALPHA = (0.5 * 0.5) + (1.0 * 0.5) = 0.75


See? It makes perfect sense, but it's an unsatisfactory result. The user would expect the image to still have full alpha where the two overlap; instead, because the composited alpha is only 0.75, he'll be able to see some of the background when I render the final texture over it. He won't just see through the ghost; he'll also see through the wall.

I wanted to use a shader to basically program an appropriate blending function. The algorithm I think will work is something along the lines of:
FINAL_ALPHA = DST_ALPHA + ((1.0 - DST_ALPHA) * SRC_ALPHA);


So, for the ghost/wall example, I would have:
1.0 + ((1.0 - 1.0) * 0.5) = 1.0


That way, the alpha level can't actually be lowered, and it should follow commonsense patterns when transparency is drawn over transparency (two 0.5 alpha pixels rendered on top of each other would result in a pixel of 0.75 alpha, etc.).
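
Writing that out, it looks like what separate alpha blending would compute. If I'm reading the GL docs right, something like this (assuming GL14's glBlendFuncSeparate is available) would keep my normal color blend while producing exactly that alpha formula, though I haven't tried it:

import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL14;

// RGB:   src * SRC_ALPHA + dst * (1 - SRC_ALPHA)      (the usual blend)
// Alpha: src * (1 - DST_ALPHA) + dst * 1
//        == DST_ALPHA + ((1 - DST_ALPHA) * SRC_ALPHA) (the formula above)
GL14.glBlendFuncSeparate(
		GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA, // color channels
		GL11.GL_ONE_MINUS_DST_ALPHA, GL11.GL_ONE);      // alpha channel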

SO, IN CONCLUSION: Can anyone confirm whether that would work, or suggest any other way I could make this happen? Currently I'm stumped for options on this final issue.