Getting a fragment shader to work

Started by Appelmoes123, January 13, 2010, 09:52:46


Appelmoes123

Hello everyone,

I'm quite new to shaders, so I am in need of some help here.
What I'm trying to accomplish (at first) is applying a simple fragment shader. I have been able to get this to work with JOGL, but I would prefer to use LWJGL for its added functionality.

I've made a minimal test project that runs on LWJGL 2.2.1:

import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import java.nio.IntBuffer;

import org.lwjgl.BufferUtils;
import org.lwjgl.LWJGLException;
import org.lwjgl.input.Keyboard;
import org.lwjgl.opengl.ARBFragmentShader;
import org.lwjgl.opengl.ARBShaderObjects;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.Util;
import org.lwjgl.util.glu.GLU;


public class Main {
	
	private DisplayMode	mode;
	private int shaderProgram = 0;
	
	public static void main(String[] args) {
		new Main().run();
	}
	
	public Main() {
		try {
			
			mode = findDisplayMode(1024, 768, Display.getDisplayMode().getBitsPerPixel());
			Display.setDisplayModeAndFullscreen(mode);
			Display.create();
			
			GL11.glDisable(GL11.GL_DEPTH_TEST);
			
			GL11.glViewport( 0, 0, mode.getWidth(), mode.getHeight() );
			
			GL11.glMatrixMode(GL11.GL_PROJECTION);
			GL11.glLoadIdentity();
			GLU.gluOrtho2D(0, mode.getWidth(), mode.getHeight(), 0);
			
			GL11.glMatrixMode(GL11.GL_MODELVIEW);
			GL11.glLoadIdentity();
			GL11.glTranslatef(0.375f, 0.375f, 0f);
			
			GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
			
		} catch (LWJGLException e) {
		}
		
		
	}
	
	public void run() {

		while (!Keyboard.isKeyDown(Keyboard.KEY_ESCAPE) && !Display.isCloseRequested()) {
			
			if (Display.isVisible()) {
				draw();
			} else {
				if (Display.isDirty())		draw();
				
				try {	Thread.sleep(100);
				} catch (InterruptedException e) {	}
			}
			
			Display.update();
			
		}
		
		Display.destroy();
		
	}
	

	private void draw() {
		
		if (shaderProgram == 0) {
			System.out.println("initializing shaders");
			
			try {
				
				int fragmentShader = ARBShaderObjects.glCreateShaderObjectARB(ARBFragmentShader.GL_FRAGMENT_SHADER_ARB);
				
				String fragmentsource = stringifyfile("eigenfrag.glsl");
				ByteBuffer fragmentShaderSourceByteBuffer = bbifystring(fragmentsource);
				
				ARBShaderObjects.glShaderSourceARB(fragmentShader, fragmentShaderSourceByteBuffer);
				ARBShaderObjects.glCompileShaderARB(fragmentShader);
				printLogInfo(fragmentShader);
				
				shaderProgram  = ARBShaderObjects.glCreateProgramObjectARB();
				ARBShaderObjects.glAttachObjectARB(shaderProgram, fragmentShader);
				
				ARBShaderObjects.glLinkProgramARB(shaderProgram);
				printLogInfo(shaderProgram);
				
				ARBShaderObjects.glValidateProgramARB(shaderProgram);
				
				
				
			} catch (FileNotFoundException e) {
				System.err.println("Shader source file not found.");
			} catch (IOException e) {
				System.err.println("Could not read shader file.");
			}
		}
		
		GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
		
		GL11.glPushMatrix();
		{
			GL11.glTranslatef(0, 0, 0);
			GL11.glColor3f(1.0f, 1.0f, 1.0f);
			
			GL11.glBegin(GL11.GL_QUADS);
			{
				GL11.glTexCoord2f(1.0f, 0.0f); 
				GL11.glVertex2i(0, 768);
				GL11.glTexCoord2f(0.0f, 0.0f);
				GL11.glVertex2i(1024, 768);
				GL11.glTexCoord2f(0.0f, 1.0f);
				GL11.glVertex2i(1024, 0);
				GL11.glTexCoord2f(1.0f, 1.0f);
				GL11.glVertex2i(0, 0);
			}	
			GL11.glEnd();
		}	
		GL11.glPopMatrix();

	}
	
	private void printLogInfo(int obj) {
		IntBuffer iVal = BufferUtils.createIntBuffer(1);
		ARBShaderObjects.glGetObjectParameterARB(obj, ARBShaderObjects.GL_OBJECT_INFO_LOG_LENGTH_ARB, iVal);
 
		int length = iVal.get();
		System.out.println("Info log length:"+length);
		if (length > 0)	{
			ByteBuffer infoLog = BufferUtils.createByteBuffer(length);
			iVal.flip();
			ARBShaderObjects.glGetInfoLogARB(obj,  iVal, infoLog);
			byte[] infoBytes = new byte[length];
			infoLog.get(infoBytes);
			String out = new String(infoBytes);
			System.out.println("Info log:\n"+out);
		}
		Util.checkGLError();
	}
	
	private DisplayMode findDisplayMode(int width, int height, int bpp) throws LWJGLException {
		DisplayMode[] modes = Display.getAvailableDisplayModes();
		for (int i = 0; i < modes.length; i++) {
			if (modes[i].getWidth() == width && modes[i].getHeight() == height && modes[i].getBitsPerPixel() >= bpp && modes[i].getFrequency() <= 60)	return modes[i];
		}
		return Display.getDesktopDisplayMode();
	}


	private String stringifyfile(String filename) throws IOException {
		BufferedReader br = new BufferedReader(new FileReader(filename));
		String line = "", res = "";
		while ((line=br.readLine()) != null)	res += line + "\n";
		return res;
	}
	
	private ByteBuffer bbifystring(String s) {
		ByteBuffer bb = ByteBuffer.allocateDirect(s.length());
		try {	bb.put(s.getBytes("US-ASCII"));
		} catch (UnsupportedEncodingException e) {	return null;	}
		return bb;
	}
	
}


The shader file called "eigenfrag.glsl" has the following content:

void main() {
	gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}


This shader program works with JOGL and "turns" a quad red. What am I doing wrong in LWJGL?

Thanks in advance


PS: The preview shows the code with all the formatting undone; sorry about that. If there is a way to keep the formatting as I pasted it, please tell me.

Kai

Hi,

The first thing that caught my eye is that you never rewind the buffer's position after putting the shader source code into it in your method "ByteBuffer bbifystring(String s)", either with rewind(), flip(), or position(0). I would prefer flip(), since it also sets the buffer's limit to the current position before resetting the position to 0.

The second thing is that you do not glUseProgram() your shader anywhere in the code.
After you have created your program and attached your shader objects to it, you should call glUseProgram on the program object to activate it.
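
A minimal sketch of both changes, reusing the names from your snippet (nothing else needs to change):

// In draw(), right after building the source buffer:
ByteBuffer fragmentShaderSourceByteBuffer = bbifystring(fragmentsource);
fragmentShaderSourceByteBuffer.flip(); // rewind: limit = current position, position = 0

ARBShaderObjects.glShaderSourceARB(fragmentShader, fragmentShaderSourceByteBuffer);
ARBShaderObjects.glCompileShaderARB(fragmentShader);

shaderProgram = ARBShaderObjects.glCreateProgramObjectARB();
ARBShaderObjects.glAttachObjectARB(shaderProgram, fragmentShader);
ARBShaderObjects.glLinkProgramARB(shaderProgram);

// Activate the program so the quad is actually rendered with it:
ARBShaderObjects.glUseProgramObjectARB(shaderProgram);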

Appelmoes123

Hi Kai,

Unbelievable that it was something that small. Indeed, I did not rewind the buffer; adding
"fragmentShaderSourceByteBuffer.flip();"
after
"ByteBuffer fragmentShaderSourceByteBuffer = bbifystring(fragmentsource);"
solved the problem.

I am aware that I should use glUseProgram(); I deliberately left everything else out in order to post a very compact version of the code that still showed my problem. (I see I also left in some glTexCoord2f calls by accident.)
Thank you very much Kai!


I have another question to which I couldn't really find a satisfying answer: there are several classes, GL11, GL20, etc. What are they for? Does the class you use determine the OpenGL version that is used?

(If people mind me switching to a different subject in this topic, pls warn me  ;D)

Kai

Quote
I have another question to which I couldn't really find a satisfying answer: there are several classes, GL11, GL20, etc. What are they for? Does the class you use determine the OpenGL version that is used?

These classes (GL11, GL20, GL30, ...) are intended to encapsulate the methods that are introduced to the core GL spec in that version, as are the ARB..., EXT..., etc. classes for the extensions. They do not determine the OpenGL version that you are using when invoking methods in these classes!

Fool Running

Quote
These classes (GL11, GL20, GL30, ...) are intended to encapsulate the methods that are introduced to the core GL spec in that version, as are the ARB..., EXT..., etc. classes for the extensions. They do not determine the OpenGL version that you are using when invoking methods in these classes!
I don't think that's entirely true (someone can correct me if I'm wrong). The GL11, GL20, etc. classes check that you have the proper OpenGL version. Basically, if you use a method from GL30, LWJGL checks that the drivers implement all of GL30 before you use it.
Thus, it's better to use the extension classes if you only need a little of the functionality, because they only check that the drivers expose the extensions you actually use, not the whole GLxx set.

Again, I could be wrong, but I'm pretty sure that is the way LWJGL works.

spasi

The proper way to do it is to check the GL version first. If the functionality you need is available there, then using the GLxx class should always be preferable to using the corresponding extension class; sometimes that functionality will be cleaned up, bug-free, and in general better supported (talking about the GPU driver here, not LWJGL). If the functionality isn't present in the core of the GL version you're running, then you should provide a code path that uses the extension(s) that implement the same functionality.

That's my advice at least.

Btw, if an OpenGLxx flag in ContextCapabilities is true, you can be 100% sure that all GL functions from that version will be available.
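
For example, something along these lines (just a sketch; it assumes the Display/context has already been created):

ContextCapabilities caps = GLContext.getCapabilities();
if ( caps.OpenGL20 ) {
    // Every GL 2.0 core function (the GL20 class) is guaranteed to be available.
} else if ( caps.GL_ARB_shader_objects ) {
    // Core GL 2.0 is missing, fall back to the extension code path.
}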

Appelmoes123

Thank you all for the replies!

So I suppose that if I want the program to run on as broad a spectrum of video cards as possible, I would be best off writing it against GL11? If I use GL20 or GL30 functionality, it might be supported on my development machine but not on lesser machines.

The reason I am asking is that there are several ways to use shaders, for example:
ARBShaderObjects.glUseProgramObjectARB(shaderProgram);

OR
GL20.glUseProgram(shaderProgram);


Does this mean that the first call will work in both OpenGL 1.1 and OpenGL 2.0, whereas the second call will only work in OpenGL 2.0?

Rene

The second call will make the program crash if not supported.

Using the extensions might indeed make your program work on 'hardware < GL2.0', but the driver must support it. So don't count on GL1.1 hardware to run shaders using the extensions :P

@spasi:
I always thought the extensions and the core functions pointed to the same code at the driver level? It's both easier to develop and avoids the problems you're mentioning...

spasi

Quote from: Appelmoes123 on January 13, 2010, 14:48:58
The reason I am asking is that there are several ways to use shaders, for example:
ARBShaderObjects.glUseProgramObjectARB(shaderProgram);

OR
GL20.glUseProgram(shaderProgram);


Does this mean that the first call will work in both OpenGL 1.1 and OpenGL 2.0, whereas the second call will only work in OpenGL 2.0?
The first call has nothing to do with the GL version; it depends on the extension's availability. If the extension is available, then you can use it. Same with the GL20 call: if your GL version is 2.0 or higher, then you can use it. In both cases, you need to check a condition before using the functionality. If you don't make that check, you'll get an exception at runtime.

The question is, if both are available, which one should you use? My opinion is that you should use the core functionality (the one in the GLxx classes). It also means that you should provide different code paths, one that uses the core functionality and one that uses the extension, for users with older GL versions. (It's no big deal code-wise anyway.)

What I do in my apps is simply use the core functionality up to the minimum required GL version (this is app/market target specific). So for example, if I require OpenGL version 3.0, I use all the core stuff up to that, then I create different code paths for GL3.1+ functionality.
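
As a rough sketch of what such a split could look like for the program-activation call discussed above (the useCoreShaders field and useProgram helper are just illustrative names, not part of LWJGL):

// Decided once, e.g. right after Display.create():
boolean useCoreShaders = GLContext.getCapabilities().OpenGL20;

void useProgram(int program) {
    if ( useCoreShaders )
        GL20.glUseProgram(program);                       // core path (GL 2.0+)
    else
        ARBShaderObjects.glUseProgramObjectARB(program);  // extension path
}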

Quote from: Rene on January 13, 2010, 15:30:34
@spasi:
I always thought the extensions and the core functions pointed to the same code at the driver level? It's both easier to develop and avoids the problems you're mentioning...
That's true in most cases, but not always. Sometimes the API changes and sometimes even the behavior changes. This usually happens for EXT or vendor-specific extensions that get promoted to core. It might happen for ARB extensions too, but I don't recall an example atm.

Appelmoes123

I think I understand: either way (using 1.1, 2.0, or whatever) I should check for the extension first.
And if possible I should use the call from a higher OpenGL version before resorting to lower OpenGL versions or, failing that, ARBShaderObjects (which has nothing to do with the OpenGL version).

So this code:

String extensions = GL11.glGetString(GL11.GL_EXTENSIONS);
if ((extensions.indexOf("GL_ARB_fragment_program") != -1) && extensions.indexOf("GL_ARB_fragment_shader") != -1) {
    System.out.println("fragment shaders supported");
} else {
	System.out.println("fragment shaders not supported");
	Display.destroy();
	System.exit(0);
}


only checks the extensions available in OpenGL 1.1; if I wanted to check the availability of extensions in OpenGL 2, would I have to write something different?

Also, is there much performance difference between OpenGL 1.1 and (for example) OpenGL 3.0? In other words, if I only allowed the program to run on machines that support 3.0 (and thus also 1.1), would it be worth writing it for 3.0? Since pretty much 99% of the LWJGL material on the internet is based on GL11, this would be more time-consuming.

spasi

Quote from: Appelmoes123 on January 14, 2010, 15:33:24
I think I understand: either way (using 1.1, 2.0, or whatever) I should check for the extension first.
No, you check the OpenGL version first. If the functionality you need has been promoted to the GL core at a certain version, then you should check if that OpenGL version is available. If it hasn't been promoted or if the GL version is lower than the required one, then you check if the extension is available. If both checks fail, then you can't use it.

Quote from: Appelmoes123 on January 14, 2010, 15:33:24
And if possible I should use the call from a higher OpenGL version before resorting to lower OpenGL versions or, failing that, ARBShaderObjects (which has nothing to do with the OpenGL version).
Yes, you should use the core functionality first (regardless of version), then go to the extension if the core functionality isn't present.

Quote from: Appelmoes123 on January 14, 2010, 15:33:24
So this code:

String extensions = GL11.glGetString(GL11.GL_EXTENSIONS);
if ((extensions.indexOf("GL_ARB_fragment_program") != -1) && extensions.indexOf("GL_ARB_fragment_shader") != -1) {
    System.out.println("fragment shaders supported");
} else {
	System.out.println("fragment shaders not supported");
	Display.destroy();
	System.exit(0);
}


only checks the extensions available in OpenGL 1.1; if I wanted to check the availability of extensions in OpenGL 2, would I have to write something different?
Let me explain a few things first:

- In LWJGL, the recommended way to check the GL version you're running and the extensions that are available is through the GLContext.getCapabilities() API. The ContextCapabilities object that will be returned from this call has already performed all the checks for you. So you can write code like:

ContextCapabilities caps = GLContext.getCapabilities();
if ( caps.OpenGL20 )
    System.out.println("OpenGL version 2.0 supported");
if ( caps.GL_ARB_fragment_shader )
    System.out.println("ARB_fragment_shader extension supported");


- ARB_fragment_program and ARB_fragment_shader are totally different extensions. The first provides access to low-level (assembly-like) GPU programming, the second to high-level GPU programming (through GLSL). The ARB_vertex/fragment_program extensions were never promoted to the GL core, so you can consider them obsolete/deprecated. They should still work on shader-capable hardware, but they haven't been updated with any of the new functionality present in GLSL (except on NVIDIA, which exposes that functionality through its low-level NV-specific extensions). What you probably want is the combined functionality of the ARB_shader_objects, ARB_vertex_shader and ARB_fragment_shader extensions.

- As I said above, you didn't check for the GL version first. So, the correct way to check this is:

ContextCapabilities caps = GLContext.getCapabilities();
if ( caps.OpenGL20 ) {
    // OpenGL 2.0 requires both vertex and fragment shader functionality.
    System.out.println("Vertex and fragment shaders are supported in the GL core!");
} else {
    if ( !(caps.GL_ARB_vertex_shader || caps.GL_ARB_fragment_shader) )
        killApp("Shaders are not supported!");

    if ( caps.GL_ARB_vertex_shader )
        System.out.println("Vertex shaders are supported via the ARB_vertex_shader extension");
    if ( caps.GL_ARB_fragment_shader )
        System.out.println("Fragment shaders are supported via the ARB_fragment_shader extension");

    // You don't have to check for ARB_shader_objects, since the above 2 extensions require it.
}


Quote from: Appelmoes123 on January 14, 2010, 15:33:24
Also, is there much performance difference between OpenGL 1.1 and (for example) OpenGL 3.0? In other words, if I only allowed the program to run on machines that support 3.0 (and thus also 1.1), would it be worth writing it for 3.0? Since pretty much 99% of the LWJGL material on the internet is based on GL11, this would be more time-consuming.
All 1.1 functionality is present in OpenGL 3.0. The problem starts from OpenGL 3.1 and up. Then you have to worry about deprecated functionality and the core/compatibility profiles introduced in OpenGL 3.2. It all depends on the kind of application you're writing and the market you're targeting. If it's a new app and you expect all your users to have updated drivers etc, then you can start with OpenGL 3.2. The problem is, if you're new to OpenGL, then going with OpenGL 3.2 core profile will be quite hard for you, because a lot of "utility" functionality has been removed (the matrix stack, immediate mode rendering, etc). It's up to you really, LWJGL has support for any GL version from 1.1 to 3.2.

spasi

Another clarification, just to be clear: If GLContext.getCapabilities().OpenGLxx returns true, for any yy lower than xx, OpenGLyy will also be true. All functions in the GLyy classes will work, except in the following cases:

- GL version is 3.0, with forward compatibility enabled.
- GL version is 3.1.
- GL version is 3.2 with the core profile.

You can read the OpenGL 3.2 spec for the details of how it works. The ContextAttribs class in LWJGL allows you to configure the GL context in the way you prefer.
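
For example (a sketch; it assumes an LWJGL build recent enough to expose the 3.2 profile attributes):

// Request an OpenGL 3.2 core profile context instead of the default one.
ContextAttribs attribs = new ContextAttribs(3, 2).withProfileCore(true);
Display.setDisplayMode(new DisplayMode(1024, 768));
Display.create(new PixelFormat(), attribs);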

Appelmoes123

Quote
No, you check the OpenGL version first. If the functionality you need has been promoted to the GL core at a certain version, then you should check if that OpenGL version is available. If it hasn't been promoted or if the GL version is lower than the required one, then you check if the extension is available. If both checks fail, then you can't use it.

Ah, I see: some extensions from lower OpenGL versions have been promoted into higher OpenGL versions, so if OpenGL 2.0 is available, the fragment shader functionality will also be available. Thanks, it is very clear to me now.

I will also just use GLContext.getCapabilities() for the checks, thanks!

Quote
All 1.1 functionality is present in OpenGL 3.0. The problem starts from OpenGL 3.1 and up. Then you have to worry about deprecated functionality and the core/compatibility profiles introduced in OpenGL 3.2. It all depends on the kind of application you're writing and the market you're targeting. If it's a new app and you expect all your users to have updated drivers etc, then you can start with OpenGL 3.2. The problem is, if you're new to OpenGL, then going with OpenGL 3.2 core profile will be quite hard for you, because a lot of "utility" functionality has been removed (the matrix stack, immediate mode rendering, etc). It's up to you really, LWJGL has support for any GL version from 1.1 to 3.2.

I personally would prefer to base it on OpenGL 3.0. Would you happen to know where to find tutorials that actually use GL30 instead of GL11?

Quote
Another clarification, just to be clear: If GLContext.getCapabilities().OpenGLxx returns true, for any yy lower than xx, OpenGLyy will also be true. All functions in the GLyy classes will work

So I can mix function calls between the different OpenGL versions (up to 3.0)? Is it not better to write a program specific to the chosen (higher) version and not use the lower versions' calls? Or are you meant to use OpenGL like that because of its cumulative nature?

spasi

Quote from: Appelmoes123 on January 14, 2010, 19:54:15
Or are you meant to use OpenGL like that because of its cumulative nature?

This. Unlike Direct3D, which is basically a new API every version, a new GL version just layers new functionality on top of the old versions. Unless stated otherwise in the specification, old functionality works as before; this has been mostly true for more than 15 years. OpenGL 3.0 first introduced deprecated functionality, 3.1 removed it, and 3.2 added the profiles, which is where we are atm. But even with GL 3.2 and the core profile, there is still a lot in GL11 that works as before; you wouldn't be able to do anything without it.
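
For example, even GL3-level code still leans on GL11 entry points (just a sketch; the classes only group functions by the version that introduced them):

GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT); // introduced in GL 1.1
IntBuffer vao = BufferUtils.createIntBuffer(1);
GL30.glGenVertexArrays(vao);                                       // introduced in GL 3.0
GL30.glBindVertexArray(vao.get(0));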

Appelmoes123

I understand, thank you for the elaborate explanations.