OpenGL 3.2 - ATI and NVIDIA behavior

Started by Estraven, May 20, 2010, 19:43:16

Previous topic - Next topic

Estraven


Hi everyone,

OK, I've developed an OpenGL 3.2 application on my NVIDIA GTX 295 and tested it on an 8800 GTX, a GTX 285 and a GT 330M, on Linux, Windows Vista/Seven, and Mac OS (using extensions instead of GL 3.2 core).
Everything renders fine using a GL 3.2 context (via an AWTGLCanvas).

I recently tried it on an ATI Radeon 4800 with the 10.4 drivers. Everything compiles perfectly, but the screen remains black (or whatever color is applied by glClearColor()).
When I initialize my AWTGLCanvas with an OpenGL 3.1 context instead, everything renders again.

I don't understand why it behaves differently on ATI and NVIDIA.  ???
Any ideas would be welcome.

Thanks !

Estraven.

Ciardhubh

ATI drivers aren't very good/standards-compliant. They often omit or overlook parts of the OpenGL spec. In my admittedly somewhat limited experience, as soon as you start using anything beyond OpenGL 1.2 it gets risky. I've had various issues because of that, e.g. ATI drivers wouldn't populate the GLSL built-in lighting variables (they always stayed 0), or they didn't correctly implement certain extensions.

You might want to search the web for people who have had similar (non-Java, OpenGL-related) issues and debug every last gl-call to see where it fails.
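
For example, something along these lines after each gl-call you want to verify (just a sketch; the helper name and error handling are up to you):

private static void checkGLError(String location) {
    // GL11.glGetError() returns GL_NO_ERROR if nothing has gone wrong since the last check
    int error = GL11.glGetError();
    if (error != GL11.GL_NO_ERROR) {
        throw new RuntimeException("OpenGL error 0x" + Integer.toHexString(error) + " at " + location);
    }
}

LWJGL also ships org.lwjgl.opengl.Util.checkGLError() if you'd rather not roll your own.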

Estraven

QuoteATI drivers aren't very good/standards-compliant. They often omit or overlook parts of the OpenGL spec. In my admittedly somewhat limited experience, as soon as you start using anything beyond OpenGL 1.2 it gets risky.

Yes, I have the same feeling about ATI drivers... Anyway, since I use OpenGL 3.2, I don't use any of the built-in variables (they mostly disappeared in OpenGL 3.2). But that could have been an explanation...

I'll try looking for similar non-Java threads.

Quotedebug every last gl-call to see where it fails

The thing is that in "debug" mode I use glCheck on every important operation. I don't get any errors, and the rendering thread runs correctly.
Every shader compilation gives me something like: 'Shader have been successfully compiled to run on hardware.'
More surprising: I get a similar framerate on 3.2 and 3.1. I just don't get any image on 3.2...
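
For reference, my compile check is roughly this (a simplified sketch, the real helper names differ):

int shader = GL20.glCreateShader(GL20.GL_VERTEX_SHADER);
GL20.glShaderSource(shader, source);
GL20.glCompileShader(shader);

// depending on the LWJGL build this getter is called glGetShader or glGetShaderi
if (GL20.glGetShaderi(shader, GL20.GL_COMPILE_STATUS) == GL11.GL_FALSE) {
    throw new RuntimeException("Compile failed: " + GL20.glGetShaderInfoLog(shader, 8192));
}
// the info log is also filled on success; that's where the ATI
// "successfully compiled to run on hardware" message shows up
System.out.println(GL20.glGetShaderInfoLog(shader, 8192));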

I'll post again if I find any more information. Meanwhile, any other ideas are still welcome.

Thanks for the answer.

Ciardhubh

Quote from: Estraven on May 21, 2010, 10:34:24
The thing is that in "debug" mode I use glCheck on every important operation. I don't get any errors, and the rendering thread runs correctly.
Every shader compilation gives me something like: 'Shader have been successfully compiled to run on hardware.'
More surprising: I get a similar framerate on 3.2 and 3.1. I just don't get any image on 3.2...

If it's really a bug in the driver, it will likely fail silently. By "debug" I meant something more like starting with a minimal program: take the gl-calls you use and test them individually on the platform in question; don't just check glGetError, but verify that each call produces the expected result, i.e. render a simple quad and see if it appears (that might also tell you whether something is fundamentally wrong).
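
Something like this as a minimal core-profile test (a rough sketch, untested on my side; it assumes you already have a working shader program 'program' with a vec2 position attribute at location 0). Note that a core profile has no default vertex array object, so one has to be created and bound before you set up attributes or draw:

// one-time setup
int vao = GL30.glGenVertexArrays();
GL30.glBindVertexArray(vao);

FloatBuffer vertices = BufferUtils.createFloatBuffer(8);
vertices.put(new float[] { -0.5f, -0.5f,  0.5f, -0.5f,  0.5f, 0.5f,  -0.5f, 0.5f });
vertices.flip();

int vbo = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertices, GL15.GL_STATIC_DRAW);

GL20.glEnableVertexAttribArray(0);
GL20.glVertexAttribPointer(0, 2, GL11.GL_FLOAT, false, 0, 0);

// each frame
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
GL20.glUseProgram(program);
GL11.glDrawArrays(GL11.GL_TRIANGLE_FAN, 0, 4);

If that quad shows up on the ATI card but your real scene doesn't, you can add your own state back piece by piece until it breaks.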

Maybe the easiest way would be to stick to an earlier version plus extensions and wait until OpenGL 3+ drivers are more reliable (and available on more computers, if that's an issue for you).

Estraven

QuoteMaybe the easiest way would be to stick to an earlier version plus extensions and wait until OpenGL 3+ drivers are more reliable (and available on more computers, if that's an issue for you).

Yes, I need my software to run on NVIDIA and ATI in exactly the same way, but I've finally implemented a small initialization pass that launches a 3.2 context on NVIDIA and a 3.1 context on ATI. In fact, I was already downgrading to a 2.1 context on Mac OS... so that's really not an issue.
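
Roughly like this (a simplified sketch; the vendor string here is assumed to come from a GL11.glGetString(GL11.GL_VENDOR) query on a temporary context created earlier):

ContextAttribs attribs;
if (vendor != null && vendor.toLowerCase().contains("ati")) {
    attribs = new ContextAttribs(3, 1);   // ATI: a 3.1 context renders fine
} else {
    attribs = new ContextAttribs(3, 2);   // NVIDIA (and others): 3.2
}

AWTGLCanvas canvas = new AWTGLCanvas(
        GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice(),
        new PixelFormat(),
        null,
        attribs);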

I was just curious about what was going on...

Thanks Ciardhubh !

spasi

Could you please post your initialization code? On my ATI card it isn't possible to create a 3.1 context; I always get a 3.2 context, with either the Core or the Compatibility profile. You could also try to launch your app with the following code:

System.out.println("\nGL RENDERER: " + GL11.glGetString(GL11.GL_RENDERER));
System.out.println("GL VENDOR: " + GL11.glGetString(GL11.GL_VENDOR));
System.out.println("GL VERSION: " + GL11.glGetString(GL11.GL_VERSION));

ContextCapabilities caps = GLContext.getCapabilities();

if ( caps.OpenGL32 ) {
	IntBuffer buffer = ByteBuffer.allocateDirect(16 * 4).order(ByteOrder.nativeOrder()).asIntBuffer();

	GL11.glGetInteger(GL32.GL_CONTEXT_PROFILE_MASK, buffer);
	int profileMask = buffer.get(0);

	System.out.println("\nPROFILE MASK: " + Integer.toBinaryString(profileMask));

	System.out.println("CORE PROFILE: " + ((profileMask & GL32.GL_CONTEXT_CORE_PROFILE_BIT) != 0));
	System.out.println("COMPATIBILITY PROFILE: " + ((profileMask & GL32.GL_CONTEXT_COMPATIBILITY_PROFILE_BIT) != 0));
}

System.out.println("\nOpenGL 3.0: " + caps.OpenGL30);
System.out.println("OpenGL 3.1: " + caps.OpenGL31);
System.out.println("OpenGL 3.2: " + caps.OpenGL32);
System.out.println("OpenGL 3.3: " + caps.OpenGL33);
System.out.println("OpenGL 4.0: " + caps.OpenGL40);
System.out.println("ARB_compatibility: " + caps.GL_ARB_compatibility);


and post the results. Are you sure you're getting a 3.1 context when asking for a 3.1 context?

In any case, I'm perfectly happy with AMD's drivers; they've been very solid for quite some time now. I wouldn't be surprised if your issue is related to context initialization.

Estraven

Hi spasi,

No, you're right, I don't get an OpenGL 3.1 context, but an OpenGL 3.2 compatibility profile.

I don't have access to the ATI card until Thursday, but I will test your code then.

My AWTGLCanvas is built using this code:

new AWTGLCanvas(
        GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice(),
        new PixelFormat(),
        null,
        new ContextAttribs(3,2) ); //  or 3,1 if needed....


When asking for a 3.2 context, I get a 3.2 core profile (3.2.9xxx more precisely), but no image.

I usually access the GL version and info using:

GL11.glGetString(GL11.GL_VERSION);
GL11.glGetString(GL11.GL_VENDOR);
GL11.glGetString(GL11.GL_RENDERER);



I have used this code for two years now (upgrading the version whenever a new OpenGL version became available),
and I've never had any issues on NVIDIA cards.

Estraven

spasi

Then you should be using new ContextAttribs(3, 2).withProfileCompatibility(true). The ContextAttribs constructor defaults to the core profile when asking for a version greater than or equal to 3.2. Alternatively, I think it's safer to just use the default constructor and check what you actually got after context creation. That is, if you generally want the deprecated functionality to be present.
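
For example:

// explicitly request the compatibility profile
ContextAttribs attribs = new ContextAttribs(3, 2).withProfileCompatibility(true);

// or take whatever the driver gives you and inspect it after creation
ContextAttribs defaults = new ContextAttribs();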

My guess is that NV doesn't honor the core profile bit and always returns a compatibility profile, which is why you're only seeing the problem on ATI. But I could be wrong; you should test what happens on both.

spasi

I've updated ContextAttribs so that it no longer defaults to the core profile when asking for a 3.2+ version. I also updated our VersionTest according to the latest version of ARB_create_context; it seems that the AMD driver doesn't follow the spec for 3.1 contexts:

QuoteIf version 3.1 is requested, the context returned may implement any of the following versions:

* Version 3.1. The GL_ARB_compatibility extension may or may not be implemented, as determined by the implementation.
* The core profile of version 3.2 or greater.

Anyway, using either the default constructor or new ContextAttribs(3, 2).withProfileCompatibility(true) should solve your problem.

Estraven

Thanks, I'll give it a try ASAP.

Estraven