
some ATI / nVidia inconsistencies

« on: June 17, 2011, 08:09:11 »
Hi,

Testing my latest 3D software (developed on nVidia cards) on ATI hardware, I noticed two differences in rendering behavior.
It's just a couple of questions, nothing very urgent. I've found workarounds.

I tested several nVidia cards (from an old 8800 GT to a newer GTX 470); they all behave identically.
I only have one ATI card at my disposal (Radeon HD 4850), which is quite old...

Here are the inconsistencies I noticed:

1) In a shader (GLSL 1.50 to 3.30), I needed to convert a mat4 to a mat3, keeping only the rotation part of the matrix.

On nVidia I did this:
      
      mat3 matrix3 = transpose(mat3( matrix4[0].xyz, matrix4[1].xyz, matrix4[2].xyz ));

ATI requires this:

      mat3 matrix3 = (mat3( matrix4[0].xyz, matrix4[1].xyz, matrix4[2].xyz ));

The result is then identical, but I'm pretty surprised by such different behavior!

I eventually found a workaround that avoids the mat4 -> mat3 conversion, so nothing really important. Still: does anyone have an explanation?


2) A second strange behavior, regarding FBOs.

To initialize my FBOs (on nVidia) I use GL11.GL_DEPTH_COMPONENT as the depth format.

This creates a huge precision issue on ATI: surfaces near each other (but not extremely near!) flicker... I thus need to switch to GL14.GL_DEPTH_COMPONENT24 (but only on ATI hardware).

Does GL11.GL_DEPTH_COMPONENT resolve to a different bit count depending on the vendor?

Has anyone else noticed this?

Thanks !
Estraven


« Last Edit: June 17, 2011, 11:40:42 by Estraven »

Re: some ATI / nVidia inconsistencies
« Reply #1 on: June 17, 2011, 09:25:07 »
Replying to myself...

The first problem doesn't seem to be shader-related, but more likely a GPU memory management issue.

In fact, the mat4 comes from a uniform array of mat4 passed to the shader in a UBO.
Apparently, the matrices extracted from this structure are transposed on nVidia, and not transposed on ATI.

I'll do some more tests to figure out what's going on.

Estraven

EDIT:

The layout of the UBO needs to be manually specified. I added "layout(row_major)" to my UBO declaration, and now the behavior is consistent between nVidia and ATI:


layout(row_major) uniform animation_ubo {
      vec4 bones_absolute_positions[100];
      vec4 bones_rotated_positions[100];
      vec4 bones_conjugate_quaternions[100];
      mat4 bones_conjugate_composition_quaternions[100];
};

My guess is that, when left unspecified, ATI uses row_major whereas nVidia uses column_major...
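The symptom above (matrices arriving transposed on one vendor but not the other) is exactly what a row-major vs. column-major mismatch looks like. As a plain-Java sketch (no GL involved, hypothetical helper names): reading the same flat buffer with the wrong convention gives you the transpose of every matrix.

```java
public class MatrixLayoutDemo {
    // Element (row r, col c) of a 4x4 matrix stored column-major: index = c*4 + r
    static float colMajor(float[] m, int r, int c) { return m[c * 4 + r]; }

    // Element (row r, col c) of the same buffer read as row-major: index = r*4 + c
    static float rowMajor(float[] m, int r, int c) { return m[r * 4 + c]; }

    public static void main(String[] args) {
        float[] m = new float[16];
        for (int i = 0; i < 16; i++) m[i] = i; // 16 distinct values

        // Reading a buffer with the wrong convention yields the transpose:
        // rowMajor(m, r, c) == colMajor(m, c, r) for every element.
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                if (rowMajor(m, r, c) != colMajor(m, c, r))
                    throw new AssertionError("mismatch at " + r + "," + c);

        System.out.println("wrong-convention read == transpose: true");
    }
}
```

So whichever layout the driver assumes for the UBO, declaring it explicitly with layout(row_major) (or transposing on upload) removes the ambiguity.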

For those interested, see: http://www.opengl.org/wiki/Uniform_Buffer_Object



Any clues on the DEPTH_COMPONENT issue?
« Last Edit: June 17, 2011, 10:14:59 by Estraven »


Offline spasi

Re: some ATI / nVidia inconsistencies
« Reply #2 on: June 17, 2011, 11:36:33 »
Based on the OpenGL 4.1 specification, implementations are required to support the following internal formats that correspond to the base DEPTH_COMPONENT format:

- DEPTH_COMPONENT16
- DEPTH_COMPONENT24
- DEPTH_COMPONENT32F

I don't think the spec defines which internal format you get when you pass only DEPTH_COMPONENT, so I guess you're seeing implementation-specific behavior. That is, you get DEPTH_COMPONENT24 on NV and DEPTH_COMPONENT16 on AMD by default. You can query the internal format after FBO creation to verify this.
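A sketch of that query with LWJGL (this is a fragment, not a runnable program: it assumes an active GL context, and `depthRb`, `width`, `height` are hypothetical variables for an existing depth renderbuffer and its dimensions). Requesting a sized format explicitly also sidesteps the vendor default:

```java
// Explicitly request a 24-bit depth renderbuffer instead of the unsized format:
GL30.glBindRenderbuffer(GL30.GL_RENDERBUFFER, depthRb);
GL30.glRenderbufferStorage(GL30.GL_RENDERBUFFER, GL14.GL_DEPTH_COMPONENT24, width, height);

// Verify what the driver actually allocated:
int depthBits = GL30.glGetRenderbufferParameteri(GL30.GL_RENDERBUFFER,
                                                 GL30.GL_RENDERBUFFER_DEPTH_SIZE);
System.out.println("depth bits: " + depthBits);
```

If the two vendors print different values for the unsized format, that would confirm the flickering is just a 16-bit vs. 24-bit precision difference.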