OpenGL doesn't support depth below 16bit???

Started by SCM23, February 06, 2009, 20:37:49


SCM23

hi everyone,

I'm using the jMonkeyEngine to create some applications. In the source code of this engine there is the following method:

private static String[] getDepths(String resolution, DisplayMode[] modes) {
    ArrayList<String> depths = new ArrayList<String>(4);
    for (int i = 0; i < modes.length; i++) {
        // Filter out all bit depths lower than 16 - Java incorrectly reports
        // them as valid depths even though the monitor does not support them
        if (modes[i].getBitsPerPixel() < 16)
            continue;

        String res = modes[i].getWidth() + " x " + modes[i].getHeight();
        String depth = String.valueOf(modes[i].getBitsPerPixel()) + " bpp";
        if (res.equals(resolution) && !depths.contains(depth))
            depths.add(depth);
    }

    String[] res = new String[depths.size()];
    depths.toArray(res);
    return res;
}


As you can see, the programmer skips all display modes with a bit depth lower than 16 bits. I was wondering why this is the case and asked in the jMonkeyEngine forum:
http://www.jmonkeyengine.com/jmeforum/index.php?topic=10317.0

Some of them told me that it could be LWJGL that restricts the depth, others said it is OpenGL, but none of them could back up their statement...

Do you know anything more?

thx scm!!

Kai

Hi,

The number of depth buffer bits that are supported is not specified by OpenGL, nor can it be adjusted or influenced via LWJGL. LWJGL just forwards the framebuffer configuration request to the underlying windowing system (the X server in the case of Linux).
It depends on the framebuffer configurations supported by the graphics card, which are exposed through the graphics driver.
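
For example, with LWJGL you only state the minimum depth buffer size you would like to have; whether that request can be satisfied is decided by the driver when the context is created. Untested and written from memory against the LWJGL 2 API, so double check the PixelFormat constructor arguments for your version, but it would look roughly like this:

import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.PixelFormat;

public class DepthRequestTest {
    public static void main(String[] args) throws LWJGLException {
        Display.setDisplayMode(new DisplayMode(640, 480));
        // Request 0 alpha bits, 24 depth bits, 0 stencil bits.
        // LWJGL only passes this request on; if the driver has no
        // framebuffer configuration with at least 24 depth bits,
        // Display.create() fails with an LWJGLException.
        Display.create(new PixelFormat(0, 24, 0));
        System.out.println("Got a context with a >= 24 bit depth buffer");
        Display.destroy();
    }
}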

If you are running under Linux and own an NVIDIA card, you can use the glxinfo command, which lists all supported framebuffer configurations, including color bits per pixel, depth buffer bits, stencil bits, auxiliary buffers, stereo support and so on.
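
And independent of the platform, you can simply ask the created context what you actually got. These are plain OpenGL 1.1 queries, here through LWJGL's GL11 class (just a sketch to show the idea):

import org.lwjgl.opengl.GL11;

// Call this after Display.create(...) has succeeded.
static void printFramebufferBits() {
    int depthBits   = GL11.glGetInteger(GL11.GL_DEPTH_BITS);
    int stencilBits = GL11.glGetInteger(GL11.GL_STENCIL_BITS);
    int colorBits   = GL11.glGetInteger(GL11.GL_RED_BITS)
                    + GL11.glGetInteger(GL11.GL_GREEN_BITS)
                    + GL11.glGetInteger(GL11.GL_BLUE_BITS)
                    + GL11.glGetInteger(GL11.GL_ALPHA_BITS);
    System.out.println(colorBits + " bpp, " + depthBits
            + " depth bits, " + stencilBits + " stencil bits");
}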

But it is very common that today's graphics cards only support 16 and 24 bit depth buffers and nothing else.
There is a small hint at this in the OpenGL 1.5 specification on page 142, where all possible texture image formats are listed, including DEPTH_COMPONENT16, DEPTH_COMPONENT24 and DEPTH_COMPONENT32. However, real native support for 32 bit depth buffers is rare today and, in my opinion, should not be a requirement for any OpenGL application.
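
If you want to probe which of those internal formats your card really accepts, a proxy texture query should work. This is only a sketch: I am assuming the usual LWJGL 2 constants and convenience getters here, so verify the names, and keep in mind that a driver may silently fall back to fewer depth bits, which is exactly what the query would show:

import java.nio.ByteBuffer;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL14;

// Needs a current OpenGL context, i.e. call it after Display.create(...).
static void probeDepthTextureFormat(int internalFormat) {
    // Ask the driver whether it would accept a 512x512 depth texture with
    // the given internal format (e.g. GL14.GL_DEPTH_COMPONENT32), without
    // actually allocating any memory.
    GL11.glTexImage2D(GL11.GL_PROXY_TEXTURE_2D, 0, internalFormat,
            512, 512, 0, GL11.GL_DEPTH_COMPONENT, GL11.GL_UNSIGNED_INT,
            (ByteBuffer) null);

    // A width of 0 means the combination is rejected outright; otherwise
    // GL_TEXTURE_DEPTH_SIZE tells you how many depth bits you really get.
    int width = GL11.glGetTexLevelParameteri(
            GL11.GL_PROXY_TEXTURE_2D, 0, GL11.GL_TEXTURE_WIDTH);
    int depthBits = GL11.glGetTexLevelParameteri(
            GL11.GL_PROXY_TEXTURE_2D, 0, GL14.GL_TEXTURE_DEPTH_SIZE);
    System.out.println("accepted: " + (width != 0)
            + ", depth bits: " + depthBits);
}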

So I would say that there is no native support for 8 bit depth buffers.
You could just as well use a 16 bit depth buffer and a fragment shader in which you rescale those values to 8 bit accuracy when writing to gl_FragDepth.
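
In GLSL that idea would look roughly like this (only a sketch; the shader quantizes the incoming window space depth to 256 levels, and the color write is just a placeholder):

// GLSL 1.10 fragment shader source, embedded as a Java string so it can be
// fed to GL20.glCreateShader()/glShaderSource()/glCompileShader().
// The interesting line rounds the 0..1 window space depth to 8 bit accuracy
// before it reaches the (16 bit) depth buffer.
String quantizedDepthShader =
      "void main() {\n"
    + "    gl_FragColor = vec4(1.0);\n"
    + "    gl_FragDepth = floor(gl_FragCoord.z * 255.0 + 0.5) / 255.0;\n"
    + "}\n";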