Is there a way to read out the depth buffer as an image?

Started by Vapalus, January 16, 2015, 11:50:08

Vapalus

This would be kind of a nice way to find out which element the mouse is pointing at, or where in x/y/z it currently is in the 3D world.
Also, I think it is commonly used in games for easier hit detection.

Kai

If you simply want to sample the depth component of your screen-rendered scene at a given viewport location (in pixels), you can use glReadPixels with format=GL_DEPTH_COMPONENT. [1]
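
For example, a rough sketch of that (untested, LWJGL-style static imports from GL11; mouseX, mouseY and windowHeight are placeholders for whatever your input/window code provides):

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;

// Reads the depth value under the given mouse position.
// Window coordinates start at the lower-left corner, so the mouse y
// (usually measured from the top-left corner) has to be flipped first.
public static float readDepthAt(int mouseX, int mouseY, int windowHeight) {
    int y = windowHeight - mouseY - 1;
    FloatBuffer depth = BufferUtils.createFloatBuffer(1);
    glReadPixels(mouseX, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
    return depth.get(0); // non-linear depth in [0..1]
}

To get the x/y/z position in the 3D world from that, you would then un-project the window x/y plus this depth value with the inverse of your projection*view matrix (what gluUnProject does).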

If you really want to have your depth in a texture, which you may then want to sample in a shader, then you can either render your scene with a framebuffer object (FBO) into a texture (use glFramebufferTexture2D with attachment=GL_DEPTH_ATTACHMENT), or you can copy the depth buffer into a texture object via glCopyTexImage2D (using internalformat=GL_DEPTH_COMPONENT). [2]
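
A minimal sketch of the FBO route (again untested and simplified; width/height are placeholders, and you would normally also attach a color target if you still need the color output):

import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL14.*;
import static org.lwjgl.opengl.GL30.*;

// Create a depth texture...
int depthTex = glGenTextures();
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
        GL_DEPTH_COMPONENT, GL_FLOAT, (java.nio.ByteBuffer) null);

// ...and attach it to an FBO as the depth attachment.
int fbo = glGenFramebuffers();
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

// Render the scene here; the depth values end up in depthTex.

glBindFramebuffer(GL_FRAMEBUFFER, 0);

// The glCopyTexImage2D alternative: render to the default framebuffer as usual,
// then copy its depth buffer into the currently bound texture:
// glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, width, height, 0);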

I am sure there are some more ways to do it. :)

References:
[1]: http://stackoverflow.com/questions/7164952/efficient-way-of-reading-depth-values-from-depth-buffer
[2]: http://stackoverflow.com/questions/6340724/how-to-copy-depth-buffer-to-a-texture-on-the-gpu

quew8

(Edited to correct)
Or if you just want to see the depth buffer on-screen, you can have a shader which samples the fragment depth.

void main() {
    // Write the fragment's (non-linear) depth as a grey value.
    gl_FragColor = vec4(gl_FragCoord.z, gl_FragCoord.z, gl_FragCoord.z, 1);
}

This fragment shader will render a greyscale representation of the depth buffer.

As @Kai so astutely pointed out, what I originally posted, which used gl_FragDepth, was wrong since gl_FragDepth is an output variable from the fragment shader. I hang my head in shame.

Vapalus

It seems that could be used for smoke, fog or night effects, too.

Kai

Quote from: quew8 on January 18, 2015, 13:04:53
Or if you just want to see the depth buffer on-screen, you can have a shader which samples the fragment depth.

void main() {
    gl_FragColor = vec4(gl_FragDepth, gl_FragDepth, gl_FragDepth, 1);
}

This fragment shader will render a greyscale representation of the depth buffer.

Have you actually tried this? Does it work for you? Because gl_FragDepth is an "output-only value", which the fragment shader can "write to" in order to override the fixed-function pipeline depth value that is otherwise computed afterwards.

This post also states as much: http://lists.apple.com/archives/mac-opengl/2010/Jun/msg00025.html

It suggests reading the gl_FragCoord.z value instead.

quew8

Sorry. You're quite right @Kai. I have done it in the past and I did use gl_FragCoord.z. I posted that off the top of my head and I should really check out the quick reference before I do that.

Vapalus

If FragDepth is a value which can override the depth value, does that mean I can manipulate whether a polygon is rendered in front of or behind another polygon, i.e. make objects visible through walls, a.k.a. "wallhacks"? I saw a feature like this in DeusEx a few times.

spasi

Yes, but keep in mind that doing so disables early-Z culling and other optimizations.

quew8

Yes. Whatever you're doing, the likelihood is that the solution is just to disable depth testing temporarily rather than mess around with depth values.
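
Just to illustrate the idea (a sketch; drawHighlightedObject() is a made-up stand-in for your own draw call):

import static org.lwjgl.opengl.GL11.*;

glDisable(GL_DEPTH_TEST);  // ignore the depth buffer for this draw only
drawHighlightedObject();   // hypothetical draw call for the object that should show through walls
glEnable(GL_DEPTH_TEST);   // restore normal depth testing afterwards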

Kai

Quote from: Vapalus on January 20, 2015, 09:00:49
If FragDepth is a value which can override the depth value, does that mean I can manipulate whether a polygon is rendered in front of or behind another polygon, i.e. make objects visible through walls, a.k.a. "wallhacks"? I saw a feature like this in DeusEx a few times.

You can do a lot of great things with overriding the depth value.
The most prominent example I remember is making "impostors" behave like real objects with respect to z-testing as well.

Have a look at the GPU Gems 3 article "True Impostors", although that article does not use this exact technique.

For example, you could render a million spheres but use only a billboard rectangle for each (which you can also render from a single vertex with point sprites), and then write the depth as if it were a real sphere. If, on top of that, you also use instancing for that million spheres, you would only need a single vertex (3 floats) in a buffer object for your whole scene.
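
A very rough sketch of the draw side of that idea (untested, compatibility-style LWJGL code with made-up names; the matching shaders that set gl_PointSize and fetch the per-sphere data via gl_InstanceID are not shown):

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL31.*;
import static org.lwjgl.opengl.GL32.*;

// The single vertex (3 floats) that is the only per-vertex data for the whole scene.
FloatBuffer oneVertex = BufferUtils.createFloatBuffer(3);
oneVertex.put(0.0f).put(0.0f).put(0.0f).flip();

int vbo = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, oneVertex, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0L);

// Let the vertex shader set gl_PointSize so each point sprite covers its sphere on screen.
glEnable(GL_PROGRAM_POINT_SIZE);

// One instance per sphere; centre and radius would come from gl_InstanceID lookups,
// and the fragment shader would write gl_FragDepth as if a real sphere had been rasterized.
glDrawArraysInstanced(GL_POINTS, 0, 1, 1000000);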

Vapalus

Quote from: Kai on January 20, 2015, 10:09:27
Quote from: Vapalus on January 20, 2015, 09:00:49
If FragDepth is a value which can override the depth value, does that mean I can manipulate whether a polygon is rendered in front of or behind another polygon, i.e. make objects visible through walls, a.k.a. "wallhacks"? I saw a feature like this in DeusEx a few times.

You can do a lot of great things with overriding the depth value.
The most prominent example I remember is making "impostors" behave like real objects with respect to z-testing as well.

Have a look at the GPU Gems 3 article "True Impostors", although that article does not use this exact technique.

For example, you could render a million spheres but use only a billboard rectangle for each (which you can also render from a single vertex with point sprites), and then write the depth as if it were a real sphere. If, on top of that, you also use instancing for that million spheres, you would only need a single vertex (3 floats) in a buffer object for your whole scene.

How much benefit would that bring you if the z-coordinate always faces in your direction?
Bringing a single pixel "closer" to you doesn't work, I think - or did I understand that the wrong way?

Kai

Yes, you did understand that the wrong way. ;)

The z-value essentially says how far away from the viewer a certain surface point is, though not in linear space, to allow for higher z-precision near the viewer.
So lower values mean "nearer to" the viewer and greater values mean "farther away from" the viewer.
Z is a single scalar value, so it can only express a single dimension, which is the Z-coordinate (distance from the viewer) in the normalized device coordinate system.

The Z-value does not exactly mean "facing in your direction", as Z is a single value and not a vector pointing somewhere. It gives the distance (not the direction!) from the viewer to the surface point. The direction is actually irrelevant, because in normalized device coordinates it is always (0, 0, 1). Remember, in NDC there is no "perspective projection" anymore. We are past that and have a unit cube there.

For a sphere you can now generate a z-value that is lowest at the center of the sphere and increases via a cosine-falloff in linear space up to the edge of the sphere.
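
Purely as a sketch of that last point (not exact math; the sphereDepthRadius uniform is a made-up name standing in for the sphere's radius expressed in depth-buffer units), the fragment shader of such a sphere impostor could look along these lines, kept here as a Java string for glShaderSource:

String sphereFragSource =
      "uniform float sphereDepthRadius;            // made-up name, see above\n"
    + "void main() {\n"
    + "    vec2 p = gl_PointCoord * 2.0 - 1.0;     // [-1, 1] across the point sprite\n"
    + "    float r2 = dot(p, p);\n"
    + "    if (r2 > 1.0) discard;                  // outside the sphere's silhouette\n"
    + "    float nz = sqrt(1.0 - r2);              // 1.0 at the centre, 0.0 at the edge\n"
    + "    // Lowest depth at the centre, increasing towards the edge of the sphere.\n"
    + "    gl_FragDepth = gl_FragCoord.z - nz * sphereDepthRadius;\n"
    + "    gl_FragColor = vec4(nz, nz, nz, 1.0);   // simple shading, just for illustration\n"
    + "}";

The subtraction is only a rough linear approximation; for exact results you would reconstruct the view-space position of the sphere's surface point and re-project it, but the idea of "lowest z at the centre, increasing towards the edge" stays the same.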

Hope that makes things clearer.

Cheers,
Kai

Vapalus

That is kind of what I meant - and it also means that it makes no difference (unless, of course, the object is BEHIND a surface that it will clip through, a case which should not happen in a good game imho).
Unless the light is affected, which should be simulated in a different way...

Kai

Quote from: Vapalus on January 20, 2015, 19:24:56
and it also means that it makes no difference [...] Unless the light is affected, which should be simulated in a different way...
That is true.
But people are inventing crazy new and cool things in computer graphics every day, so I have come to learn never to claim that something "should" be done one specific way.
The only thing I'm saying is that depth writes allow you to simulate geometry that wasn't actually rasterized there.
And I was just proposing some crazy way to make use of it. :)