"Wobbly" edges on triangles

Started by Draghi, April 15, 2014, 11:34:14


Draghi

Hey all,

I've recently started programming a very, very basic doom-like/quake-like engine using LWJGL - it's more or less a learning project. I started on OpenGL 1/2, I'm now working with OpenGL 3/4, and I've run into a bit of an issue:

[Screenshot: the cuboid, showing the wobbly edges described below]

The issue, if it's not as obvious as I think it is, is that the edges where the triangles meet appear to wobble depending on the angle you're looking from.

These are the enables/hints I've got set:
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glEnable(GL11.GL_LINE_SMOOTH);
GL11.glEnable(GL11.GL_POLYGON_SMOOTH);
GL11.glEnable(GL11.GL_CULL_FACE);

GL11.glHint(GL20.GL_FRAGMENT_SHADER_DERIVATIVE_HINT, GL11.GL_NICEST);
GL11.glHint(GL11.GL_LINE_SMOOTH_HINT, GL11.GL_NICEST);
GL11.glHint(GL11.GL_POLYGON_SMOOTH_HINT, GL11.GL_NICEST);

GL11.glDepthFunc(GL11.GL_LESS);
GL11.glCullFace(GL11.GL_BACK);
GL11.glFrontFace(GL11.GL_CCW);


Changing the hints and/or enables/disables appears to have no effect on the issue. If I turn off backface culling, the colours of the hidden faces "creep" over the top face where the edge "wobbles".

I'm sure this isn't the fragment or vertex shader, because the issue persists even when they just pass the position/colour straight through.

Has this happened to anyone else before, and is there any clue what causes it?

I'm happy to provide any code, though it might not be completely idiomatic Java (I'm new to Java, but programmed in Free Pascal for a few years); I just don't know what's relevant in this case.

Just some information about what I've got:

  • I have 3 matrices (Projection, Viewing and Model) and a computed normal matrix for use in lighting.
  • The cube is composed of 12 triangles, all defined CCW
  • The cube is rendered as GL_TRIANGLES (a rough sketch of the draw call follows this list)
  • This prism is the same cube scaled on the X and Z by a factor of 10 (even if unscaled, the issue persists)
  • The window has an OpenGL 4.3 context (The context version seems to have no effect, though it requires a minimum context of 3.0)
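
For reference, the draw itself is roughly this (a minimal sketch - the VAO name is mine):

GL30.glBindVertexArray(cubeVao);             // hypothetical VAO holding the cube's vertex data
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, 36); // 12 triangles x 3 vertices, all CCW
GL30.glBindVertexArray(0);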

Thanks for your time!

quew8

I think this is a case of low depth buffer precision. So OpenGL stores the zValue of the visible pixel (or fragment, more correctly) in the depth buffer. When you try to draw over that pixel, the depth test determines whether the new pixel is "closer" or "further away" by comparing its zValue with the one in the depth buffer. You probably knew that bit already, but either way you do now.

BUT each pixel in the depth buffer only has so many bits to store that zValue in, and those bits have to cover the entire range of zValues (the range between the zNear and zFar used to create the projection matrix). So if you have a large depth range and you try to draw two pixels that are very close together depth-wise (like at the edge where two faces of a cuboid meet), a depth buffer without enough bits struggles to determine which is on top. Sometimes it thinks one face is in front, sometimes the other. Which means wobbly edges, as you so eloquently put it.
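
To put rough numbers on that, here's a back-of-the-envelope sketch (the exact figures depend on your projection, but this is the standard perspective depth mapping):

// For a standard perspective projection, window depth for an eye-space
// distance z in [n, f] is d(z) = f * (z - n) / (z * (f - n)).
// A b-bit buffer quantises d into 2^b steps, so the smallest eye-space
// difference it can resolve at distance z is about z^2 * (f - n) / (f * n * 2^b).
public class DepthPrecision {
    public static void main(String[] args) {
        double n = 0.1, f = 100.0;      // example zNear / zFar
        double steps = Math.pow(2, 24); // a 24-bit depth buffer
        for (double z : new double[] {1.0, 10.0, 100.0}) {
            double dz = z * z * (f - n) / (f * n * steps);
            System.out.printf("resolvable step at z = %5.1f: ~%.2e units%n", z, dz);
        }
    }
}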

Now I think you have three solutions.
1) Request a depth buffer with more bits. Note "request" - all you can do here is ask OpenGL nicely; if the hardware isn't there to support it then you ain't going to get it. Depending on the exact implementation of OpenGL you might be given the maximum number of bits by default. Also, learning project or not, by doing this you are potentially dooming (doom, hah) lower-end hardware to shoddy graphics unnecessarily. That said, it is worth a try. Ask if you don't know how to do this - but seeing as you know how to mess around with contexts, I'm going to assume you already do.
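
For reference, with LWJGL 2's Display API the request looks something like this (a sketch - the class name and display mode are mine):

import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.ContextAttribs;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.PixelFormat;

public class DeepDepthBuffer {
    public static void main(String[] args) throws LWJGLException {
        Display.setDisplayMode(new DisplayMode(800, 600));
        // Ask for 24 depth bits; the driver is free to hand back fewer.
        Display.create(new PixelFormat().withDepthBits(24),
                       new ContextAttribs(3, 2));
        while (!Display.isCloseRequested()) {
            // ... render ...
            Display.update();
        }
        Display.destroy();
    }
}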

2) Use a smaller depth range. Most likely this means setting your zFar to a smaller value and occluding things a bit nearer than before. I generally have zFar set far too large anyway, so this is my preferred option. If you want to be really clever you can quite easily expose this as an in-game option and let the user decide what's best for their hardware. Occluding nearer also means drawing less, so better performance, especially when combined with frustum culling - extra good for the lower-end hardware with the smaller depth buffers.
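
If you build your own projection matrix, this is where zNear/zFar enter (a hypothetical helper, column-major as OpenGL expects; note that raising zNear usually buys far more precision than lowering zFar, since most of the buffer's resolution sits near the near plane):

// Builds a standard perspective projection matrix.
static float[] perspective(float fovYDegrees, float aspect, float zNear, float zFar) {
    float t = (float) (1.0 / Math.tan(Math.toRadians(fovYDegrees) / 2.0));
    float[] m = new float[16];        // column-major 4x4
    m[0]  = t / aspect;               // x scale
    m[5]  = t;                        // y scale
    m[10] = (zFar + zNear) / (zNear - zFar);    // depth remap
    m[14] = 2f * zFar * zNear / (zNear - zFar); // depth offset
    m[11] = -1f;                      // perspective divide by -z
    return m;
}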

3) This one came off the top of my head as I was writing, looking at the very aliased far edges on your cuboid. I might be wrong, but I'm pretty sure a spot of anti-aliasing would sort out this problem as well. Depending on the exact method you use it is essentially a bit of blurring, so it tends to get rid of little artifacts like that.
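
With the same LWJGL 2 Display API as above, requesting multisampling would look something like this (again just a sketch; 4x is an arbitrary sample count):

// Ask for a 4x multisampled default framebuffer alongside the 24-bit depth buffer.
Display.create(new PixelFormat().withDepthBits(24).withSamples(4),
               new ContextAttribs(3, 2));
GL11.glEnable(GL13.GL_MULTISAMPLE); // usually enabled by default when samples > 0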

Hope I've helped. Any more questions, ask away.

Draghi

I was thinking along the same lines with the depth buffer.

This is what I've tried:

  • Requested a 24-bit depth buffer - the max size I could request, even with a 0-bit stencil buffer (a quick way to verify what was actually granted follows this list)
  • Reducing zFar from 100 to 10
  • Increasing zNear from 0.1 to 1
  • Bringing both zFar and zNear in
  • Rearranging the vertex buffer in various ways (e.g. top face rendered first/last)
  • Splitting each face into its own VAO, rendering them consecutively
  • Reducing the size of the faces by very small amounts (kind of worked, but left gaps)
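
For the first point, one way to double-check how many depth bits the driver actually granted (this assumes a compatibility profile - GL_DEPTH_BITS has been removed from strict core contexts):

int grantedDepthBits = GL11.glGetInteger(GL11.GL_DEPTH_BITS);
System.out.println("Depth bits actually granted: " + grantedDepthBits);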

I also tried rendering a 10x10 grid of 1x1x1 cubes and this occurred:

[Screenshot: a 10x10 grid of cubes with thin dark/light lines showing along the shared edges of the top faces]

The lines on the surface come from the adjacent cubes' sides, as indicated by the dark and light lines matching the light cast on those sides. The lines stay the same thickness no matter how far I pull the camera back, and they also appear when the cubes are stacked along XY and ZY.

In practice this *shouldn't* be an issue if I don't render the sides that sit between two cubes. However, the wobbly edges are very annoying - to me at least - and rather than avoiding the issue I really should fix it.

Any other clues?
I'll keep trying to fix it in the meantime.

As for Anti-Aliasing, I'm still working on trying that.


Well, apparently I typed out the above for nothing.
Upon removing these lines:
GL11.glEnable(GL11.GL_LINE_SMOOTH);
GL11.glEnable(GL11.GL_POLYGON_SMOOTH);
GL11.glHint(GL11.GL_LINE_SMOOTH_HINT, GL11.GL_NICEST);
GL11.glHint(GL11.GL_POLYGON_SMOOTH_HINT, GL11.GL_NICEST);


That completely remedied the issue.

What made me try that was:
http://www.opengl.org/wiki/Multisampling#History

Apparently, these are no longer nice functions XD From what I gather, GL_POLYGON_SMOOTH antialiases each triangle on its own and expects special blending to be set up, so pixels along shared edges end up blended with whatever is behind them - hence the creeping colours and the wobble.

Anyway, thanks a lot for your help - I wouldn't have stumbled across that section otherwise! :)
Also, I actually didn't know how the depth buffer worked; I knew it stored depth values but not how it went about it.