Simplest 3D object rendering

Started by Automaton, October 02, 2015, 17:47:53

Automaton

I am starting to get the hang of this OpenGL thing, and I am now able to import and draw complex geometries from .OBJ files. I import all vertices, faces (which define each triangle of the mesh), and vertex normals. I currently put the vertices in a VBO and draw the object using glDrawElements, since I have the index order for each triangle from the faces. Obviously, without lighting the object doesn't really look "3D".

The way I see it is that I have 2 options for doing this:

  • Add lighting
  • Add edge outlines
(or both)

When I say "edge outlines" I don't mean the wireframe of the mesh, which is easily achievable using
GL11.glPolygonMode(GL11.GL_FRONT_AND_BACK, GL11.GL_LINE);


This is what I mean (ignoring the fact that there is also lighting):


I'm not opposed to creating a separate VBO for the edges and drawing lines, but what I'm unable to determine through googling is whether there is a simple way of dynamically determining, in code, which vertices to draw lines between (I do have access to vertex normals, but I'm not sure if those help).

Further, I'm looking for the simplest lighting implementation. I'd be totally satisfied with simply adding static ambient and diffuse light in my shader, if such a thing is possible. Most examples I come across for lighting use fixed-function calls that are no longer available in the core profile.
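Static ambient plus diffuse is indeed possible with just a shader and a uniform light direction. Below is a sketch (not from any particular tutorial; all names are illustrative) of the per-fragment math such a core-profile shader would evaluate, written as plain Java so the arithmetic is easy to check:

```java
// Sketch: the Lambert (diffuse) term plus a constant ambient floor,
// i.e. the computation a minimal core-profile fragment shader would do.
// Names (shade, ambient, lightDir) are illustrative, not from the thread.
public class LambertSketch {
    // normal and lightDir are unit vectors; ambient is in [0, 1].
    static double shade(double[] normal, double[] lightDir, double ambient) {
        double dot = normal[0]*lightDir[0] + normal[1]*lightDir[1] + normal[2]*lightDir[2];
        double diffuse = Math.max(0.0, dot); // clamp: surfaces facing away get no diffuse
        return Math.min(1.0, ambient + (1.0 - ambient) * diffuse);
    }

    public static void main(String[] args) {
        double[] l = {0, 0, 1};                  // light shining straight at the viewer plane
        double[] facing = {0, 0, 1};             // surface facing the light
        double[] away   = {0, 0, -1};            // surface facing away
        System.out.println(shade(facing, l, 0.2)); // fully lit -> 1.0
        System.out.println(shade(away,   l, 0.2)); // only ambient remains -> 0.2
    }
}
```

In GLSL this is one `max(dot(N, L), 0.0)` per fragment, with the light direction passed as a uniform, so no fixed-function calls are needed.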

Cornix

The outlines you show in your image are not actual edges of the mesh, since the mesh is made up of triangles. For this reason you would need to provide these outlines as separate data if you want to render them.

Kai

There are a lot of techniques available for rendering the outlines of a model. Just google for "OpenGL render outlines".
Most of them, though, are incapable of rendering the "inner" creases and edges of the model.
The most non-invasive way to do that (as in: not touching the model data itself) is an edge-detection filter (such as Sobel or Frei-Chen) implemented as a shader.
This works by rendering the normals of the model to an offscreen color framebuffer, and then running an edge-detection shader over that image to produce color wherever there are sufficiently large discontinuities between neighboring normals. If the discontinuities lie along an edge/crease, the output image will contain a line along that edge.
You can then superimpose/blend your original model render with the edge detection output and voilà.
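To make the filter concrete, here is a CPU-side sketch of the Sobel pass described above, run over one channel of a hypothetical rendered normals image (in the actual technique, the loop body would live in a fragment shader sampling the offscreen texture):

```java
// Sketch (illustrative, not Kai's shader): 3x3 Sobel gradient magnitude
// over a single channel of a normals image; a large magnitude marks an edge.
public class SobelSketch {
    // img is row-major with width w; (x, y) must not touch the border.
    static double sobel(double[] img, int w, int x, int y) {
        double gx = -img[(y-1)*w + (x-1)] + img[(y-1)*w + (x+1)]
                  - 2*img[y*w + (x-1)]    + 2*img[y*w + (x+1)]
                  -   img[(y+1)*w + (x-1)] +   img[(y+1)*w + (x+1)];
        double gy = -img[(y-1)*w + (x-1)] - 2*img[(y-1)*w + x] - img[(y-1)*w + (x+1)]
                  +  img[(y+1)*w + (x-1)] + 2*img[(y+1)*w + x] + img[(y+1)*w + (x+1)];
        return Math.sqrt(gx*gx + gy*gy);
    }

    public static void main(String[] args) {
        int w = 6, h = 6;
        double[] nz = new double[w*h];
        // Left half: normals facing +Z (nz = 1); right half: tilted (nz = 0),
        // as if two faces of the model meet at a crease down the middle.
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                nz[y*w + x] = (x < 3) ? 1.0 : 0.0;
        System.out.println(sobel(nz, w, 3, 3) > 0.5); // true  (on the crease)
        System.out.println(sobel(nz, w, 1, 3) > 0.5); // false (flat region)
    }
}
```

Thresholding the magnitude (or mapping it to an intensity) gives exactly the line-along-the-crease output described above.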

Kai

I added another demo to the lwjgl3-demos repository, showing a technique to render the outline of a mesh using the geometry shader. It first computes triangle adjacency information based on the index buffer of a simple triangles mesh. Then it uses the geometry shader to decide which triangle edges are outlines of the mesh by checking, for each edge, whether one of the two triangles sharing it faces the camera while the other faces away.

See: https://github.com/LWJGL/lwjgl3-demos/blob/master/src/org/lwjgl/demo/opengl/geometry/SilhouetteDemo.java
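The adjacency build and the silhouette test can be sketched on the CPU as follows (a simplified illustration under assumed names, not the demo's actual code, which does the facing test in the geometry shader):

```java
import java.util.*;

// Sketch: build per-edge adjacency from a triangle index buffer, then flag
// an edge as a silhouette when its two adjacent triangles differ in
// front/back-facing with respect to the view direction.
public class SilhouetteSketch {
    static List<int[]> silhouetteEdges(double[][] verts, int[][] tris, double[] viewDir) {
        // Map each undirected edge (a < b) to the facings of triangles using it.
        Map<Long, List<Boolean>> edgeFacing = new HashMap<>();
        for (int[] t : tris) {
            boolean front = facesViewer(verts[t[0]], verts[t[1]], verts[t[2]], viewDir);
            for (int i = 0; i < 3; i++) {
                int a = t[i], b = t[(i + 1) % 3];
                long key = Math.min(a, b) * 1_000_000L + Math.max(a, b);
                edgeFacing.computeIfAbsent(key, k -> new ArrayList<>()).add(front);
            }
        }
        List<int[]> result = new ArrayList<>();
        for (Map.Entry<Long, List<Boolean>> e : edgeFacing.entrySet()) {
            List<Boolean> f = e.getValue();
            if (f.size() == 2 && !f.get(0).equals(f.get(1))) // one front, one back
                result.add(new int[]{(int) (e.getKey() / 1_000_000L), (int) (e.getKey() % 1_000_000L)});
        }
        return result;
    }

    // A triangle faces the viewer if its normal points against the view direction.
    static boolean facesViewer(double[] a, double[] b, double[] c, double[] v) {
        double ux = b[0]-a[0], uy = b[1]-a[1], uz = b[2]-a[2];
        double wx = c[0]-a[0], wy = c[1]-a[1], wz = c[2]-a[2];
        double nx = uy*wz - uz*wy, ny = uz*wx - ux*wz, nz = ux*wy - uy*wx;
        return nx*v[0] + ny*v[1] + nz*v[2] < 0;
    }

    public static void main(String[] args) {
        // Two triangles sharing edge (1,2), folded so one faces the camera
        // and the other folds away under it.
        double[][] verts = {{0,0,0}, {1,0,0}, {0,1,0}, {0.2,0.2,-1.0}};
        int[][] tris = {{0,1,2}, {1,3,2}};
        double[] viewDir = {0, 0, -1}; // looking down -Z
        for (int[] e : silhouetteEdges(verts, tris, viewDir))
            System.out.println(e[0] + "-" + e[1]); // only the shared fold edge
    }
}
```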

EDIT: Currently, that demo only highlights edges that are shared by a front-facing and a back-facing triangle.
Using the geometry shader, it is also possible to visualize the inner creases/edges by computing the dot product between the normals of the two triangles adjacent to an edge, and applying a threshold to decide when to show the edge (or some color/intensity interpolation to highlight it).
However, showing the true "outline"/boundary of a mesh without any inner edges (no matter whether those are front-facing/back-facing pairs) is not possible with this approach. I guess this requires an image-space technique to really find the boundary of the mesh.
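The crease test mentioned above reduces to a single dot product per edge; a minimal sketch (illustrative names, not the demo's code):

```java
// Sketch: show an edge as a crease when the unit normals of its two
// adjacent triangles diverge past a threshold angle, i.e. their dot
// product falls below the cosine of that angle.
public class CreaseSketch {
    static boolean isCrease(double[] n1, double[] n2, double cosThreshold) {
        double d = n1[0]*n2[0] + n1[1]*n2[1] + n1[2]*n2[2];
        return d < cosThreshold;
    }

    public static void main(String[] args) {
        double[] flatA = {0, 0, 1}, flatB = {0, 0, 1}; // coplanar triangles
        double[] bent  = {0, 1, 0};                    // 90-degree fold
        double cos30 = Math.cos(Math.toRadians(30));   // 30-degree crease threshold
        System.out.println(isCrease(flatA, flatB, cos30)); // false: no crease
        System.out.println(isCrease(flatA, bent,  cos30)); // true: sharp fold
    }
}
```

Instead of a hard boolean, the demo's suggested color/intensity interpolation would map the dot product to a line intensity.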

EDIT 2: There is also another demo now, using an image-space approach with a Sobel edge detection filter implemented in a fragment shader.
See: https://github.com/LWJGL/lwjgl3-demos/blob/master/src/org/lwjgl/demo/opengl/fbo/EdgeShaderDemo20.java
(or using a multisampled renderbuffer): https://github.com/LWJGL/lwjgl3-demos/blob/master/src/org/lwjgl/demo/opengl/fbo/EdgeShaderMultisampleDemo20.java
This demo uses the view-space normals to detect edges. Ideally, the shader would also take depth into account, in case the model has surfaces whose normals point in the same direction but lie at different depths.