3D Engine Recommendations

Started by DavidYazel, June 30, 2003, 14:35:07

Previous topic - Next topic


I was wondering if I could get people's advice on a few different rendering issues.

1. Lighting.  There are several choices for lighting.  With lower polygon resolutions vertex lighting looks sucky, so I was exploring the alternatives.  My first thought is that the engine should keep a lightmap texture available for all surfaces, packing as many polygons as possible into each lightmap.  This lightmap would be used for receiving shadows and light.  There is also a technique for real-time radiosity that uses a cube map to encode the lights; it has the advantage of being able to handle a lot of lights, since you project the light onto the cube faces and then map the texture onto the objects that can receive the light.
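For what it's worth, the usual way a lightmap combines with the base texture is a per-texel modulate. A minimal CPU-side sketch of that math (the class and method names are hypothetical; on hardware this is just a GL_MODULATE multitexture stage):

```java
public final class Lightmap {
    // Modulate a base texel by a lightmap texel, both packed as 0xRRGGBB.
    // This CPU version just illustrates the per-channel multiply the
    // texture-environment stage performs.
    public static int modulate(int baseRgb, int lightRgb) {
        int r = (((baseRgb >> 16) & 0xFF) * ((lightRgb >> 16) & 0xFF)) / 255;
        int g = (((baseRgb >> 8) & 0xFF) * ((lightRgb >> 8) & 0xFF)) / 255;
        int b = ((baseRgb & 0xFF) * (lightRgb & 0xFF)) / 255;
        return (r << 16) | (g << 8) | b;
    }
}
```

A fully-lit lightmap texel (0xFFFFFF) leaves the base colour unchanged; a half-grey texel darkens it to half.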

2. Shadows.  There seem to be a lot of options here, so I am interested in what people have done before and what their experience has been.  There are stencil shadows, which IMHO look rather bad in most cases.  There are projection shadows, where light blockers are rendered into a texture from the perspective of the light source and then mapped onto the receivers.  Etc.

3. Triangle pipeline.  Coming from the Java3D universe it is interesting to take a step down into the pipeline.  Our app generally thinks about geometry at the geometry array level, which equates to a batch of primitives with their attributes.  This has the advantage of simplicity and ease of rendering, but does make some things harder to do inside the renderer.  For example, we handle progressive meshing inside the scenegraph by building the indexed geometry in such a way that it can be collapsed by only changing the indexed vertex count.  In some engines, this sort of thing is controlled within the rendering pipeline.  The point I am trying to make is that if the interface between the scenegraph and the rendering engine remains at the geometry array level, then it is difficult for the engine to do anything at the primitive level (progressive meshing, occlusion culling, etc.).  But if we make the interface between the scenegraph and the renderer straight polygon soup, then there is a lot more work to be done by the renderer and it will probably slow it down considerably.  I am leaning toward allowing two or more different geometry sources: one would be the more static arrays we use now, which generally come from a straight loaded model, and another would be a source that can issue different polygons depending on the view (I guess one could consider this a geometry updater).  I am rambling a bit on this one.

4. Transparency sorting.  Technically the most accurate way to do this is to take all the primitives which are translucent, sort them, and render them back to front.  If the geometry is being submitted to the renderer from the scenegraph in "chunks" then this becomes more difficult.  In Java3D the lowest level of transparency sorting granularity was the geometry array.  The problem is that if you have all the triangles for your weeds/grass in one array and your particle systems in another array (like we do), this almost guarantees that you will get incorrect rendering.  If we were to collect all translucent primitives, sort them by distance to the view, and render them perfectly, I think it would work.  The problem is that the number of state changes could be considerable as it switched between different textures.  I also think a *lot* of the scene would fall into this category: all vegetation is alpha blended, plus trees, weeds, billboards, particle systems, water, sky, etc.  Sorting and rendering all this would, I think, be painful.
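The back-to-front pass described above can be sketched roughly like this (the Primitive class and its fields are hypothetical; sorting by squared distance avoids a square root per primitive):

```java
import java.util.Arrays;
import java.util.Comparator;

public final class TransparencySort {
    // A translucent primitive: its centroid plus whatever render state
    // (here just a texture id) it needs. Illustrative fields only.
    public static final class Primitive {
        public final float x, y, z;
        public final int textureId;
        public Primitive(float x, float y, float z, int textureId) {
            this.x = x; this.y = y; this.z = z; this.textureId = textureId;
        }
    }

    // Sort back to front relative to the eye position, so the farthest
    // primitives are drawn (and blended) first.
    public static void sortBackToFront(Primitive[] prims,
                                       float ex, float ey, float ez) {
        Arrays.sort(prims, Comparator.comparingDouble(
            (Primitive p) -> distSq(p, ex, ey, ez)).reversed());
    }

    private static double distSq(Primitive p, float ex, float ey, float ez) {
        double dx = p.x - ex, dy = p.y - ey, dz = p.z - ez;
        return dx * dx + dy * dy + dz * dz;
    }
}
```

The texture-thrashing worry above shows up here directly: a strict depth order interleaves texture ids freely, so each adjacent pair of primitives can force a bind.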

5. Overlays.  In OpenGL, is it possible to blend bitmaps straight into the buffer before swapping?  If so, then we could move our GUI outside of the image-plate-aligned 3D scene and render it with straight writes to the screen.  Any suggestions?  This would save us huge amounts of texture memory and also allow much faster updates, since we would not be streaming textures over the pipe.

Thanks in advance for your thoughts.



1. Lighting: standard GL lighting is more or less a waste of time, although you can use the calculations from GL lighting to work out useful stuff for other forms of lighting. Simple light maps are probably the way to go. The biggest single difference I've noticed in recent years in rendered quality is from using diffuse bump maps. Specular bump maps are another order of niceness again. In Java, unfortunately, the processing is considerably slower than in C because of the old read-process-write problem with buffers (where C just processes the data in place).

2. I'd love to learn about shadows. Not so interested in stencil shadows because they're far too sharp. Soft shadows are the next big thing in realism and niceness. (And then volumetric lighting :D )

3. The way I think about it is: the renderer should only be concerned about triangles and the order it needs to draw them in. Therefore you merely need an API that can feed it huge numbers of triangles, and have them sorted so the scene renders in its entirety. That's what the Shaven Puppy triangle renderer does. It's designed so you can define as many levels of sorting as you want, and the collected specification of all the values of these levels is the triangle's "material".

The scenegraph's purpose, therefore, is to determine what triangles to send to the renderer. You firstly decide largely what chunks of geometry are visible with all your culling tricks - BSPs, quadtrees, brute force frustum check etc.; then for each chunk of geometry you need to turn it into a huge bunch of triangles to send to the renderer.

To do the clever bits like shadows, a bit of geometry will be creating rather more triangles than it is actually made of - it'll produce, on the fly, a set of shadow triangles. Etc.
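The multi-level sorting described above can be sketched like this (the particular levels chosen here — blend mode, then texture, then depth — are just an illustrative "material" key, not SPGL's actual one):

```java
import java.util.Arrays;
import java.util.Comparator;

public final class TriangleSorter {
    // A submitted triangle: its "material" is the tuple of sort levels
    // (hypothetical fields here), plus a depth value for ordering within
    // a material.
    public static final class Tri {
        public final int textureId;
        public final int blendMode;
        public final float depth;
        public Tri(int textureId, int blendMode, float depth) {
            this.textureId = textureId;
            this.blendMode = blendMode;
            this.depth = depth;
        }
    }

    // Sort by the material levels first, so triangles sharing state are
    // drawn together with minimal state changes, then by depth within
    // each material.
    public static void sort(Tri[] tris) {
        Arrays.sort(tris, Comparator
            .comparingInt((Tri t) -> t.blendMode)
            .thenComparingInt(t -> t.textureId)
            .thenComparingDouble(t -> t.depth));
    }
}
```

Adding another sort level is just another `.thenComparing...` stage, which is roughly what "define as many levels of sorting as you want" amounts to.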

4. Transparency is probably best handled by sorting in the renderer along with everything else. There's no real way around texture thrashing if it's going to thrash.

Cas :)


Thanks Cas.  I have read all the source code for the Shaven Puppy library and saw how you were handling the triangles/shaders.  It is a very helpful example.

I believe I am going to license my renderer under something similar to the Apache license, which is blah blah blah you can use this for anything, but don't hold me accountable if your nuclear reactor melts down due to a programming bug of mine.  Some of your code could be helpful, like the geometry package.  Since your license is (I think) more restrictive, would that infect my code?

Also, any answer on the overlay question above?


Orangy Tang

Quote from: "DavidYazel"2. Shadows.  There seem to be a lot of options here, so I am interested in what people have done before and what their experience has been.  There are stencil shadows, which IMHO look rather bad in most cases.  There are projection shadows, where light blockers are rendered into a texture from the perspective of the light source and then mapped onto the receivers.  Etc.

I did a lot of reading into shadow methods, and tinkered with shadow maps for a bit as well. Shadow maps seem like they would be nice on high-end hardware (GeForce 3+), but on lower cards you end up with half-solutions and semi-hacks. You end up with lots of annoying edge cases that you're pretty much stuck with on a GF2 :( Precision problems and texture resolution are the main ones. You can only get soft edges on GF3+ as well.

Volumes are nice because they're consistent and stable. Plus they work on almost any half-decent card out there :) I've seen some demos that do weird stuff with pixel shaders to get soft edges, but they're pretty much a large image-space hack.

So shadow mapping is the future, but volumes are going to hang around for a few years before dying out. I'm going to attempt a soft-shadow variation of shadow volumes myself, but it's going to be in 2D, so I should be able to get accurate umbra and penumbra.


Yes, after reading even more today I think I am going to try shadow volumes first, using the zFail test.  I would like to try the single-pass version, which requires the EXT_stencil_two_side extension:

glStencilOp(GL_KEEP,           // stencil test fail
            GL_INCR_WRAP_EXT,  // depth test fail
            GL_KEEP);          // depth test pass
glStencilMask(~0);
glStencilFunc(GL_ALWAYS, 0, ~0);

glStencilOp(GL_KEEP,           // stencil test fail
            GL_DECR_WRAP_EXT,  // depth test fail
            GL_KEEP);          // depth test pass
glStencilMask(~0);
glStencilFunc(GL_ALWAYS, 0, ~0);

Also, it is possible to use a vertex program to generate the volumes from the edge list, although you still need the triangle connectivity information to quickly generate the silhouette edge information.  I am not sure what cards support vertex programs (ARB_vertex_program) though.
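The connectivity-based silhouette pass amounts to this: a mesh edge is on the silhouette when exactly one of its two adjacent triangles faces the light. A rough CPU sketch (the Edge class and mesh layout are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public final class Silhouette {
    // An interior mesh edge with the indices of its two adjacent triangles.
    public static final class Edge {
        public final int v0, v1, triA, triB;
        public Edge(int v0, int v1, int triA, int triB) {
            this.v0 = v0; this.v1 = v1; this.triA = triA; this.triB = triB;
        }
    }

    // facesLight[i] holds the precomputed dot(triangleNormal_i, toLight) > 0
    // test for triangle i. An edge is a silhouette edge when its two
    // neighbours disagree about facing the light; those edges are the ones
    // extruded away from the light to build the shadow volume sides.
    public static List<Edge> silhouetteEdges(List<Edge> edges,
                                             boolean[] facesLight) {
        List<Edge> result = new ArrayList<>();
        for (Edge e : edges) {
            if (facesLight[e.triA] != facesLight[e.triB]) {
                result.add(e);
            }
        }
        return result;
    }
}
```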

The recommendation from my reading is to use lightmaps for static lights on static objects and pre-generate the lightmaps.  Then use shadow volumes for dynamic lights on dynamic objects, like torches on people.


ARB_vertex_program is supported on most newer ATI and NVIDIA drivers, I think. HOWEVER, it is emulated on the CPU for anything < GF3. Not sure about Radeons.

- elias


For that sort of information, check out www.delphi3d.net.  It's quite a good resource for this sort of thing.


Orangy Tang

Quote from: "DavidYazel"Also, it is possible to use a vertex program to generate the volumes from the edge list, although you still need the triangle connectivity information to quickly generate the silhouette edge information.  I am not sure what cards support vertex programs (ARB_vertex_program) though.

You don't even need that: you can write a vertex program to create a volume from the original mesh, without needing any extra information. This is nice if you're using a GF3+, since the model can stay in video memory the whole time and the CPU doesn't need to touch it at all :) On lower hardware the approach you mention would be better, though.
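The vertex-program trick boils down to a per-vertex test: leave light-facing vertices alone, and push away-facing ones out along the light direction. A CPU sketch of the same math (point light, finite extrusion distance, hypothetical names; the real thing runs per vertex on the GPU):

```java
public final class VolumeExtrude {
    // Extrude a vertex away from a point light when its normal faces away
    // from the light; light-facing vertices are left in place. This is the
    // test a shadow-volume vertex program performs for each vertex.
    public static float[] extrude(float[] v, float[] n,
                                  float[] lightPos, float dist) {
        // Vector from the light to the vertex.
        float lx = v[0] - lightPos[0];
        float ly = v[1] - lightPos[1];
        float lz = v[2] - lightPos[2];
        // Facing away: the normal points along the light-to-vertex direction.
        boolean facesAway = n[0] * lx + n[1] * ly + n[2] * lz > 0f;
        if (!facesAway) {
            return new float[] { v[0], v[1], v[2] };
        }
        float len = (float) Math.sqrt(lx * lx + ly * ly + lz * lz);
        return new float[] {
            v[0] + lx / len * dist,
            v[1] + ly / len * dist,
            v[2] + lz / len * dist
        };
    }
}
```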

Lightmaps + shadow volumes can work pretty well, but you might need some tinkering to get dynamic and static lights to look similar enough to not look disjoint. And for static lights on dynamic objects you might want to look into something like Q3's light grid.

Oh, and if you want something really fancy, try looking into spherical harmonics. The maths confused the heck out of me when I looked at it, though, and I don't think it's going to be practical for a while yet.


So then Orangy Tang, are you going to give us a shadow volumes tutorial :) ?

(WRT the SPGL license - it doesn't really have one except that if you use it in a project and subsequently get rich I'd appreciate a pleasantly large donation for its continued development).

Cas :)


Cas, I will probably port the NeHe shadow volume tutorial to LWJGL in the next few weeks.  If it works I will give it to you guys to add to the CVS repository under examples.

BTW, their tutorial is not quite right: they render the shadow after rendering each shadow volume, when in fact it should be done after the stencil has been updated with all shadow volumes.  Their method will not scale and would result in a lot of overdraw.


Orangy Tang

There's a very good document on nVidia's website that describes shadow volumes and a whole bunch of optimisations - something like 'Robust Shadow Volumes'. It explains the whole thing in lots of detail; you might find it handy.

As much as I'd love to do some tutorials or something, I've just started my uni final-year project, so I'm totally bogged down in that at the moment :(


Just out of curiosity - what's your final project about? Something game related?

- elias

Orangy Tang

Yup, the title is 'Visual Game Scripting'. The idea is to get some sort of game framework where a non-programmer can create game scripts, level interactions etc. without having to write any code, and at the same time be nice and extendable, letting programmers drop in extra classes to extend the behaviour.

I'm aiming to get a lot of the groundwork done, especially for the level editor, before going back to uni at the start of the next semester. I could do with getting a basic prototype done as quickly as possible so I can get people's feedback on what's good/bad/needs improving.