I created an FBO and rendered the whole scene to a texture attached to it, then used that texture in a shader. That works; in fact, I'm already doing this to achieve a different effect. The problem is that I need the shader to be active while I am rendering to the FBO.
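For context, the render-to-texture setup itself is just the standard FBO dance. A minimal sketch, assuming the GL 3.0 / ARB_framebuffer_object entry points, with width and height as placeholders:

    GLuint fbo, sceneTex;

    /* Color texture the scene will be rendered into. */
    glGenTextures(1, &sceneTex);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Attach it to an FBO; render into the FBO, sample sceneTex later. */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, sceneTex, 0);
    /* ...draw the scene here... */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);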
My overall goal here is to create a 2D lighting/motion blur system. Basically, I have four framebuffers which handle various aspects of the scene being rendered (see the sketch after this list). These are:
- The foreground buffer, which will be all objects affected by lighting.
- The lighting mask buffer, which has splotches of color rendered onto it (with additive blending) to represent areas of light. After both this and the foreground buffer are rendered, this is drawn over the foreground with multiplicative blending (changes to alpha are disabled, so it affects color only).
- The frame buffer, which has the background (whatever is not affected by lighting) rendered onto it, and then the foreground (which has now been affected by the lighting mask) rendered on top of that. Finally, the GUI is rendered on top, resulting in a finished frame.
- The accumulation buffer, which is used to allow for motion blur. Basically, once I have finished rendering the frame, I draw it at a varying level of transparency over this buffer. This smooths changes over time.
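To make the order of operations concrete, here's roughly what one frame looks like. This is only a sketch: the FBO handles, textures, blurFactor, and the draw*() helpers are stand-ins for my actual scene code.

    /* Blending is assumed enabled (glEnable(GL_BLEND)) throughout. */

    /* 1. Foreground pass: everything affected by lighting. */
    glBindFramebuffer(GL_FRAMEBUFFER, foregroundFBO);
    glClear(GL_COLOR_BUFFER_BIT);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawForeground();

    /* 2. Lighting mask pass: additive splotches of light. */
    glBindFramebuffer(GL_FRAMEBUFFER, lightMaskFBO);
    glClear(GL_COLOR_BUFFER_BIT);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);
    drawLights();

    /* 3. Multiply the mask over the foreground; alpha writes are
       disabled so only the color channels are tinted. */
    glBindFramebuffer(GL_FRAMEBUFFER, foregroundFBO);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);
    glBlendFunc(GL_DST_COLOR, GL_ZERO);
    drawFullscreenQuad(lightMaskTexture);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    /* 4. Compose the frame: background, then lit foreground, then GUI. */
    glBindFramebuffer(GL_FRAMEBUFFER, frameFBO);
    glClear(GL_COLOR_BUFFER_BIT);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawBackground();
    drawFullscreenQuad(foregroundTexture);
    drawGui();

    /* 5. Motion blur: draw the finished frame over the accumulation
       buffer at partial opacity to smooth changes over time. */
    glBindFramebuffer(GL_FRAMEBUFFER, accumFBO);
    glColor4f(1.0f, 1.0f, 1.0f, blurFactor);
    drawFullscreenQuad(frameTexture);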
All of this is working, and it's wonderfully fast: I've had an (admittedly static) scene with 1000 lights render in under a millisecond. Since I manage lighting this way, rendering a light is exactly as fast as rendering a 2D sprite. There's also a lot of other things I can do (rendering shadows is just as easy; I just don't use additive blending, etc.).
However, the standard glBlendFunc options are not working optimally when I am rendering the foreground. My basic blending function for this pass is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). For the color channels, this is exactly what I want. However, issues show up when I render pixels whose alpha is neither zero nor one.
Consider a simple case where I render a ghost in front of a brick wall. The ghost has an alpha of 0.5f; the wall has an alpha of 1.0f. So, if I plug these alpha values into my blending function, the output alpha is:
SRC_ALPHA * SRC_ALPHA + DST_ALPHA * ONE_MINUS_SRC_ALPHA = (0.5f * 0.5f) + (1.0 * 0.5f) = 0.75f
See? It makes perfect sense, but it's an unsatisfactory result. The user would expect the image to still have full alpha where the ghost and the wall overlap. Instead, some of the background will show through when I render the final texture over it: the user won't just see through the ghost, they'll see through the wall behind it as well.
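Just to make the failure mode concrete, the same arithmetic as a throwaway C helper (the function name is mine, not GL's):

    /* Destination alpha produced by glBlendFunc(GL_SRC_ALPHA,
       GL_ONE_MINUS_SRC_ALPHA) when src is drawn over dst. */
    float blendedAlpha(float srcAlpha, float dstAlpha)
    {
        return srcAlpha * srcAlpha + dstAlpha * (1.0f - srcAlpha);
    }
    /* blendedAlpha(0.5f, 1.0f) == 0.75f: drawing the ghost has
       "thinned out" the fully opaque wall behind it. */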
I wanted to use a shader to basically program an appropriate blending function. The algorithm I think will work is something along the lines of:
FINAL_ALPHA = DST_ALPHA + ((1.0f - DST_ALPHA) * SRC_ALPHA);
So, for the ghost/wall example, I would have:
1.0 + ((1.0f - 1.0) * 0.5f) = 1.0f
That way, the alpha level can never actually be lowered, and it should follow common-sense patterns when transparency is drawn over transparency (two 0.5f alpha pixels rendered one over the other would result in a pixel of 0.75f alpha, etc.).
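(One thing I noticed while writing this up: DST_ALPHA + ((1.0f - DST_ALPHA) * SRC_ALPHA) rearranges algebraically to SRC_ALPHA * 1 + DST_ALPHA * (1 - SRC_ALPHA), which looks like something a separate alpha blend function could express without a shader. A sketch, assuming glBlendFuncSeparate (OpenGL 1.4+) is available; I haven't tried this path:

    /* RGB blends as before; alpha uses the "can't be lowered" formula:
       outA = srcA * 1 + dstA * (1 - srcA) == dstA + (1 - dstA) * srcA */
    glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,   /* RGB   */
                        GL_ONE,       GL_ONE_MINUS_SRC_ALPHA);  /* alpha */

)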
SO, IN CONCLUSION: does anybody have any ideas for how I could make this work? I'm currently stumped for options on this final issue.