Blending question
« on: June 10, 2020, 09:47:26 »
I have a specific task where I need to render a bunch of entities that can overlap a bit, and then I need the resulting figure to be transparently rendered on top of a bunch of other entities.
But I don't want entities within those two layers to be transparent in relation to each other.
I.e. objects A, B, C are rendered together; they are not transparent to each other and simply cover one another.
Objects D, E, F are on the other layer, and it's the same deal: they are not transparent to each other.
But then group A is overlaid over group B, and that overlay is transparent.
Is rendering to texture the only way to do that?
One other question: I am currently using
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE) to basically render the object "underneath" whatever is already in the frame buffer.
But if the contents of the frame buffer have alpha below 1, the results are blended, i.e. transparency is applied.
Can I somehow make it not render the fragments AT ALL, if there's something already in the buffer?
Basically what I want is a (hypothetical)
glBlendFunc(DST_ALPHA > 0 ? GL_ZERO : GL_ONE, GL_ONE)
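To see why GL_ONE_MINUS_DST_ALPHA only rejects the source where the destination alpha is exactly 1, the fixed-function blend equation (out = src · srcFactor + dst · dstFactor, per channel) can be simulated on the CPU. This is a plain-Java sketch with no GL context; the class and helper names are made up for illustration:

```java
// Simulates glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE) for a single channel.
// out = src * (1 - dstAlpha) + dst * 1
public class UnderBlendDemo {
    public static double underBlend(double src, double dst, double dstAlpha) {
        return src * (1.0 - dstAlpha) + dst;
    }

    public static void main(String[] args) {
        // Destination fully opaque: the new fragment contributes nothing. Good.
        System.out.println(underBlend(0.5, 0.25, 1.0)); // 0.25

        // Destination only half-covered (alpha 0.5): the new fragment leaks
        // through at 50%, which is the unwanted blending described above.
        System.out.println(underBlend(0.5, 0.25, 0.5)); // 0.5

        // Destination empty: the new fragment lands at full strength.
        System.out.println(underBlend(0.5, 0.25, 0.0)); // 0.75
    }
}
```

The desired "skip the fragment entirely if anything is there" rule would need the source factor to depend on a comparison (DST_ALPHA > 0), which the fixed-function blend factors cannot express.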
Re: Blending question
« Reply #1 on: June 10, 2020, 10:42:30 »
You need to composite the objects from group A and the objects from group B separately in a full-screen render pass.
So, if you don't need objects from group A to depth-sort against objects from group B (only objects within a single group need to depth-sort correctly), and you want group A overlaid transparently over group B, then:
1. render group B to the final render target first with opaque blending (i.e. no blending)
2. render group A to a temporary render target (i.e. render to texture, also without blending)
3. do a full-screen pass (render a full-screen triangle or quad) sampling from the render target of step 2 and applying blending
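To illustrate why the separate compositing pass matters, here is a single-pixel, single-channel simulation of the three steps above (plain Java, no GL context; the concrete values and names are made up), contrasted with naively blending each group-A object straight into the frame buffer:

```java
// One pixel where objects A1 (intensity 1.0) and A2 (0.25) of group A overlap,
// drawn over a group-B background (0.0). The whole A layer should read as 50%
// transparent, but A objects must not show through each other.
public class LayerCompositeDemo {
    // Classic over-blend: glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
    public static double over(double src, double srcAlpha, double dst) {
        return src * srcAlpha + dst * (1.0 - srcAlpha);
    }

    public static void main(String[] args) {
        double background = 0.0;        // group B, already in the frame buffer
        double a1 = 1.0, a2 = 0.25;
        double layerAlpha = 0.5;

        // Naive: blend each group-A object directly. A1 bleeds through A2.
        double naive = over(a2, layerAlpha, over(a1, layerAlpha, background));
        System.out.println(naive); // 0.375

        // Three-step: draw group A without blending into a texture (A2 simply
        // overwrites A1), then blend that texture over B in one full-screen pass.
        double texture = a2;            // opaque pass: last write wins
        double composed = over(texture, layerAlpha, background);
        System.out.println(composed); // 0.125
    }
}
```

Only the second result shows A2 alone at 50% over the background, which is the behavior asked for in the original question.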
Re: Blending question
« Reply #2 on: June 10, 2020, 11:56:13 »
So, there really is no other way than rendering to a texture and putting it on a quad in front of the screen?
I am a bit hesitant about doing two render passes like that, as I am working on an ARM device that isn't very performant.
Is there a way to access the current framebuffer and set all fragments' alpha to 1, so that I could render part of the image, change the alpha, and then render the rest?