Transparent RGB Value

Started by psiegel, June 09, 2004, 20:42:26


psiegel

Is it possible, when rendering a texture, to tell OpenGL to treat a given RGB value as if it were fully transparent?  Currently I have to create my textures as RGBA and then sift through the data pixel by pixel, setting the A value to 0 or 1 depending on whether the RGB matches my transparent color value.  Naturally, this process is dog slow.  It wouldn't be so terrible if it only happened at load time, but some of my textures are altered at runtime and need to be reprocessed mid-stream.  This really kills my framerate.
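Roughly, the per-pixel pass looks like this (a simplified sketch - the real code pulls the image data out of my loader, and the key colour here is just a placeholder):

import java.nio.ByteBuffer;

// Expand a 24-bit RGB image to RGBA, keying one colour out to alpha 0.
static ByteBuffer keyToAlpha(ByteBuffer rgb, int width, int height,
                             int keyR, int keyG, int keyB) {
    ByteBuffer rgba = ByteBuffer.allocateDirect(width * height * 4);
    for (int i = 0; i < width * height; i++) {
        int r = rgb.get(i * 3)     & 0xFF;
        int g = rgb.get(i * 3 + 1) & 0xFF;
        int b = rgb.get(i * 3 + 2) & 0xFF;
        rgba.put((byte) r).put((byte) g).put((byte) b);
        // Fully transparent if the pixel matches the key colour, opaque otherwise.
        rgba.put((r == keyR && g == keyG && b == keyB) ? (byte) 0 : (byte) 0xFF);
    }
    rgba.flip();
    return rgba;
}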

So, does anyone know if there's a shortcut for me?

Paul

spasi

This can be done easily with fragment shaders of course, but I think it's achievable with nv_register_combiners and/or ati_fragment_shader too.

princec

Sounds risky to me. I'd figure out a better way to do it... what are you scanning RGB values for?

Cas :)

psiegel

Ok, I'll get into it.

My blitter uses only single-bit transparency.  All graphics are 24-bit, and specify a single RGB value that represents transparent.  This works fine in the DirectX implementation, as DirectX has the notion of a color key that is exactly what I just mentioned (an RGB value that is always skipped during blits).  In fact, this is the only kind of transparency DirectX 7 offers, and I went with it so as to code to the least common denominator.

Now, for the LWJGL implementation, I was just pre-parsing all the textures at load time and setting the alpha value to 0 for each pixel that matched the specified RGB value (and 1 for everything else).  This was slow, but worked fine.  Now that I've fixed my "render to texture" feature by using the window's back buffer, I've discovered a flaw.  When I render to the window's back buffer, any alpha values are lost.  (I believe this is normal, as the window was created in 16 bpp - notably no alpha channel there.)  In order to correct this, the only solution I've come up with is to re-parse each individual pixel as I copy it from the screen back to the texture, and reset the alpha value based on the RGB values.  If this happens every frame, you can see how much this slows down my render speed.  What's more, if the offscreen texture is very large (which it sometimes is), I start to see java.lang.OutOfMemoryError thrown.

Basically, I'm looking for a way to fix this, and it struck me that a native OpenGL way to mask out pixels of a specific RGB value during blits (the way DirectX does it) would be ideal.  I'll gladly listen to any other suggestions as well.
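One thing I have looked at is the alpha test, which can skip the keyed-out fragments at draw time - but it still relies on the alpha channel having been filled in by that one-time pass at load, so it doesn't solve the render-to-texture case by itself.  Rough sketch:

// Assuming alpha was set to 0 for key-coloured pixels when the texture was built,
// the alpha test discards those fragments entirely during the blit.
GL11.glEnable(GL11.GL_ALPHA_TEST);
GL11.glAlphaFunc(GL11.GL_GREATER, 0.5f);   // reject fragments with alpha <= 0.5
// ... draw the textured quad as usual ...
GL11.glDisable(GL11.GL_ALPHA_TEST);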

Paul

spasi

Is there a reason you can't just create the framebuffer with an alpha channel?

cfmdobbie

Can you not request an alpha channel with your render surface?
Charlie Dobbie.

psiegel

Are you referring to upping the bpp of the window to 32, or is there some other way to request an alpha channel in the framebuffer?  And if so, how widely are framebuffer alpha channels supported?  (As in, does it require more than OpenGL 1.1, and are there a lot of older cards that don't support it?)

Paul

spasi

Just use 24 for bpp and 8 for alpha in Window.create().
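Something like this - though the exact Window.create() overload has changed between LWJGL releases, so treat the parameter list below as approximate and check the javadoc for the version you're on:

// Request a 24-bit colour buffer with 8 bits of alpha.
// NOTE: parameter order is from memory and may not match your LWJGL release;
// the title is just a placeholder.
Window.create("My Game", 24, 8, 16, 0);   // title, bpp, alpha, depth, stencil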

I'm not sure about support though. I'd guess many old cards will support it, but I haven't ever checked this.

psiegel

So I've done that, and here's what I'm seeing.  I'm trying to blit a solid square of transparency to a surface.  My code looks something like this (it was extrapolated a bit to remove excess levels of abstraction):

GL11.glDisable(GL11.GL_TEXTURE_2D);

GL11.glColor4f(0f, 0f, 1f, 0f);
GL11.glBegin(GL11.GL_QUADS);
{
  GL11.glVertex2i(0, 0);
  GL11.glVertex2i(0, 64);
  GL11.glVertex2i(64, 64);
  GL11.glVertex2i(64, 0);
}
GL11.glEnd();

GL11.glColor4f(0f, 0f, 0f, 1f);
GL11.glBegin(GL11.QUADS);
{
  GL11.glVertex2i(8, 0);
  GL11.glVertex2i(8, 24);
  GL11.glVertex2i(24, 24);
  GL11.glVertex2i(24, 0);
}
GL11.glEnd();

GL11.glColor4f(1f, 1f, 1f, 1f);

GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, mTextureID);
// Copy the 64x64 lower-left region of the back buffer into the texture.
GL11.glCopyTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA,
                      0, 0, 64, 64, 0);

GL11.glPushMatrix();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, mTextureID);
GL11.glTranslatef((float)(Window.getWidth()/2 - 32), (float)(Window.getHeight()/2 - 32), 0f);

GL11.glBegin(GL11.GL_QUADS);
{
  GL11.glTexCoord2f(0f, 0f);
  GL11.glVertex2i(0, 0);

  GL11.glTexCoord2f(0f, 1f);
  GL11.glVertex2i(0, 64);
   
  GL11.glTexCoord2f(1f, 1f);
  GL11.glVertex2i(64, 64);

  GL11.glTexCoord2f(1f, 0f);
  GL11.glVertex2i(64, 0);
}
GL11.glEnd();
GL11.glPopMatrix();


What I'm seeing is just a solid blue square in the middle of the screen.  What I was hoping to see was a blue square with a smaller square inside it that was the same color as my clear color.  Am I doing something obviously wrong?

Paul

spasi

Quote from psiegel: "What I was hoping to see was a blue square with a smaller square inside it that was the same color as my clear color."

Then you should switch the two alpha values:

GL11.glColor4f(0f, 0f, 1f, 1f);  // opaque blue
GL11.glColor4f(0f, 0f, 0f, 0f);  // transparent black

Also important are the states of ALPHA_TEST, BLENDING and the color/alpha clear values.
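For example, a typical state setup for this kind of keyed blitting might be (illustrative values only):

GL11.glClearColor(0f, 0f, 0f, 0f);   // clear alpha to 0 so untouched pixels stay transparent
GL11.glDisable(GL11.GL_ALPHA_TEST);  // or enable it to drop zero-alpha fragments outright
GL11.glEnable(GL11.GL_BLEND);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);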

psiegel

I'm sorry, that was a typo on my part.  I do indeed use the alpha values you suggest.  As for my blending, I've got GL_ALPHA_TEST disabled, GL_BLEND enabled, and for my blending function I've got:

GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);

The interesting thing is, if I change the second color to:

GL11.glColor4f(0f, 0f, 0f, 0.5f);  // Semi-transparent black

I do indeed see a semi-transparent window inside the blue square.  (My background isn't solid black, and in fact in my test I blit the final texture at the location of the mouse so I can move it around and see the final effect.)  It's only when I make the square fully transparent (alpha 0) that suddenly it's as if the blit wasn't done at all.  It almost seems that OpenGL sees that the entire quad is transparent, and simply doesn't blit anything at all.

Paul

psiegel

Got it.

I tried futzing with the glBlendFunc method here and there, and finally got my desired result.  What I ended up doing was changing this function based on whether I was going to blit a texture or a solid colored square.  For a texture, I use what I had before:

GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);

For a solid colored square, I use:

GL11.glBlendFunc(GL11.GL_ONE, GL11.GL_ZERO);
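So in context, the pass that punches out the transparent square now looks roughly like this (a sketch - drawSolidQuad() is a hypothetical helper standing in for the immediate-mode quad code from my earlier post):

// Overwrite the destination outright (no blending) when writing transparent pixels:
GL11.glBlendFunc(GL11.GL_ONE, GL11.GL_ZERO);
GL11.glColor4f(0f, 0f, 0f, 0f);   // transparent black
drawSolidQuad(8, 0, 24, 24);      // hypothetical helper wrapping glBegin/glVertex2i/glEnd
// Switch back to normal alpha blending before blitting textures:
GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);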

I still find it odd that the previous setting didn't work as expected, but since I have a workaround, I suppose it's largely academic.

Paul

spasi

Damn, I just figured it out. This has happened to me too, and you're right: nothing is actually rendered when you use 0.0f alpha. It's an optimization which I think shouldn't normally happen without alpha test enabled. They want to save some blending calculations - since a zero alpha wouldn't change anything, they kill the fragment. I've seen this on GeForce 2 & GeForce FX; what card are you testing this on?

****in' driver writers... :evil:

psiegel

My dev machine has a GeForce FX 5600 Go. My Mac tester has a PowerMac G4 Digital Audio, which I think has either a GeForce 2 or an ATI Rage in it.  He's not sure, and I'm not sure how to tell him to find out.  :)

Paul