Can I set PixelFormat alpha to 16 bits?

Started by napier, June 02, 2005, 02:34:55


napier

I'm creating some effects by blending many layers of translucent textures at very low alpha settings (.01).  I notice that my alpha blending code behaves very differently on different cards (bad on a Radeon 9200 vs. good on a Radeon 9700 and a GeForce FX 5500).  I'm not sure if I'm hitting hardware limitations or if I just don't understand the alpha buffer settings.  I have many questions about PixelFormat and display creation.  If you can shed light on any of these, please do so.  Thanks.

1) Is it possible to set alpha bits to more than 8?  For example:
   Display.create(new PixelFormat(16, depthBufferBits, 0));

2) Will this throw an exception if the hardware can't support 16-bit alpha, or will it scale down to 8 bits?

3) Do graphics cards typically support more than 8-bit alpha?

4) Can I query the opengl context to find out what alpha bits it's capable of supporting?

5) When I call getIntegerv(ALPHA_BITS) on my GeForce FX 5500 (Windows) it returns 0, even though alpha is enabled and working well.  Is this a fluke, or does the 0 mean something relevant?  (A stripped-down sketch of my setup and this query is at the end of this post.)

6) Is there a DescribePixelFormat() call in LWJGL somewhere?  Is this the same thing as getAvailableDisplayModes()?  

7) Is alpha related to bits per pixel? If I want a 16-bit alpha channel and 24-bit RGB, do I need to set bpp to 40?  (Sounds impossible, but I gotta ask.)

8) Is the Radeon 9200 (32 MB) such a wimpy card that it can't blend well? Any ideas why my alpha blends are subtle and smooth on a Radeon 9700 but banded with distorted color on a Radeon 9200?  The display is set to 32 bits and the textures have internal format RGBA8. Can the 9700 allocate more bits for alpha and so generate finer gradations with less loss due to rounding?

9) Is there a good article on managing pixel formats out there?


Sorry for the many questions, but I'm in over my head on this one, and I sincerely appreciate any help.
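
For reference, here's a stripped-down sketch of the kind of setup and query I'm asking about. The alphaBits and depthBufferBits values are placeholders rather than my exact settings, and the glGetInteger call is the query from question 5:

    import java.nio.IntBuffer;

    import org.lwjgl.BufferUtils;
    import org.lwjgl.LWJGLException;
    import org.lwjgl.opengl.Display;
    import org.lwjgl.opengl.GL11;
    import org.lwjgl.opengl.PixelFormat;

    public class AlphaBitsTest {
        public static void main(String[] args) throws LWJGLException {
            int alphaBits = 8;        // 8 works; can I ask for 16 here?
            int depthBufferBits = 24; // placeholder depth setting

            // PixelFormat(alphaBits, depthBits, stencilBits)
            Display.create(new PixelFormat(alphaBits, depthBufferBits, 0));

            // Ask the context how many alpha bits it actually gave us (questions 4/5)
            IntBuffer bits = BufferUtils.createIntBuffer(16);
            GL11.glGetInteger(GL11.GL_ALPHA_BITS, bits);
            System.out.println("Framebuffer alpha bits: " + bits.get(0));

            Display.destroy();
        }
    }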
OpenGL/Java/LWJGL demos and code: http://potatoland.org/code/gl

spasi

Hi,

1, 3. I'm afraid there's no support for 16-bit framebuffer alpha on any consumer card that I know of.

2. I get an LWJGLException: "Could not find a valid pixel format".

4. Display.getAvailableDisplayModes()

5. I'm not sure, probably a bug.

6. I'm not sure, but getAvailableDisplayModes() should return everything supported.

7. To get an alpha channel, set the alpha bits to 8. Setting bpp to 40 with 8 alpha bits works for me, but of course no more than 32 bits will be used. Setting alpha to 16 fails in all cases (see the sketch at the end of this post).

8. Hmm, I'm not sure. The blending stage precision should be the same on all cards. The difference you're seeing comes either from the higher precision generally available on the R3xx (the fixed-function pipeline is emulated with vertex & fragment shaders), or from some higher-precision blending on >R3xx & >NV3x that I haven't heard of. Are you sure the 5500 does the blending right? You're not using any fragment shaders, are you?
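
If you want to probe this at runtime, just try the 16-bit request and fall back when it fails. A minimal sketch, assuming the depth and stencil values here (24 and 0) are acceptable for your app:

    import org.lwjgl.LWJGLException;
    import org.lwjgl.opengl.Display;
    import org.lwjgl.opengl.PixelFormat;

    public class AlphaFallback {
        public static void main(String[] args) throws LWJGLException {
            try {
                // Ask for 16 alpha bits; this fails on every card I've tried
                Display.create(new PixelFormat(16, 24, 0));
            } catch (LWJGLException e) {
                // "Could not find a valid pixel format"
                Display.create(new PixelFormat(8, 24, 0));
            }
            System.out.println("Display created");
            Display.destroy();
        }
    }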

oNyx

Newer cards use a higher precision internally in order to avoid artifacts.

If the internal precision is no higher than the framebuffer's, you get banding/blocky artifacts pretty quickly, because all those rounding errors add up. In the past it wasn't much of an issue, because the fill rate was too low anyway (e.g. if you only get 2 frames per second with that many layers, it doesn't matter if it looks like 2-bit shit).
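
You can see the accumulation with a quick simulation of the blend equation (plain Java, just arithmetic, not real GL; the layer count and colour values are made up):

    // Blend N translucent layers: dst' = src*alpha + dst*(1 - alpha).
    // Compare float-precision accumulation with an 8-bit framebuffer
    // that rounds after every single blend.
    public class BandingDemo {
        public static void main(String[] args) {
            float src = 0.1f;     // dim source colour
            float alpha = 0.01f;  // very low alpha, as in the original post
            int layers = 200;

            float exact = 0.0f;   // high-precision accumulation
            int stored = 0;       // value as kept in an 8-bit channel

            for (int i = 0; i < layers; i++) {
                exact = src * alpha + exact * (1.0f - alpha);
                float dst = stored / 255.0f;
                stored = Math.round((src * alpha + dst * (1.0f - alpha)) * 255.0f);
            }
            System.out.println("float result: " + exact);           // creeps up toward 0.1
            System.out.println("8-bit result: " + stored / 255.0f); // stays 0, every step rounds away
        }
    }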

Orangy Tang

It sounds like you're somewhat confused about the difference between alpha blending and a framebuffer's alpha component. If you're getting banded textures while alpha blending, that has nothing to do with how many alpha bits you request in your pixel format.

8. Here it sounds like you've done everything right. As long as you've got a 32-bit framebuffer and 32-bit textures (RGBA8), you should be fine. Check in your graphics driver settings that you haven't forced textures to low quality or similar. Also try running your desktop at 32-bit colour, as otherwise you might get a 16-bit framebuffer without realising it.
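
To make the distinction concrete: ordinary translucency only needs blending enabled, and the alpha it uses comes from the incoming fragment (the texture and vertex colour), not from the framebuffer's alpha channel. A generic sketch (not your code), assuming LWJGL's GL11 bindings:

    import org.lwjgl.opengl.GL11;

    public class BlendSetup {
        /**
         * Standard "over" blending. The source alpha comes from the incoming
         * fragment (texture * vertex colour), NOT from the framebuffer's alpha
         * channel, so destination alpha bits aren't required for this to work.
         */
        public static void enableAlphaBlending() {
            GL11.glEnable(GL11.GL_BLEND);
            GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
        }
    }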

napier

Thanks very much for the replies.  Some thoughts:

Quote from spasi:
2. I get an LWJGLException: "Could not find a valid pixel format".

Hmmm... On Mac OS X 10.3.7 I don't get any error message when I set the PixelFormat alpha.  I've tried 0, 8, 24, and 32; all create a display with what looks like 8-bit alpha.

Quote from spasi:
4. Display.getAvailableDisplayModes()

This doesn't tell me specifically about the alpha channel, though, just the overall bits per pixel.

Quote from spasi:
8. ... Are you sure the 5500 does the blending right? You're not using any fragment shaders, are you?

I'm doing plain vanilla OpenGL calls, no fragment shaders, and the GeForce FX 5500 looks fine (as does the Radeon 9700).

oNyx's comment about precision sounds accurate.  At 8 bits, the smallest meaningful value is 1/255, which is about .004.  If I render a low RGB color value, say .1, with alpha .01, the resulting contribution is .001, which gets rounded away when it's stuffed into a byte.  So at 8-bit alpha precision I should see color distortion due to rounding if I use alpha .01.  If I raise alpha to .04, then a .1 RGB shift * .04 gives .004, which is large enough to register in a byte without rounding to zero, so I see smoother color shifts and less banding, which is exactly what happens.
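
Checking that arithmetic in plain Java (just simulating what an 8-bit channel can hold, nothing LWJGL-specific):

    // What survives when a blended contribution is stored in an 8-bit channel?
    public class QuantizeCheck {
        static int toByteValue(float value) {
            return Math.round(value * 255.0f); // nearest representable 8-bit step
        }

        public static void main(String[] args) {
            // RGB .1 drawn with alpha .01: the contribution rounds to 0 (layer lost)
            System.out.println(toByteValue(0.1f * 0.01f)); // prints 0
            // Same colour with alpha .04: the contribution survives as one step
            System.out.println(toByteValue(0.1f * 0.04f)); // prints 1
        }
    }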

I would expect all cards to behave as the Radeon 9200 does, with low alpha values creating artifacts due to byte rounding.  But the higher-end cards are clearly different: they seem to be able to handle blending without rounding to 8 bits.

It sounds like I'll have to go with higher-end cards for this project, unless there's some way to tweak the drivers on the 9200.
OpenGL/Java/LWJGL demos and code: http://potatoland.org/code/gl

spasi

From the OpenGL spec:

Destination (framebuffer) components are taken to be fixed-point values represented according to the scheme in section 2.14.9 (Final Color Processing), as are source (fragment) components. Constant color components are taken to be floating-point values.

Prior to blending, each fixed-point color component undergoes an implied conversion to floating-point. This conversion must leave the values 0 and 1 invariant. Blending components are treated as if carried out in floating-point.

But, from MSDN:

Despite the apparent precision of the above equations, blending arithmetic is not exactly specified, because blending operates with imprecise integer color values. However, a blend factor that should be equal to one is guaranteed not to modify its multiplicand, and a blend factor equal to zero reduces its multiplicand to zero.

Anyway, no matter what precision is used when blending, the source and destination colors are always 8-bit values. So the precision loss happens somewhere else in the pipeline. When exactly is the .001 value used, and how is it generated (obviously not from a texture's alpha channel)?

As for the 9200, have a look at ATI_fragment_shader. IIRC, it works with higher-precision calculations than the fixed-function pipeline.

napier

The .001 value comes from me working through some hypothetical color blending scenarios.  When a low RGB value (i.e. .1 or .2) is multiplied by a low alpha value (.01), the resulting value (.001 or .002) is too small for an 8-bit number to store accurately.  So it makes sense that with 8-bit precision I would get artifacts when using very low alpha values.

I'm rendering many layers of textured quads with very faint alpha, so the artifacts build as the blending operations occur (on the 9200).

Thanks for the info spasi, I'll check out the ATI_fragment_shader.
OpenGL/Java/LWJGL demos and code: http://potatoland.org/code/gl

elias

Quote from: "napier"
Hmmm... On Mac OS X 10.3.7 I don't get any error message when I set the PixelFormat alpha.  I've tried 0, 8, 24, and 32; all create a display with what looks like 8-bit alpha.

This is a bug. I've fixed it in CVS.

- elias