why are textures used as lightmaps when you only need one channel?

Started by Kova, October 15, 2011, 13:58:11


Kova

Hello.
So I'm building a lighting system using lightmaps with shaders. Reading articles and tutorials around the web, I found that they all use textures. All I need is a single channel to indicate how much light passes through, so why do all those examples use full 3- or 4-channel (RGBA) textures and waste memory? Since one-channel textures don't exist (?), shouldn't we use arrays rather than textures for this, so we don't waste memory?

spasi

Single-channel textures do exist: the old GL_ALPHA, GL_LUMINANCE, GL_RED and the new GL_Rx ones from ARB_texture_rg are all single-channel formats.

The main reason to use colored textures for lightmaps is to pre-compute global illumination/radiosity effects. GI introduces subtle coloring to every surface so a single alpha value modulated by a fixed light color is not enough to simulate the real world.
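If a single grayscale value really is all you need, setting one up is simple. A minimal sketch in C, assuming GL 3.0+ (or ARB_texture_rg) and that width, height and lightmapData come from wherever you bake the lightmap:

/* Minimal sketch: upload an 8-bit, single-channel lightmap.
   "width", "height" and "lightmapData" (one unsigned byte per texel)
   are assumed to come from your lightmap generator. */
GLuint lightmapTex;
glGenTextures(1, &lightmapTex);
glBindTexture(GL_TEXTURE_2D, lightmapTex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);            /* rows are tightly packed bytes */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8,             /* single-channel internal format */
             width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE,             /* one byte of "how much light" per texel */
             lightmapData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

That's 8 bits per texel instead of 32 for RGBA8.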

Kova

Yeah, but I found that GL_INTENSITY, GL_ALPHA and the like are deprecated: http://www.opengl.org/wiki/Image_Format (at the bottom of the page: legacy formats).
They also seem to produce a vec4 in the shader:
"When a GL_RED format is sampled in a shader, the resulting vec4 is (Red, 0, 0, 1). When a GL_INTENSITY format is sampled, the resulting vec4 is (I, I, I, I) ...."

So my question is: if you only need one channel of data (a 2D array of values), should you store it in a texture, or is there a better way in core-profile OpenGL? Since the single-channel formats I found are deprecated, it seems there must be a proper way to do this, but I can't find it because all the tutorials on the subject are quite old.

Kova

OK, I found it (the ARB extension spasi was talking about; it seems it made it into core). I don't know how I missed it the first time. The same page (http://www.opengl.org/wiki/Image_Format) has this:

Quote: Image formats do not have to store each component. When the shader samples such a texture, it will still resolve to a 4-value RGBA vector. The components not stored by the image format are filled in automatically. Zeros are used if R, G, or B is missing, while a missing Alpha always resolves to 1.

OpenGL has a particular syntax for writing its color format enumerants. It looks like this:

GL_[components][size][type]

The components field is the list of components that the format stores. OpenGL only allows "R", "RG", "RGB", or "RGBA";
...

So I got my answer! Tnx everyone for your time.
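In case someone finds this thread later: the shader side is simple too. A GL_R8 texture still samples as a vec4 of (R, 0, 0, 1), so you just read .r and multiply it with your light color. A rough sketch (the uniform and variable names are just placeholders, not my actual code):

/* Fragment shader source as a C string (GLSL 1.50 / core profile).
   A GL_R8 lightmap sampled here resolves to vec4(R, 0, 0, 1),
   so only the red channel carries data. */
const char *fragmentSrc =
    "#version 150 core\n"
    "uniform sampler2D u_Lightmap;\n"   /* the single-channel GL_R8 texture          */
    "uniform sampler2D u_Diffuse;\n"    /* regular diffuse/albedo texture            */
    "uniform vec3 u_LightColor;\n"      /* fixed light color to modulate             */
    "in vec2 v_TexCoord;\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    float light = texture(u_Lightmap, v_TexCoord).r;\n" /* only .r is stored    */
    "    vec3 base = texture(u_Diffuse, v_TexCoord).rgb;\n"
    "    fragColor = vec4(base * u_LightColor * light, 1.0);\n"
    "}\n";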