Why do I need a shader?

Started by sufimaster, December 11, 2011, 23:55:12


sufimaster

I warn you that this is going to sound very stupid. I wrote a small engine to display models and lights in 3D in a window. I render the meshes to the screen using VBOs, and everything renders in three dimensions. I can fly around the scene with regular camera controls, and I can move my lights around.

Then I started reading about shaders. I understand the concept of a vertex shader: it processes each vertex and performs some transformation on it. But why do I need this? Everywhere I read, it says you need a vertex shader to project 3D vertices onto the 2D screen, yet that already works in my program. Does this mean I am currently doing the transform on the CPU instead of the GPU? And which lines should I remove from my code if they are redundant?

One other thing: when I implemented fragment shaders, my lighting disappeared, and now, for example, a cube in the scene is a single solid color. I understand this is because the shader I am using looks like this:


// Color handed over from the vertex stage (interpolated across the triangle).
varying vec4 vertColor;

void main() {
    // Just pass the interpolated vertex color through; no lighting is applied here.
    gl_FragColor = vertColor;
}


Which basically just outputs whatever vertex color is set in my data structure. I have normals at each vertex; how do I use them to get the lighting back? And why did lighting work before, without the fragment shader?

CodeBunny

All shaders are executed on the GPU, which is why they are so fast.

Many shader effects can be written without anything particularly special in the vertex shader, it's true. But that doesn't mean that vertex shaders aren't useful. One of the things they allow for is geometry distortion - things like facial animations, waves, etc.

To understand what's going on with your frag shader, look up some fragment shader tutorials. This issue is quite well documented and explained.

sufimaster

I've done a lot of searching and have learned what shaders are, but not exactly why they are used and what they replace. Should I be getting rid of the gluLookAt calls and the matrix transformations before them? I understand that vertex shaders can be used to distort vertices. Does this mean I can write animations using shaders?

CodeBunny

I haven't worked with vertex shaders (only fragment shaders; I got into this area not long ago), so I'm not the best person to ask about that.

aldacron

Without shaders, you are using what is called the 'fixed-function pipeline'. All of the transform and lighting calculations are still done on the GPU, but you are stuck with the algorithms implemented by the hardware vendor. When shaders first came along, they let you inject your own algorithms into the pipeline in place of the built-in ones. That made it possible to create effects that would otherwise be difficult, slow, or impossible, including things that had never been done on the GPU before (like level-of-detail algorithms). These days, modern OpenGL versions have deprecated the fixed-function pipeline entirely, so if you want to use pure OpenGL 3/4 you have no choice but to use shaders.
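To make that concrete, here is a rough, untested sketch of about the smallest GLSL 1.50 (OpenGL 3.2) shader pair that reproduces what fixed function was doing for you. The names mvpMatrix, inPosition, and inColor are just made up for the example; your application has to build that matrix itself (this is what replaces gluLookAt and the matrix stack) and bind the attributes to your VBO data.

#version 150

// Built by the application, e.g. projection * view * model.
// This is what replaces gluLookAt and the matrix stack.
uniform mat4 mvpMatrix;

in vec3 inPosition; // per-vertex data pulled from your VBOs
in vec4 inColor;

out vec4 vertColor; // handed on to the fragment shader

void main() {
    vertColor = inColor;
    gl_Position = mvpMatrix * vec4(inPosition, 1.0);
}

and the matching fragment shader:

#version 150

in vec4 vertColor;
out vec4 fragColor;

void main() {
    fragColor = vertColor;
}

The point is that nothing is done for you any more: every transform and every lighting term is whatever you write.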

sufimaster

Ah, that's the first answer that has made sense to me. But my question is: if I want to move to shaders, what sort of stuff do I need to take out of my current program?

For example, right now I load meshes from a file, compute normals for each vertex, do the camera transforms, and render my meshes using VBOs. My lighting looks terrible because I just use averaged normals at each vertex. When I tried the sample shaders from the wiki, the camera and all of that seemed fine, but my lighting disappeared and everything rendered in a single solid color. I am assuming that is because the shaders didn't use the normals I created. Would that be true?

aldacron

Quote from: sufimaster on December 22, 2011, 22:29:48
Ah, that's the first answer that has made sense to me. But my question is: if I want to move to shaders, what sort of stuff do I need to take out of my current program?

For example, right now I load meshes from a file, compute normals for each vertex, do the camera transforms, and render my meshes using VBOs. My lighting looks terrible because I just use averaged normals at each vertex. When I tried the sample shaders from the wiki, the camera and all of that seemed fine, but my lighting disappeared and everything rendered in a single solid color. I am assuming that is because the shaders didn't use the normals I created. Would that be true?


It's not just that you didn't consider the normals, it's that you didn't implement any sort of algorithm to calculate lighting. The shader example you gave does one thing only: it assigns the color value of vertColor to the output fragment. So the result you see is exactly what is expected. If you want to see lighting, you'll have to manipulate the color value.

If you take a look at this link, it explains the traditional OpenGL lighting model: the ambient, specular, diffuse, and other values are combined on the GPU to calculate the final fragment color. With shaders, you have to do that math yourself. A quick Google search for "glsl lighting tutorial" turns up several results; just keep in mind that they may target different versions of GLSL.
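To give you a starting point (untested, and written against the old built-in GLSL variables since that matches the fragment shader you posted), a vertex shader along these lines computes a simple per-vertex diffuse term from your normals and feeds it into your existing fragment shader unchanged:

// Per-vertex diffuse lighting using the legacy built-ins, so it slots
// into a fixed-function style setup (glLight*, glColor*, etc.).
varying vec4 vertColor;

void main() {
    // Normal transformed into eye space.
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    // Direction of light 0, treated as a directional light for brevity.
    vec3 l = normalize(vec3(gl_LightSource[0].position));
    // Lambert term: how directly the surface faces the light.
    float diffuse = max(dot(n, l), 0.0);
    vertColor = gl_Color * (gl_LightSource[0].ambient
                          + gl_LightSource[0].diffuse * diffuse);
    vertColor.a = gl_Color.a; // keep the original alpha
    gl_Position = ftransform();
}

That is per-vertex (Gouraud) shading; the tutorials you find will usually move the dot product into the fragment shader for per-pixel lighting, which is where shaders start to look better than what fixed function gave you.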