LWJGL + Cg = ?

Started by Optus, April 12, 2004, 19:01:35


Optus

Hi everyone, I'm sure this has been covered before, but I searched multiple times and came up with nothing :/ Anyway, I just got a new video card and was interested in checking out Cg. Can I use Cg with LWJGL?

Matzon

Though I answered it on #lwjgl @ irc.freenode.net, I'll just answer it here too:

Yes, LWJGL supports Cg; check the org.lwjgl.test.opengl.Grass demo by elias (nVidia only).

elias

Actually, that's not quite true. The Grass example just uses a plain NV_vertex_program that was generated by Cg offline and hand-imported into the code. The actual Cg runtime is not supported by LWJGL.

- elias
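
(For reference: loading such a Cg-generated NV_vertex_program through LWJGL would look roughly like the sketch below. It is written against the class-per-extension NVVertexProgram binding of later LWJGL releases, so the exact class names may differ from the 2004-era API, and the helper name loadVertexProgram is purely illustrative.)

    import java.nio.ByteBuffer;
    import java.nio.IntBuffer;

    import org.lwjgl.BufferUtils;
    import org.lwjgl.opengl.GL11;
    import org.lwjgl.opengl.NVVertexProgram;

    public class NVProgramLoader {
        // Uploads a precompiled NV_vertex_program (e.g. the output of
        // Cg's vp20 profile), then binds and enables it. Assumes a GL
        // context is already current on this thread.
        public static int loadVertexProgram(String programText) {
            // Generate a program name.
            IntBuffer id = BufferUtils.createIntBuffer(1);
            NVVertexProgram.glGenProgramsNV(id);
            int program = id.get(0);

            // Upload the program text as raw bytes.
            byte[] bytes = programText.getBytes();
            ByteBuffer buf = BufferUtils.createByteBuffer(bytes.length);
            buf.put(bytes).flip();
            NVVertexProgram.glLoadProgramNV(NVVertexProgram.GL_VERTEX_PROGRAM_NV, program, buf);

            // Bind and enable it for subsequent draws.
            NVVertexProgram.glBindProgramNV(NVVertexProgram.GL_VERTEX_PROGRAM_NV, program);
            GL11.glEnable(NVVertexProgram.GL_VERTEX_PROGRAM_NV);
            return program;
        }
    }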

spasi

Why limit yourself to NV only when there's GLSL? On NV hardware you can even use Cg functionality straight through GLSL (which may sound great, but sucks big time IMHO, for several reasons), though you'll have to grab the latest leaked/beta drivers to use it. And of course, LWJGL has support for GLSL :).

elias

Sure. But the Grass demo is way old (it predates even ARB_vertex_program, or else I would have used that). And with the advent of GLSL and the Cg extension to it, I see no real reason to support the Cg runtime at the moment.

- elias

Optus

So, just so I'm clear on the subject: should I use Cg to compile to an ARB_vertex_program, or should I investigate GLSL? I know nothing about GLSL (that's the same thing as GLSlang, right?). Does it, like Cg, compile high-level code to vertex programs, or is it different?

spasi

Cg, like DirectX's HLSL, compiles to low-level shaders/programs. You can do it offline or at runtime. The problem is that there are lots of targets (nv20, nv30, the ATI ones, etc.) to worry about. As an example, I read in a Valve presentation that Half-Life 2 uses THOUSANDS of HLSL-generated low-level shaders. They write HLSL, compile all of the possible "versions" of the shaders (different # of lights, # of bones, enabled effects, etc.) offline, and use the low-level ones at runtime, which of course is a big mess to manage.
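
(To make the combinatorics concrete, here's a hypothetical sketch of the bookkeeping that such a precompiled-variant approach forces on you. The file naming scheme and the loadPrecompiled() helper are made up for illustration; nothing like this appears in the thread.)

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical cache of precompiled low-level shader variants: every
    // combination of lights/bones/effects needs its own offline-compiled
    // program file shipped with the game.
    public class ShaderPermutations {
        private final Map<String, Integer> programs = new HashMap<String, Integer>();

        // Returns a GL program id for this configuration, loading the
        // matching precompiled file on first use.
        public int programFor(int numLights, int numBones, boolean fog) {
            String key = "lit" + numLights + "_bones" + numBones + (fog ? "_fog" : "");
            Integer id = programs.get(key);
            if (id == null) {
                id = loadPrecompiled(key + ".vp"); // one file per variant
                programs.put(key, id);
            }
            return id;
        }

        // Placeholder: would read the file and upload it with, say,
        // glProgramStringARB for the arbvp1 target.
        private int loadPrecompiled(String filename) { return 0; }
    }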

With GLSL, the drivers compile at runtime only, and you don't have to worry about what's being generated: the driver will choose the best target for the hardware to run it on. For example, on my FX it compiles to NV VP2.0 and NV FP1.0, and is thus able to support everything available on my card (and optimize better). On a GeForce4 it'll probably compile to NV VP1.0, and an ATI card would use the ARB programs. But you really don't (and shouldn't) care. It's all done behind the scenes; you only deal with shader objects and shader programs, no low-level stuff. It's much easier for the developer, and as I said, if you want more flexibility you can always use the Cg stuff (hint: check for EXT_Cg_shader extension availability).
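
(By contrast, the GLSL path is just a handful of calls. Below is a minimal sketch against LWJGL's ARB_shader_objects bindings as exposed in later releases; compile/link log checking is omitted, and the class names may differ from the 2004-era API. The extension check at the end implements spasi's hint via the raw extension string.)

    import org.lwjgl.opengl.ARBFragmentShader;
    import org.lwjgl.opengl.ARBShaderObjects;
    import org.lwjgl.opengl.ARBVertexShader;
    import org.lwjgl.opengl.GL11;

    public class GLSLExample {
        // Compiles and links a GLSL shader pair at runtime; the driver
        // picks the best hardware target itself. Assumes a current GL
        // context; status checks omitted for brevity.
        public static int createProgram(String vertexSrc, String fragmentSrc) {
            int vs = ARBShaderObjects.glCreateShaderObjectARB(ARBVertexShader.GL_VERTEX_SHADER_ARB);
            ARBShaderObjects.glShaderSourceARB(vs, vertexSrc);
            ARBShaderObjects.glCompileShaderARB(vs);

            int fs = ARBShaderObjects.glCreateShaderObjectARB(ARBFragmentShader.GL_FRAGMENT_SHADER_ARB);
            ARBShaderObjects.glShaderSourceARB(fs, fragmentSrc);
            ARBShaderObjects.glCompileShaderARB(fs);

            int program = ARBShaderObjects.glCreateProgramObjectARB();
            ARBShaderObjects.glAttachObjectARB(program, vs);
            ARBShaderObjects.glAttachObjectARB(program, fs);
            ARBShaderObjects.glLinkProgramARB(program);
            ARBShaderObjects.glUseProgramObjectARB(program);
            return program;
        }

        // Cg-through-GLSL is only available if the driver exposes it.
        public static boolean hasCgShaderExtension() {
            String extensions = GL11.glGetString(GL11.GL_EXTENSIONS);
            return extensions != null && extensions.contains("GL_EXT_Cg_shader");
        }
    }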