iDevGames Forums

Full Version: CG Fragment Shaders
I was messing around with Cg today and came up with some really nice effects. I immediately got excited and implemented them into my game. I was wondering: are there any major performance/compatibility issues I need to be aware of before I continue developing with Cg and fragment shaders in general?

As always, any help is greatly appreciated. :)
If you're using Cg's ARB_fragment_program profile, then you're requiring a GeForce FX or Radeon 9500 or better. The performance will (of course) be adversely impacted; how much depends on the complexity of the shader. I suggest benchmarking on a GeForce FX 5200 Go and a Radeon 9550 Mobility before making any assumptions, since those are the slowest cards in Macs that can support it.
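For what it's worth, you can check at runtime whether the card can handle that profile at all before turning shaders on. A minimal sketch using the cgGL runtime (this assumes you already have a valid OpenGL context):

    #include <Cg/cg.h>
    #include <Cg/cgGL.h>

    /* Returns nonzero when the ARB_fragment_program target is usable,
       so you can fall back to the fixed pipeline otherwise. */
    static int fragmentShadersSupported(void)
    {
        return cgGLIsProfileSupported(CG_PROFILE_ARBFP1) == CG_TRUE;
    }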
Actually, depending on the shader, you might get better performance. Basically, if you know you don't need to do something, you don't have to do it (skipping work the fixed pipeline would have done anyway, for example). Obviously, the more complex the shader, the more time it'll take. Just remember, especially for fragment shaders, that it's being repeated a lot of times.
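To make that concrete, here's a minimal sketch of such a shader, written as the C string you'd hand to cgCreateProgram() (the material is hypothetical): one texture fetch modulated by the interpolated color, and none of the fog or extra texture stages the fixed pipeline might otherwise spend time on.

    /* One texture lookup times the vertex color, and nothing else. */
    static const char *minimalFragmentSource =
        "float4 main(float4 color : COLOR,\n"
        "            float2 uv    : TEXCOORD0,\n"
        "            uniform sampler2D tex) : COLOR\n"
        "{\n"
        "    return color * tex2D(tex, uv);\n"
        "}\n";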
OneSadCookie Wrote:If you're using Cg's ARB_fragment_program profile...

Actually, I had a question about this as well. I'm using all cg-prefixed functions, as does the NeHe tutorial (Link). I later saw that fragment shaders can be implemented using some type of ARB extension and all ARB-prefixed functions. Is there a reason why I should or shouldn't keep using the cg-prefixed functions?
I'm pretty sure that Cg shaders won't work on ATI cards unless you compile them to the ARB-specified shaders (basically low-level assembly shaders, which you can also learn to write directly).
akb825 Wrote:I'm pretty sure that Cg shaders won't work on ATI cards unless you compile them to the ARB-specified shaders (basically low-level assembly shaders, which you can also learn to write directly).

That's how Cg was designed: use a high-level language that compiles down to assembly shaders that the then-current graphics cards could already read.

NVIDIA has a number of vendor-specific shader extensions, while ATI has only one that I can think of. Both support the ARB shader extensions (on newer cards, of course). Because Cg was created by NVIDIA, no effort was put into implementing the ATI-specific shader extension (ATI_text_fragment_shader), though it's been a while since I last checked on that (maybe ATI has stepped in and added support for it). Though, really, ATI_text_fragment_shader is so crippled that there is little that can be done with it.
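If it helps, here's roughly what the cg-prefixed path from the NeHe tutorial boils down to in C: the runtime picks the best assembly profile the card supports (arbfp1 on a Radeon, one of the NV profiles on a GeForce) and compiles your source against it. The file name and entry point here are placeholders:

    #include <Cg/cg.h>
    #include <Cg/cgGL.h>

    CGcontext context;
    CGprofile fragmentProfile;
    CGprogram fragmentProgram;

    void setupShader(void)
    {
        context = cgCreateContext();

        /* Best fragment profile the driver offers; on an ATI card
           this will be CG_PROFILE_ARBFP1. */
        fragmentProfile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
        cgGLSetOptimalOptions(fragmentProfile);

        fragmentProgram = cgCreateProgramFromFile(context, CG_SOURCE,
            "fragment.cg", fragmentProfile, "main", NULL);
        cgGLLoadProgram(fragmentProgram);
    }

    void drawWithShader(void)
    {
        cgGLBindProgram(fragmentProgram);
        cgGLEnableProfile(fragmentProfile);
        /* ... submit geometry ... */
        cgGLDisableProfile(fragmentProfile);
    }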

EDIT: btw, I highly recommend 'The Cg Tutorial', an excellent book. It's a better shader book than the Orange Book, good even if you just want to learn about shaders in general.
Cg has a couple of advantages over ARB_vertex_program + ARB_fragment_program: one is that simple shaders can be compiled for the GeForce 3, and the other is that it's a much higher-level language, and therefore much easier to write shaders in.

It has two major disadvantages compared to GLSL, being that it doesn't have direct access to OpenGL state, and that it'll never be able to take advantage of features of the next (as-yet unreleased) generation of ATI hardware.
OneSadCookie Wrote:It has two major disadvantages compared to GLSL, being that it doesn't have direct access to OpenGL state...

Interesting, so in GLSL I could access, for example, the modelview matrix without having to pass it to the shader program directly?
That's right.
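In GLSL, built-ins like gl_ModelViewMatrix and gl_ModelViewProjectionMatrix are simply there for the shader to read. With the Cg runtime you normally wire the matrix up yourself from C (setting aside the glstate trick mentioned below); a sketch, where "modelViewProj" is a hypothetical uniform name in the shader:

    /* cgGLSetStateMatrixParameter reads the matrix straight out of the
       GL state, but you still have to hook it up for every program. */
    CGparameter mvp = cgGetNamedParameter(fragmentProgram, "modelViewProj");
    cgGLSetStateMatrixParameter(mvp,
                                CG_GL_MODELVIEW_PROJECTION_MATRIX,
                                CG_GL_MATRIX_IDENTITY);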
The other major problem with Cg is that the compiler isn't optimal. It's pretty good, all things considered, but hand-tuned code can easily beat it.

For a cross-platform game targeting both DX and OGL with a single shader solution, Cg is a godsend. Cg is also great for learning shaders. ARB_vertex_program/ARB_fragment_program may be better as a shipping solution for a Mac-only game. And obviously GLSL will be what you want to use in the future.

There are no real performance or compatibility considerations over standard ARB program code, since that's what Cg compiles down to (the aforementioned sub-optimal code aside). Shaders *can* be faster than fixed-function code, but in practice this doesn't happen very often (it also depends largely on the video card). And I wouldn't use shaders below Mac OS X 10.3.8. Shaders *can* be used on 10.2.8-10.3.7, but I've seen huge incompatibilities on those systems: shaders that don't give the desired results, system crashes, etc. Better to save yourself the heartache and just require 10.3.8.
Actually, in Cg you can access the GL state...

glstate.matrix.modelview[0] gets you the default modelview matrix, for instance. It's all covered under the ARB profiles in the PDF that ships with Cg.
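A quick sketch of what that looks like in practice: a Cg vertex shader, given here as a C string and compiled against an ARB profile, that transforms the vertex using the fixed-function matrices read straight from glstate, so nothing has to be passed in from the C side (glstate.matrix.projection is the companion binding from that same Cg documentation):

    /* No cgGLSetStateMatrixParameter needed: under the ARB profiles
       the runtime tracks these GL matrices for you. */
    static const char *vertexSource =
        "void main(float4 position : POSITION,\n"
        "          out float4 oPos : POSITION)\n"
        "{\n"
        "    float4 eye = mul(glstate.matrix.modelview[0], position);\n"
        "    oPos = mul(glstate.matrix.projection, eye);\n"
        "}\n";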