GLUT_ACCUM slowing down application 10x!

Member
Posts: 321
Joined: 2004.10
Post: #1
I've got an OpenGL (guess that's obvious) program with about
50 objects rotating, moving, etc. Fairly zippy.

All I do is add GLUT_ACCUM to the glutInitDisplayMode() call.
Nothing else. I don't even enable the accumulation buffer.
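
Roughly, this is the only change (a sketch; my actual display-mode flags may differ a little):

    /* before: no accumulation buffer requested */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);

    /* after: just adding GLUT_ACCUM -- this alone causes the slowdown */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_ACCUM);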

And then the program moves like molasses. Painfully slow.

I've heard some disparaging remarks regarding GLUT. And I've
got an old iMac DV running OS 9.

This whole thing started because I wanted to do scene antialiasing
with the accumulation buffer, but this slowdown makes it sort of a
non-starter. Short of buying a new G5 when Tiger comes out,
any suggestions?

thanks.
Sage
Posts: 1,232
Joined: 2002.10
Post: #2
The accumulation buffer uses 64 bits per pixel, so it cannot be supported in hardware on anything older than a Radeon 9600 or GeForce 5200. On your iMac, every time you accumulate, the framebuffer has to be read back across the bus and accumulated by the CPU. That is very slow.
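
For reference, a typical accumulation-antialiasing pass looks something like this (just a sketch; draw_scene_with_jitter() stands in for your own rendering code, and N is however many jittered passes you do). On your hardware, every glAccum() call means a full framebuffer readback:

    /* classic N-pass accumulation antialiasing (sketch) */
    int i;
    glClear(GL_ACCUM_BUFFER_BIT);
    for (i = 0; i < N; i++) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        draw_scene_with_jitter(i);       /* render with a sub-pixel camera offset */
        glAccum(GL_ACCUM, 1.0f / N);     /* readback + CPU accumulate on a Rage 128 */
    }
    glAccum(GL_RETURN, 1.0f);            /* copy the averaged result back to the framebuffer */
    glutSwapBuffers();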

Buying a G5 won't help much either, since even on the new cards the accumulation functions are not accelerated yet by the drivers (but you can achieve the same results using float textures, which are accelerated.)

You don't really have a lot of options on a Rage 128 iMac... multisampling isn't supported. One thing you could do is render the scene 4x larger than you want (keeping in mind the 1024x1024 viewport limit) and then scale the result down with bilinear filtering (sketched below). That is probably the best way to do general 3D scene antialiasing on Rage 128 hardware. There may be better ways in special cases (like 2D), such as drawing antialiased 1-pixel-wide lines around your model edges.
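
A rough sketch of the supersample-and-downscale idea, assuming a 512x512 final image, a back buffer that is at least 1024x1024 (otherwise you'd need an off-screen buffer), and a texture object 'tex' created ahead of time; adjust the sizes to your own window:

    /* 1. render the scene at 2x the final size in each dimension */
    glViewport(0, 0, 1024, 1024);
    draw_scene();                                      /* your rendering code */

    /* 2. copy the oversized image into a (power-of-two) texture */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, 1024, 1024, 0);

    /* 3. draw it back at 512x512; GL_LINEAR minification does the bilinear downfilter */
    glViewport(0, 0, 512, 512);
    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
    glEnd();
    glutSwapBuffers();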
Member
Posts: 321
Joined: 2004.10
Post: #3
I came across this quote on the Inside Mac Games web site today:

"The Radeon X800 XT's memory is about 50% faster than the Radeon 9800 Pro, which will help when full scene anti-aliasing (FSAA) and anisotropic filtering (AF) are enabled."

I did some googling on FSAA, and am I correct in assuming that FSAA is done
automatically by the card, so there is really no programming associated with it?

Also, am I correct in assuming that the accFrustum() and accPerspective() functions
in Chapter 10 of the Red Book will soon be just historical teaching examples?

As an aside, since I'm just going through the Red Book teaching myself a smattering
of OpenGL (going for breadth and not depth here), I'm wondering what will become
obsolete the quickest in OpenGL. When I read about video cards with 256MB, I
start thinking that color-index display mode might be on the way out, if not already.

But then, I'm just a serious hobbyist with no real-world knowledge of the game
business.

Have meds. Will stop rambling.
Sage
Posts: 1,232
Joined: 2002.10
Post: #4
The trend is definitely towards FSAA. I don't think any games rely on accumulation, because consumer hardware didn't accelerate it until very recently (the story is a bit different on million-dollar SGI Reality Engines).

FSAA is a pixel format attribute, so the only programming involved is to ask for it and enable multisampling (see the sketch below). Very easy. On NVIDIA hardware there is also a filtering hint. The downside of FSAA is that the resulting quality (typically 2 subpixel bits, 4 shades of alpha) is very poor compared to 2D or GL_SMOOTH antialiasing (up to 8 subpixel bits, 256 shades). But it works with general 3D scenes.
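
With GLUT that amounts to something like this (a sketch; GLUT_MULTISAMPLE and GL_MULTISAMPLE_ARB come from the GL_ARB_multisample extension, and the hint is the NVIDIA-only GL_NV_multisample_filter_hint):

    /* ask for a multisampled pixel format up front */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_MULTISAMPLE);
    glutCreateWindow("FSAA test");

    /* turn multisampling on (often already on when the format supports it) */
    glEnable(GL_MULTISAMPLE_ARB);

    #ifdef GL_NV_multisample_filter_hint
    glHint(GL_MULTISAMPLE_FILTER_HINT_NV, GL_NICEST);  /* nvidia-only filtering hint */
    #endif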

Indexed color mode is not supported at all on OS X. NVIDIA hardware used to support indexed textures, but they are deprecated now. ATI hardware never supported them.