Why do my PNG textures have a whitish thin border?

Member
Posts: 249
Joined: 2008.10
Post: #1
Hi friends!

I've uploaded a picture of a texture as it looks in Photoshop and in my OpenGL program. My question is: why do I see a gray/white/whitish border? Do you see it? Is it related to the PNG's premultiplied alpha channel?

My render code has:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
and my OpenGL initialization code has:
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);



[Image: dibujo2mkq.png]

[Image: dibujo3l.png]

After reading this post:
http://www.idevgames.com/forum/showthread.php?t=17830
I tried changing the filtering from GL_LINEAR to GL_NEAREST, but as you can see, I still have the same problem.


[Image: 95920860.png]

Does anybody know what's going on?
By the way, my textures show up perfectly in Adobe Flash.

Thanks a lot for your help.

EDIT: How can I show those pictures instead of only links?
Member
Posts: 269
Joined: 2005.04
Post: #2
riruilo Wrote:Is it in relationship of PNG pre-multiplied alpha channel?

Yup. Xcode converts PNGs automatically to premultiplied alpha. Try glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); instead.
Member
Posts: 749
Joined: 2003.01
Post: #3
That happens because when the texture is interpolated (for scaling), the color and alpha of the border pixels pick up some of the black from the adjacent transparent pixels.

One option is to edit the PNG so that the transparent pixels near the visible ones have the same color as the adjacent visible pixels. You might have trouble doing this in Photoshop; I use Seashore for it (go to Windows -> Utility Windows -> Show Layers, then in the Layers window choose Channels: Primary, and you will be able to edit the RGB even where the alpha is 0).

There should be some way to do this programmatically, but I never managed it.
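For what it's worth, the programmatic version of that edit is usually called alpha bleeding or edge padding: after decoding, every fully transparent texel copies the average RGB of its non-transparent neighbors, so bilinear filtering no longer mixes in black. A minimal sketch in plain C (the function name `bleed_alpha` is mine, not from any library; it operates in place on an RGBA8 buffer):

```c
#include <stdint.h>

/* Bleed pass: give each fully transparent texel the average RGB of its
   non-transparent 8-neighbors, leaving its alpha at 0. One pass fixes
   the one-texel fringe that bilinear filtering samples. */
static void bleed_alpha(uint8_t *px, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            uint8_t *p = px + 4 * (y * w + x);
            if (p[3] != 0)
                continue;               /* only touch transparent texels */
            int sum[3] = {0, 0, 0}, n = 0;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h)
                        continue;
                    const uint8_t *q = px + 4 * (ny * w + nx);
                    if (q[3] == 0)
                        continue;       /* skip transparent neighbors */
                    sum[0] += q[0]; sum[1] += q[1]; sum[2] += q[2];
                    n++;
                }
            }
            if (n > 0) {
                p[0] = (uint8_t)(sum[0] / n);
                p[1] = (uint8_t)(sum[1] / n);
                p[2] = (uint8_t)(sum[2] / n);   /* alpha stays 0 */
            }
        }
    }
}
```

Running this once right after decoding and before glTexImage2D should give the same result as hand-painting the hidden RGB in Seashore.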

©h€ck øut µy stuƒƒ åt ragdollsoft.com
New game in development Rubber Ninjas - Mac Games Downloads
Member
Posts: 249
Joined: 2008.10
Post: #4
Bachus Wrote:Yup. Xcode converts PNGs automatically to premultiplied alpha. Try glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); instead.

My PNG is already premultiplied (a 24 bpp picture).

After using GL_ONE it seems to work perfectly, but I don't understand why. Can you explain it a little more, or post a link?

What will happen if I use a 32 bit BMP file?

Thanks a lot. Very helpful.
Member
Posts: 87
Joined: 2006.08
Post: #5
Search around http://home.comcast.net/~tom_forsyth/blog.wiki.html for a great explanation of premultiplied alpha. Unfortunately, the strange design of the website makes it impossible to link directly to a specific article.

This is a perfect example of why premultiplied alpha is better than non-premultiplied alpha.
Member
Posts: 166
Joined: 2009.04
Post: #6
Frogblast Wrote:Search around http://home.comcast.net/~tom_forsyth/blog.wiki.html for a great explanation of premultiplied alpha. Unfortunately, the strange design of the website makes it impossible to link directly to a specific article.

This is a perfect example of why premultiplied alpha is better than non-premultiplied alpha.

He has a hard-to-find rollover permalink:

http://home.comcast.net/~tom_forsyth/blo...lpha%5D%5D
Member
Posts: 269
Joined: 2005.04
Post: #7
The key paragraph:

Quote:So what now happens is all the alpha=0 texels are now black. Wait - but you'll get bleeding and halos! No, you don't. Let's do the half-texel example again. Let's say we have an entirely red texture, but with some bits alpha'd, and we render onto a green background. You'd expect to get shades of red, green and yellow - but the darkest yellow should be around (0.5,0.5,0) - no dark halos of something like (0.25,0.25,0), right?. So (1,0,0,1 = solid red) and (1,0,0,0 = transparent red). The second one gets premultiplied before compression to (0,0,0,0). Now we bilinear filter between them and get (0.5, 0, 0, 0.5). And then we render onto bright green (0,1,0).

FB.rgb = texel.rgb + (1-texel.a) * FB.rgb
= (0.5, 0, 0) + (1-0.5) * (0,1,0)
= (0.5, 0, 0) + (0, 0.5, 0)
= (0.5, 0.5, 0)

which is exactly what we were expecting. No dark halos.

Contrast it with "normal" blending (GL_SRC_ALPHA):

FB.rgb = (texel.rgb * texel.a) + ((1-texel.a) * FB.rgb)
= ((0.5, 0, 0) * 0.5) + ((1-0.5) * (0,1,0))
= (0.25, 0, 0) + (0, 0.5, 0)
= (0.25, 0.5, 0)

Which is darker (and a different color!), thus causing the black fringe.
Member
Posts: 249
Joined: 2008.10
Post: #8
Frogblast Wrote:This is a perfect example of why premultiplied alpha is better than non-premultiplied alpha.

Thanks a lot for replies.

I guess not always premultiplied is better than premultiplied, doesn't it?
Moderator
Posts: 1,562
Joined: 2003.10
Post: #9
riruilo Wrote:I guess not always premultiplied is better than premultiplied, doesn't it?

Can't quite make sense of this sentence, but these are the important differences as I see them:
  • Premultiplied images can scale with interpolation without leaving fringes. Nonpremultiplied images can't, at least not without some trickery.
  • Premultiplying an image is a lossy operation. It reduces the effective range of color values for each pixel to a maximum of that pixel's alpha value. No color data at all can be stored for pixels with alpha 0.
For these reasons, I often like to edit and store images nonpremultiplied, and premultiply them at load time if I'm going to be displaying them with OpenGL.
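Premultiplying at load time is only a few lines over the decoded pixels. A sketch in C, assuming an 8-bit straight-alpha RGBA buffer (the function name is mine):

```c
#include <stdint.h>

/* Convert straight-alpha RGBA8 pixels to premultiplied alpha in place.
   (x * a + 127) / 255 divides by 255 with rounding to nearest. */
static void premultiply_rgba8(uint8_t *px, int count)
{
    for (int i = 0; i < count; i++, px += 4) {
        unsigned a = px[3];
        px[0] = (uint8_t)((px[0] * a + 127) / 255);
        px[1] = (uint8_t)((px[1] * a + 127) / 255);
        px[2] = (uint8_t)((px[2] * a + 127) / 255);
    }
}
```

Call it once before glTexImage2D, then draw with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA).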
Sage
Posts: 1,234
Joined: 2002.10
Post: #10
See also the multiple previous threads on this topic. We've been answering this same question for at least five years now.
Member
Posts: 249
Joined: 2008.10
Post: #11
Thanks a lot.
Nibbie
Posts: 4
Joined: 2010.10
Post: #12
So, rather than start a new thread, I will just continue this one as it seems to describe the issue I'm trying to solve.

I'm seeing a similar result when I load a 32-bit PNG using SOIL (i.e. an image loader that isn't from Apple and doesn't auto-premultiply). After reading about this extensively, I concluded that I would be better off with a premultiplied PNG, so I followed the instructions for saving my sprite sheet (atlas) with premultiplied alpha instead of straight alpha.

I then changed my glBlend func
from this: glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
to this: glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

The result I get is my quad colored white, with my sprite inside it.

I don't set glTexEnv at all, so I should be getting the default there. Also, I'm not calling glColor. I have glDrawArrays set up to use a color array, which I sometimes use to fade images, but currently the values there are all 1.0, 1.0, 1.0, 1.0 (r, g, b, a). And I get the white quads even when I don't pass a color array to glDrawArrays.

This is simple texture mapping on a quad where the texture has transparent pixels. Just like in the original example in this thread.

What am I missing here that is causing the white backgrounds on my quads?
Member
Posts: 245
Joined: 2005.11
Post: #13
It sounds like you are using GL_DECAL (and, therefore, drawing your textures onto the solid white quad). Try:
Code:
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
Nibbie
Posts: 4
Joined: 2010.10
Post: #14
I almost wrote in my first post that this looks like what I get when I run with:

glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
and
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

(I sometimes use that as a quick-n-dirty debug trick to see the quads.)

But I'm not currently calling glTexEnv at all, and the default is not GL_DECAL (at least that's my understanding from the docs). Furthermore, explicitly calling this:

glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

changes nothing.

I still get the white background quads.
Nibbie
Posts: 4
Joined: 2010.10
Post: #15
I "think" the issue was that GIMP was not truly premultiplying the alpha at save time.

The image loading library (SOIL) can do the premultiply at load time as well. I would prefer it were done in the file as part of the pipeline, but for the sake of debugging I took the original PNG and premultiplied the alpha at load time instead.

This works perfectly with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), as it should. I have no idea why the GIMP approach didn't work, but whatever. I can live with this approach, I guess.

Now I need to figure out how this affects my use of glColor to fade out, add transparency, etc. on my sprites. I'm guessing I need to premultiply those values as well...
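That guess is right: with GL_ONE the source color is used as-is, so a vertex-color tint has to be premultiplied too. To fade a sprite to alpha a, pass (r*a, g*a, b*a, a) instead of (r, g, b, a). A tiny helper sketch (the name `premul_tint` is mine), producing what glColor4f or a color array entry should receive:

```c
/* Premultiply a straight RGBA tint so it blends correctly under
   glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA). */
static void premul_tint(float out[4], float r, float g, float b, float a)
{
    out[0] = r * a;
    out[1] = g * a;
    out[2] = b * a;
    out[3] = a;
}
```

So fading a sprite to 50% becomes glColor4f(0.5f, 0.5f, 0.5f, 0.5f) rather than glColor4f(1.0f, 1.0f, 1.0f, 0.5f).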