iDevGames Forums

Full Version: OpenGL ES Texture Compression
I'm working on a 2D iPad port with 15+ 1024x1024 PNG-derived textures, and I'm trying to reduce my game's memory footprint to get it onto 1st-gen iPads. I've tried PVRTC compression (4bpp) but found the lossiness unacceptable.

The next logical step would seem to be getting OpenGL ES to store the texture data in a lower quality internal format (e.g., RGBA_5_5_5_1, or something other than the default RGBA 32 bit).
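For a rough sense of the savings at stake (my own back-of-envelope numbers, not from the thread):

```c
#include <stddef.h>

/* Bytes needed for one w x h texture at the given bits per pixel
   (no mipmaps): 32 bpp = RGBA8888, 16 bpp = RGBA4444/5551, 4 bpp = PVRTC. */
static size_t texture_bytes(size_t w, size_t h, size_t bpp) {
    return w * h * bpp / 8;
}

/* 15 textures at 1024x1024:
   32 bpp -> 60 MB total, 16 bpp -> 30 MB, PVRTC 4 bpp -> 7.5 MB.
   So dropping to a packed 16 bpp format halves the footprint. */
```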

Currently I pass the pixel data to OpenGL via:

Code:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

Now, when I'm testing on my Mac, all I need to do is pass GL_RGBA4 as the 3rd parameter (internalFormat), and I suddenly get 16 bpp textures without any problems. This works without me having to change what I pass to the last two parameters. However, when I try this on iOS, it doesn't work. Additionally, I've tried passing other constants as the 3rd parameter (internalFormat) and have gotten a range of effects, from totally empty textures to completely discolored, neon-pinkish textures.

Any advice would be greatly appreciated.
Unlike desktop GL, GLES doesn't convert texture data for you. If you are giving it 32 bit RGBA color data, it won't convert that to a packed pixel format for you. You have to do that yourself before uploading the texture.

The problem is that there are a bajillion different texture conversions that desktop GL needs to be able to support. Removing that from GLES makes its implementation that much simpler.
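Doing the conversion yourself is just a per-pixel repack. A sketch (my own illustration, not code from the thread), producing data you can then upload with GL_RGBA / GL_UNSIGNED_SHORT_4_4_4_4:

```c
#include <stdint.h>
#include <stddef.h>

/* Pack 8-bit-per-channel RGBA into 16-bit RGBA4444, matching the
   GL_UNSIGNED_SHORT_4_4_4_4 layout (R in the top nibble, A in the
   bottom). GLES won't do this repack for you. */
static void rgba8888_to_rgba4444(const uint8_t *src, uint16_t *dst,
                                 size_t pixels) {
    for (size_t i = 0; i < pixels; i++) {
        uint8_t r = src[i * 4 + 0] >> 4;   /* keep top 4 bits of each channel */
        uint8_t g = src[i * 4 + 1] >> 4;
        uint8_t b = src[i * 4 + 2] >> 4;
        uint8_t a = src[i * 4 + 3] >> 4;
        dst[i] = (uint16_t)((r << 12) | (g << 8) | (b << 4) | a);
    }
}
```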
Okay, that makes sense. Incidentally, after posting I came across this thread which also seems to touch upon the same issue.

So if I understand correctly, in order to get OpenGL ES to store pixel data internally as RGBA_4_4_4_4, I should do this:

Code:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, pixels);

Where pixels points to the raw image data stored in RGBA_4444 format?

I tried the above code after converting the source PNG from RGBA8888 to RGBA4444 and it still doesn't work.

I suspect it has to do with how I'm loading the png data, here's my code:

Code:
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image);
GLubyte *tmpbuffer = malloc((1024*1024)*sizeof(GLushort));
CGContextRef context = CGBitmapContextCreate(tmpbuffer, width, height, 4, 2 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder16Big);
CGContextTranslateCTM(context, 0, (float)(height));
CGContextScaleCTM(context, 1.0, -1.0);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

I tried loading 2 different PNG RGBA4444 files, one with pre-multiplied alpha and one without. Neither works. I also tried messing with a lot of the CGBitmapContextCreate parameters but have gotten nowhere so far.
CoreGraphics can't render except at 8 and 32 bits per channel, AFAIK. It certainly can't do 4.

You should consider using PVRTC compressed textures. You'll get much better quality at the same size, and Apple provides "texturetool" to create PVRTC files from PNG images.
@OSC: From the first post -> "I've tried PVRTC compressions (4bpp) but found the lossiness unacceptable"

If you download the PowerVR SDK, they have a tool for outputting .pvr files with various pixel formats, including packed ones. Pretty sure they have sample code for loading them as well. The format is pretty dead simple; I ended up just writing my own loader the one time, since I only wanted to support one format anyway.
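For anyone curious, the legacy PVR (v2) header really is simple. A sketch of its layout, with field names following Apple's PVRTexture sample code; treat the exact offsets as an assumption and check them against your SDK version:

```c
#include <stdint.h>

/* Legacy PVR (v2) header layout, as used by older PowerVR SDK tools.
   13 little-endian uint32 fields, 52 bytes total. Verify against the
   SDK docs before relying on this. */
typedef struct {
    uint32_t headerLength;   /* size of this header: 52 bytes       */
    uint32_t height;
    uint32_t width;
    uint32_t numMipmaps;
    uint32_t flags;          /* low byte encodes the pixel format   */
    uint32_t dataLength;
    uint32_t bpp;
    uint32_t bitmaskRed;
    uint32_t bitmaskGreen;
    uint32_t bitmaskBlue;
    uint32_t bitmaskAlpha;
    uint32_t pvrTag;         /* 'PVR!' == 0x21525650 little-endian  */
    uint32_t numSurfs;
} PVRTexHeader;
```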
@OSC, it appears that you're right about CoreGraphics; I combed through some docs and they don't seem to support 16 bits per pixel with alpha.

@Skorche, I'd given up on PVR because of how horribly it botched my sprites. My understanding, though, was that 4bpp was the maximum quality (at least insofar as OpenGL ES supports).

I'm leaving my files as RGBA8888 and have included a routine which converts from RGBA8888 to RGBA4444 (or RGBA5551) after the image is loaded via CoreGraphics. This seems to work well and gives added flexibility.
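One way to write the 5551 variant of that routine (my own sketch, matching the GL_UNSIGNED_SHORT_5_5_5_1 layout: R in the top 5 bits, the 1-bit alpha in the LSB):

```c
#include <stdint.h>
#include <stddef.h>

/* Convert 8-bit-per-channel RGBA to packed RGBA5551, suitable for
   glTexImage2D with GL_RGBA / GL_UNSIGNED_SHORT_5_5_5_1. */
static void rgba8888_to_rgba5551(const uint8_t *src, uint16_t *dst,
                                 size_t pixels) {
    for (size_t i = 0; i < pixels; i++) {
        uint16_t r = src[i * 4 + 0] >> 3;  /* keep top 5 bits */
        uint16_t g = src[i * 4 + 1] >> 3;  /* keep top 5 bits */
        uint16_t b = src[i * 4 + 2] >> 3;  /* keep top 5 bits */
        uint16_t a = src[i * 4 + 3] >> 7;  /* keep top 1 bit  */
        dst[i] = (uint16_t)((r << 11) | (g << 6) | (b << 1) | a);
    }
}
```

5551 trades color depth for a hard on/off alpha, so it suits sprites with cut-out transparency; 4444 suits ones with soft alpha gradients.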
Easy mistake to make, but PVR != PVRTC.

PVR is a simple container file format for texture data of various formats (including PVRTC). Probably not terribly different from DDS files.
That would explain my constant state of confusion over the two file extensions.
PVRTC actually isn't a file extension. Just the name of the texture compression algorithm.
Yeah I know, but while trying various formats/conversions I ran across files which used both .pvr and .pvrtc extension which I thought odd at the time, though now it makes more sense.