I'm writing a C library to export SDL_Surfaces to various formats as an exercise, and so far, I got the BMP, TGA and PCX formats down. Now I'm working on the GIF format and I feel I'm very close to getting it working. My implementation is a modified version of this one.
My current problem is writing the GIF LZW compressed image data sub-blocks. Everything goes smoothly until position 208 in the first sub-block. Starting from position 207, the three bytes in the original file are "B8 29 B2" in hexadecimal, while mine are "B8 41 B2". After that, the bytes "sync up" again. Further down the compressed stream I can find similar differences, probably caused by the first error. My file is also shorter than the original.
I should note that I changed the element type in the lzw_entry struct from uint16_t to int to allow -1 as an "empty" marker, since 0 is a valid entry. It didn't make a difference in the compressed stream, though. The original implementation uses uninitialized data to mark an empty entry instead.
I think I'm reading my dictionary values incorrectly, which is why I get a different code than expected at position 208. Otherwise, my bit packing is incorrect.
I've added a stripped-down version of my compression code. What might be the problem? Also, how can I make my "dictionary" data structure better, or make the bitstream writing faster?
Finally, I'm also aware that I can optimize some code here and there :)
static Uint8 bit_count = 0;
static Uint8 block_pos = 0;

int LZW_PackBits(SDL_RWops *dst, Uint8 *block, int code, Uint8 bits) {
    Uint8 out = 0;
    while (out != bits) {
        if (bit_count == 8) {
            bit_count = 0;
            if (block_pos == 254) { // 254 * 8 + 8 == 2040 bits -> 2040 / 8 == 255 bytes -> buffer full
                ++block_pos;
                SDL_RWwrite(dst, &block_pos, 1, 1);
                SDL_RWwrite(dst, &block[0], 1, block_pos);
                memset(block, 0, block_pos);
                block_pos = 0;
            } else {
                ++block_pos;
            }
        }
        block[block_pos] |= (code >> out & 0x1) << bit_count;
        ++bit_count;
        ++out;
    }
    return 1;
}
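For reference, this is how I understand the packing order GIF expects: the low bit of each code goes into the lowest unused bit of the stream. Below is a minimal standalone sketch of that idea using a 32-bit accumulator instead of per-bit loops (the names `put_code` and `flush_bits` are mine, and the sub-block writing is left out so it can be tested in isolation):

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* LSB-first bit packer, as GIF requires: the low bit of each code is
 * placed in the lowest unused bit position of the output stream.
 * A 32-bit accumulator always has room for one more 12-bit code. */
static uint32_t acc = 0;
static int acc_bits = 0;
static uint8_t out_buf[64];
static size_t out_len = 0;

static void put_code(int code, int width) {
    acc |= (uint32_t)code << acc_bits;  /* append code above the pending bits */
    acc_bits += width;
    while (acc_bits >= 8) {             /* emit every completed byte */
        out_buf[out_len++] = (uint8_t)(acc & 0xFF);
        acc >>= 8;
        acc_bits -= 8;
    }
}

static void flush_bits(void) {          /* pad the final partial byte with zeros */
    if (acc_bits > 0) {
        out_buf[out_len++] = (uint8_t)(acc & 0xFF);
        acc = 0;
        acc_bits = 0;
    }
}
```

Packing the 9-bit codes 256, 65, 66, 257 this way yields the bytes 00 83 08 09 08, which is easy to check by hand against the per-bit version above.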
#define LZW_MAX_BITS      12
#define LZW_START_BITS    9
#define LZW_CLEAR_CODE    256
#define LZW_END_CODE      257
#define LZW_ALPHABET_SIZE 256

typedef struct {
    int next[LZW_ALPHABET_SIZE]; // int so that -1 is allowed
} lzw_entry;

int table_size = 1 << LZW_MAX_BITS; // 2^12 = 4096
lzw_entry *lzw_table = (lzw_entry*)malloc(sizeof(lzw_entry) * table_size);

for (i = 0; i < table_size; ++i)
    memset(&lzw_table[i].next[0], -1, sizeof(int) * LZW_ALPHABET_SIZE);

Uint8 block[255];
memset(&block[0], 0, 255);

Uint16 next_entry = LZW_END_CODE + 1;
Uint8 out_len = LZW_START_BITS;
Uint8 next_byte = 0;
int input = 0;
int nc = 0;

LZW_PackBits(dst, block, clear_code, out_len);

Uint8 *pos = ... // Start of image data
Uint8 *end = ... // End of image data
input = *pos++;
while (pos < end) {
    next_byte = *pos++;
    nc = lzw_table[input].next[next_byte];
    if (nc >= 0) {
        input = nc;
        continue;
    } else {
        LZW_PackBits(dst, block, input, out_len);
        nc = lzw_table[input].next[next_byte] = next_entry++;
        input = next_byte;
    }
    if (next_entry == (1 << out_len)) { // Next code requires more bits
        ++out_len;
        if (out_len > LZW_MAX_BITS) {
            // Reset table
            LZW_PackBits(dst, block, clear_code, out_len - 1);
            out_len = LZW_START_BITS;
            next_entry = LZW_END_CODE + 1;
            for (i = 0; i < table_size; ++i)
                memset(&lzw_table[i].next[0], -1, sizeof(int) * LZW_ALPHABET_SIZE);
        }
    }
}

// Write remaining stuff including current code (not shown)
LZW_PackBits(dst, block, end_code, out_len);
++block_pos;
SDL_RWwrite(dst, &block[0], 1, block_pos);
SDL_RWwrite(dst, &zero_byte, 1, 1);

const Uint8 trailer = 0x3b; // ';'
SDL_RWwrite(dst, &trailer, 1, 1);
UPDATE: I've done some more tests and implemented the bit-packing algorithm that Aki Suihkonen suggested. It made no noticeable difference, which tells me that I'm somehow looking up/storing codes incorrectly in my lzw_table structure, and that the error(s) must be in the main loop.
It's not the cause of the problem, but is there actually a need to write the size byte 255 before every full sub-block, as this line does?
SDL_RWwrite(dst, &block_pos, 1, 1);
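As far as the GIF89a specification goes, yes: the image data is a sequence of sub-blocks, each prefixed by a size byte N (1 to 255) followed by N data bytes, and terminated by a zero size byte, so every full sub-block starts with 0xFF. A sketch of that framing in isolation (the function name `frame_subblocks` is mine):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* GIF data sub-block framing: each sub-block is a size byte N (1..255)
 * followed by N data bytes; a 0 size byte terminates the sequence.
 * Returns the number of bytes written to out. */
static size_t frame_subblocks(const uint8_t *data, size_t len, uint8_t *out) {
    size_t w = 0;
    while (len > 0) {
        size_t n = len < 255 ? len : 255;
        out[w++] = (uint8_t)n;   /* size byte: 0xFF for every full block */
        memcpy(out + w, data, n);
        w += n;
        data += n;
        len -= n;
    }
    out[w++] = 0;                /* block terminator */
    return w;
}
```

For example, 600 bytes of compressed data come out as sub-blocks of 255, 255 and 90 bytes plus the terminator, 604 bytes total.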
A first pointer on how to make the bit writing faster:
4 MB of memory for the dictionary is a lot (especially compared to 1987, when the standard was developed), but probably not enough to justify writing a more complex hash table. The basic unit could be a short, though. You can also initialize the table to zero if you write code + 1 into it (and read entries back as table[a].next[b] - 1).
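The code + 1 trick above could look like this (a sketch under the assumption of a flat uint16_t table; the names `table_set`/`table_get` are mine). Since the largest GIF code is 4095, code + 1 still fits in 16 bits, which halves the table to 2 MB and lets a plain zeroing memset mark everything empty:

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

enum { CT_SIZE = 1 << 12, CT_ALPHABET = 256 };

/* uint16_t instead of int halves the table; 0 means "empty" because
 * every stored value is code + 1 (codes 0..4095 -> stored 1..4096). */
static uint16_t next_code[CT_SIZE][CT_ALPHABET]; /* 2 MB instead of 4 MB */

static void table_clear(void) {
    memset(next_code, 0, sizeof next_code);      /* all-zero == all-empty */
}

static void table_set(int prefix, int byte, int code) {
    next_code[prefix][byte] = (uint16_t)(code + 1);
}

static int table_get(int prefix, int byte) {
    return (int)next_code[prefix][byte] - 1;     /* -1 when empty */
}
```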
The table clearing can also be optimized: 4 MB of memory is reserved, but fewer than 4k entries are ever used between resets.
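One way to exploit that (my own suggestion, not part of the answer above) is a per-row generation stamp: a reset just bumps a counter, and each row is memset lazily the first time it is touched afterwards, so the clearing work is bounded by the rows actually used rather than the whole table. The names `row` and `reset_table` are mine:

```c
#include <string.h>
#include <assert.h>

enum { GT_SIZE = 1 << 12, GT_ALPHABET = 256 };

static int next_tbl[GT_SIZE][GT_ALPHABET];
static unsigned row_gen[GT_SIZE];  /* generation stamp per row */
static unsigned cur_gen = 1;       /* row_gen[] starts at 0, so every row begins "dirty" */

/* Return the row for `code`, clearing it lazily on first touch after a
 * reset. Only rows that are actually used ever get memset. */
static int *row(int code) {
    if (row_gen[code] != cur_gen) {
        memset(next_tbl[code], -1, sizeof next_tbl[code]);
        row_gen[code] = cur_gen;
    }
    return next_tbl[code];
}

static void reset_table(void) {
    ++cur_gen;  /* O(1); counter wrap after 2^32 resets is ignored in this sketch */
}
```

In the main loop, `lzw_table[input].next[next_byte]` would become `row(input)[next_byte]`, and the two full-table memset loops disappear.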