I must not understand the glPolygonStipple
bit arrangement. I thought it was a simple 32x32 bitmask, and that I could therefore use one unsigned int
per row. For example, this code produces (as expected) a thick vertical stripe (65535 = 0x0000FFFF sets the low 16 bits of every row):
static unsigned int halftone[32];
for(static bool once = true; once; once = false)
{
    for(int r = 0; r < 32; r++)
    {
        halftone[r] = 65535;
    }
}
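For completeness, the mask is applied like this (a minimal sketch, assuming a legacy/compatibility OpenGL context; the cast is discussed further down):
glEnable(GL_POLYGON_STIPPLE);          // stippling is disabled by default
glPolygonStipple((GLubyte*)halftone);  // upload the 32x32 (128-byte) mask
// ... draw the polygons that should be stippled ...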
Producing the expected 16-pixel-wide vertical stripe. To get a diagonal stripe, I then rotate each row one bit to the left relative to the previous row:
static unsigned int halftone[32];
for(static bool once = true; once; once = false)
{
    halftone[0] = 65535;
    for(int r = 1; r < 32; r++)
    {
        halftone[r] = rol(halftone[r-1]);
    }
}
where rol is a circular (rotate-left) bit shift:
#include <climits>      // CHAR_BIT
#include <type_traits>  // std::is_unsigned

template <typename INT>
constexpr INT rol(INT val) {
    static_assert(std::is_unsigned<INT>::value,
                  "Rotate Left only makes sense for unsigned types");
    return (val << 1) | (val >> (sizeof(INT)*CHAR_BIT - 1));
}
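To be sure the rotation itself behaves as intended, it can even be checked at compile time (two sanity-check asserts I'd add; the second value is simply row 1 of the pattern below):
static_assert(rol(0x80000000u) == 0x00000001u, "the MSB wraps around to bit 0");
static_assert(rol(0x0000FFFFu) == 0x0001FFFEu, "row 0 rotated once gives row 1");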
I can verify that I'm getting the right pattern by adding
cout << bitset<32>(halftone[r]) << endl;
inside the loop:
00000000000000001111111111111111
00000000000000011111111111111110
00000000000000111111111111111100
00000000000001111111111111111000
00000000000011111111111111110000
00000000000111111111111111100000
00000000001111111111111111000000
00000000011111111111111110000000
00000000111111111111111100000000
00000001111111111111111000000000
00000011111111111111110000000000
00000111111111111111100000000000
00001111111111111111000000000000
00011111111111111110000000000000
00111111111111111100000000000000
01111111111111111000000000000000
11111111111111110000000000000000
11111111111111100000000000000001
11111111111111000000000000000011
11111111111110000000000000000111
11111111111100000000000000001111
11111111111000000000000000011111
11111111110000000000000000111111
11111111100000000000000001111111
11111111000000000000000011111111
11111110000000000000000111111111
11111100000000000000001111111111
11111000000000000000011111111111
11110000000000000000111111111111
11100000000000000001111111111111
11000000000000000011111111111111
10000000000000000111111111111111
But OpenGL is producing a completely different, scrambled pattern.
I'm casting the array pointer to GLubyte*
when I pass it to glPolygonStipple:
glPolygonStipple((GLubyte*)halftone);
Is there something wrong with my understanding? Is this related to some glPixelStore
problem?
Looks like the bytes in your 32-bit values are swapped relative to what OpenGL expects for the mask.
glPolygonStipple reads the pattern as a sequence of bytes, four per row, and the order in which bits are taken from each byte is controlled by the
GL_UNPACK_LSB_FIRST
pixel store parameter, which is GL_FALSE
(most significant bit first) by default. On a little-endian machine, which is most likely what you're using, the least significant byte of each unsigned int comes first in memory, so your rows come out byte-swapped. You can fix it by changing the value:
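glPixelStorei(GL_UNPACK_LSB_FIRST, GL_TRUE); // take bits from each byte starting at bit 0

One side effect worth knowing: with LSB-first order on a little-endian machine, each 32-bit row is consumed from bit 0 up to bit 31, so the rendered rows should appear mirrored left-to-right relative to your bitset<32> printout. If that matters for your pattern, the alternative is to build the mask directly as a GLubyte[128] array, one byte per 8 pixels, in exactly the order OpenGL reads it.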