I was trying to use the following vertex format:
attribute 0, GL_UNSIGNED_SHORT, size 1, offset 0, stride 8;
attribute 1, GL_UNSIGNED_SHORT, size 1, offset 2, stride 8;
attribute 2, GL_UNSIGNED_BYTE, size 2, offset 4, stride 8, normalized;
attribute 3, GL_UNSIGNED_SHORT, size 1, offset 6, stride 8.
It works on an NVIDIA card, but when run on an AMD card the vertices get corrupted (mutant models appear). No OpenGL error is reported, and it works without problems when I change to either of the following formats:
attribute 0, GL_FLOAT, size 3, offset 0, stride 36;
attribute 1, GL_FLOAT, size 3, offset 12, stride 36;
attribute 2, GL_FLOAT, size 3, offset 24, stride 36.
or, packing all attributes into a single attribute:
attribute 0, GL_UNSIGNED_SHORT, size 4, offset 0, stride 8.
The vertex shader input declarations for the first vertex format are:
#version 330
layout ( location = 0 ) in float in_vsAttrib0;
layout ( location = 1 ) in float in_vsAttrib1;
layout ( location = 2 ) in vec2 in_vsAttrib2;
layout ( location = 3 ) in float in_vsAttrib3;
Have I missed something in the OpenGL specification about this, or could it be a driver issue?