I have recently migrated from a 32-bit environment to a 64-bit one, and it has gone smoothly apart from one problem: glMultiDrawElements uses some arrays that no longer work under a 64-bit OS without some tweaking.
glMultiDrawElements( GL_LINE_LOOP, fCount_, GL_UNSIGNED_INT,
                     reinterpret_cast< const GLvoid** >( iOffset_ ),
                     mesh().faces().size() );
I am using VBOs for both the vertices and the vertex indices. fCount_ and iOffset_ are arrays of GLsizei. Since a buffer is bound to GL_ELEMENT_ARRAY_BUFFER, the elements of iOffset_ are used as byte offsets from the beginning of the index VBO. This works perfectly under a 32-bit OS.
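For context, the two arrays are filled roughly like this (simplified sketch; the offsets are made up and std::vector< GLsizei > is used purely for illustration, the real storage just needs to be arrays of GLsizei):

// Fill the per-face count and byte-offset arrays (sketch; assumes each face's
// indices are packed back-to-back in the index VBO as GLuints).
std::vector< GLsizei > fCount_;   // number of indices in each face
std::vector< GLsizei > iOffset_;  // byte offset of each face's first index

GLsizei byteOffset = 0;
for ( Sy_meshData::Faces::ConstIterator i = mesh().faces().constBegin();
      i != mesh().faces().constEnd(); ++i ) {
    fCount_.push_back( i->vertexIndices.size() );
    iOffset_.push_back( byteOffset );
    byteOffset += i->vertexIndices.size() * sizeof( GLuint );
}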
If I change glMultiDrawElements to glDrawElements and put it in a loop, it works fine on both platforms:
// offset counts indices drawn so far; it is scaled to a byte offset into the bound index VBO.
int offset = 0;
for ( Sy_meshData::Faces::ConstIterator i = mesh().faces().constBegin();
      i != mesh().faces().constEnd(); ++i ) {
    glDrawElements( GL_LINE_LOOP, i->vertexIndices.size(), GL_UNSIGNED_INT,
                    reinterpret_cast< const GLvoid* >( sizeof( GLsizei ) * offset ) );
    offset += i->vertexIndices.size();
}
I think what I am seeing is OpenGL reading 64-bit chunks of iOffset_, leading to massive numbers, but glMultiDrawElements does not support any type wider than 32 bits (GL_UNSIGNED_INT), so I'm not sure how to correct it.
Has anyone else had this situation and solved it? Or am I handling this entirely wrong and was just lucky on a 32-bit OS?
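To illustrate what I suspect is happening: each element of iOffset_ is 32 bits wide, but after the cast the call walks the array in pointer-sized steps, so on a 64-bit build two adjacent offsets get glued together into one huge value (rough sketch with made-up offsets, little-endian assumed):

GLsizei iOffset_[] = { 0, 12, 24, 36 };   // 32-bit byte offsets

// Reading the same memory as pointer-sized elements, as the cast above does:
const GLvoid** asPointers = reinterpret_cast< const GLvoid** >( iOffset_ );
// On a little-endian 64-bit build, asPointers[ 0 ] combines offsets 0 and 12
// into 0x0000000C00000000, asPointers[ 1 ] combines 24 and 36, and so on.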
Update
Swapping out my existing code for:
typedef void ( *testPtr )( GLenum mode, const GLsizei* count, GLenum type,
                           const GLuint* indices, GLsizei primcount );
testPtr ptr = (testPtr)glMultiDrawElements;
ptr( GL_LINE_LOOP, fCount_, GL_UNSIGNED_INT, iOffset_, mesh().faces().size() );
gives exactly the same result.