I just started testing with the iPhone 5S and the 64-bit architecture on an OpenGL ES app. The problem I'm seeing is that CGFloat values are way off by the time they reach the shaders: I pass in 0.8 and it shows up as -1.58819e-23 when I debug the shader. I am using glUniform4fv() to pass in the value. Do I need to use a different data type, or a different method to pass in the values? The value goes through fine when I test on 32-bit.
CGFloat brushColor[4];
brushColor[0] = 0.8;
brushColor[1] = 0.1;
brushColor[2] = 0.1;
brushColor[3] = 0.3;
glUniform4fv(program[PROGRAM_POINT].uniform[UNIFORM_VERTEX_COLOR], 1, brushColor);
(some of you may notice this is from the GLPaint demo...)
thanks,
austin
CGFloat is a variably sized typedef: on a 32-bit build environment it is single-precision, on 64-bit it is double-precision. Normally this would not be a huge issue, but you are using glUniform4fv, which takes a GLfloat *.

OpenGL ES 2.0 Specification - Basic GL Operation - p. 12

OpenGL stipulates that GLfloat is always a single-precision floating point value, and the compiler can handle the demotion from double-precision to single-precision when you use the non-pointer version of this function, because each argument is converted at the call site. When you pass a pointer, no such conversion occurs: OpenGL expects an array of single-precision floats, but on a 64-bit build you hand it an array of double-precision floats, so the driver reinterprets the raw bytes and the shader sees garbage (your -1.58819e-23).
What you need to do is stop using CGFloat here. Instead, use GLfloat. The OpenGL typedefs are provided precisely so this sort of thing never happens.
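Applied to the snippet in your question, that just means declaring the array as GLfloat; the glUniform4fv call itself stays the same:

GLfloat brushColor[4];   // always single-precision, which is what glUniform4fv expects
brushColor[0] = 0.8f;
brushColor[1] = 0.1f;
brushColor[2] = 0.1f;
brushColor[3] = 0.3f;
glUniform4fv(program[PROGRAM_POINT].uniform[UNIFORM_VERTEX_COLOR], 1, brushColor);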