I was wondering what the difference is between glColor3b() and glColor3ub(). It appears that glColor3b(255, 0, 0) does not set the color to red, but rather sets it to black. glColor3b(48, 160, 64) sets it to dark purple, not green. glColor3ub(), however, works as expected.
Additionally, the documentation for glColor3b() and glColor3ub() is exactly the same, except for the "u":
public static void glColor3(u)b(byte red,
                                byte green,
                                byte blue)
Does anybody know why this is?
glColor3b() takes signed byte parameters with a range from -128 to 127. glColor3ub() takes unsigned byte parameters with a range from 0 to 255. Passing a value greater than 127 to glColor3b() wraps it around into the negative range.

Of course 255 sets it to black: 255 is 0xFF (0b11111111), which is -1 as a two's-complement signed 8-bit number, and -1 is less than 0, which you would consider the absence of all color. Signed colors really do not make much sense outside of blending. In a nutshell, that is the difference between these two functions: one is signed and the other is unsigned.
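You can see the reinterpretation without touching OpenGL at all. A minimal standalone C sketch (assuming the usual two's-complement narrowing behavior):

#include <stdio.h>

int main(void)
{
    /* 255 and 160 do not fit in a signed 8-bit byte; the bit
       pattern is reinterpreted under two's complement. */
    signed char r = (signed char)255;  /* 0xFF -> -1  */
    signed char g = (signed char)160;  /* 0xA0 -> -96 */

    printf("255 as a signed byte: %d\n", r);  /* prints -1  */
    printf("160 as a signed byte: %d\n", g);  /* prints -96 */
    return 0;
}

Negative components are clamped to 0.0 when the color is used, which is why the red channel vanishes entirely.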
When you use the glColor3b (...) function (signed), your range is -128 to 127 (-128 maps to -1.0, 127 maps to 1.0, and 0 is the mid-point). When you use the glColor3ub (...) function (unsigned), the range is 0 to 255 (0 maps to 0.0 and 255 maps to 1.0).

No matter which function you use, unless it is glColor3f (...), they all do a fixed-point to floating-point conversion. During that normalization, the range of the integer data type is mapped directly onto -1.0 (signed) or 0.0 (unsigned) through 1.0. The u vs. non-u simply indicates that one of them is unsigned (and therefore has a larger positive range).
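A rough sketch of that normalization in C (these helpers are hypothetical, not part of the GL API, and the exact signed formula varies between GL spec revisions):

float norm_signed(signed char v)      { return v / 127.0f; }  /* -128..127 -> about -1.0..1.0 */
float norm_unsigned(unsigned char v)  { return v / 255.0f; }  /*    0..255 ->        0.0..1.0 */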
glColor3ub is the version for unsigned char in C/C++, an 8-bit integer without a sign. glColor3b is the version for char, a signed 8-bit integer. 255 = 0xFF is actually -1 when interpreted as a signed 8-bit integer; that is why you get a black screen, because of the two's-complement representation. Simply stick to the ub versions.
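For illustration (assuming the legacy fixed-function pipeline), all three of these calls set essentially the same full red:

glColor3ub(255, 0, 0);        /* unsigned byte: 255/255 -> 1.0 */
glColor3b(127, 0, 0);         /* signed byte:   127/127 -> 1.0 */
glColor3f(1.0f, 0.0f, 0.0f);  /* float: no conversion needed   */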