Is there an easy way to convert a Java string to a true UTF-8 byte array in JNI code?
Unfortunately GetStringUTFChars() almost does what's required, but not quite: it returns a "modified" UTF-8 byte sequence. The main difference is that modified UTF-8 doesn't contain any embedded null bytes (so you can treat it as an ANSI C null-terminated string), but another difference seems to be how Unicode supplementary characters such as emoji are treated.
A character such as U+1F604 "SMILING FACE WITH OPEN MOUTH AND SMILING EYES" is stored as a surrogate pair (the two UTF-16 code units 0xD83D 0xDE04) and has the 4-byte UTF-8 encoding F0 9F 98 84, and that is the byte sequence I get if I convert the string to UTF-8 in Java:
char[] c = Character.toChars(0x1F604);
String s = new String(c);
System.out.println(s);
for (int i=0; i<c.length; ++i)
System.out.println("c["+i+"] = 0x"+Integer.toHexString(c[i]));
byte[] b = s.getBytes("UTF-8");
for (int i=0; i<b.length; ++i)
System.out.println("b["+i+"] = 0x"+Integer.toHexString(b[i] & 0xFF));
The code above prints the following:

😄
c[0] = 0xd83d
c[1] = 0xde04
b[0] = 0xf0
b[1] = 0x9f
b[2] = 0x98
b[3] = 0x84
This is clearly explained in the Java documentation (see the JNI Functions specification, "Modified UTF-8").
Since U+1F604 is a supplementary character, and modified UTF-8 does not use UTF-8's 4-byte encoding form, U+1F604 is represented in modified UTF-8 by separately encoding each half of the UTF-16 surrogate pair U+D83D U+DE04, using 3 bytes per surrogate, thus 6 bytes in total.
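For example, a tiny JNI function along these lines (the function and parameter names are illustrative, not from the question) would print the 6-byte modified UTF-8 sequence ed a0 bd ed b8 84 for a string containing U+1F604, rather than the real 4-byte UTF-8 sequence f0 9f 98 84:

#include <jni.h>
#include <stdio.h>

/* Illustrative sketch: dump the bytes that GetStringUTFChars() returns.
 * For a string containing U+1F604 this prints ed a0 bd ed b8 84,
 * the modified UTF-8 (surrogate pair) form, not f0 9f 98 84. */
JNIEXPORT void JNICALL
Java_Example_dumpModifiedUtf8(JNIEnv *env, jclass cls, jstring str)
{
    const char *mutf8 = (*env)->GetStringUTFChars(env, str, NULL);
    if (mutf8 == NULL) {
        return; /* OutOfMemoryError has already been thrown */
    }
    for (const unsigned char *p = (const unsigned char *)mutf8; *p != '\0'; ++p) {
        printf("%02x ", *p);
    }
    printf("\n");
    (*env)->ReleaseStringUTFChars(env, str, mutf8);
}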
So, to answer your question, you can either:
1. Use GetStringChars() to get the original UTF-16 code units, and then create your own UTF-8 byte array from them. The conversion from UTF-16 to UTF-8 is a very simple algorithm to implement by hand, or you can use any pre-existing implementation provided by your platform or 3rd-party libraries (a hand-rolled sketch is shown after this list).

2. Have your JNI code call back into Java to invoke the String.getBytes(String charsetName) method to encode the jstring object to a UTF-8 byte array, e.g. as in the call-back sketch after this list.

Also note that Java's Modified UTF-8 is not exactly the same as CESU-8: both encode supplementary characters as 6-byte surrogate-pair sequences, but modified UTF-8 additionally encodes the null character U+0000 as the two-byte sequence 0xC0 0x80, whereas CESU-8 uses a single 0x00 byte.
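For the first approach, a minimal hand-rolled conversion might look like the following sketch (the function name, memory handling and lone-surrogate policy are illustrative assumptions, not part of the original answer); it reads the UTF-16 code units via GetStringChars(), combines surrogate pairs into supplementary code points, and emits standard UTF-8:

#include <jni.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative sketch: convert a jstring to real (standard) UTF-8 by hand.
 * Returns a malloc'd, null-terminated buffer the caller must free(),
 * or NULL on failure. Unpaired surrogates are replaced with U+FFFD.
 * Note: if the Java string may contain U+0000, return the length separately
 * instead of relying on null termination. */
char *jstring_to_utf8(JNIEnv *env, jstring str)
{
    jsize len = (*env)->GetStringLength(env, str);
    const jchar *u16 = (*env)->GetStringChars(env, str, NULL);
    if (u16 == NULL) return NULL;

    /* Worst case is 4 output bytes per UTF-16 code unit, plus the terminator. */
    char *out = malloc((size_t)len * 4 + 1);
    if (out == NULL) {
        (*env)->ReleaseStringChars(env, str, u16);
        return NULL;
    }

    size_t o = 0;
    for (jsize i = 0; i < len; ++i) {
        uint32_t cp = u16[i];
        if (cp >= 0xD800 && cp <= 0xDBFF && i + 1 < len &&
            u16[i + 1] >= 0xDC00 && u16[i + 1] <= 0xDFFF) {
            /* High + low surrogate: combine into a supplementary code point. */
            cp = 0x10000 + ((cp - 0xD800) << 10) + (u16[i + 1] - 0xDC00);
            ++i;
        } else if (cp >= 0xD800 && cp <= 0xDFFF) {
            cp = 0xFFFD; /* unpaired surrogate: substitute the replacement character */
        }

        if (cp < 0x80) {
            out[o++] = (char)cp;
        } else if (cp < 0x800) {
            out[o++] = (char)(0xC0 | (cp >> 6));
            out[o++] = (char)(0x80 | (cp & 0x3F));
        } else if (cp < 0x10000) {
            out[o++] = (char)(0xE0 | (cp >> 12));
            out[o++] = (char)(0x80 | ((cp >> 6) & 0x3F));
            out[o++] = (char)(0x80 | (cp & 0x3F));
        } else {
            out[o++] = (char)(0xF0 | (cp >> 18));
            out[o++] = (char)(0x80 | ((cp >> 12) & 0x3F));
            out[o++] = (char)(0x80 | ((cp >> 6) & 0x3F));
            out[o++] = (char)(0x80 | (cp & 0x3F));
        }
    }
    out[o] = '\0';
    (*env)->ReleaseStringChars(env, str, u16);
    return out;
}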
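For the second approach, a minimal sketch of calling back into String.getBytes(String charsetName) from JNI might look like this (the wrapper function name is an illustrative assumption, and error handling is kept to a minimum):

#include <jni.h>

/* Illustrative sketch: let Java itself produce the real UTF-8 encoding by
 * invoking str.getBytes("UTF-8"). Returns a local reference to the resulting
 * byte[], or NULL if an exception is pending. */
jbyteArray jstring_to_utf8_bytes(JNIEnv *env, jstring str)
{
    jclass string_cls = (*env)->FindClass(env, "java/lang/String");
    if (string_cls == NULL) return NULL;

    jmethodID get_bytes = (*env)->GetMethodID(env, string_cls, "getBytes",
                                              "(Ljava/lang/String;)[B");
    if (get_bytes == NULL) return NULL;

    jstring charset = (*env)->NewStringUTF(env, "UTF-8");
    if (charset == NULL) return NULL;

    /* Equivalent to the Java expression: str.getBytes("UTF-8") */
    jbyteArray utf8 = (jbyteArray)(*env)->CallObjectMethod(env, str, get_bytes, charset);

    (*env)->DeleteLocalRef(env, charset);
    (*env)->DeleteLocalRef(env, string_cls);
    return utf8; /* may be NULL if getBytes() threw; check with ExceptionCheck() */
}

The returned byte[] can then be copied into a native buffer with GetByteArrayRegion() or accessed in place via GetByteArrayElements().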