Why does this line
System.Text.Encoding.UTF8.GetBytes("ABCD±ABCD")
give me back 10 bytes instead of 9, even though ± is char(177)?
Is there a .NET function or encoding that will translate this string into 9 bytes?
You should use the Windows-1251 encoding to get ± as 177:
var bytes = System.Text.Encoding.GetEncoding("Windows-1251").GetBytes("ABCD±ABCD");
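On .NET Core / .NET 5+ the legacy code pages are not available by default, so here is a minimal sketch of the same idea (assuming the System.Text.Encoding.CodePages package is referenced; on .NET Framework the RegisterProvider call is unnecessary):

using System;
using System.Text;

class Program
{
    static void Main()
    {
        // Needed on .NET Core / .NET 5+ to make legacy code pages such as Windows-1251 available.
        Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

        var bytes = Encoding.GetEncoding("Windows-1251").GetBytes("ABCD±ABCD");
        Console.WriteLine(bytes.Length); // 9  - one byte per character
        Console.WriteLine(bytes[4]);     // 177 - the single-byte code for ±
    }
}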
Although ± is char(177)
And the UTF-8 encoding for that character is 0xC2 0xB1, which is two bytes. Basically, every code point >= 128 takes multiple bytes in UTF-8, and the number of bytes depends on the magnitude of the code point.
That data is 10 bytes when encoded with UTF-8. The error here is your expectation that it should take 9.
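To see where the extra byte comes from, here is a quick sketch (a plain console program) that dumps the encoded bytes:

using System;
using System.Text;

class Demo
{
    static void Main()
    {
        byte[] utf8 = Encoding.UTF8.GetBytes("ABCD±ABCD");
        Console.WriteLine(utf8.Length);                 // 10
        Console.WriteLine(BitConverter.ToString(utf8)); // 41-42-43-44-C2-B1-41-42-43-44
    }
}

The eight ASCII letters each take one byte, while ± (U+00B1) takes the two bytes 0xC2 0xB1, giving 10 in total.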
± falls outside the ASCII range, so it is represented by 2 bytes.
This video explains UTF-8 encoding nicely: http://www.youtube.com/watch?v=MijmeoH9LT4. After watching it you will understand why the string takes more bytes than you thought.