Unable to decrypt XOR-base64 text

Posted 2019-06-11 13:41

Question:

I am using the code below to encrypt and decrypt data. Now I want to encrypt the data in Node.js and decrypt it in Go, but I am not able to get it working in Go.

var B64XorCipher = {
  encode: function(key, data) {
    return Buffer.from(xorStrings(key, data), 'utf8').toString('base64');
  },
  decode: function(key, data) {
    data = Buffer.from(data, 'base64').toString('utf8');
    return xorStrings(key, data);
  }
};

function xorStrings(key,input){
  var output='';
  for(var i=0;i<input.length;i++){
    var c = input.charCodeAt(i);
    var k = key.charCodeAt(i%key.length);
    output += String.fromCharCode(c ^ k);
  }
  return output;
}

In Go I am trying to decode it as below, but I am not able to get the same result.

decoded, err := base64.StdEncoding.DecodeString(actualInput)
if err != nil {
    fmt.Println(err)
    return
}
encryptedText := string(decoded)
fmt.Println(EncryptDecrypt(encryptedText, "XXXXXX"))

func EncryptDecrypt(input, key string) (output string) {
    for i := range input {
        output += string(input[i] ^ key[i%len(key)])
    }

    return output
}

Can someone help me resolve it?

Answer 1:

You should use DecodeRuneInString instead of just slicing the string into bytes.

Solution in playground: https://play.golang.org/p/qi_6S1J_dZU

package main

import (
    "fmt"
    "unicode/utf8"
)

func main() {
    k := "1234fd23434"
    input := "The 我characterode我 113 is equal to q"
    fmt.Println(EncryptDecrypt(input, k))

    // expect: "eZV扷ZRFRWEWA[戣[@GRX@^B"
}

func EncryptDecrypt(input, key string) (output string) {
    keylen := len(key)
    count := len(input)
    i := 0
    j := 0
    for i < count {
        // Decode one full character (rune) at a time instead of one byte,
        // matching what charCodeAt sees on the JavaScript side.
        c, n := utf8.DecodeRuneInString(input[i:])
        i += n
        k, m := utf8.DecodeRuneInString(key[j:])
        j += m
        if j >= keylen {
            j = 0 // wrap around to the start of the key
        }

        output += string(c ^ k)
    }

    return output
}

Compared to your JS result:

function xorStrings(key,input){
  var output='';
  for(var i=0;i<input.length;i++){
    var c = input.charCodeAt(i);
    var k = key.charCodeAt(i%key.length);
    output += String.fromCharCode(c ^ k);
  }
  return output;
}

console.log(xorStrings('1234fd23434',"The 我characterode我 113 is equal to q"))
// expect: "eZV扷ZRFRWEWA[戣[@GRX@^B"

The test result is the same.

Here is why.

In Go, indexing a string (input[i]) gives you a single byte, but JavaScript's charCodeAt returns a character code (a UTF-16 code unit), not a byte. In UTF-8 a character may be 2 or 3 bytes long, so the two loops XOR different values, and that is why you got different output.

Test in playground https://play.golang.org/p/XawI9aR_HDh

package main

import (
    "fmt"
    "unicode/utf8"
)

var sentence = "The 我quick brown fox jumps over the lazy dog."

var index = 4

func main() {
    fmt.Println("slice of string...")
    fmt.Printf("The byte at %d is |%s|, |%s| is 3 bytes long.\n", index, sentence[index:index+1], sentence[index:index+3])

    fmt.Println("runes of string...")
    ru, _ := utf8.DecodeRuneInString(sentence[index:])
    i := int(ru)
    fmt.Printf("The character code at %d is|%s|%d|    \n", index, string(ru), i)
}

The output is

slice of string...
The byte at 4 is |�|, |我| is 3 bytes long.
runes of string...
The character code at 4 is|我|25105| 
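
Finally, to mirror the question's full pipeline (base64-decode, then XOR), the same rune-wise loop can be wrapped around the base64 step. Here is a minimal end-to-end sketch; the key "1234fd" and the ciphertext "WVdfWAk=" are made-up example values, not from the question:

package main

import (
    "encoding/base64"
    "fmt"
    "unicode/utf8"
)

// xorRunes XORs the code points of input and key, like the JS xorStrings
// (equivalent for characters in the Basic Multilingual Plane).
func xorRunes(key, input string) string {
    output := ""
    j := 0
    for i := 0; i < len(input); {
        c, n := utf8.DecodeRuneInString(input[i:])
        i += n
        k, m := utf8.DecodeRuneInString(key[j:])
        j += m
        if j >= len(key) {
            j = 0 // wrap around to the start of the key
        }
        output += string(c ^ k)
    }
    return output
}

// decode mirrors B64XorCipher.decode: base64-decode first, then XOR.
func decode(key, data string) (string, error) {
    raw, err := base64.StdEncoding.DecodeString(data)
    if err != nil {
        return "", err
    }
    return xorRunes(key, string(raw)), nil
}

func main() {
    // "WVdfWAk=" is "hello" run through B64XorCipher.encode with key "1234fd".
    plain, err := decode("1234fd", "WVdfWAk=")
    if err != nil {
        fmt.Println("invalid base64:", err)
        return
    }
    fmt.Println(plain) // hello
}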


Answer 2:

The charCodeAt() method returns an integer between 0 and 65535 representing the UTF-16 code unit at the given index.

var c = input.charCodeAt(i);
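
For illustration, you can reproduce the UTF-16 code units that charCodeAt sees using Go's standard unicode/utf16 package (the sample string here is arbitrary):

package main

import (
    "fmt"
    "unicode/utf16"
)

func main() {
    s := "The 我"
    // utf16.Encode yields the UTF-16 code units, which is exactly
    // the view that JavaScript's charCodeAt indexes into.
    fmt.Println(utf16.Encode([]rune(s))) // [84 104 101 32 25105]
}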

For statements with range clause

For a string value, the "range" clause iterates over the Unicode code points in the string starting at byte index 0. On successive iterations, the index value will be the index of the first byte of successive UTF-8-encoded code points in the string, and the second value, of type rune, will be the value of the corresponding code point. If the iteration encounters an invalid UTF-8 sequence, the second value will be 0xFFFD, the Unicode replacement character, and the next iteration will advance a single byte in the string.

for i := range input
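
To see what that means for the loop above, here is a small sketch: range yields the byte index where each rune starts, while input[i] still reads only a single byte.

package main

import "fmt"

func main() {
    input := "a我b"
    for i, r := range input {
        // i jumps 0 -> 1 -> 4 because 我 occupies bytes 1..3;
        // input[i] is just the first byte of that rune.
        fmt.Printf("index %d: byte %#x, rune %q (%d)\n", i, input[i], r, r)
    }
}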

UTF-16 (JavaScript) versus UTF-8 (Go): that is the mismatch.
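
As a final illustration, counting the same character three ways shows where the two languages disagree:

package main

import (
    "fmt"
    "unicode/utf16"
    "unicode/utf8"
)

func main() {
    s := "我"
    fmt.Println(len(s))                       // 3: UTF-8 bytes, the view of Go's s[i]
    fmt.Println(utf8.RuneCountInString(s))    // 1: Unicode code points (runes)
    fmt.Println(len(utf16.Encode([]rune(s)))) // 1: UTF-16 code units, charCodeAt's view
}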