How do I convert an integer to binary in JavaScript?

Posted 2019-01-02 16:54

I’d like to see integers, positive or negative, in binary.

Rather like this question, but for JavaScript.

9 Answers
素衣白纱
#2 · 2019-01-02 17:02

A simple way is just...

Number(42).toString(2);

// "101010"
忆尘夕之涩
#3 · 2019-01-02 17:03

Answer:

function dec2bin(dec){
    return (dec >>> 0).toString(2);
}

dec2bin(1);    // 1
dec2bin(-1);   // 11111111111111111111111111111111
dec2bin(256);  // 100000000
dec2bin(-256); // 11111111111111111111111100000000

You can use the Number.prototype.toString(2) method, but it has some problems when representing negative numbers. For example, (-1).toString(2) outputs "-1".

To fix this issue, you can use the unsigned right shift bitwise operator (>>>) to coerce your number to an unsigned integer.

If you run (-1 >>> 0).toString(2) you will shift your number 0 bits to the right, which doesn't change the number itself but it will be represented as an unsigned integer. The code above will output "11111111111111111111111111111111" correctly.

This question has further explanation.

-3 >>> 0 (right logical shift) coerces its arguments to unsigned integers, which is why you get the 32-bit two's complement representation of -3.


Note 1: this answer expects a Number as argument, so convert it accordingly.

Note 2: the result is a string without leading zeros, so apply padding as you need.
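For example, here is a minimal padding sketch (my own illustration, not part of this answer; the helper name dec2bin32 is made up), using String.prototype.padStart to force a fixed 32-character width:

function dec2bin32(dec) {
  // coerce to an unsigned 32-bit integer, then zero-pad to 32 characters
  return (dec >>> 0).toString(2).padStart(32, '0');
}

dec2bin32(5);  // "00000000000000000000000000000101"
dec2bin32(-5); // "11111111111111111111111111111011"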

唯独是你
#4 · 2019-01-02 17:03

This is the solution. It's quite simple, as a matter of fact:

function binaries(num1) {
  var str = num1.toString(2)
  console.log('The binary form of ' + num1 + ' is: ' + str)
}

binaries(3)  // The binary form of 3 is: 11

        /*
         According to MDN, Number.prototype.toString() overrides
         Object.prototype.toString() with the useful distinction that you can
         pass in a single integer argument. This argument is an optional radix,
         numbers 2 to 36 allowed. So by passing in 2 we get a string
         representation of the number in binary; for the base-10 number 100,
         for example, that would be 1100100.
        */
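As a quick sketch of that radix argument in action (my own illustration, not from the original answer):

var n = 100;
n.toString(2);   // "1100100"  (binary)
n.toString(8);   // "144"      (octal)
n.toString(16);  // "64"       (hexadecimal)
n.toString(36);  // "2s"       (largest allowed radix)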
明月照影归
#5 · 2019-01-02 17:07

This answer addresses integers with absolute values between 2**31 and Number.MAX_SAFE_INTEGER (2**53-1). The current solutions only address signed integers within 32 bits; this solution outputs in 64-bit two's complement form using float64ToInt64Binary():

// IIFE to scope internal variables
var float64ToInt64Binary = (function () {
  // create union
  var flt64 = new Float64Array(1)
  var uint16 = new Uint16Array(flt64.buffer)
  // 2**53-1
  var MAX_SAFE = 9007199254740991
  // 2**31
  var MAX_INT32 = 2147483648

  function uint16ToBinary() {
    var bin64 = ''

    // generate padded binary string a word at a time
    for (var word = 0; word < 4; word++) {
      bin64 = uint16[word].toString(2).padStart(16, 0) + bin64
    }

    return bin64
  }

  return function float64ToInt64Binary(number) {
    // NaN would pass through Math.abs(number) > MAX_SAFE
    if (!(Math.abs(number) <= MAX_SAFE)) {
      throw new RangeError('Absolute value must be less than 2**53')
    }

    var sign = number < 0 ? 1 : 0

    // shortcut using other answer for sufficiently small range
    if (Math.abs(number) <= MAX_INT32) {
      return (number >>> 0).toString(2).padStart(64, sign)
    }

    // little endian byte ordering
    flt64[0] = number

    // subtract bias from exponent bits
    var exponent = ((uint16[3] & 0x7FF0) >> 4) - 1022

    // encode implicit leading bit of mantissa
    uint16[3] |= 0x10
    // clear exponent and sign bit
    uint16[3] &= 0x1F

    // check sign bit
    if (sign === 1) {
      // apply two's complement
      uint16[0] ^= 0xFFFF
      uint16[1] ^= 0xFFFF
      uint16[2] ^= 0xFFFF
      uint16[3] ^= 0xFFFF
      // propagate carry bit
      for (var word = 0; word < 3 && uint16[word] === 0xFFFF; word++) {
        // apply integer overflow
        uint16[word] = 0
      }

      // complete increment
      uint16[word]++
    }

    // only keep integer part of mantissa
    var bin64 = uint16ToBinary().substr(11, Math.max(exponent, 0))
    // sign-extend binary string
    return bin64.padStart(64, sign)
  }
})()

console.log('8')
console.log(float64ToInt64Binary(8))
console.log('-8')
console.log(float64ToInt64Binary(-8))
console.log('2**33-1')
console.log(float64ToInt64Binary(2**33-1))
console.log('-(2**33-1)')
console.log(float64ToInt64Binary(-(2**33-1)))
console.log('2**53-1')
console.log(float64ToInt64Binary(2**53-1))
console.log('-(2**53-1)')
console.log(float64ToInt64Binary(-(2**53-1)))
console.log('2**52')
console.log(float64ToInt64Binary(2**52))
console.log('-(2**52)')
console.log(float64ToInt64Binary(-(2**52)))
console.log('2**52+1')
console.log(float64ToInt64Binary(2**52+1))
console.log('-(2**52+1)')
console.log(float64ToInt64Binary(-(2**52+1)))

This answer heavily deals with the IEEE-754 Double-precision floating-point format, illustrated here:

IEEE-754 Double-precision floating-point format

   seee eeee eeee ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff
   ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
   [    uint16[3]    ] [    uint16[2]    ] [    uint16[1]    ] [    uint16[0]    ]
   [                                   flt64[0]                                  ]

   little endian byte ordering

   s = sign = uint16[3] >> 15
   e = exponent = (uint16[3] & 0x7FF0) >> 4
   f = fraction

The solution works by creating a union between a 64-bit floating point number and an unsigned 16-bit integer array in little-endian byte ordering. After validating the integer input range, it casts the input to a double-precision floating point number on the buffer, then uses the union to gain bit access to the value and calculates the binary string based on the unbiased binary exponent and the fraction bits.
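As a minimal sketch of that union idea (an illustration of the mechanism only, not part of the answer's code; like the answer itself, it assumes a little-endian platform), reading the sign and unbiased exponent of a double:

var f64 = new Float64Array(1)
var u16 = new Uint16Array(f64.buffer)

f64[0] = -8
var sign = u16[3] >> 15                         // 1, i.e. negative
var exponent = ((u16[3] & 0x7FF0) >> 4) - 1023  // 3, since -8 = -1.0 * 2**3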

The solution is implemented in pure ECMAScript 5 except for the use of String#padStart(), which has an available polyfill here.

回忆,回不去的记忆
#6 · 2019-01-02 17:10

Note: the basic (x>>>0).toString(2) has a slight issue when x is positive. There is example code at the end of this answer that corrects that problem while still using >>>.

(-3>>>0).toString(2);

prints -3 in 2s complement.

11111111111111111111111111111101

A working example

C:\>type n1.js
console.log(   (-3 >>> 0).toString(2)    );
C:\>
C:\>node n1.js
11111111111111111111111111111101

C:\>

This in the URL bar is another quick proof

javascript:alert((-3>>>0).toString(2))

Note: the result is very slightly flawed in that it always starts with a 1, which is fine for negative numbers. For positive numbers you should prepend a 0 so that the result really is two's complement. So (8>>>0).toString(2) produces 1000, which isn't really 8 in two's complement, but prepending that 0, making it 01000, gives the correct two's complement for 8. In proper two's complement, any bit string starting with 0 is >= 0, and any bit string starting with 1 is negative.

e.g. this gets round that problem

// or x = -5, whatever number you want to view in binary
var x = 5;
var prepend = x > 0 ? "0" : "";
alert(prepend + (x >>> 0).toString(2));

The other working solutions are the one from Annan (though Annan's explanations and definitions contain errors, his code produces the right output) and the one from Patrick.

Anybody who isn't sure why positive numbers start with 0 and negative numbers start with 1 in two's complement can check this SO Q&A on the subject: What is "2's Complement"?

永恒的永恒
#7 · 2019-01-02 17:18

The "binary" in "convert to binary" can refer to three main things: the positional number system, the binary representation in memory, or 32-bit bit strings. (For 64-bit bit strings, see Patrick Roberts' answer.)

1. Number System

(123456).toString(2) will convert numbers to the base-2 positional numeral system. In this system negative numbers are written with minus signs, just like in decimal.

2. Internal Representation

The internal representation of numbers is 64-bit floating point, and some limitations are discussed in this answer. There is no easy way to create a bit-string representation of this in JavaScript, nor to access specific bits.

3. Masks & Bitwise Operators

MDN has a good overview of how bitwise operators work. Importantly:

Bitwise operators treat their operands as a sequence of 32 bits (zeros and ones)

Before the operations are applied, the 64-bit floating point numbers are cast to 32-bit signed integers; afterwards they are converted back.
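A small sketch of that cast (my own illustration): values outside the signed 32-bit range wrap around, and fractions are dropped, as soon as a bitwise operator touches them.

(2 ** 31) | 0      // -2147483648  (wraps to the negative end of int32)
(2 ** 32 + 5) | 0  // 5            (bits above the 32nd are discarded)
(1.9) | 0          // 1            (fractional part is dropped)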

Here is the MDN example code for converting numbers into 32-bit strings.

function createBinaryString (nMask) {
  // nMask must be between -2147483648 and 2147483647
  for (var nFlag = 0, nShifted = nMask, sMask = ""; nFlag < 32;
       nFlag++, sMask += String(nShifted >>> 31), nShifted <<= 1);
  return sMask;
}

createBinaryString(0) //-> "00000000000000000000000000000000"
createBinaryString(123) //-> "00000000000000000000000001111011"
createBinaryString(-1) //-> "11111111111111111111111111111111"
createBinaryString(-1123456) //-> "11111111111011101101101110000000"
createBinaryString(0x7fffffff) //-> "01111111111111111111111111111111"
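As a quick check of those outputs (my own illustration, not from the MDN page): parseInt can read such a string back as an unsigned value, and | 0 reinterprets the 32 bits as a signed integer.

parseInt(createBinaryString(-1123456), 2) | 0  // -1123456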