Java bitwise code purpose: the & operator

Published 2020-05-06 14:29

// The following code prints the letters aA bB cC dD eE ...

class UpCase {
  public static void main(String args[]) {
    char ch;

    for(int i = 0; i < 10; i++) {
      ch = (char)('a' + i);
      System.out.print(ch);

      // AND with a mask to change the letter's case
      ch = (char)((int) ch & 66503);

      System.out.print(ch + " ");
    }
  }
}

Still learning Java, and I'm struggling to understand bitwise operations. Both programs work, but I don't understand the binary reasoning behind them. Why is ch cast to (int) and then back to char, and what does 66503 do that lets the code print the letter in a different case?

// The following code displays the bits within a byte.

class Showbits {
  public static void main(String args[]) {
    int t;
    byte val;

    val = 123;
    // test each bit position, from the most significant (128) down to 1
    for(t = 128; t > 0; t = t/2) {
      if((val & t) != 0)
        System.out.print("1 ");
      else
        System.out.print("0 ");
    }
  }
}
 //output is 0 1 1 1 1 0 1 1

For this code's output, what is the step-by-step breakdown? If 123 is 01111011, and 128 (as well as 64 and 32) is 10000000, shouldn't the output be 00000000, since & turns anything ANDed with a 0 into a 0? Really confused.

2 answers
女痞
#2 · 2020-05-06 14:55

UpCase

The decimal number 66503 represented by a 32 bit signed integer is 00000000 00000001 00000011 11000111 in binary.

The ASCII letter a, as an 8-bit value, is 01100001 in binary (97 in decimal).

Casting the char to a 32 bit signed integer gives 00000000 00000000 00000000 01100001.

&ing the two integers together gives:

00000000 00000000 00000000 01100001
00000000 00000001 00000011 11000111
===================================
00000000 00000000 00000000 01000001

which, cast back to char, gives 01000001: decimal 65, the ASCII letter A.
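The arithmetic above can be checked directly in Java. A minimal sketch (the class name MaskDemo is arbitrary):

```java
// Verify that ANDing 'a' with the mask 66503 clears bit 5 and yields 'A'.
class MaskDemo {
    public static void main(String[] args) {
        int mask = 66503;               // 00000000 00000001 00000011 11000111
        char ch = 'a';                  // 97 = 01100001
        char result = (char)(ch & mask);
        System.out.println(result);     // prints A
    }
}
```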

Showbits

No idea why you think that 128, 64 and 32 are all 10000000. They obviously can't be the same number, since they are, well, different numbers. 10000000 is 128 in decimal.

What the for loop does is start at 128 and go through every consecutive next smallest power of 2: 64, 32, 16, 8, 4, 2 and 1.

These are the following binary numbers:

128: 10000000
 64: 01000000
 32: 00100000
 16: 00010000
  8: 00001000
  4: 00000100
  2: 00000010
  1: 00000001

So on each iteration the loop ANDs the given value with one of these numbers, printing "0 " when the result is 0, and "1 " otherwise.

Example:

val is 123, which is 01111011.

So the loop will look like this:

128: 10000000 & 01111011 = 00000000 -> prints "0 "
 64: 01000000 & 01111011 = 01000000 -> prints "1 "
 32: 00100000 & 01111011 = 00100000 -> prints "1 "
 16: 00010000 & 01111011 = 00010000 -> prints "1 "
  8: 00001000 & 01111011 = 00001000 -> prints "1 "
  4: 00000100 & 01111011 = 00000000 -> prints "0 "
  2: 00000010 & 01111011 = 00000010 -> prints "1 "
  1: 00000001 & 01111011 = 00000001 -> prints "1 "

Thus the final output is "0 1 1 1 1 0 1 1", which is exactly right.
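You can cross-check that result against Java's built-in binary formatting. A small sketch (padding trick and class name BitsCheck are my own, not part of the original code):

```java
// Cross-check Showbits: print the low 8 bits of 123 as a binary string.
class BitsCheck {
    public static void main(String[] args) {
        int val = 123;
        // Set bit 8 so the string is always 9 characters, then drop the leading 1
        // to get a zero-padded 8-bit representation.
        String bits = Integer.toBinaryString((val & 0xFF) | 0x100).substring(1);
        System.out.println(bits);   // prints 01111011
    }
}
```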

放我归山
#3 · 2020-05-06 15:09

Second piece of code(Showbits):

The code is actually converting decimal to binary. The algorithm uses some bit magic, mainly the AND(&) operator.

Consider the number 123 = 01111011 and 128 = 10000000. When we AND them together, we get 0 or a non-zero number depending on whether the single 1 bit in 128 lines up with a 1 or a 0 in 123.

  10000000
& 01111011
----------
  00000000

In this case, the answer is 0, so the first bit printed is 0. Moving forward, we take 64 = 01000000 and AND it with 123. Notice how the 1 bit shifts rightwards.

  01000000
& 01111011
----------
  01000000

AND-ing with 123 produces a non-zero number this time, and the second bit is 1. This procedure is repeated.

First piece of code(UpCase):

Here 65503 is the 16-bit negation of 32. (Note that the question's code uses 66503; that mask happens to clear bit 5 as well, but ~32 = 65503 is the value that works for every lowercase letter.)

 32 = 0000 0000 0010 0000
~32 = 1111 1111 1101 1111

Essentially, we subtract 32 from the lowercase letter by AND-ing with the negation of 32. As we know, subtracting 32 from a lowercase ASCII character converts it to uppercase.
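The same idea can be written with the ~ operator directly, which avoids hard-coding the mask. A minimal sketch (class name CaseDemo is arbitrary):

```java
// Convert lowercase ASCII letters to uppercase by clearing bit 5 (value 32).
class CaseDemo {
    public static void main(String[] args) {
        for (char ch = 'a'; ch <= 'z'; ch++) {
            // Every lowercase ASCII letter has bit 5 set; clearing it
            // is the same as subtracting 32.
            char upper = (char)(ch & ~32);
            System.out.print(upper);
        }
        System.out.println();   // prints ABCDEFGHIJKLMNOPQRSTUVWXYZ
    }
}
```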
