// following code prints out the letters aA bB cC dD eE ...
class UpCase {
  public static void main(String args[]) {
    char ch;

    for (int i = 0; i < 10; i++) {
      ch = (char) ('a' + i);
      System.out.print(ch);

      ch = (char) ((int) ch & 65503);
      System.out.print(ch + " ");
    }
  }
}
Still learning Java and struggling to understand bitwise operations. Both programs work, but I don't understand the binary reasoning behind them. Why is ch cast to (int) and back to (char), and what does that constant do that lets the program print both letter casings?
// following code displays the bits within a byte
class Showbits {
  public static void main(String args[]) {
    int t;
    byte val;

    val = 123;
    for (t = 128; t > 0; t = t / 2) {
      if ((val & t) != 0)
        System.out.print("1 ");
      else
        System.out.print("0 ");
    }
  }
}
//output is 0 1 1 1 1 0 1 1
For this code's output, what is the step-by-step breakdown? If 123 is 01111011, and 128 (as well as 64 and 32) is 10000000, shouldn't the output be 00000000, since & turns anything AND-ed with 0 into 0? Really confused.
UpCase

The decimal number 65503, represented as a 32-bit signed integer, is

00000000 00000000 11111111 11011111

in binary. The ASCII letter a, represented as an 8-bit char, is

01100001

in binary (97 in decimal). Casting the char to a 32-bit signed integer gives

00000000 00000000 00000000 01100001

AND-ing the two integers together gives

00000000 00000000 00000000 01000001

which, cast back to char, is 01000001, i.e. decimal 65, the ASCII letter A
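As a quick check of the arithmetic above, here is a minimal standalone sketch (the class name MaskDemo is mine, just for illustration; 65503 is the 16-bit value with every bit set except bit 5, i.e. the complement of 32):

```java
// AND-ing a lowercase ASCII letter with 65503 (binary 1111111111011111)
// clears bit 5 (the 32's bit) and yields the uppercase letter.
class MaskDemo {
    public static void main(String[] args) {
        char ch = 'a';                        // 97  = 01100001
        char up = (char) (ch & 65503);        // 65  = 01000001
        System.out.println(ch + " -> " + up); // prints "a -> A"
    }
}
```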
.

Showbits

No idea why you think that 128, 64 and 32 are all 10000000. They obviously can't share one bit pattern, since they are, well, different numbers. 10000000 is 128 in decimal.

What the for loop does is start at 128 and go through every consecutive next-smallest power of 2: 64, 32, 16, 8, 4, 2 and 1. These are the following binary numbers:

10000000 (128)
01000000 (64)
00100000 (32)
00010000 (16)
00001000 (8)
00000100 (4)
00000010 (2)
00000001 (1)
So in each loop iteration it ANDs the given value together with one of these numbers, printing "0 " when the result is 0 and "1 " otherwise.

Example: val is 123, which is 01111011. So the loop will look like this:

01111011 & 10000000 = 00000000 -> prints "0 "
01111011 & 01000000 = 01000000 -> prints "1 "
01111011 & 00100000 = 00100000 -> prints "1 "
01111011 & 00010000 = 00010000 -> prints "1 "
01111011 & 00001000 = 00001000 -> prints "1 "
01111011 & 00000100 = 00000000 -> prints "0 "
01111011 & 00000010 = 00000010 -> prints "1 "
01111011 & 00000001 = 00000001 -> prints "1 "

Thus the final output is "0 1 1 1 1 0 1 1"
, which is exactly right.

Second piece of code (Showbits):
The code is actually converting decimal to binary. The algorithm uses some bit magic, mainly the AND (&) operator.

Consider the numbers 123 = 01111011 and 128 = 10000000. When we AND them together, we get either 0 or a non-zero number, depending on whether the single 1 bit in 128 lines up with a 1 or a 0 in 123.

In this case the answer is 0, so the first printed bit is 0. Moving forward, we take 64 = 01000000 and AND it with 123; notice the shift of the 1 rightwards. AND-ing with 123 produces a non-zero number this time, so the second printed bit is 1. This procedure is repeated for 32, 16, 8, 4, 2 and 1.
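That walk-through can be traced mechanically; a small sketch that prints each mask, the AND result, and the bit the program would output (the class name BitTrace is mine, just for illustration):

```java
// Traces Showbits: for each power-of-two mask, show the AND result
// and the "0"/"1" the original program prints for it.
class BitTrace {
    public static void main(String[] args) {
        byte val = 123; // 01111011 in binary
        for (int t = 128; t > 0; t = t / 2) {
            int result = val & t; // non-zero exactly when val has t's bit set
            System.out.println(Integer.toBinaryString(t)
                    + " & 1111011 = " + result
                    + " -> " + ((result != 0) ? "1" : "0"));
        }
    }
}
```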
First piece of code (UpCase):

Here 65503 is the negation of 32: as a 16-bit value, 65503 = 65535 - 32 = 1111111111011111, i.e. every bit set except bit 5 (the 32's bit).

Essentially, we subtract 32 from the lowercase letter by AND-ing with the negation of 32: the AND clears bit 5, and since every lowercase ASCII letter has bit 5 set, clearing it is the same as subtracting 32. As we know, subtracting 32 from a lowercase ASCII character converts it to uppercase.
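The claim that AND-ing with the negation of 32 equals subtracting 32 holds for every lowercase ASCII letter, and is easy to verify; a minimal sketch (the class name CaseCheck is mine, just for illustration):

```java
// For each lowercase ASCII letter, compare clearing bit 5 (AND with the
// 16-bit complement of 32, i.e. 65503) against plain subtraction of 32.
class CaseCheck {
    public static void main(String[] args) {
        for (char ch = 'a'; ch <= 'z'; ch++) {
            char viaMask = (char) (ch & 65503); // clear bit 5
            char viaSub  = (char) (ch - 32);    // subtract 32
            System.out.println(ch + " -> " + viaMask
                    + (viaMask == viaSub ? "" : "  (MISMATCH!)"));
        }
    }
}
```

Both columns agree for all 26 letters, because 'a' through 'z' (97 to 122) all have the 32's bit set.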