Quoting Wikipedia:
...these padding characters must then be discarded when decoding but still allow the calculation of the effective length of the unencoded text, when its input binary length would not be a multiple of 3 bytes. ...
But the length of the raw data can easily be calculated even if the padding characters are stripped:
| Raw Size | Encoded Total Size | Encoded Real Size | Encoded Padding Size |
|----------|--------------------|-------------------|----------------------|
| 1        | 4                  | 2                 | 2                    |
| 2        | 4                  | 3                 | 1                    |
| 3        | 4                  | 4                 | 0                    |
| 4        | 8                  | 6                 | 2                    |
| 5        | 8                  | 7                 | 1                    |
| 6        | 8                  | 8                 | 0                    |
| 7        | 12                 | 10                | 2                    |
| 8        | 12                 | 11                | 1                    |
| 9        | 12                 | 12                | 0                    |
| 10       | 16                 | 14                | 2                    |
| ...      | ...                | ...               | ...                  |
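For reference, here is a minimal Python sketch that reproduces the rows of this table. It uses only the standard-library `base64` module; the choice of zero bytes as sample input is arbitrary, since only the length matters:

```python
import base64

# For each raw size, encode that many bytes and measure the padded
# length, the length with '=' stripped, and the number of '=' characters.
for raw_size in range(1, 11):
    encoded = base64.b64encode(b"\x00" * raw_size)  # content is irrelevant to length
    padding_size = encoded.count(b"=")
    total_size = len(encoded)
    real_size = total_size - padding_size
    print(f"{raw_size:8} | {total_size:10} | {real_size:9} | {padding_size:12}")
```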
So, given the real encoded size (third column), you can always compute what the padded size would be:

PaddedSize = 4 * ceil(RealSize / 4)
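In other words, a decoder could restore the padding itself before decoding. A minimal sketch of that idea, again using Python's standard-library `base64` module (the function name `pad_and_decode` is my own, purely illustrative):

```python
import base64
import math

def pad_and_decode(stripped: str) -> bytes:
    """Recover the original bytes from Base64 whose '=' padding was stripped.

    PaddedSize = 4 * ceil(RealSize / 4), so we simply re-append the
    missing '=' characters before decoding.
    """
    real_size = len(stripped)
    padded_size = 4 * math.ceil(real_size / 4)
    return base64.b64decode(stripped + "=" * (padded_size - real_size))

# Example: b"hi" encodes to "aGk=", which stripped of padding is "aGk".
assert pad_and_decode("aGk") == b"hi"
```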
So, in theory, there is no need for padding; the algorithm could have handled it. Considering that Base64 is a popular industry standard used in many applications and devices, all of these would have benefited from the reduced encoded size. So the question is: why is padding used in Base64 encoding?