I use the following function to perform offline OCR with Tess-Two, the Android fork of Tesseract OCR:
private String startOCR(Uri imgUri) {
    try {
        ExifInterface exif = new ExifInterface(imgUri.getPath());
        int exifOrientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);
        int rotate = 0;
        switch (exifOrientation) {
            case ExifInterface.ORIENTATION_ROTATE_90:
                rotate = 90;
                break;
            case ExifInterface.ORIENTATION_ROTATE_180:
                rotate = 180;
                break;
            case ExifInterface.ORIENTATION_ROTATE_270:
                rotate = 270;
                break;
        }
        Log.d(TAG, "Rotation: " + rotate);

        BitmapFactory.Options options = new BitmapFactory.Options();
        // inSampleSize = 1 decodes at full size; 4 decodes at 1/4 of each dimension.
        // Values below 4 need considerably more heap to hold the decoded pixels.
        options.inSampleSize = 4;
        // Intended to rescale to 300 dpi. Note: inTargetDensity only takes effect
        // when inDensity is also non-zero, which decodeFile() does not set by default.
        options.inTargetDensity = 300;
        Bitmap bitmap = BitmapFactory.decodeFile(imgUri.getPath(), options);

        // Undo the EXIF rotation
        if (rotate != 0) {
            int w = bitmap.getWidth();
            int h = bitmap.getHeight();
            Matrix mtx = new Matrix();
            mtx.preRotate(rotate);
            bitmap = Bitmap.createBitmap(bitmap, 0, 0, w, h, mtx, false);
        }

        // Convert to grayscale
        bitmap = toGrayscale(bitmap);

        final Bitmap b = bitmap;
        final ImageView ivResult = (ImageView) findViewById(R.id.ivResult);
        if (ivResult != null) {
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    ivResult.setImageBitmap(b);
                }
            });
        }
        return extractText(bitmap);
    } catch (Exception e) {
        Log.e(TAG, e.getMessage());
        return "";
    }
}
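The EXIF switch above maps orientation tags to rotation degrees. As a standalone sketch of that mapping (using the raw EXIF values 6, 3, 8, and 1 that the ExifInterface constants wrap; the class and method names here are mine, for illustration):

```java
public class ExifRotation {

    // Raw EXIF orientation values behind the ExifInterface constants:
    // ORIENTATION_ROTATE_90 = 6, ORIENTATION_ROTATE_180 = 3,
    // ORIENTATION_ROTATE_270 = 8, ORIENTATION_NORMAL = 1
    static int exifToDegrees(int exifOrientation) {
        switch (exifOrientation) {
            case 6:  return 90;
            case 3:  return 180;
            case 8:  return 270;
            default: return 0; // ORIENTATION_NORMAL and anything unrecognized
        }
    }

    public static void main(String[] args) {
        System.out.println(exifToDegrees(6)); // 90
        System.out.println(exifToDegrees(1)); // 0
    }
}
```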
and here is the extractText() method:
private String extractText(Bitmap bitmap) {
    //Log.d(TAG, "extractText");
    try {
        tessBaseApi = new TessBaseAPI();
    } catch (Exception e) {
        Log.e(TAG, e.getMessage());
        if (tessBaseApi == null) {
            Log.e(TAG, "TessBaseAPI is null. TessFactory not returning tess object.");
        }
    }

    tessBaseApi.init(DATA_PATH, lang);

    // EXTRA SETTINGS
    tessBaseApi.setVariable(TessBaseAPI.VAR_CHAR_WHITELIST, "abcdefghijklmnopqrstuvwxyz1234567890',.?;/ ");

    Log.d(TAG, "Training file loaded");
    tessBaseApi.setDebug(true);
    tessBaseApi.setPageSegMode(TessBaseAPI.PageSegMode.PSM_AUTO_OSD);
    tessBaseApi.setImage(bitmap);

    String extractedText = "empty result";
    try {
        extractedText = tessBaseApi.getUTF8Text();
    } catch (Exception e) {
        Log.e(TAG, "Error in recognizing text.");
    }
    tessBaseApi.end();
    return extractedText;
}
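One thing I noticed while writing this up: the whitelist passed to setVariable contains only lowercase letters, digits, and a few punctuation marks, so the engine can never output an uppercase character. A small plain-Java illustration of what that character set permits (filterToWhitelist is a hypothetical helper of mine, not part of tess-two):

```java
public class WhitelistDemo {

    // The exact whitelist string passed to setVariable above
    static final String WHITELIST = "abcdefghijklmnopqrstuvwxyz1234567890',.?;/ ";

    // Hypothetical helper: keep only characters the whitelist allows,
    // to show which input characters OCR could ever report
    static String filterToWhitelist(String input) {
        StringBuilder sb = new StringBuilder();
        for (char c : input.toCharArray()) {
            if (WHITELIST.indexOf(c) >= 0) {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Uppercase letters fall outside the allowed set
        System.out.println(filterToWhitelist("Hello World 2024")); // ello orld 2024
    }
}
```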
The value returned by extractText() is shown in the following screenshot:
Accuracy is very low, even though I convert the image to grayscale and scale it up to 300 dpi before running OCR. How can I improve the results? Is the trained data not good enough?