Training Tesseract 3 to recognize numbers from real images of gas meters

Posted 2019-03-19 03:53

I'm trying to train tesseract to recognize numbers from real images of gas meters.

The images that I use for training are taken with a camera, so there are many problems: poor image resolution, blurred images, poor lighting or low contrast from overexposure, reflections, shadows, etc.

For training, I have created a large image with a series of digits captured from the gas meter images, and I manually edited the box file to create the .tr files. The result is that only the digits from the clearer, sharper images are recognized, while the digits from blurred images are not picked up by Tesseract.

4 Answers
成全新的幸福
#2 · 2019-03-19 04:27

I would try this simple ImageMagick command first:

 convert          \
    original.jpg  \
   -threshold 50% \
    result.jpg

(Play a bit with the 50% parameter -- try with smaller and higher values...)

Thresholding basically leaves only two values, zero or maximum, for each color channel. Values below the threshold are set to 0, values above it are set to 255 (or 65535 if working at 16-bit depth).

Depending on your original.jpg, you may end up with an OCR-able, very high-contrast image as a result.
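
If you want the same step inside a program rather than on the command line, here is a minimal Leptonica sketch of that fixed threshold; the file names and the 128 cutoff (50% of 255) are assumptions to adjust:

    // Fixed-threshold binarization, roughly equivalent to
    // `convert original.jpg -threshold 50% result.jpg`.
    #include <leptonica/allheaders.h>

    int main() {
        Pix *pixs = pixRead("original.jpg");          // camera image (assumed name)
        Pix *pix8 = pixConvertTo8(pixs, 0);           // drop color, keep 8 bpp gray
        Pix *pixb = pixThresholdToBinary(pix8, 128);  // < 128 -> black, >= 128 -> white
        pixWrite("result.png", pixb, IFF_PNG);        // lossless output for OCR
        pixDestroy(&pixs);
        pixDestroy(&pix8);
        pixDestroy(&pixb);
        return 0;
    }

Tesseract uses Leptonica internally, so the resulting 1-bit Pix can also be passed straight to TessBaseAPI::SetImage() instead of being written to disk.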

可以哭但决不认输i
#3 · 2019-03-19 04:31

I suggest that you:

  • use a tool to edit the boxes, such as jTessBoxEditor; it's very helpful and saves you a lot of time. You can install it easily from here
  • it's a good idea to train on characters from the actual situation (noisy, blurred). Your training set is still limited; you can add more training samples.
  • I recommend using Tesseract's own API to enhance the image (denoise, normalize, sharpen...), for example Boxa* tesseract::TessBaseAPI::GetConnectedComponents(Pixa** pixa) (it lets you get the bounding boxes of each character; see the sketch below)

    Pix* pimg = tess_api->GetThresholdedImage();

Here you can find a few examples.
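
To illustrate the two calls mentioned above, here is a minimal sketch, assuming a local image named meter.jpg and the default English data; it saves Tesseract's internal thresholded image and prints the bounding box of each connected component:

    // Inspect Tesseract 3's internal binarization and the per-component boxes.
    // The image path and the "eng" language are assumed values.
    #include <tesseract/baseapi.h>
    #include <leptonica/allheaders.h>
    #include <cstdio>

    int main() {
        tesseract::TessBaseAPI api;
        if (api.Init(NULL, "eng")) return 1;       // default tessdata location

        Pix *pixs = pixRead("meter.jpg");
        api.SetImage(pixs);

        // The image Tesseract actually OCRs after its own thresholding.
        Pix *pimg = api.GetThresholdedImage();
        pixWrite("thresholded.png", pimg, IFF_PNG);

        // Bounding boxes of the connected components (candidate characters).
        Pixa *pixa = NULL;
        Boxa *boxes = api.GetConnectedComponents(&pixa);
        for (int i = 0; i < boxaGetCount(boxes); ++i) {
            Box *b = boxaGetBox(boxes, i, L_CLONE);
            printf("component %d: x=%d y=%d w=%d h=%d\n", i, b->x, b->y, b->w, b->h);
            boxDestroy(&b);
        }

        boxaDestroy(&boxes);
        pixaDestroy(&pixa);
        pixDestroy(&pimg);
        pixDestroy(&pixs);
        api.End();
        return 0;
    }

Looking at thresholded.png is a quick way to see whether the blurred digits survive Tesseract's own binarization, or whether they need to be cleaned up before SetImage().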

Evening l夕情丶
#4 · 2019-03-19 04:35

Tesseract is a pretty decent OCR package, but it doesn't pre-process images well. My experience is that you can get good OCR results if you do some pre-processing before passing the image to Tesseract.

There are a couple of key pointers that improve recognition significantly:

  1. Remove background noise. Basically this means using mean adaptive thresholding. I'd also ensure that the characters are black and the background is white (see the sketch at the end of this answer).
  2. Use the correct resolution. If you get bad results, scale the image up or down until you get good results. Aim for approximately font size 14 at 300 dpi; in my invoice-processing software that works best.
  3. Don't store images as JPEG; use BMP or PNG or something else that doesn't make the image noisy.
  4. If you're only using one or two fonts, try training tesseract on these fonts.

As for point 4, if you know the font that's going to be used, there are better solutions than Tesseract, such as matching the font's glyphs directly against the image... The basic algorithm is to find the digits and match each one against all possible characters (there are only 10)... still, the implementation is tricky.
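
A rough sketch of points 1 and 2, using Leptonica's tile-based Otsu binarization as a stand-in for mean adaptive thresholding; the file names, tile sizes, and the 2x scale factor are guesses that need tuning for real meter photos:

    // Points 1-2: adaptive binarization plus rescaling before OCR.
    #include <leptonica/allheaders.h>

    int main() {
        Pix *pixs = pixRead("meter.jpg");                // assumed input name
        Pix *gray = pixConvertTo8(pixs, 0);              // 8 bpp grayscale

        // Upscale so the digits land near the size Tesseract prefers.
        Pix *big = pixScale(gray, 2.0f, 2.0f);

        // Local (tile-based) Otsu threshold: dark ink -> black, background -> white.
        Pix *binary = NULL;
        pixOtsuAdaptiveThreshold(big, 64, 64, 8, 8, 0.1f, NULL, &binary);

        pixWrite("meter_clean.png", binary, IFF_PNG);    // lossless, no JPEG noise (point 3)

        pixDestroy(&pixs);
        pixDestroy(&gray);
        pixDestroy(&big);
        pixDestroy(&binary);
        return 0;
    }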

The star
#5 · 2019-03-19 04:44

As far as I can tell, you need OpenCV to locate the box in which the numbers are located, but OpenCV is not good for OCR itself. After you locate the box, just crop that part, do the image processing, and then hand it over to Tesseract for OCR (see the sketch at the end of this answer).

I would need help with OpenCV because I don't know how to program in it.

Here are a few real-world examples:

  • The first image is the original image (cropped power meter numbers)
  • The second image is slightly cleaned up in GIMP; around 50% OCR accuracy in Tesseract
  • The third image is the completely cleaned-up image: 100% recognized by OCR without any training!
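
For completeness, a rough sketch of the locate-crop-OCR pipeline described above, done automatically with OpenCV and Tesseract instead of GIMP; the input path, the thresholding parameters, and the largest-contour heuristic are all assumptions that need tuning:

    // Locate the digit area with OpenCV, crop it, and hand the crop to Tesseract.
    #include <opencv2/opencv.hpp>
    #include <tesseract/baseapi.h>
    #include <iostream>

    int main() {
        cv::Mat src = cv::imread("meter.jpg");           // assumed input name
        cv::Mat gray, bin;
        cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);

        // Mean adaptive threshold, inverted so the digits become white blobs.
        cv::adaptiveThreshold(gray, bin, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                              cv::THRESH_BINARY_INV, 31, 15);

        // Take the largest external contour as a crude guess at the digit window.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        cv::Rect best;
        for (const auto &c : contours) {
            cv::Rect r = cv::boundingRect(c);
            if (r.area() > best.area()) best = r;
        }
        cv::Mat crop = gray(best).clone();               // crop the located region

        tesseract::TessBaseAPI api;
        if (api.Init(NULL, "eng")) return 1;
        api.SetVariable("tessedit_char_whitelist", "0123456789");  // digits only
        api.SetImage(crop.data, crop.cols, crop.rows, 1,
                     static_cast<int>(crop.step));
        char *text = api.GetUTF8Text();
        std::cout << "OCR result: " << text << std::endl;

        delete[] text;
        api.End();
        return 0;
    }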

