Recognize a number from an image

Posted 2020-05-15 13:59

I'm trying to write an application to find the numbers inside an image and add them up.

How can I identify the written number in an image?

[image: scanned form with boxes containing handwritten numbers]

There are many boxes in the image. I need to get the numbers on the left side of each box and sum them to give a total. How can I achieve this?

Edit: I ran Tesseract OCR from Java on the image but did not get any correct results. How can I train it?

I also ran edge detection and got this:

[image: edge detection result]

6 Answers
甜甜的少女心 · 2020-05-15 14:07

In most image processing problems you want to leverage as much prior information as possible. Given the image, there are assumptions we can make (and possibly more):

  1. The boxes around the numbers are consistent.
  2. The number on the right is always 8 (or known ahead of time)
  3. The number on the left is always a number
  4. The number on the left is always handwriting and written by the same person

Then we can simplify the problem using those assumptions:

  1. You can use a simpler approach to find the numbers (template matching). Once you have the coordinates of a match you can create a sub-image, subtract out the template, and be left with only the numbers to give to the OCR engine (a rough sketch combining this with the digit whitelist from point 3 follows this list). http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html .
  2. If you know what numbers to expect, then you can get those from another source and not risk OCR errors. You could even include the 8 as part of the template.
  3. Based on this, you can greatly reduce the vocabulary (the possible OCR results), increasing the OCR engine's accuracy. TesseractOCR has a whitelist setting for this (see https://code.google.com/p/tesseract-ocr/wiki/FAQ#How_do_I_recognize_only_digits?).
  4. Handwriting is much harder for an OCR engine to recognize (OCR engines are meant for printed fonts). However, you can train the engine to recognize the author's "font" (see http://michaeljaylissner.com/posts/2012/02/11/adding-new-fonts-to-tesseract-3-ocr-engine/).

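As a rough illustration of points 1 and 3, here is a hedged Python/OpenCV sketch (the question uses Java, but the OpenCV Java bindings and Tess4J expose equivalent calls). The file names 'form.png' and 'box_template.png', the 0.8 match threshold, and the whitelist flag are assumptions you would adjust for your own scan.

import cv2
import numpy as np
import pytesseract

image = cv2.imread('form.png')                                    # hypothetical input scan
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
template = cv2.imread('box_template.png', cv2.IMREAD_GRAYSCALE)   # hypothetical template of one empty box
h, w = template.shape

# Template matching: every location scoring above the threshold is treated as a box.
# Overlapping matches would need non-maximum suppression in a real implementation.
res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.8)

total = 0
for x, y in zip(xs, ys):
    cell = gray[y:y + h, x:x + w]          # sub-image that should hold only the handwritten number
    text = pytesseract.image_to_string(
        cell, config='--psm 7 -c tessedit_char_whitelist=0123456789').strip()
    if text.isdigit():
        total += int(text)
print(total)
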
The gist, though, is to use any assumptions you can to reduce the problem into smaller, simpler sub-problems, then look at what tools are available to solve each of those sub-problems individually.

Assumptions are also hard to make if you have to start worrying about the real world: for instance, if these forms will be scanned in, you'll need to consider skew or rotation of the "template" or the numbers.

霸刀☆藐视天下 · 2020-05-15 14:10

I would recommend combining 2 basic neural network components:

  • Perceptron
  • Self Organized Map (SOM)

A perceptron is a very simple neural network component. It takes multiple inputs and produces 1 output. You need to train it by feeding it both inputs and outputs. It's a self-learning component.

Internally it has a collection of weight factors, which are used to calculate the output. These weight factors are perfected during training. The beautiful thing about a perceptron is that (with proper training) it can handle data it has never seen before.

You can make a perceptron more powerful by arranging it in a multi-layer network, meaning that the output of one perceptron acts as the input of another perceptron.

In your case you should use 10 perceptron networks, one for each numeric value (0-9).

But in order to use perceptrons you will need an array of numeric inputs, so first you need something to convert your visual image to numeric values. A Self Organized Map (SOM) uses a grid of inter-connected points. The points should be attracted to the pixels of your image (see below).

[image: Self Organized Map]

The 2 components work well together. The SOM has a fixed number of grid-nodes, and your perceptron needs a fixed number of inputs.

Both components are really popular and are available in educational software packages such as MATLAB.

UPDATE: 06/01/2018 - TensorFlow

This video tutorial demonstrates how it can be done in Python using Google's TensorFlow framework (click here for a written tutorial).
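
For reference, a minimal sketch of that TensorFlow route, assuming the Keras API bundled with TensorFlow 2 and the standard MNIST handwritten-digit dataset rather than your own scans:

import tensorflow as tf

# Load the MNIST handwritten digits (28x28 grey-scale images, labels 0-9)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small multi-layer network: one output per digit, as described above
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test, verbose=0))   # [loss, accuracy] on unseen images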

乱世女痞 · 2020-05-15 14:17

Neural networks are a typical approach for this kind of problem.

In this scenario, you can consider each handwritten number a matrix of pixels. You may get better results if you train the neural network with images of the same size as the image you want to recognize.

You can train the neural network with different images of handwritten numbers. Once trained, if you pass it the image of a handwritten number to identify, it will return the most similar number.

Of course, the quality of training images is a key factor to get good results.
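
As a hedged sketch of that idea, here is a small network trained on scikit-learn's bundled 8x8 digits dataset; in practice you would replace it with crops of the actual handwriting, all resized to the same dimensions:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Each sample is an 8x8 pixel matrix flattened to 64 numeric inputs
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)                 # train on labelled examples
print(clf.score(X_test, y_test))          # accuracy on images it has never seen
print(clf.predict(X_test[:1]))            # returns the most similar digit for one new image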

ゆ 、 Hurt° · 2020-05-15 14:21

Give it up. Really. Even I as a human cannot say for sure whether the third digit is a '1' or a '7'. Humans are better at deciphering than computers, so a computer will fail at this. '1' and '7' is only one problematic case; '8' and '6', or '3' and '9', are also hard to distinguish. Your error rate will be >10%. If all the handwriting is from the same person you could try to train an OCR engine on it, but even then you will still have about ~3% errors. It might be that your use case is special, but this number of errors usually prohibits any kind of automated processing. I would look into Mechanical Turk if I really had to automate this.

Explosion°爆炸 · 2020-05-15 14:27

Here's a simple approach:

  1. Obtain binary image. Load the image, convert to grayscale, then apply Otsu's threshold to get a 1-channel binary image whose pixels are either 0 or 255.

  2. Detect horizontal and vertical lines. Create horizontal and vertical structuring elements then draw lines onto a mask by performing morphological operations.

  3. Remove horizontal and vertical lines. Combine the horizontal and vertical masks with a bitwise_or operation, then remove the lines from the original image with a masked bitwise_not operation.

  4. Perform OCR. Apply a slight Gaussian blur then OCR using Pytesseract.


Here's a visualization of each step:

Input image -> Binary image -> Horizontal mask -> Vertical mask


Combined masks -> Result -> Applied slight blur


Result from OCR

38
18
78

I implemented it in Python, but you can adapt a similar approach using Java.

import cv2
import pytesseract

pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

# Load image, grayscale, Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Detect horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25,1))
horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=1)

# Detect vertical lines
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,25))
vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=1)

# Remove horizontal and vertical lines
lines = cv2.bitwise_or(horizontal, vertical)
result = cv2.bitwise_not(image, image, mask=lines)

# Perform OCR with Pytesseract
result = cv2.GaussianBlur(result, (3,3), 0)
data = pytesseract.image_to_string(result, lang='eng', config='--psm 6')
print(data)

# Display
cv2.imshow('thresh', thresh)
cv2.imshow('horizontal', horizontal)
cv2.imshow('vertical', vertical)
cv2.imshow('lines', lines)
cv2.imshow('result', result)
cv2.waitKey()
三岁会撩人 · 2020-05-15 14:34

You will most likely need to do the following:

  1. Apply the Hough Transform algorithm to the entire page; this should yield a series of page sections.

  2. For each section you get, apply it again. If the current section yielded 2 elements, then you should be dealing with a rectangle similar to the above.

  3. Once you are done, you can use an OCR engine to extract the numeric value.

In this case, I would recommend you take a look at JavaCV (OpenCV Java Wrapper) which should allow you to tackle the Hough Transform part. You would then need something akin to Tess4j (Tesseract Java Wrapper) which should allow you to extract the numbers you are after.
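
A loose Python/OpenCV sketch of the line-detection part is below; JavaCV exposes the same functions under very similar names. The file name, Canny thresholds, and Hough parameters are guesses you would tune, and grouping the detected lines into page sections is left out.

import cv2
import numpy as np

image = cv2.imread('page.png')                    # hypothetical input scan
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Probabilistic Hough Transform: returns line segments as (x1, y1, x2, y2)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=100, maxLineGap=10)
if lines is not None:
    for line in lines:
        x1, y1, x2, y2 = line[0]
        if abs(y2 - y1) < 5:                      # keep roughly horizontal box edges
            print('horizontal line at y =', y1)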

As an extra note, to reduce the amount of false positives, you might want to do the following:

  1. Crop the image if you are sure that certain regions will never contain the data you are after. This should give you a smaller picture to work with.

  2. It might be wise to convert the image to grey scale (assuming you are working with a colour image), since colour can have a negative impact on the OCR engine's ability to resolve the image (a short sketch of both steps follows this list).

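A tiny sketch of both points; the crop coordinates are placeholders for wherever the boxes actually sit on your page:

import cv2

image = cv2.imread('page.png')                   # hypothetical input scan
roi = image[100:900, 400:800]                    # rows then columns: a guessed region holding the boxes
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)     # drop colour before handing the crop to the OCR
cv2.imwrite('roi_gray.png', gray)
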
EDIT: As per your comment, given something like this:

+------------------------------+
|                   +---+---+  |
|                   |   |   |  |
|                   +---+---+  |
|                   +---+---+  |
|                   |   |   |  |
|                   +---+---+  |
|                   +---+---+  |
|                   |   |   |  |
|                   +---+---+  |
|                   +---+---+  |
|                   |   |   |  |
|                   +---+---+  |
+------------------------------+

You would crop the image to remove the area which does not contain relevant data (the part on the left); after cropping, you would get something like this:

+-------------+
|+---+---+    |
||   |   |    | 
|+---+---+    |
|+---+---+    |
||   |   |    |
|+---+---+    |
|+---+---+    |
||   |   |    |
|+---+---+    |
|+---+---+    |
||   |   |    |
|+---+---+    |
+-------------+

The idea would be to run the Hough Transform so that you can get segments of the page which contain rectangles like so:

+---+---+    
|   |   |     
+---+---+ 

You would then apply the Hough Transform to that section again, end up with two segments, and take the left one.

Once you have the left segment, you would then apply the OCR.

You can try to apply the OCR beforehand, but at best it will recognize both numeric values, the handwritten one and the typed one, which, from what I gather, is not what you are after.

Also, the extra lines which depict the rectangles might throw the OCR off track, and make it yield bad results.
