Note that I'm really looking for an answer to my question. I am not looking for a link to some source code or to some academic paper: I've already used the source and I've already read papers and still haven't figured out the last part of this issue...
I'm working on some fast screen font OCRing and I'm making very good progress.
I'm already finding the baselines, separating the characters, transforming each character in black & white and then contouring each character in order to apply a Freeman chain code to it.
Basically it's an 8-connected chain code looking like this:
3  2  1
 \ | /
4-- --0
 / | \
5  6  7
So if I have an 'a', after all my transformations (including transforming to black and white), I end up with something like this:
11110
00001
01111
10001
10001
01110
Then its external contour may look like this (I may be making a mistake here, that's ASCII-art contouring and my 'algorithm' may get the contour wrong, but that's not the point of my question):
 XXXX
X1111X
XXXX1X
X01111X
X10001X
X10001X
 X111X
  XXX
Following the Xs, I get the chain code, which would be:
0011222334445656677
Note that that's the normalized chain code; you can always normalize a chain code like this: treat it as circular and keep the rotation that forms the smallest integer.
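To make that normalization step concrete, here is a minimal sketch (the function name is my own) of picking the rotation of a chain code that forms the smallest integer:

```python
def normalize_chain_code(code: str) -> str:
    """Treat the chain code as circular and return the rotation that
    forms the smallest integer (a rotation-invariant canonical form)."""
    rotations = [code[i:] + code[:i] for i in range(len(code))]
    # For equal-length digit strings, the lexicographic minimum
    # is also the numeric minimum.
    return min(rotations)
```

For example, normalize_chain_code("2310") returns "0231", and any rotation of the same contour normalizes to the same string.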
(By the way, there's a super-efficient implementation for finding the chain code where you simply take the 8 adjacent pixels of an 'X' and then look up in a 256-entry table whether you have 0, 1, 2, 3, 4, 5, 6 or 7.)
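A sketch of that lookup-table idea (the names and the exact direction order are my assumptions): pack the 8 neighbours of the current contour pixel into one byte, which can then index a precomputed 256-entry direction table:

```python
# Directions 0..7 as (dx, dy) offsets matching the 8-connected code above
# (0 = right, counting counter-clockwise); y grows downward here.
NEIGHBOURS = [(1, 0), (1, -1), (0, -1), (-1, -1),
              (-1, 0), (-1, 1), (0, 1), (1, 1)]

def neighbour_byte(pixels: set, x: int, y: int) -> int:
    """Pack the 8 neighbours of (x, y) into one byte: bit i is set
    when the neighbour in direction i is a foreground pixel."""
    b = 0
    for bit, (dx, dy) in enumerate(NEIGHBOURS):
        if (x + dx, y + dy) in pixels:
            b |= 1 << bit
    return b  # 0..255, usable as an index into a 256-entry table
```

The 256-entry table itself would map each neighbour pattern to the next chain-code direction for the tracing scheme you use.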
My question now, however, is: from that 0011222334445656677 chain code, how do I find that I have an 'a'?
Because, for example, if my 'a' looks like this:
11110
00001
01111
10001
10001
01111 <-- This pixel is now full
Then my chain code is now: 0002222334445656677
And yet this is also an 'a'.
I know that the whole point of these chain codes is to be resilient to such tiny changes, but I can't figure out how I'm supposed to find which character corresponds to a given chain code.
I've gotten this far and now I'm stuck...
(By the way, I don't need 100% efficiency and things like differentiating '0' from 'O' or from 'o' isn't really an issue)
Last month, I was dealing with the same problem. I solved it using the vertex chain code.
The vertex chain code is a binary chain code. I then cut it into 5 parts; each of the digits 0-9 has its own characteristic pattern in the different parts.
What you need is a function d that measures the distance between chain codes. Finding the letter for a given chain code is then straightforward.
Input: a set S of possible letters (generally the chain codes for A-Z, a-z, 0-9, ...) and the chain code x of a letter which needs to be detected and which could be slightly deformed (so x wouldn't exactly match any chain code in the set S).
The algorithm iterates through the set of possible chain codes and calculates the distance d(x, si) for each element si. The letter with the smallest distance is the output of the algorithm (the identified letter).
I would suggest the following distance function: for two chain codes, add up the differences in the number of occurrences of each direction:
d(x, si) = |x0 - si0| + |x1 - si1| + ... + |x7 - si7|
where x0 is the number of 0s in the chain code x, si0 is the number of 0s in the chain code si, and so on.
An example will better explain what I'm thinking of. In the following image there are the letters 8, B and D; the fourth letter is a slightly deformed 8, which needs to be identified. The letters are written in Arial at font size 8. The second line in the image is enlarged 10 times to show the pixels better.
I manually calculated (hopefully correctly) the normalized chain codes, which are:
The absolute length differences are:
8' has the smallest distance to the chain code of 8, thus the algorithm would identify the letter 8. The distance to the letter B is not much bigger, but that is because the deformed 8 looks almost like a B.
This method is not scaling invariant. I think there are two options to overcome this:
I'm not quite sure if the distance function is good enough for the set of all alphanumeric letters but I hope so. To minimize the error in identifying a letter you could include other features (not only chain codes) into the classification step. And again, you would need a distance measure -- this time for feature vectors.
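A small sketch of this nearest-template classification under the suggested distance (the function names are mine; the example reuses the two 'a' chain codes from the question):

```python
from collections import Counter

def direction_counts(code: str) -> list:
    """Number of occurrences of each direction 0..7 in a chain code."""
    c = Counter(code)
    return [c[str(d)] for d in range(8)]

def distance(x: str, si: str) -> int:
    """d(x, si) = |x0 - si0| + |x1 - si1| + ... + |x7 - si7|"""
    return sum(abs(a - b)
               for a, b in zip(direction_counts(x), direction_counts(si)))

def classify(x: str, templates: dict) -> str:
    """Return the letter whose template chain code is closest to x."""
    return min(templates, key=lambda letter: distance(x, templates[letter]))
```

With templates = {"a": "0011222334445656677"}, the deformed code "0002222334445656677" from the question is only at distance 4 from the 'a' template, so a small deformation shifts the distance only slightly.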
As your question is not specific enough (whether you want a full algorithm based on the chain code or just some probabilistic classification), I'll tell you what I know about the problem.
Using the chain code, you can count certain properties of the symbol, e.g. the number of sections of the form 344445, 244445, 2555556, 344446 (with an arbitrary number of 4s), i.e. the "spikes" on the letter. Say there are 3 such sections in the chain code. Then it is almost certainly a "W"! But that is a lucky case. You can count the numbers of different kinds of sections and compare them to previously saved values for every letter (which you prepare by hand). This is quite a good classifier, but it is not sufficient on its own, of course: it will be impossible for it to differentiate "D" from "O", or "V" from "U". And much depends on your imagination.
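As a rough sketch of counting such "spike" sections (the exact pattern is my assumption, not a tested rule, and it ignores the circular wrap of the chain code), a regular expression over the chain-code string can do it:

```python
import re

def count_spikes(code: str) -> int:
    """Count sections like 344445 or 244445: a run of two or more 4s
    entered from direction 2 or 3 and left toward direction 5 or 6."""
    return len(re.findall(r"[23]4{2,}[56]", code))
```

A "W"-like contour would then be expected to contain three such sections, and similar patterns can be written for other kinds of turns.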
You should start by creating a test set of letter images with reference labels, checking your algorithm against it between changes while inventing new criteria.
Hope this answers your question at least partially.
Update: One bright idea just came to my mind :) You can count the number of monotonic sequences in the chain. For example, the chain 000111222233334443333222444455544443333 (a quick dumb example, it doesn't really correspond to any letter) splits into
00011122223333444 | 3333222 | 4444555 | 44443333
i.e. four monotonic subsequences.
This should be a good generalization: just count the number of these changes for the real letters and compare it to the count acquired from the detected chain. This is worth a try.
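A sketch of counting these monotonic runs (my own naive version; it compares raw digit values and ignores the circular mod-8 nature of real chain codes):

```python
def count_monotonic_runs(code: str) -> int:
    """Count maximal runs in which the digits only rise or only fall;
    equal neighbours extend the current run."""
    digits = [int(c) for c in code]
    runs, direction = 1, 0          # direction: 0 unknown, +1 rising, -1 falling
    for a, b in zip(digits, digits[1:]):
        if b == a:
            continue
        step = 1 if b > a else -1
        if direction == 0:
            direction = step        # the first move fixes the run's direction
        elif step != direction:
            runs += 1               # the direction flips: a new run starts
            direction = step
    return runs
```

On the example chain above this yields 4, matching the four subsequences shown.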
Some problems and ideas:
You could convert the chain code into an even simpler model that conveys the topology and then run machine learning code (which one would probably write in Prolog).
But I wouldn't endorse it. People have done/tried this for years and we still have no good results.
Instead of wasting your time with this non-linear/threshold based approach, why don't you just use a robust technique based on correlation? The easiest thing would be to convolve with templates.
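As a minimal illustration of template matching on binary glyphs (a toy sketch, not a full convolution pipeline; the names are mine): score each template by the fraction of agreeing pixels and take the best one:

```python
def match_score(glyph, template):
    """Fraction of positions where two equal-sized binary glyphs
    (lists of 0/1 rows) agree; 1.0 means a perfect match."""
    total = agree = 0
    for row_g, row_t in zip(glyph, template):
        for g, t in zip(row_g, row_t):
            total += 1
            agree += (g == t)
    return agree / total

def best_template(glyph, templates: dict):
    """Pick the template with the highest correlation-like score."""
    return max(templates, key=lambda name: match_score(glyph, templates[name]))
```

A real system would slide the template over the image and normalize the correlation, but the nearest-template principle is the same.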
But I would compute Gabor wavelet responses on the letters and collect the coefficients into a feature vector. Train a support vector machine with some examples and then use it as a classifier.
This is pretty much how our brain does it, and I'm sure it's possible in the computer.
Some random chit chat (ignore):
I wouldn't use neural networks because I don't understand them and therefore don't like them. However, I'm always impressed by the work of Geoff Hinton's group: http://www.youtube.com/watch?v=VdIURAu1-aU.
He works on networks that can propagate information backward (deep learning). There is a talk of his where he lets a trained digit-recognition network dream. That means he sets one of the output neurons to "2" and the network generates, on the input neurons, pictures of things that it thinks are a two.
I found this very cool.