I'd like to separate an image of text into its component characters, also as images. For example, using the sample below I'd end up with 14 images.
I'm only going to be using text on a single line, so the y-height is unimportant - what I need to find is the beginning and end of each letter and crop to those coordinates. That way I would also avoid problems with 'i', 'j', etc.
I'm new to image processing, and I'm not sure how to go about it. Some form of edge detection? Is there a way to determine contiguous regions of solid colour? Any help is great.
Trying to improve my Python skills and familiarity with some of the many libraries available, so I'm using the Python Imaging Library (PIL), but I've also had a look at OpenCV.
Sample image:
I know I am a few years late :-) but you can do this sort of thing with ImageMagick pretty easily now, straight at the command-line without compiling anything, as it has Connected Component Analysis built-in:
Here is one way to do it:
The result looks like this:
First, I threshold your image at 50% so that there are only pure blacks and whites in it, no tonal gradations. Then I tell ImageMagick to output details of the bounding boxes it finds, and that I am not interested in objects smaller than 10 pixels of total area. I then allow pixels to be 8-connected, i.e. connected to their diagonal neighbours (NE, SE, NW, SW) as well as to their left-right and above-below neighbours. Finally, I parse the bounding box output with awk to draw red lines around the bounding boxes.

The output of the initial command that I parse with awk looks like this:

and the awk turns that into this:

Um, this is actually very easy for the sample you provided:
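The "look for blank columns" idea only takes a few lines. Here is a minimal sketch in plain Python, assuming the image has already been reduced to a binary bitmap (with PIL you could get one via `Image.open(path).convert('1')`); the nested-list bitmap below is just illustrative data:

```python
# Sketch of the column-scan idea: a column with no dark pixels is a gap
# between letters; contiguous runs of inked columns are letters.
# Works on a binary bitmap (list of rows, 1 = ink, 0 = background).

def letter_spans(bitmap):
    """Return (start, end) column indices, end exclusive, one per letter."""
    width = len(bitmap[0])
    # A column is "inked" if any row has a dark pixel in it.
    inked = [any(row[x] for row in bitmap) for x in range(width)]
    spans, start = [], None
    for x, on in enumerate(inked):
        if on and start is None:
            start = x                      # a letter begins
        elif not on and start is not None:
            spans.append((start, x))       # the letter just ended
            start = None
    if start is not None:                  # letter runs to the right edge
        spans.append((start, width))
    return spans

# Two "letters" separated by one blank column:
bitmap = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
]
print(letter_spans(bitmap))  # → [(0, 2), (3, 4)]
```

Each (start, end) pair can then be fed to something like `img.crop((start, 0, end, height))` to produce the per-letter images.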
(Incidentally, this also works for splitting a paragraph into lines.)
If the letters overlap or share columns, it gets a little more interesting.

Edit: @Andres, no, it works fine for 'U'; you have to look at all of each column.
You could start with a simple connected components analysis (CCA) algorithm, which can be implemented quite efficiently with a scanline algorithm (you just keep track of merged regions and relabel at the end). This would give you separately numbered "blobs" for each continuous region, which would work for most (but not all) letters. Then you can simply take the bounding box of each connected blob, and that will give you the outline for each. You can even maintain the bounding box as you apply CCA for efficiency.
So in your example, the first word from the left after CCA would result in something like:
with equivalence classes of 4=2.
Then the bounding boxes of each blob gives you the area around the letter. You will run into problems with letters such as i and j, but they can be special-cased. You could look for a region less than a certain size, which is above another region of a certain width (as a rough heuristic).
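The two-pass scanline labelling described above can be sketched in plain Python. This is a minimal, unoptimized version with a union-find for the equivalence classes; for real images you would more likely use OpenCV's `cv2.connectedComponentsWithStats`:

```python
# Two-pass 8-connected component labelling with union-find, followed by
# a bounding box (x0, y0, x1, y1) per blob. Grid: 1 = foreground.

def find(parent, a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]  # path compression
        a = parent[a]
    return a

def label_blobs(grid):
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}          # union-find over provisional labels
    next_label = 1
    for y in range(h):
        for x in range(w):
            if not grid[y][x]:
                continue
            # Provisional labels of already-visited 8-neighbours.
            neigh = [labels[y + dy][x + dx]
                     for dy, dx in ((-1, -1), (-1, 0), (-1, 1), (0, -1))
                     if 0 <= y + dy and 0 <= x + dx < w and labels[y + dy][x + dx]]
            if not neigh:
                parent[next_label] = next_label   # brand-new blob
                labels[y][x] = next_label
                next_label += 1
            else:
                labels[y][x] = min(neigh)
                for n in neigh:                   # record equivalences
                    parent[find(parent, n)] = find(parent, min(neigh))
    # Second pass: resolve equivalences and collect bounding boxes.
    boxes = {}
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                root = find(parent, labels[y][x])
                x0, y0, x1, y1 = boxes.get(root, (x, y, x, y))
                boxes[root] = (min(x0, x), min(y0, y), max(x1, x), max(y1, y))
    return list(boxes.values())

# An "i"-like shape: the dot and the stem come out as separate blobs.
grid = [
    [0, 1, 0],
    [0, 0, 0],
    [0, 1, 0],
    [0, 1, 0],
]
print(label_blobs(grid))  # → [(1, 0, 1, 0), (1, 2, 1, 3)]
```

With the boxes in hand, each letter is just a crop of the original image; the i/j heuristic mentioned above amounts to merging a small box that sits directly above a taller one.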
The cvBlobsLib library in OpenCV should do most of this for you.
I've been playing around with ocropus recently, an open-source text analysis and ocr-preprocessing tool. As a part of its workflow, it also creates the images you want. Maybe this helps you, although no python magic is involved.
This is not an easy task, especially if the background is not uniform. If what you have is an already-binary image like the example, it is slightly simpler.
You can start by applying a thresholding algorithm if your image is not binary (Otsu's adaptive threshold works well).
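Otsu's method simply picks the threshold that maximizes the between-class variance of the grey-level histogram. Here is a minimal pure-Python sketch; in practice you would use a library (OpenCV exposes this via `cv2.threshold` with the `THRESH_OTSU` flag), and the pixel data below is made-up bimodal sample data:

```python
# Minimal sketch of Otsu's method: choose the threshold t that maximizes
# the between-class variance of the histogram of grey levels (0-255).

def otsu_threshold(pixels):
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg = sum_bg = 0
    for t in range(256):
        w_bg += hist[t]                 # background (dark class) weight
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground (light class) weight
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Strongly bimodal data: dark ink around 20-25, light paper around 215-220.
pixels = [20] * 50 + [25] * 50 + [215] * 50 + [220] * 50
t = otsu_threshold(pixels)
binary = [1 if p <= t else 0 for p in pixels]   # 1 = ink, 0 = background
```

The binarized result is then ready for the labelling step described next.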
After that, you can use a labelling algorithm to identify each 'island' of pixels that forms your shapes (each character in this case).
The problem arises when you have noise: shapes that were labelled but aren't of interest. In this case you can use some heuristics to determine whether a shape is a character or not (you can use normalized area, the position of the object if your text is in a well-defined place, etc.). If this is not enough, you will need to deal with more complex stuff like shape feature extraction algorithms and some sort of pattern recognition algorithm, such as multilayer perceptrons.
To finish: this seems like an easy task, but depending on the quality of your image, it can get harder. The algorithms cited here are easy to find on the internet and are also implemented in libraries like OpenCV.
Any more help, just ask, if I can help of course ;)
The problem you have posed is really hard—it took some of the world's best image-processing researchers quite some time to solve. The solution is a major part of the Djvu image-compression and display toolset: their first step in compressing a document is to identify foreground and split it into characters. They then use the information to help compression because the image of one lowercase 'e' is much like another—the compressed document needs to contain only the differences. You'll find links to a bunch of technical papers at http://djvu.org/resources/; a good place to start is with High Quality Document Image Compression with Djvu.
A good many of the tools in the Djvu suite have been open-sourced under the title djvulibre; unfortunately, I have not been able to figure out how to pull out the foreground (or individual characters) using the existing command-line tools. I would be very interested to see this done.