This question relates somewhat to a practice- and experience-based process. I have a Mat binary image which consists of simple white polygons on a black background. Those polygons represent articles in a newspaper page. What I want is to store the location of each article inside the newspaper page. One Mat image has only one polygon in it. So the options are to:
- Use pure OpenCV calls to store Mat into a .xml or .yml file (How to write a Float Mat to a file in OpenCV : accepted answer)
- Find the coordinates of the polygon's vertices and store only those coordinates in the database
Following is a sample image of a Mat I am going to store.
The first option seems possible, but I don't know how to implement the second approach. If it is possible, I think it would be the most efficient, because then there would be only a few coordinates to save for each article. I could implement a complex procedure to find the vertices and to redraw the Mat image from those coordinates when needed, but I hope there is a simple process in OpenCV for this task.
So what I want to know is which approach is better, and if the second approach is better, how to do it in OpenCV with C++. I am neither an OpenCV expert nor a C++ expert, so an appropriate answer would save me many hours and also improve the efficiency of the program.
It depends how generic the polygons can be. If the edges of the polygon are always parallel to the x and y axes, then you could look at the pixels in the 8-neighbourhood of a particular pixel: if there is an odd number of white pixels, you have found a corner. Or use a 4-neighbourhood and test for an even number of white pixels.

You can simply use findContours, with an appropriate contour approximation method. Basically, aside from CV_CHAIN_APPROX_NONE, which stores all points, every other method is fine for this example: CV_CHAIN_APPROX_SIMPLE, CV_CHAIN_APPROX_TC89_L1 and CV_CHAIN_APPROX_TC89_KCOS. You can store those points in your database. You can then reload those points and redraw the original image with fillPoly.
This simple example shows the contour points retrieved with the approximation method, and how to redraw the image from those points.
Note that your image is aliased (you probably saved it as JPEG before PNG), so you need to remove the aliasing, for example by keeping only pixels whose value equals 255.
Green vertices with contour approximation method:
Then you can save the nonZeroCoordinates matrix into your file to use.

If you want to create the same image using these coordinates, you can do it like this:
Hope it helps!
A slightly off-the-wall approach... you could readily save the Mat as an image in OpenCV - preferably a PGM or a PNG, since they are lossless. Then you could pass the image to a vector-tracer program like potrace and get it to tell you the outline in SVG format, and store that in your database.

So, potrace likes PGM files, so you either save your outline as a PGM in OpenCV, or as a PNG and then use ImageMagick to make that into a PGM, and pass it to potrace
like this (the command produces an SVG file which, by the way, you can view in a web browser):
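A sketch of those two commands; the file names (article.png and so on) are assumptions, and the first convert line merely synthesises a stand-in mask:

```shell
# Stand-in for the saved mask: a white box on black (in practice this is
# the lossless PNG you wrote out from OpenCV)
convert -size 200x200 xc:black -fill white -draw "rectangle 50,50 150,120" article.png

# potrace prefers PGM, so convert first, then trace the outline to SVG
convert article.png article.pgm
potrace article.pgm -s -o article.svg   # -s selects the SVG backend
```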
You can recall the image at any time and re-create it with ImageMagick, or other tools, at the command line like this:
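One plausible form of that command, assuming ImageMagick with an SVG delegate installed (the exact invocation is an assumption, and the printf line just fabricates a stand-in outline):

```shell
# Stand-in for the SVG that potrace produced
printf '%s' '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100"><rect x="20" y="20" width="60" height="60" fill="white"/></svg>' > article.svg

# Rasterise the stored outline back into a bitmap
convert article.svg recreated.png
```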
I would note that your entire PNG is actually only 32 kB, and storage is pretty cheap, so it hardly seems worth the trouble of generating a vectorised image to save space. In fact, if you use a decent tool like ImageMagick and convert your image to a single-bit PNG, it comes down to 6,150 bytes, which is pretty small...

And, if you can handle reducing the outline to 1/5th of its original size, which would still probably be adequate to locate the newspaper article, you could do:
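The command itself is a guess: presumably an ImageMagick resize to 20% at single-bit depth, along these lines (the first convert line only fabricates a stand-in input):

```shell
# Stand-in input mask (in practice, your saved article outline)
convert -size 500x500 xc:black -fill white -draw "rectangle 100,100 400,300" article.png

# Shrink to 1/5th linear size and force a single-bit PNG
convert article.png -resize 20% -depth 1 small.png
```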
which weighs in at just 1,825 bytes.