I am trying to blend 2 images so that the seams between them disappear.
1st image:
2nd image:
if blending NOT applied:
if blending applied:
I used alpha blending, but no seam was removed; in fact the image is still the same, just darker.
This is the part where I do the blending:
Mat warped1;
// warp the left image into the front image's coordinate frame
warpPerspective(left, warped1, perspectiveTransform, front.size());
imshow("combined1", warped1 / 2 + front / 2);

vector<Mat> imgs;
imgs.push_back(warped1 / 2);  // note: both inputs are already halved here
imgs.push_back(front / 2);

double alpha = 0.5;

// offsets that center imgs[1] inside imgs[0]
int min_x = (imgs[0].cols - imgs[1].cols) / 2;
int min_y = (imgs[0].rows - imgs[1].rows) / 2;
int width, height;
if (min_x < 0) {
    min_x = 0;
    width = imgs[0].cols;
} else {
    width = imgs[1].cols;
}
if (min_y < 0) {
    min_y = 0;
    height = imgs[0].rows - 1;
} else {
    height = imgs[1].rows - 1;
}

// blend the overlapping region with a single global alpha
Rect roi = cv::Rect(min_x, min_y, imgs[1].cols, imgs[1].rows);
Mat out_image = imgs[0].clone();
Mat A_roi = imgs[0](roi);
Mat out_image_roi = out_image(roi);
addWeighted(A_roi, alpha, imgs[1], 1 - alpha, 0.0, out_image_roi);
imshow("foo", imgs[0](roi));  // shows the unblended ROI; the blended result is in out_image
In order to avoid making the faces transparent outside their intersection, you cannot use a single alpha value for the whole image. For instance, you need to use alpha=0.5 in the intersection of img[0] and img[1], alpha=1 in the region where img[1]=0, and alpha=0 in the region where img[0]=0.

This example is the easy approach, but it won't completely remove the seams. If you want that, you have to adapt alpha more intelligently based on the image content. You can have a look at the numerous research articles on that topic, but this is not a trivial task:

"Seamless image stitching in the gradient domain", by Levin, Zomet, Peleg & Weiss, ECCV 2004
"Seamless stitching using multi-perspective plane sweep", by Kang, Szeliski & Uyttendaele, 2004
OK, here's a new try, which might only work for your specific task of blending exactly three images of those faces: front, left, and right.
I use these inputs:
front (i1):
left (i2):
right (i3):
front mask (m1, optional):
The problem with these images is that the front image only covers a small part, while the left and right images overlap the whole front image, which leads to poor blending in my other solution. In addition, the alignment of the images isn't great (mostly due to perspective effects), so blending artifacts can occur.
Now the idea of this new method is that you definitely want to keep the parts of the front image which lie inside the area spanned by your colored "marker points"; these should not be blended. The further you go away from that marker area, the more information from the left and right images should be used. So we create a mask with alpha values which decrease linearly from 1 (inside the marker area) to 0 (at some defined distance from the marker region); this is sketched in code further below.
So the region spanned by the markers is this one:
Since we know that the left image is basically used in the region left of the left marker triangle, we can create masks for the left and right images, which are used to find the region that should additionally be covered by the front image:
left:
right:
Front marker region plus everything that is neither in the left nor in the right mask:
This can additionally be restricted with the optional front mask input, which is better because this front image example sadly doesn't cover the whole image, but only a part of it.
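In code, combining those masks could look like this (a sketch; the helper name makeFrontRegion and the mask convention, single-channel 8-bit with white = active, are my assumptions):

#include <opencv2/opencv.hpp>

// markerRegion: white inside the area spanned by the marker points
// leftMask / rightMask: white where the left / right image should be used
// frontMask: optional, white where the front image has valid content
cv::Mat makeFrontRegion(const cv::Mat& markerRegion, const cv::Mat& leftMask,
                        const cv::Mat& rightMask, const cv::Mat& frontMask)
{
    cv::Mat neither = ~leftMask & ~rightMask;  // covered by neither side image
    cv::Mat region = markerRegion | neither;
    if (!frontMask.empty())
        region &= frontMask;                   // clip to valid front pixels
    return region;
}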
Now this is the blending mask, with a linearly decreasing alpha value until the distance to the mask region reaches 10 pixels or more:

Now we first create the image covering only the left and right images, copying most parts unblended, but blending the parts not covered by either of the left/right masks with 0.5*left + 0.5*right, giving blendLR:

Finally we blend the front image into that blendLR by computing blended = alpha*front + (1-alpha)*blendLR:
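Put together, the distance-based alpha mask and the two blending steps might look roughly like this (my own reconstruction, not the original code; blendFaces and maxDist = 10 are assumptions, and all images are expected to be aligned 8-bit BGR of the same size):

#include <opencv2/opencv.hpp>

cv::Mat blendFaces(const cv::Mat& front, const cv::Mat& left, const cv::Mat& right,
                   const cv::Mat& frontRegion,  // mask from the previous step
                   const cv::Mat& leftMask, const cv::Mat& rightMask,
                   float maxDist = 10.0f)
{
    // distance of every pixel to the front region (0 inside the region)
    cv::Mat dist;
    cv::distanceTransform(~frontRegion, dist, cv::DIST_L2, 3);  // CV_32F result

    // alpha falls off linearly from 1 (inside) to 0 (maxDist pixels away)
    cv::Mat alpha = 1.0f - dist / maxDist;
    cv::threshold(alpha, alpha, 0.0, 0.0, cv::THRESH_TOZERO);   // clamp negatives to 0

    // blendLR: copy left/right where their masks apply,
    // average them in the gap covered by neither mask
    cv::Mat blendLR = cv::Mat::zeros(front.size(), front.type());
    left.copyTo(blendLR, leftMask);
    right.copyTo(blendLR, rightMask);
    cv::Mat covered = leftMask | rightMask;
    cv::Mat gap;
    cv::bitwise_not(covered, gap);
    cv::Mat avg;
    cv::addWeighted(left, 0.5, right, 0.5, 0.0, avg);
    avg.copyTo(blendLR, gap);

    // blended = alpha*front + (1 - alpha)*blendLR, per pixel
    cv::Mat blended(front.size(), front.type());
    for (int y = 0; y < blended.rows; ++y)
        for (int x = 0; x < blended.cols; ++x)
        {
            float a = alpha.at<float>(y, x);
            const cv::Vec3b& f = front.at<cv::Vec3b>(y, x);
            const cv::Vec3b& b = blendLR.at<cv::Vec3b>(y, x);
            cv::Vec3b& o = blended.at<cv::Vec3b>(y, x);
            for (int c = 0; c < 3; ++c)
                o[c] = cv::saturate_cast<uchar>(a * f[c] + (1.0f - a) * b[c]);
        }
    return blended;
}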
Some improvements might include calculating the maxDist value from some higher-level information (like the size of the overlap, or the distance from the marker triangles to the border of the face). Another improvement would be to not compute 0.5*left + 0.5*right but to do some alpha blending here too, taking more information from the left image the further left we are in the gap. This would reduce the seams in the middle of the image (on the top and bottom of the front image part).

First create a mask image from your input image; this can be done by thresholding the source image and performing bitwise_and between them.
Now copy the addWeighted result to a new Mat using the above mask.
In the code below I haven't used warpPerspective; instead I used an ROI on both images to align them correctly.
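A minimal sketch of that approach (the file names, ROI placement, and threshold value are placeholder assumptions, not values from the answer):

#include <opencv2/opencv.hpp>
#include <algorithm>

int main()
{
    cv::Mat front = cv::imread("front.jpg");  // placeholder file names
    cv::Mat side  = cv::imread("side.jpg");

    // align via ROIs instead of warpPerspective (offsets are placeholders)
    cv::Rect roi(0, 0, std::min(front.cols, side.cols),
                 std::min(front.rows, side.rows));
    cv::Mat a = front(roi);
    cv::Mat b = side(roi);

    // mask from the source image: everything brighter than the threshold
    cv::Mat gray, mask;
    cv::cvtColor(b, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, mask, 10, 255, cv::THRESH_BINARY);

    // blend, then copy only the masked part over the front image
    cv::Mat blended;
    cv::Mat out = a.clone();
    cv::addWeighted(a, 0.5, b, 0.5, 0.0, blended);
    blended.copyTo(out, mask);  // unmasked pixels keep the original front

    cv::imshow("blended", out);
    cv::waitKey();
    return 0;
}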
I chose to define the alpha value depending on the distance to the "object center": the further from the object center, the smaller the alpha value. The "object" is defined by a mask.

I've aligned the images with GIMP (similar to your warpPerspective). They need to be in the same coordinate system, and both images must have the same size.
My input images look like this:
With this blending function (it needs some comments and optimizations, I guess; I'll add them later):
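The original function isn't reproduced here, so the following is only a sketch of the idea, assuming alpha falls off linearly with the distance from the mask's center of mass (the names blendByCenterDistance and maskCenter, and the maxDist falloff radius, are mine):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

cv::Point2f maskCenter(const cv::Mat& mask);  // helper, defined below

// blends foreground over background; alpha depends on the distance to the
// object's center, where the object is given by a binary mask
// (maxDist, the falloff radius, is an assumed parameter)
cv::Mat blendByCenterDistance(const cv::Mat& foreground, const cv::Mat& background,
                              const cv::Mat& objectMask, float maxDist)
{
    cv::Point2f center = maskCenter(objectMask);
    cv::Mat out(background.size(), background.type());
    for (int y = 0; y < out.rows; ++y)
        for (int x = 0; x < out.cols; ++x)
        {
            // alpha = 1 at the center, 0 at maxDist pixels away or further
            float dx = x - center.x;
            float dy = y - center.y;
            float a = std::max(0.0f, 1.0f - std::sqrt(dx * dx + dy * dy) / maxDist);
            const cv::Vec3b& fg = foreground.at<cv::Vec3b>(y, x);
            const cv::Vec3b& bg = background.at<cv::Vec3b>(y, x);
            cv::Vec3b& o = out.at<cv::Vec3b>(y, x);
            for (int c = 0; c < 3; ++c)
                o[c] = cv::saturate_cast<uchar>(a * fg[c] + (1.0f - a) * bg[c]);
        }
    return out;
}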
And the helper function:
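Again a reconstruction rather than the original helper: it computes the mask's center of mass via image moments:

#include <opencv2/opencv.hpp>

// center of mass of a binary mask, via image moments
cv::Point2f maskCenter(const cv::Mat& mask)
{
    cv::Moments m = cv::moments(mask, /*binaryImage=*/true);
    if (m.m00 <= 0.0)  // empty mask: fall back to the image center
        return cv::Point2f(mask.cols * 0.5f, mask.rows * 0.5f);
    return cv::Point2f((float)(m.m10 / m.m00), (float)(m.m01 / m.m00));
}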
With the result:
Edit: forgot a function ;) Edit: now keeping the original background.