How to combine two UIImages of different sizes with the correct aspect ratio

Posted 2020-05-01 07:02

My current method of combining UIImages comes from this answer on SO, as well as this popular question on resizing a UIImage while preserving its aspect ratio. My current issue is the following:

I have a UIImage called pictureImage, taken with the camera, that comes out at the standard dimensions of 2448×3264. I also have a UIImageView called self.annotateView with a frame of 320×568, in which the user can draw and annotate the picture. When I present pictureImage in a UIImageView, I set the content mode to Aspect Fill so that it takes up the whole iPhone screen. Of course this means parts of pictureImage are cut off on both the left and right (in fact, 304 pixels on each side), but this is intended.

My problem is that when I combine the UIImages pictureImage and annotateView.image into a new image of dimensions 320×568, the combined image alters the original aspect ratio of annotateView.image by stretching it horizontally. This is strange, since the new dimensions are exactly annotateView.image's original dimensions.
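As a sanity check on the 304-pixel figure, the Aspect Fill crop can be computed directly from the two sizes. A quick sketch using the question's numbers (plain Doubles stand in for CGFloat):

```swift
// Aspect Fill of a 2448x3264 image into a 320x568 view (the question's numbers).
let imageWidth = 2448.0, imageHeight = 3264.0
let viewWidth = 320.0, viewHeight = 568.0

// Aspect Fill scales by whichever ratio is larger; here the height governs.
let scale = max(viewWidth / imageWidth, viewHeight / imageHeight) // 568/3264

// Width of the image region actually visible, measured in image pixels.
let visibleWidth = viewWidth / scale

// Pixels cropped off each side of the image.
let croppedPerSide = (imageWidth - visibleWidth) / 2
print(croppedPerSide) // roughly 304.6
```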

Here is what the outcome looks like -

Before combining the UIImages


After combining the images


Note that the underlying picture is not stretched. However, annotateView.image is stretched only horizontally, not vertically.

Here is my code for merging the UIImages.

//Note: self.firstTakenImage is set to 320*568

CGSize newSize = CGSizeMake(self.firstTakenImage.frame.size.width, self.firstTakenImage.frame.size.height);
UIGraphicsBeginImageContext(newSize);
[self.firstTakenImage.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
[self.drawView.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *combinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

4 Answers
贼婆χ · 2020-05-01 07:25
  • Supports both portrait and landscape images
  • Drawing and other subviews can be merged; in my case I'm adding a label to the drawing
  {

    CGSize fullSize = getImageForEdit.size;
    // AVMakeRectWithAspectRatioInsideRect comes from AVFoundation (AVUtilities.h)
    CGSize sizeInView = AVMakeRectWithAspectRatioInsideRect(imgViewFake.image.size, imgViewFake.bounds).size;
    CGFloat orgScale = fullSize.width/sizeInView.width;
    CGSize newSize = CGSizeMake(orgScale * img.image.size.width, orgScale * img.image.size.height);
    if(newSize.width <= fullSize.width && newSize.height <= fullSize.height){
        newSize = fullSize;
    }
    CGRect offsetRect;
    if (getImageForEdit.size.height > getImageForEdit.size.width){
        CGFloat scale = newSize.height/fullSize.height;
        CGFloat offset = (newSize.width - fullSize.width*scale)/2;
        offsetRect = CGRectMake(offset, 0, newSize.width-offset*2, newSize.height);
    }
    else{
        CGFloat scale = newSize.width/fullSize.width;
        CGFloat offset = (newSize.height - fullSize.height*scale)/2;
        offsetRect = CGRectMake(0, offset, newSize.width, newSize.height-offset*2);
    }
    UIGraphicsBeginImageContextWithOptions(newSize, NO, getImageForEdit.scale);
    [getImageForEdit drawAtPoint:offsetRect.origin];
    //        [img.image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
    CGFloat oldScale = img.contentScaleFactor;
    img.contentScaleFactor = getImageForEdit.scale;
    [img drawViewHierarchyInRect:CGRectMake(0, 0, newSize.width, newSize.height) afterScreenUpdates:YES];
    img.contentScaleFactor = oldScale;
    UIImage *combImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    imageData = UIImageJPEGRepresentation(combImage, 1);
}
forever°为你锁心 · 2020-05-01 07:26

When you do the drawInRect:, you have to redo the scaling and centering that the system did for you with Aspect Fill, so that it matches the original presentation. Something like this:

CGSize fullSize = self.pictureView.image.size;
CGSize newSize = self.outputView.frame.size;
CGFloat scale = newSize.height/fullSize.height;
CGFloat offset = (newSize.width - fullSize.width*scale)/2;
CGRect offsetRect = CGRectMake(offset, 0, newSize.width-offset*2, newSize.height);
NSLog(@"offset = %@",NSStringFromCGRect(offsetRect));

UIGraphicsBeginImageContext(newSize);
[self.pictureView.image drawInRect:offsetRect];
[self.annotateView.image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
UIImage *combImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

self.outputView.image = combImage;
来，给爷笑一个 · 2020-05-01 07:26

When you call [self.drawView.image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)]; you need to modify the frame to account for the difference in aspect ratio that you described (because you are chopping some of the image off on both sides).

That means modifying the x position and the width that you are drawing the annotation image.

The modification is based on the difference between the two rects when scaled to the same height. You say this is 304, so you can initially set x to 304 and the width to newSize.width - 608 to test. But really the difference should be calculated...
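That calculation can be sketched as follows (a hypothetical helper, not from any of the answers; plain Doubles stand in for CGFloat):

```swift
// Compute the horizontal overflow of an aspect-filled image, per side,
// in view points; this avoids hard-coding the 304/608 numbers.
func aspectFillCropPerSide(imageWidth: Double, imageHeight: Double,
                           viewWidth: Double, viewHeight: Double) -> Double {
    // Aspect Fill uses the larger of the two scale factors.
    let scale = max(viewWidth / imageWidth, viewHeight / imageHeight)
    // Scaled image width minus visible width, split over both sides.
    return (imageWidth * scale - viewWidth) / 2
}
```

For the question's sizes (2448×3264 into a 320×568 view) this comes out to about 53 view points per side, which is the 304 image pixels converted by the image-to-view scale factor.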

放荡不羁爱自由 · 2020-05-01 07:33

Mackworth's answer in Swift 3.x

    let fullSize: CGSize = pictureView.image!.size
    let newSize: CGSize = outputView.frame.size
    let scale: CGFloat = newSize.height/fullSize.height
    let offset: CGFloat = (newSize.width - fullSize.width*scale)/2
    let offsetRect = CGRect(x: offset, y: 0, width: newSize.width - offset*2, height: newSize.height)
    print(NSStringFromCGRect(offsetRect))

    UIGraphicsBeginImageContext(newSize)
    pictureView.image?.draw(in: offsetRect)
    annotateView.image?.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
    let combImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return combImage