Is there a gesture recognizer that handles both pinch and pan together?

Posted 2019-09-20 23:46

So I'm working with iOS 4.2, adding zoom and pan support to my app. I have implemented an instance of UIPinchGestureRecognizer and an instance of UIPanGestureRecognizer. It seems to me that only one of them recognizes a gesture at a time. In particular, the latter only reacts when one finger is down, while the former reacts once a second finger is present. That is mostly fine, but it has some side effects that I think make for a poor user experience.

When you put two fingers down and then move one of them, the image expands (zooms) like it should, but the pixels that were under the fingers are no longer under them. The image zooms from its center rather than from the midpoint between the two fingers, and that center point itself moves around. I would like the motion of that midpoint to determine the overall pan of the image.

Do nearly all iOS applications have this same behavior, where the image zooms in or out around its center rather than tracking the pixels under the fingers?

It seems to me that creating a custom gesture recognizer is the right design approach to this problem, but it also seems like someone would already have created such a recognizer for free commercial download and use. Is there such a UIGestureRecognizer?

Answer 1:

Sorry for the rush, but this is code I used in one of my demo apps: it lets you pinch-zoom and pan at the same time, without using a scroll view.

Don't forget to conform to the UIGestureRecognizerDelegate protocol.

If you can't get pinch and pan at the same time, it's probably because you are missing this method:

-(BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    return YES;
}

Here is the full source code:

#import "ViewController.h"
#import <QuartzCore/QuartzCore.h>

@interface ViewController ()
{
    // Instance variables used below (in the original project these were
    // presumably declared in ViewController.h):
    BOOL isEditing;
    UIImageView *photoView;
    UIImageView *maskView;
    UIImageView *displayImage;
    UIButton *btnEdit;
}

@end

@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.

    isEditing = false;

    photoView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)];
    [photoView setImage:[UIImage imageNamed:@"photo.png"]];
    photoView.hidden = YES;

    maskView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)];
    [maskView setImage:[UIImage imageNamed:@"maskguide.png"]];
    maskView.hidden = YES;

    displayImage = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)];

    UIPanGestureRecognizer *panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
    UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];

    [panGesture setDelegate:self];
    [pinchGesture setDelegate:self];

    [photoView addGestureRecognizer:panGesture];
    [photoView addGestureRecognizer:pinchGesture];
    [photoView setUserInteractionEnabled:YES];

    [panGesture release];
    [pinchGesture release];

    btnEdit = [[UIButton alloc] initWithFrame:CGRectMake(60, 400, 200, 50)];
    [btnEdit setBackgroundColor:[UIColor blackColor]];
    [btnEdit setTitle:@"Start Editing" forState:UIControlStateNormal];
    [btnEdit addTarget:self action:@selector(toggleEditing) forControlEvents:UIControlEventTouchUpInside];

    [[self view] addSubview:displayImage];
    [[self view] addSubview:photoView];
    [[self view] addSubview:maskView];
    [[self view] addSubview:btnEdit];

    [self updateMaskedImage];
}

- (void)viewDidUnload
{
    [super viewDidUnload];
    // Release any retained subviews of the main view.
}

- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
    return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
}

-(void)dealloc
{
    [btnEdit release];

    [super dealloc];
}

#pragma mark -
#pragma mark Update Masked Image Method
#pragma mark -

-(void)updateMaskedImage
{
    maskView.hidden = YES;

    UIImage *finalImage = 
    [self maskImage:[self captureView:self.view]
           withMask:[UIImage imageNamed:@"mask.png"]];


    maskView.hidden = NO;

    //UIImage *finalImage = [self maskImage:photoView.image withMask:[UIImage imageNamed:@"mask.png"]];

    [displayImage setImage:finalImage];
}

- (UIImage*) maskImage:(UIImage *)image withMask:(UIImage *)maskImage {

    CGImageRef maskRef = maskImage.CGImage; 

    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);

    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    CGImageRelease(mask);

    UIImage *result = [UIImage imageWithCGImage:masked];
    CGImageRelease(masked);
    return result;
}

#pragma mark -
#pragma mark Touches Began
#pragma mark -

// adjusts the editing flag to make dragging and drop work
-(void)toggleEditing
{
    if(!isEditing)
    {
        isEditing = true;

        NSLog(@"editing...");

        [btnEdit setTitle:@"Stop Editing" forState:UIControlStateNormal];

        displayImage.hidden = YES;
        photoView.hidden = NO;
        maskView.hidden = NO;
    }
    else
    {
        isEditing = false;

        [self updateMaskedImage];

        NSLog(@"stopped editting");

        [btnEdit setTitle:@"Start Editing" forState:UIControlStateNormal];

        displayImage.hidden = NO;
        photoView.hidden = YES;
        maskView.hidden = YES;
    }
}

/*
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{   
    if(isEditing)
    {
        UITouch *finger = [touches anyObject];
        CGPoint currentPosition = [finger locationInView:self.view];

        //[maskView setCenter:currentPosition];
        //[photoView setCenter:currentPosition];
        if([touches count] == 1)
        {
            [photoView setCenter:currentPosition];
        }
        else if([touches count] == 2)
        {

        }
    }
}
*/

-(void)handlePan:(UIPanGestureRecognizer *)recognizer
{    
    CGPoint translation = [recognizer translationInView:self.view];
    recognizer.view.center = CGPointMake(recognizer.view.center.x + translation.x, 
                                         recognizer.view.center.y + translation.y);
    [recognizer setTranslation:CGPointMake(0, 0) inView:self.view];
}

-(void)handlePinch:(UIPinchGestureRecognizer *)recognizer
{    
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1;
}

-(BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    return YES;
}

#pragma mark -
#pragma mark Capture Screen Function
#pragma mark -

- (UIImage*)captureView:(UIView *)yourView 
{
    UIGraphicsBeginImageContextWithOptions(yourView.bounds.size, yourView.opaque, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [yourView.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

#pragma mark -

@end


Answer 2:

So, since no one offered me a better solution, I created a custom gesture recognizer that achieves the desired effect. Below is the key code snippet. It lets the custom recognizer report where the view should be repositioned and what its new scale should be, with the centroid of the pinch acting as the center of both the pan and the zoom. That is what keeps the pixels under the fingers under the fingers at all times, unless the fingers appear to rotate, which is not supported, and I can't stop users from making such a gesture. This recognizer pans and zooms simultaneously with two fingers. I still need to add support for one-finger panning, e.g. after one of the two fingers is lifted.

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // We can only process if we have two fingers down...
    if ( FirstFinger == nil || SecondFinger == nil )
        return;

    // We do not attempt to determine if the first finger, second finger, or
    // both fingers are the reason for this method call. For this reason, we
    // do not know if either is stale or updated, and thus we cannot rely
    // upon the UITouch's previousLocationInView method. Therefore, we need to
    // cache the latest UITouch's locationInView information each pass.

    // Break down the previous finger coordinates...
    float A0x = PreviousFirstFinger.x;
    float A0y = PreviousFirstFinger.y;
    float A1x = PreviousSecondFinger.x;
    float A1y = PreviousSecondFinger.y;
    // Update our cache with the current fingers for next pass through here...
    PreviousFirstFinger = [FirstFinger locationInView:nil];
    PreviousSecondFinger = [SecondFinger locationInView:nil];
    // Break down the current finger coordinates...
    float B0x = PreviousFirstFinger.x;
    float B0y = PreviousFirstFinger.y;
    float B1x = PreviousSecondFinger.x;
    float B1y = PreviousSecondFinger.y;


    // Calculate the zoom resulting from the two fingers moving toward or away from each other...
    float OldScale = Scale;
    Scale *= sqrt((B0x-B1x)*(B0x-B1x) + (B0y-B1y)*(B0y-B1y))/sqrt((A0x-A1x)*(A0x-A1x) + (A0y-A1y)*(A0y-A1y));

    // Calculate the old and new centroids so that we can compare the centroid's movement...
    CGPoint OldCentroid = { (A0x + A1x)/2, (A0y + A1y)/2 };
    CGPoint NewCentroid = { (B0x + B1x)/2, (B0y + B1y)/2 };    

    // Calculate the pan values to apply to the view so that the combination of zoom and pan
    // appear to apply to the centroid rather than the center of the view...
    Center.x = NewCentroid.x + (Scale/OldScale)*(self.view.center.x - OldCentroid.x);
    Center.y = NewCentroid.y + (Scale/OldScale)*(self.view.center.y - OldCentroid.y);
}

The view controller handles the event by assigning the new scale and center to the view in question. I noticed that other gesture recognizers tend to let the controller do some of the math, but I have tried to do all of the math inside the recognizer.

-(void)handlePixelTrack:(PixelTrackGestureRecognizer*)sender
{
    sender.view.center= sender.Center;
    sender.view.transform = CGAffineTransformMakeScale(sender.Scale, sender.Scale);
}


Answer 3:

The simple solution is to put your view inside a scroll view; then you get pinch and pan for free. Otherwise, you can set the delegate of both the pan and pinch gestures to self and return YES from shouldRecognizeSimultaneouslyWithGestureRecognizer:. As for zooming around the center of the user's fingers, I never solved that one properly, but it involves manipulating the anchorPoint of the view's layer before changing its scale (I think).



Source: Is there a gesture recognizer that handles both pinch and pan together?