Looking at the Touches example from Apple's documentation, there is this method:
// scale and rotation transforms are applied relative to the layer's anchor point
// this method moves a gesture recognizer's view's anchor point between the user's fingers
- (void)adjustAnchorPointForGestureRecognizer:(UIGestureRecognizer *)gestureRecognizer {
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
        UIView *piece = gestureRecognizer.view;
        CGPoint locationInView = [gestureRecognizer locationInView:piece];
        CGPoint locationInSuperview = [gestureRecognizer locationInView:piece.superview];

        piece.layer.anchorPoint = CGPointMake(locationInView.x / piece.bounds.size.width,
                                              locationInView.y / piece.bounds.size.height);
        piece.center = locationInSuperview;
    }
}
First question: can someone explain the logic of setting the anchor point in the subview and changing the center of the superview (i.e., why is this done)?
Lastly, how does the math work for the anchorPoint statement? Say you have a view with bounds of (500, 500), and you touch at (100, 100) with one finger and (500, 500) with the other. In this box the normal anchor point is (250, 250). Now what is it? (I have no clue.)
Thanks!
The center property of a view is a mere reflection of the position property of its backing layer. Surprisingly, what this means is that center need not sit at the center of your view. Where position lies within the bounds is determined by the anchorPoint, which takes values anywhere between (0, 0) and (1, 1); think of it as a normalized indicator of where position falls within the bounds. If you change the anchorPoint, the position stays put with respect to the superlayer/superview, so it is the frame that shifts instead. To readjust position so that the frame of the view doesn't move, one can manipulate the center.
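As a rough sketch of that relationship (the view and the numbers here are mine, just for illustration): the frame origin works out to position minus anchorPoint times the bounds size, so moving the anchor point alone slides the frame, and moving the center slides it back.

// Hypothetical 200x200 view, only to show the arithmetic.
UIView *piece = [[UIView alloc] initWithFrame:CGRectMake(50, 50, 200, 200)];
// frame.origin.x = position.x - anchorPoint.x * bounds.width
//            50  =    150     -     0.5      *   200

// Move the anchor point: position (and therefore center) stays at (150, 150),
// so the frame slides to origin 150 - 0.75 * 200 = 0.
piece.layer.anchorPoint = CGPointMake(0.75, 0.75);

// Put the frame back by moving center/position to where the new anchor point
// sits in the superview: 50 + 0.75 * 200 = 200.
piece.center = CGPointMake(200, 200);  // frame origin is (50, 50) again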
Imagine the view in its original state, with the position/anchorPoint X at its middle and the user's touch O somewhere else inside it. We want X to move to the point where the user has touched, because all scaling and rotation happen about the position/anchorPoint. Moving the anchor point there shifts the frame, so to bring the frame back to its original place we set the "center" of the view to the touch location, and the view readjusts its frame back. Now when the user rotates or scales, it happens as if the axis were at the touch point rather than the true center of the view.
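For context, this is the sort of pinch handler the adjust method is meant to be paired with (the handler name and wiring here are mine, not from the question): fix the anchor first, then apply the transform.

- (void)handlePinch:(UIPinchGestureRecognizer *)gestureRecognizer {
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan ||
        gestureRecognizer.state == UIGestureRecognizerStateChanged) {
        // Scaling is now applied about the anchor point under the fingers.
        gestureRecognizer.view.transform =
            CGAffineTransformScale(gestureRecognizer.view.transform,
                                   gestureRecognizer.scale,
                                   gestureRecognizer.scale);
        gestureRecognizer.scale = 1;  // reset so each callback applies only the delta
    }
}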
In your example, the gesture's location in the view ends up being the average of the two touches, i.e. (300, 300), which means the anchorPoint would be (0.6, 0.6); in response the frame moves up. To readjust, moving the center to the touch location moves the frame back down.
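To put numbers on that (assuming the 500x500 view's frame starts at (0, 0) in its superview, which the question doesn't say):

UIView *piece = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 500, 500)];
// center/position starts at (250, 250) with the default anchorPoint (0.5, 0.5).

piece.layer.anchorPoint = CGPointMake(0.6, 0.6);
// position is still (250, 250), so frame.origin = 250 - 0.6 * 500 = -50:
// the frame has shifted up and to the left by 50 points.

piece.center = CGPointMake(300, 300);  // the gesture's location in the superview
// frame.origin = 300 - 0.6 * 500 = 0: the frame is back where it started.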
This code isn't changing the center of the superview. It's changing the center of the gesture recognizer's view to be the location of the gesture (coordinates expressed in the superview's coordinate system). That statement is simply moving the view around in its superview while following the location of the gesture. Setting center can be thought of as a shorthand way of setting frame.

As for the anchor point, it affects how scale and rotation transforms are applied to the layer. For example, a layer will rotate using that anchor point as its axis of rotation. When scaling, all points are offset around the anchor point, which doesn't move itself.
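A tiny sketch of that (the view here is hypothetical): rotating about a corner instead of the middle by moving the anchor point and compensating the center so the frame doesn't jump before the rotation is applied.

UIView *card = [[UIView alloc] initWithFrame:CGRectMake(100, 100, 200, 300)];
card.layer.anchorPoint = CGPointMake(0.0, 0.0);           // top-left corner
card.center = CGPointMake(100, 100);                      // keeps the frame at (100, 100)
card.transform = CGAffineTransformMakeRotation(M_PI_4);   // rotates about that corner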
The key concept to note about the anchorPoint property is that its values range from 0 to 1 in each dimension, no matter what the actual size of the layer is. So if you have a view with bounds of (500, 500) and you touch at (100, 100) and (500, 500), the location in the view of the gesture as a whole will be the average, (300, 300), and the anchor point will be (300/500, 300/500) = (0.6, 0.6).