I have a UIView on which I am loading my image view as a subview. The image shows when I don't set the anchor point, but whenever I set the anchor point the image does not show. I have added the QuartzCore framework as well.
Here is my code:
CGRect apprect = CGRectMake(0, 0, 1024, 768);
UIView *containerView = [[UIView alloc] initWithFrame:apprect];
containerView.backgroundColor = [UIColor whiteColor];
[self.view addSubview:containerView];

handleView1 = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"large.gif"]];
[handleView1 setAlpha:1];
//[handleView1 setFrame:CGRectMake(390, 172.00, 56.00, 316.00)];
handleView1.userInteractionEnabled = YES;
handleView1.layer.anchorPoint = CGPointMake(490.0, 172.0);
[containerView addSubview:handleView1];
The problem is the values you are using for the anchorPoint. It is not set in points the way a frame's position is. To quote Apple:
The anchorPoint property is a CGPoint that specifies a location within the bounds of a layer that corresponds with the position coordinate. The anchor point specifies how the bounds are positioned relative to the position property, as well as serving as the point that transforms are applied around. It is expressed in the unit coordinate system; the (0.0,0.0) value is located closest to the layer’s origin and (1.0,1.0) is located in the opposite corner.
Have a look at Layer Geometry and Transforms in the Core Animation Programming Guide for more details.
First set the frame of your image view and then set the anchorPoint, like this:
[handleView1.layer setAnchorPoint:CGPointMake(1, 1)];
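Putting it together, a minimal sketch of the code from the question with the frame set first and a unit-coordinate anchor point (the frame values are taken from the commented-out line above):

handleView1 = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"large.gif"]];
[handleView1 setFrame:CGRectMake(390.0, 172.0, 56.0, 316.0)]; // set the frame first
handleView1.userInteractionEnabled = YES;
// anchorPoint is in the unit coordinate system: (1, 1) is the bottom-right corner
[handleView1.layer setAnchorPoint:CGPointMake(1.0, 1.0)];
[containerView addSubview:handleView1];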
I am looking to alter an existing iOS application so that instead of using multi-touch gestures to size and rotate images (two-finger pinch/zoom and twist), I want there to be a handle on all four corners of the image and one at the top so that the user can grab one of the handles to re-size or rotate.
I have been researching the topic but am unable to find anything pointing me in the right direction.
See this image for an example of what I'm talking about:
I'm assuming that because you're starting with an app that already has working pinch-zoom and twist gestures, your question is merely how to show those translucent circles for the handles. I'd be inclined to create a UIView subclass that draws the circle, like so:
@implementation HandleView

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        self.backgroundColor = [UIColor clearColor];
        self.userInteractionEnabled = YES;
    }
    return self;
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddEllipseInRect(context, rect);
    CGContextSetFillColorWithColor(context, [[UIColor colorWithWhite:1.0 alpha:0.5] CGColor]);    // white translucent
    CGContextSetStrokeColorWithColor(context, [[UIColor colorWithWhite:0.25 alpha:0.5] CGColor]); // dark gray translucent
    CGContextSetLineWidth(context, 1.0);
    CGContextDrawPath(context, kCGPathEOFillStroke); // draw both fill and stroke
}

@end
You could achieve the same effect with CAShapeLayer layers, too, if you didn't want to write your own drawRect with Core Graphics calls like I did above. But the idea would be the same.
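If you did go the CAShapeLayer route, a minimal sketch might look like this (the 50-point size and the colors are assumptions mirroring the drawRect version above):

CAShapeLayer *circleLayer = [CAShapeLayer layer];
circleLayer.path = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, 50, 50)].CGPath;
circleLayer.fillColor = [UIColor colorWithWhite:1.0 alpha:0.5].CGColor;    // white translucent
circleLayer.strokeColor = [UIColor colorWithWhite:0.25 alpha:0.5].CGColor; // dark gray translucent
circleLayer.lineWidth = 1.0;
[handleView.layer addSublayer:circleLayer]; // handleView being a plain UIView in this variant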
Your view controller can then add those five views and add gesture recognizers for them, like so:
HandleView *handleView = [[HandleView alloc] initWithFrame:CGRectMake(50, 50, 50, 50)];
[self.view addSubview:handleView];
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
[handleView addGestureRecognizer:pan];
Just repeat that for each of the handles you want on the screen, and then write your gesture recognizer to do whatever you want (e.g. move the bounding rectangle, move the appropriate handles, etc.).
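A minimal sketch of what such a pan handler might look like (the actual resizing/rotating math is left out, since it depends on how the app already tracks its image view):

- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    UIView *handle = gesture.view;
    CGPoint translation = [gesture translationInView:handle.superview];

    // Drag the handle itself, then reset the translation so deltas stay incremental.
    handle.center = CGPointMake(handle.center.x + translation.x,
                                handle.center.y + translation.y);
    [gesture setTranslation:CGPointZero inView:handle.superview];

    // ... update the image view's frame/transform from the new handle position ...
}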
Sounds fairly straightforward. The view hierarchy of one possible solution as ASCII art:
Container (ScalingRotatingView)
|
+----imageView (UIImageView)
|
+----upperLeftScalingHandle (HandleView)
|
+----upperRightScalingHandle (HandleView)
|
+----lowerLeftScalingHandle (HandleView)
|
+----lowerRightScalingHandle (HandleView)
|
+----rotatingHandle (HandleView)
All instances of HandleView would have a pan gesture recognizer that feeds one of two methods in your controller:
- -updateForScaleGesture:, where you’d use the gesture recognizer’s -translationInView: to compute and store the new scale, before updating the frames of all views appropriately and resetting the translation to 0, and
- -updateForRotationGesture:, where you’d use the gesture recognizer’s -translationInView: to compute and store the new angle, before updating the frames and resetting the recognizer’s translation.
For both calculations you need the translation in the coordinate system of the image view. For the scaling part, you can then simply divide the new edge lengths by the natural image dimensions; for the rotation you can use the approximation that only the x component of the translation matters:
sin(x) ≈ x (for small values of x)
Oh, and it sure helps if the anchor point of your image view sits at its center…
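As a rough sketch of the rotation side under those assumptions (the angle property and view names here are hypothetical):

- (void)updateForRotationGesture:(UIPanGestureRecognizer *)gesture
{
    CGPoint translation = [gesture translationInView:self.imageView];

    // Small-angle approximation: treat the x component of the pan, scaled by the
    // handle's distance from the center, as the change in angle (sin(x) ≈ x).
    CGFloat radius = CGRectGetHeight(self.imageView.bounds) / 2.0;
    self.angle += translation.x / radius;

    self.containerView.transform = CGAffineTransformMakeRotation(self.angle);
    [gesture setTranslation:CGPointZero inView:self.imageView];
}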
In GPUImageGaussianSelectiveBlurFilter, how do I set the point that is not blurred? I tried setting the excludeCirclePoint property, but everything gets blurred. I want to implement a touch gesture so that the touched point stays sharp while the rest of the image gets blurred.
You have to assign a CGPoint to the filter's excludeCirclePoint property, where CGPointMake(0, 0) is the top-left corner of the image and CGPointMake(1, 1) is the bottom-right corner.
Something like:
GPUImageGaussianSelectiveBlurFilter *filter = [[GPUImageGaussianSelectiveBlurFilter alloc] init];
filter.excludeCirclePoint = CGPointMake(0.5, 0.5); // center
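To drive that from a touch, one sketch (assuming the filtered output is shown in a GPUImageView property called imageView, and the filter and source picture above are kept in properties) is to convert the tap location into that normalized coordinate space:

- (void)handleTap:(UITapGestureRecognizer *)gesture
{
    CGPoint location = [gesture locationInView:self.imageView];
    // Map the touch to the filter's normalized (0..1) coordinate space.
    self.filter.excludeCirclePoint = CGPointMake(location.x / self.imageView.bounds.size.width,
                                                 location.y / self.imageView.bounds.size.height);
    [self.sourcePicture processImage]; // re-run the filter chain on the still image
}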
I made two instances of UILabel and added them to my ViewController's view.
And then I changed the anchorPoint of each from 0.5 to 1.0 (x and y).
Next, I reset the frame of uiLabel2 to the frame I created it with: (100, 100, 100, 20).
When I run the app, uiLabel1 and uiLabel2 show at different positions. Why?
UILabel *uiLabel1 = [[[UILabel alloc] initWithFrame:CGRectMake(100, 100, 100, 20)] autorelease];
uiLabel1.text = @"UILabel1";
uiLabel1.layer.anchorPoint = CGPointMake(1, 1);
UILabel *uiLabel2 = [[[UILabel alloc] initWithFrame:CGRectMake(100, 100, 100, 20)] autorelease];
uiLabel2.text = @"UILabel2";
uiLabel2.layer.anchorPoint = CGPointMake(1, 1);
uiLabel2.frame = CGRectMake(100, 100, 100, 20);
[self.view addSubview:uiLabel1];
[self.view addSubview:uiLabel2];
A CALayer has four properties that determine where it appears in its superlayer:
- position (which is the same as the view's center property)
- bounds (actually only the size part of bounds)
- anchorPoint
- transform
You will notice that frame is not one of those properties. The frame property is actually derived from those properties. When you set the frame property, the layer actually changes its center and bounds.size based on the frame you provide and the layer's existing anchorPoint.
You create the first layer (by creating the first UILabel, which is a subclass of UIView, and every UIView has a layer), giving it a frame of 100,100,100,20. The layer has a default anchor point of 0.5,0.5. So it computes its bounds as 0,0,100,20 and its position as 150,110.
Then you change its anchor point to 1,1. Since you don't change the layer's position or bounds directly, and you don't change them indirectly by setting its frame, the layer moves so that its new anchor point is at its (unchanged) position in its superlayer.
If you ask for the layer's (or view's) frame now, you will get 50,90,100,20.
When you create the second layer (for the second UILabel), you set its frame after changing its anchor point. So the layer computes a new position and bounds based on the frame you provide and its existing anchor point.
If you ask the layer (or view) for its frame now, you will get the frame you set, 100,100,100,20. But if you ask for its position (or the view's center), you will get 200,120.
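To make the arithmetic explicit, the frame is derived from the other properties roughly like this (a sketch of the relationship, not the actual CALayer implementation):

// frame.origin = position - anchorPoint * bounds.size; frame.size = bounds.size
CGRect frameFor(CGPoint position, CGPoint anchorPoint, CGRect bounds)
{
    return CGRectMake(position.x - anchorPoint.x * bounds.size.width,
                      position.y - anchorPoint.y * bounds.size.height,
                      bounds.size.width,
                      bounds.size.height);
}

// First label:  position (150, 110), anchor (1, 1), bounds 100x20 -> frame (50, 90, 100, 20)
// Second label: frame (100, 100, 100, 20) set with anchor (1, 1)  -> position (200, 120)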
Well, that is exactly what an anchor point does. Before changing the anchor points, you were setting the frame based on the center of the label. After that, you are setting the frame based on the bottom-right corner.
Since you reset the frame for only one label, that one adjusted its frame based on the new anchor point and the other one stayed at the old position.
If you want them to be at the same point, you need to reset the frame for both of them after editing the anchor point, or not change the anchor point at all.
This guide explains more about anchor points.
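If you ever want to change an anchor point without the view appearing to move, one common approach is to shift the layer's position by the same amount the anchor point moved; a minimal sketch (the helper name is made up here):

// Hypothetical helper: change the anchor point while keeping the view in place on screen.
static void SetAnchorPointPreservingFrame(UIView *view, CGPoint anchorPoint)
{
    CALayer *layer = view.layer;
    CGSize size = layer.bounds.size;

    CGPoint position = layer.position;
    position.x += (anchorPoint.x - layer.anchorPoint.x) * size.width;
    position.y += (anchorPoint.y - layer.anchorPoint.y) * size.height;

    layer.anchorPoint = anchorPoint;
    layer.position = position;
}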
I have a UIView that may have scale and/or rotation transforms applied to it. My controller creates a new controller and passes the view to it. The new controller creates a new view and tries to place it in the same location and rotation as the passed view. It sets the location and size by converting the original view's frame:
CGRect frame = [self.view convertRect:fromView.frame fromView:fromView.superview];
ImageScrollView *isv = [[ImageScrollView alloc]initWithFrame:frame image:image];
This works great, with the scaled size and location copied perfectly. However, if there is a rotation transform applied to fromView, it does not transfer.
So I added this line:
isv.transform = fromView.transform;
That nicely transfers the rotation, but also the scale transform. The result is that the scale transform is effectively applied twice, so the resulting view is much too large.
So how do I go about transferring the location (origin), scale, and rotation from one view to another, without doubling the scale?
Edit
Here is a more complete code example, where the original UIImageView (fromView) is being used to size and position a UIScrollView (the ImageScrollView subclass):
CGRect frame = [self.view convertRect:fromView.frame fromView:fromView.superview];
frame.origin.y += pagingScrollView.frame.origin.y;
ImageScrollView *isv = [[ImageScrollView alloc]initWithFrame:frame image:image];
isv.layer.anchorPoint = fromView.layer.anchorPoint;
isv.transform = fromView.transform;
isv.bounds = fromView.bounds;
isv.center = [self.view convertPoint:fromView.center fromView:fromView.superview];
[self.view insertSubview:isv belowSubview:captionView];
And here is the entirety of the configuration in ImageScrollView:
- (id)initWithFrame:(CGRect)frame image:(UIImage *)image {
    if (self = [self initWithFrame:frame]) {
        CGRect rect = CGRectMake(0, 0, frame.size.width, frame.size.height);
        imageLoaded = YES;
        imageView = [[UIImageView alloc] initWithImage:image];
        imageView.frame = rect;
        imageView.contentMode = UIViewContentModeScaleAspectFill;
        imageView.clipsToBounds = YES;
        [self addSubview:imageView];
    }
    return self;
}
It looks as though the transform causes the imageView to scale up too large, as you can see in this ugly video.
Copy the first view's bounds, center, and transform to the second view.
Your code doesn't work because frame is a value that is derived from the bounds, center, and transform. The setter for frame tries to do the right thing by reversing the process, but it can't always work correctly when a non-identity transform is set.
The documentation is pretty clear on this point:
If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
...
If the transform property contains a non-identity transform, the value of the frame property is undefined and should not be modified. In that case, you can reposition the view using the center property and adjust the size using the bounds property instead.
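In terms of the code in the question, a sketch of that approach (skipping the frame entirely, and reusing the ImageScrollView initializer and captionView from the question) might be:

ImageScrollView *isv = [[ImageScrollView alloc] initWithFrame:fromView.bounds image:image];
// frame is undefined under a non-identity transform, so copy bounds, center, and transform instead.
isv.bounds = fromView.bounds;
isv.center = [self.view convertPoint:fromView.center fromView:fromView.superview];
isv.transform = fromView.transform;
[self.view insertSubview:isv belowSubview:captionView];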
Let's say viewA is the first view, which has the frame and transform, and you want to pass those values to viewB.
You need to get the original frame of viewA and pass it to viewB before passing the transform. Otherwise, viewB's frame will change once more when you apply the transform.
To get the original frame, temporarily set viewA.transform to CGAffineTransformIdentity.
Here is the code:
CGAffineTransform originalTransform = viewA.transform; // Remember old transform
viewA.transform = CGAffineTransformIdentity; // Remove transform so that you can get original frame
viewB.frame = viewA.frame; // Pass originalFrame into viewB
viewA.transform = originalTransform; // Restore transform into viewA
viewB.transform = originalTransform; // At this step, the transform will change viewB's frame to match viewA's
After that, viewA and viewB will look the same in their superview.
I'm placing a UIImageView inside of a UIScrollView, basing my code off of the answer in this question. The problem I'm having is that there is a significant amount of white space to the bottom and right, and I can't scroll to some of the image in the top and left. I figure this is due to me incorrectly setting the contentSize of the scrollView. Here's the relevant code:
- (void)viewDidLoad
{
    [super viewDidLoad];
    _imageView.image = _image;
    _imageView.bounds = CGRectMake(0, 0, _imageView.image.size.width, _imageView.image.size.height);
    _scroller.contentSize = _imageView.image.size;
}
The view controller I'm in has three properties, a UIScrollView (_scroller), a UIImageView (_imageView), and a UIImage (_image).
You're setting the UIImageView's bounds property. You want to be setting its frame property instead. Setting the bounds will resize it around its center point (assuming you haven't changed the underlying CALayer's anchorPoint property), which is causing the frame origin to end up negative, which is why you can't see the upper-left.
_imageView.frame = CGRectMake(0, 0, _imageView.image.size.width, _imageView.image.size.height);
Alternate syntax:
_imageView.frame = (CGRect){CGPointZero, _imageView.image.size};
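Putting the fix back into the original method, a sketch (assuming the same _scroller, _imageView, and _image properties):

- (void)viewDidLoad
{
    [super viewDidLoad];
    _imageView.image = _image;
    // Set the frame, not the bounds, so the image view's origin stays at (0, 0).
    _imageView.frame = (CGRect){CGPointZero, _imageView.image.size};
    _scroller.contentSize = _imageView.image.size;
}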