How to scale and rotate text at the same time using one finger in Xcode? - xcode4.3

I'm having a problem with my text field: I can rotate and scale it using gestures, but I want a one-finger rotation and scale at the same time. Please help me, I really need this badly.

The closest I can come up with for this would be not to use fingers, but sliders. Consider:
CGFloat scale = scaleSlider.value;
CGFloat currentAngle = rotationSlider.value;
//Create a transformation with just the rotation
CGAffineTransform transform = CGAffineTransformMakeRotation(currentAngle);
//Now apply our scale
transform = CGAffineTransformScale(transform, scale, scale);
//Now set the transform on the object to the combined rotation/scale transform.
[tmp setTransform: transform];
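If you go the slider route, both sliders can share one action so any change reapplies the combined transform; a minimal sketch (the outlet names and the tmp view are assumed from the snippet above):

- (IBAction)sliderChanged:(UISlider *)sender {
    CGFloat scale = scaleSlider.value;
    CGFloat currentAngle = rotationSlider.value;
    // Rebuild the combined rotation + scale transform from the current slider values.
    CGAffineTransform transform = CGAffineTransformMakeRotation(currentAngle);
    transform = CGAffineTransformScale(transform, scale, scale);
    [tmp setTransform:transform];
}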

To enable more than one gesture recognizer to be recognized at the same time, add this delegate method and return YES. I tested it in my project:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    return YES;
}
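For that delegate method to be called, the object implementing it must be set as each recognizer's delegate and conform to UIGestureRecognizerDelegate; a minimal sketch, with handler names assumed:

// In the view controller (declared as <UIGestureRecognizerDelegate>):
UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
UIRotationGestureRecognizer *rotation = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(handleRotation:)];
pinch.delegate = self;
rotation.delegate = self;
[textField addGestureRecognizer:pinch];
[textField addGestureRecognizer:rotation];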

Related

How to improve magnification and rotation gesture recogniser methods in Objective-C?

I'm playing with NSGestureRecognizer(s) in Objective-C.
I have a simple Custom View in the XIB canvas to which I applied the press, pan, magnify, and rotate gesture recognisers. From each one I have created an action in AppDelegate.m and then added code to it. They all work to some extent but the way magnify and rotate behave does not satisfy me.
Here is the code for both of them:
- (IBAction)magnifyView:(NSMagnificationGestureRecognizer *)sender {
    CGFloat magnification = sender.magnification + 1.0;
    NSView *view = sender.view;
    CGAffineTransform transform = CGAffineTransformMakeScale(magnification, magnification);
    [[view layer] setAffineTransform:transform];
    sender.magnification = 0;
}
and ...
- (IBAction)rotateView:(NSRotationGestureRecognizer *)sender {
    CGFloat rotation = sender.rotation;
    NSView *view = sender.view;
    CGAffineTransform transform = CGAffineTransformMakeRotation(rotation);
    [[view layer] setAffineTransform:transform];
    sender.rotation = 0;
}
I'm using a 2016 MBP with macOS 12.4 and Xcode 13.4.1 to realise this, with the trackpad for the gestures. magnifyView seems to stutter unless I open my fingers sharply in an "unpinch" gesture (which risks triggering the macOS show-desktop gesture), while rotateView only rotates by a few degrees before snapping back.
The whole project can be found here, if you feel like giving it a look; it doesn't contain much else anyway, it's an experiment.
Could you please take a look at it (or even just at the methods above) and tell me what I could do to improve it?
Thank you!
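One likely cause of both symptoms is that each handler rebuilds the transform from scratch (CGAffineTransformMakeScale / CGAffineTransformMakeRotation) while also resetting the gesture's value, so every event starts again from the identity transform. A minimal sketch that instead composes the delta onto the layer's current transform (untested against the linked project):

- (IBAction)magnifyView:(NSMagnificationGestureRecognizer *)sender {
    CALayer *layer = sender.view.layer;
    // Treat the reported magnification as a delta and compose it with what is already there.
    CGFloat delta = 1.0 + sender.magnification;
    [layer setAffineTransform:CGAffineTransformScale(layer.affineTransform, delta, delta)];
    sender.magnification = 0;
}

- (IBAction)rotateView:(NSRotationGestureRecognizer *)sender {
    CALayer *layer = sender.view.layer;
    // Same idea: accumulate the rotation delta, then consume it.
    [layer setAffineTransform:CGAffineTransformRotate(layer.affineTransform, sender.rotation)];
    sender.rotation = 0;
}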

iOS - Math help - base image zooms with pinch gesture; need overlaid images to adjust X/Y coords relative to it

I have an iPad application that has a base image UIImageView (in this case a large building or site plan or diagram) and then multiple 'pins' can be added on top of the plan (visually similar to Google Maps). These pins are also UIImageViews and are added to the main view on tap gestures. The base image is also added to the main view on viewDidLoad.
I have the base image working with the pinch gesture for zooming, but obviously when you zoom the base image all the pins stay at the same x and y coordinates of the main view and lose their relative positioning on the base image (whose x, y and width, height coordinates have changed).
So far I have this...
- (IBAction)planZoom:(UIPinchGestureRecognizer *)recognizer
{
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1;
    for (ZonePin *pin in planContainer.subviews) {
        if ([pin isKindOfClass:[ZonePin class]]) {
            CGRect pinFrame = pin.frame;
            // ****************************************
            // code to reposition the pins goes here...
            // ****************************************
            pin.frame = pinFrame;
        }
    }
}
I need help calculating the math to reposition the pins' x/y coordinates so they retain their relative position on the zoomed-in or -out plan/diagram. The pins should not be scaled/zoomed at all in terms of their width or height; they just need new x and y coordinates relative to their initial positions on the plan.
I have tried to work out the math myself but have struggled, and unfortunately I am not yet acquainted enough with the SDK to know whether there is anything built in to help.
Help with this math-related problem would be really appreciated! :)
Many thanks,
Michael.
InNeedOfMathTuition.com
First, you might try embedding your UIImageView in a UIScrollView so zooming is largely accomplished for you. You can then set the max and min scale easily, and you can scroll around the zoomed image as desired (especially if your pins are subviews of the UIImageView or something else inside the UIScrollView).
As for scaling the locations of the pins, I think it would work to store the original x and y coordinates of each pin (i.e. when the view first loads, when they are first positioned, at scale 1.0). Then when the view is zoomed, set x = (originalX * zoomScale) and y = (originalY * zoomScale).
I had the same problem in an iOS app a couple of years ago, and if I recall correctly, that's how I accomplished it.
EDIT: Below is more detail about how I accomplished this (I'm looking at my old code now).
I had a UIScrollView as a subview of my main view, and my UIImageView as a subview of that. My buttons were added to the scroll view, and I kept their original locations (at zoom 1.0) stored for reference.
In the -(void)scrollViewDidScroll:(UIScrollView *)scrollView method:
for (id element in myButtons)
{
    UIButton *theButton = (UIButton *)element;
    CGPoint originalPoint = [self originalPointForButton:theButton]; // get original location however you want (hypothetical accessor)
    [theButton setFrame:CGRectMake(
        (originalPoint.x - theButton.frame.size.width / 2) * scrollView.zoomScale,
        (originalPoint.y - theButton.frame.size.height / 2) * scrollView.zoomScale,
        theButton.frame.size.width, theButton.frame.size.height)];
}
For the -(UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView method, I returned my UIImageView. My buttons scaled in size, but I didn't include that in the code above. If you're finding that the pins are scaling in size automatically, you might have to store their original sizes as well as original coordinates and use that in the setFrame call.
UPDATE...
Thanks to 'Mr. Jefferson' and his answer above, albeit with a differing implementation, I was able to work this one through as follows...
I have a scrollView which has a plan/diagram image as a subview. The scrollView is set up for zooming/panning etc.; this includes adopting UIScrollViewDelegate in the ViewController.
When the user double-taps on the plan/diagram, a pin image is added as a subview to the scrollView at the touch point. The pin image is a custom 'ZonePin' class which inherits from UIImageView and has a couple of additional properties, including 'baseX' and 'baseY'.
The code for adding the pins...
- (IBAction)planDoubleTap:(UITapGestureRecognizer *)recognizer
{
    UIImage *image = [UIImage imageNamed:@"Pin.png"];
    ZonePin *newPin = [[ZonePin alloc] initWithImage:image];
    CGPoint touchPoint = [recognizer locationInView:planContainer];
    CGFloat placementX = touchPoint.x - (image.size.width / 2);
    CGFloat placementY = touchPoint.y - image.size.height;
    newPin.frame = CGRectMake(placementX, placementY, image.size.width, image.size.height);
    newPin.zoneRef = [NSString stringWithFormat:@"%@%d", @"BF", pinSeq++];
    newPin.baseX = placementX;
    newPin.baseY = placementY;
    [planContainer addSubview:newPin];
}
I then have two methods for handling the scrollView interaction; these handle the scaling/repositioning of the pins relative to the plan image. The methods are as follows...
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return planImage;
}

- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    for (ZonePin *pin in planContainer.subviews) {
        if ([pin isKindOfClass:[ZonePin class]]) {
            CGFloat newX, newY;
            newX = (pin.baseX * scrollView.zoomScale) + (((pin.frame.size.width * scrollView.zoomScale) - pin.frame.size.width) / 2);
            newY = (pin.baseY * scrollView.zoomScale) + ((pin.frame.size.height * scrollView.zoomScale) - pin.frame.size.height);
            CGRect pinFrame = pin.frame;
            pinFrame.origin.x = newX;
            pinFrame.origin.y = newY;
            pin.frame = pinFrame;
        }
    }
}
For reference, the positioning calculations reflect the nature of a pin: the pin image is centred on the x axis but bottom-aligned on the y axis.
The only thing left for me to do is to reverse the calculations used in the scrollViewDidScroll method when adding pins while zoomed in, as the code for adding pins above will only work properly when scrollView.zoomScale is 1.0.
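For what it's worth, inverting the scrollViewDidScroll maths above should give those base coordinates for a pin dropped while zoomed; a sketch (untested), reusing the variables from planDoubleTap and assuming a scroll view ivar name:

// Inside planDoubleTap:, instead of assigning placementX/placementY directly:
CGFloat z = planScrollView.zoomScale; // name of the scroll view ivar is assumed
newPin.baseX = (placementX - (((image.size.width * z) - image.size.width) / 2)) / z;
newPin.baseY = (placementY - ((image.size.height * z) - image.size.height)) / z;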
Other than that, it now works great! :)

CALayer transforming

I need to rotate a square when the user does a UIRotationGesture.
I have the gesture all set up. The problem is that every time the user moves their fingers, the square returns to the starting position and then animates to the new position, rather than moving from the previous position to the new one.
i.e. if the rotation of the square is 90 degrees and the user continues to rotate to 100, the square will go back to 0 degrees and animate to 100.
Essentially I want the square to mirror the user when they perform a rotate gesture.
- (void)respondToGesture:(UIRotationGestureRecognizer *)rec {
    NSLog(@"Rotation: %f", rec.rotation);
    [self rotateWithRadian:rec.rotation];
    if (rec.state == UIGestureRecognizerStateEnded) {
        NSLog(@"gesture ended");
    }
}

- (void)rotateWithRadian:(float)radian {
    CABasicAnimation *spin = [CABasicAnimation animationWithKeyPath:@"transform.rotation"];
    spin.removedOnCompletion = NO;
    spin.fillMode = kCAFillModeForwards;
    [spin setByValue:[NSNumber numberWithFloat:radian]];
    [spin setDuration:1.0];
    [squarelayer addAnimation:spin forKey:@"spinAnimation"];
}
Is there a reason you're not directly setting the layer's transform instead of using an animation?
This should do what you want:
- (void)respondToGesture:(UIRotationGestureRecognizer *)rec {
    if (rec.state == UIGestureRecognizerStateChanged) {
        CGAffineTransform currentTransform = squareLayer.affineTransform;
        squareLayer.affineTransform = CGAffineTransformRotate(currentTransform, rec.rotation);
        rec.rotation = 0; // consume the delta so the next event reports only the change
    }
}
You also need to adjust squareLayer.anchorPoint if you want the rotation to happen around the centre of the user's rotation instead of the centre of the layer.
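For completeness, moving anchorPoint shifts the layer on screen unless you compensate its position at the same time; one common helper for that (a sketch, with the anchor given in unit coordinates):

- (void)setAnchorPoint:(CGPoint)anchor forLayer:(CALayer *)layer {
    // Compute the old and new anchor positions in the layer's coordinate space...
    CGPoint newPoint = CGPointMake(layer.bounds.size.width * anchor.x,
                                   layer.bounds.size.height * anchor.y);
    CGPoint oldPoint = CGPointMake(layer.bounds.size.width * layer.anchorPoint.x,
                                   layer.bounds.size.height * layer.anchorPoint.y);
    // ...then through the current transform, so the compensation holds while rotated/scaled.
    newPoint = CGPointApplyAffineTransform(newPoint, layer.affineTransform);
    oldPoint = CGPointApplyAffineTransform(oldPoint, layer.affineTransform);
    CGPoint position = layer.position;
    position.x += newPoint.x - oldPoint.x;
    position.y += newPoint.y - oldPoint.y;
    layer.position = position;
    layer.anchorPoint = anchor;
}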
There is an open-source recognizer ( https://github.com/kirbyt/KTOneFingerRotationGestureRecognizer ) for single-finger rotation. You can check it out; it will be much better. Even if you only want the normal rotation gesture, you can look into its code for better rotation handling. It uses the center as the origin.
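For reference, the core idea behind one-finger rotation there (and, extended with distance tracking, one-finger rotate-plus-scale, which is what the original question asked for) is to track the touch's angle and distance from the view's center in the superview's coordinates. A rough sketch using a plain pan recognizer (statics for brevity; ivars would be cleaner):

#import <math.h>

static CGFloat previousAngle;
static CGFloat previousDistance;

- (void)handleOneFingerDrag:(UIPanGestureRecognizer *)sender {
    UIView *view = sender.view;
    CGPoint center = view.center; // anchor position in superview coordinates
    CGPoint touch = [sender locationInView:view.superview];
    CGFloat angle = atan2(touch.y - center.y, touch.x - center.x);
    CGFloat distance = hypot(touch.x - center.x, touch.y - center.y);
    if (sender.state == UIGestureRecognizerStateBegan) {
        previousAngle = angle;
        previousDistance = distance;
    } else if (sender.state == UIGestureRecognizerStateChanged && previousDistance > 0) {
        // Rotate by the change in angle and scale by the change in distance since the last event.
        CGFloat scale = distance / previousDistance;
        CGAffineTransform t = CGAffineTransformRotate(view.transform, angle - previousAngle);
        view.transform = CGAffineTransformScale(t, scale, scale);
        previousAngle = angle;
        previousDistance = distance;
    }
}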

Animating frame Images with ease effect on iPad

Let me show you an example (360 Deg 3D Object Rotator). Demo: http://activeden.net/item/interactive-renders-360-deg-3d-object-rotator/39718?ref=mixDesign
As you see, there is a camera rotating in 3D on mouse events. Actually, it is a collection of images (frames) animated frame by frame depending on the mouse events.
I want to implement this animation in Objective-C using a swipe gesture (or maybe I should use another gesture?), so that I can rotate the object with my finger, to the left and to the right (I want the animation to have a smooth ease effect, depending on the swipe velocity).
Note: I already have the images for each frame.
Sample code or online tutorials doing this would really help me.
Should I use some external graphics library in order to keep performance up? I have hundreds of images (PNG), each around 300 KB.
Thank you in advance, I really need your help!
Maybe it will be easier to go with touchesBegan:, touchesMoved:, and touchesEnded: here? This will allow you to react to velocity and direction changes very fast.
Update: example can be found here.
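As a sketch of that touches-based idea, plus the ease-out the question asks about (this part is an assumption, not from either answer): measure the horizontal velocity in touchesMoved: and keep stepping frames with a decaying velocity after the finger lifts. velocity, lastTimestamp, displayLink, and stepFramesBy: are assumed properties/helpers:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGFloat dx = [touch locationInView:self.view].x - [touch previousLocationInView:self.view].x;
    NSTimeInterval dt = event.timestamp - self.lastTimestamp; // lastTimestamp: assumed property
    if (dt > 0) self.velocity = dx / dt;                      // points per second
    self.lastTimestamp = event.timestamp;
    [self stepFramesBy:dx]; // assumed helper: advances the frame index proportionally
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // Keep spinning after release, decaying the last measured velocity for an ease-out.
    [self.displayLink invalidate];
    self.displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(decayTick:)];
    [self.displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)decayTick:(CADisplayLink *)link {
    self.velocity *= 0.95; // exponential decay reads as an ease-out
    if (fabs(self.velocity) < 1.0) { [link invalidate]; return; }
    [self stepFramesBy:self.velocity * link.duration];
}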
I don't think you should use a swipe gesture here. I recommend a UILongPressGestureRecognizer with a short minimumPressDuration.
Let me show some example code:
longPress = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(handleLongPressGesture:)];
longPress.delegate = self;
longPress.minimumPressDuration = 0.05;
[viewWithImage addGestureRecognizer:longPress];
float startX;
float displacement = 0;

- (IBAction)handleLongPressGesture:(UILongPressGestureRecognizer *)sender
{
    float nowX;
    if (sender.state == UIGestureRecognizerStateBegan)
    {
        startX = [sender locationInView:viewWithImage].x;
    }
    if (sender.state == UIGestureRecognizerStateEnded || sender.state == UIGestureRecognizerStateCancelled)
    {
        // ... do something at end ...
    }
    nowX = [sender locationInView:viewWithImage].x; // the original had mainWidgetView here; viewWithImage matches the setup above
    displacement = nowX - startX;
    // set the right rotation with the displacement value (see the sketch below)
    [self rotateImageWith:displacement];
}
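The rotateImageWith: call above is left unimplemented; a possible sketch, where frames (a preloaded NSArray of UIImages), kPixelsPerFrame (finger travel per frame), and startFrame (the frame index captured when the gesture began) are all assumptions, and viewWithImage is assumed to be a UIImageView:

- (void)rotateImageWith:(float)displacement
{
    NSInteger frameCount = (NSInteger)self.frames.count;
    NSInteger offset = (NSInteger)(displacement / kPixelsPerFrame);
    // Wrap the index in both directions for a full 360-degree loop.
    NSInteger index = ((startFrame + offset) % frameCount + frameCount) % frameCount;
    viewWithImage.image = self.frames[index];
}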

How do I convert Cocoa co-ords from top left == origin to bottom left == origin

I use CGWindowListCopyWindowInfo to get a list of all windows. It gives me the co-ordinates of each window based upon the origin being the top-left of the screen.
If I use NSWindow's setFrame method, the co-ordinates are based upon the origin being the bottom-left of the screen.
What's a clean, reliable way to convert from one to the other?
Added: By clean and reliable, I mean something sure to work regardless of whether the user has multiple screens or is using Spaces. I figure there must be a known idiom using library APIs.
Math is quite reliable :-)
yFromBottom = screenHeight - windowHeight - yFromTop
Main screen height is
[[[NSScreen screens] objectAtIndex:0] frame].size.height
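Putting those together, a small helper (a sketch; assumes the rect came from CGWindowListCopyWindowInfo in top-left-origin global coordinates):

- (NSRect)cocoaFrameFromQuartzRect:(CGRect)quartzRect {
    CGFloat screenHeight = [[[NSScreen screens] objectAtIndex:0] frame].size.height;
    NSRect frame = NSRectFromCGRect(quartzRect);
    // Flip y: distance from bottom = screen height - top offset - window height.
    frame.origin.y = screenHeight - quartzRect.origin.y - quartzRect.size.height;
    return frame;
}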
I would suggest using an NSAffineTransform. If you draw with respect to the default origin and then apply a transform to the view, you can essentially flip things around in one fell swoop.
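For illustration, the usual flip inside a view's drawing code looks something like this (a sketch, not from the answer above):

// Inside -drawRect:, before any drawing calls:
NSAffineTransform *flip = [NSAffineTransform transform];
[flip translateXBy:0.0 yBy:NSHeight(self.bounds)];
[flip scaleXBy:1.0 yBy:-1.0];
[flip concat]; // subsequent drawing now uses a top-left origin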
Try something like this (from here):
NSRect boundsInWindow = [myView convertRect:[myView bounds] toView:nil];
NSRect visibleRectInWindow = [myView convertRect:[myView visibleRect] toView:nil];
// Flip Y to convert NSWindow coordinates to top-left-based window coordinates.
float borderViewHeight = [[myView window] frame].size.height;
boundsInWindow.origin.y = borderViewHeight - NSMaxY(boundsInWindow);
visibleRectInWindow.origin.y = borderViewHeight - NSMaxY(visibleRectInWindow);