Drag and drop UIButton but limit to boundaries - objective-c

I’m not asking for a give-me-sample-code-now response; just a nudge in the right direction would be much appreciated. My searches haven’t turned up anything useful.
I have a UIButton that I am able to move freely around the screen.
I would like to limit the drag area on this object so it can only be moved up and down or side to side. I believe I need to get the x,y coordinates of the boundaries and then restrict movement outside this area. But that’s as far as I have got; my knowledge doesn’t stretch any further than that.
Has anyone implemented something similar in the past?
Adam

So let's say you're in the middle of the drag operation. You're moving the button instance around by setting its center to the center of whatever gesture is causing the movement.
You can impose restrictions by testing the gesture's center and resetting the center values if you don't like them. The below assumes a button wired to an action for all Touch Drag events but the principle still applies if you're using gesture recognizers or touchesBegan: and friends.
- (IBAction)handleDrag:(UIButton *)sender forEvent:(UIEvent *)event
{
    CGPoint point = [[[event allTouches] anyObject] locationInView:self.view];
    if (point.y > 200)
    {
        point.y = 200; // No dragging this button lower than 200px from the origin!
    }
    sender.center = point;
}
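The same trick generalizes to clamping on both axes. Here is a minimal sketch that keeps the button's center inside an arbitrary rectangle; the 20-point inset is just an illustrative assumption:
- (IBAction)handleDrag:(UIButton *)sender forEvent:(UIEvent *)event
{
    CGPoint point = [[[event allTouches] anyObject] locationInView:self.view];
    CGRect allowed = CGRectInset(self.view.bounds, 20.0, 20.0); // hypothetical 20pt margin
    // Clamp both coordinates so the center can never leave the rect
    point.x = MAX(CGRectGetMinX(allowed), MIN(point.x, CGRectGetMaxX(allowed)));
    point.y = MAX(CGRectGetMinY(allowed), MIN(point.y, CGRectGetMaxY(allowed)));
    sender.center = point;
}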
If you want a button that slides only on one axis, that's easy enough:
- (IBAction)handleDrag:(UIButton *)sender forEvent:(UIEvent *)event
{
    CGPoint point = [[[event allTouches] anyObject] locationInView:self.view];
    point.y = sender.center.y; // Always stick to the same y value
    sender.center = point;
}
Or perhaps you want the button draggable only inside the region of a specific view. This might be easier to define if your boundaries are complicated.
- (IBAction)handleDrag:(UIButton *)sender forEvent:(UIEvent *)event
{
    CGPoint point = [[[event allTouches] anyObject] locationInView:self.someView];
    if ([self.someView pointInside:point withEvent:nil])
    {
        // Only if the gesture center is inside the specified view will the button be moved
        sender.center = point;
    }
}

Presumably you'd be using touchesBegan:, touchesMoved:, etc., so it should be as simple as testing whether the touch point is outside your view's bounds in touchesMoved:. If it is outside, ignore it, but if it's inside, adjust the position of the button.
I suspect you may find this function useful:
bool CGRectContainsPoint(CGRect rect, CGPoint point);
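A minimal sketch of that approach inside a custom view subclass; draggableButton and allowedRect are assumed properties standing in for your own button and boundary:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self];
    // draggableButton and allowedRect are hypothetical properties
    if (CGRectContainsPoint(self.allowedRect, point))
    {
        self.draggableButton.center = point; // inside the boundary: follow the touch
    }
    // outside the boundary: ignore the touch
}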

Related

Making Sprite respond to mouse clicks

I have a scene (SKScene) in which, whenever a click is performed, a ball (SKSpriteNode) is dropped from that point.
Now what I want to do is, whenever a click on the ball is performed, the ball should bounce or something.
What I have in GameScene.m is
- (void)mouseDown:(NSEvent *)theEvent {
    CGPoint location = [theEvent locationInNode:self];
    [self addBallAtLocation:location];
}

- (void)addBallAtLocation:(CGPoint)location {
    Ball *ball = [Ball new];
    ball.position = location;
    [self addChild:ball];
}
And in Ball.m I add the bounce action in the mouseDown method:
- (void)mouseDown:(NSEvent *)theEvent {
    CGPoint point = [theEvent locationInNode:self];
    CGVector impulse = CGVectorMake(point.x * 5.0, point.y * 5.0);
    [self.physicsBody applyImpulse:impulse];
}
Right now a new ball is created even when I click on an existing ball. I thought the ball's mouseDown method would be called since I clicked on it, and that the scene's mouseDown method would be called only if the ball's didn't exist.
P.S. I have a feeling this could be solved with a delegate. I could very easily be wrong, but since I am not totally clear on how to use them, I didn't. If you think that might be a good way to resolve this issue, please do use them, as it may help me understand them.
By default, SKSpriteNode has its userInteractionEnabled property set to NO (presumably for performance reasons). You simply have to set it to YES and your event-handling methods will be called :)
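A minimal sketch of what that could look like, assuming Ball is initialized with plain init as implied by the question's [Ball new]:
// Ball.m
- (instancetype)init
{
    self = [super init];
    if (self) {
        // Without this, mouseDown: goes to the scene instead of the ball
        self.userInteractionEnabled = YES;
    }
    return self;
}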

UITapGestureRecognizer every value the same

I'm a newbie to this and remaking an app. I am trying to use UITapGestureRecognizer. It works in the initial project file but not the new one. The only difference is that the old one uses a navigation controller but mine doesn't.
In the new one, the result of [self distance:location to:centre] is stuck at 640 no matter where you press on the screen.
Can anyone help? I have no idea why it isn't working.
- (void)handleSingleTap:(UITapGestureRecognizer *)recognizer {
    CGPoint location = [recognizer locationInView:[recognizer.view superview]];
    CGPoint centre = CGPointMake(512, 768 / 2);
    NSInteger msg = [self distance:location to:centre];
    NSLog(@"location to centre: %ld", (long)msg);
    if ([self distance:location to:centre] < 330) {
        // ... (rest of the method omitted in the question)
    }
}
The part that looks suspicious to me is [recognizer.view superview].
When the gesture recognizer is added to self.view in a UIViewController that is not inside a container controller (e.g. a navigation controller), that view does not have a superview. If you send the superview message to self.view without a container view controller, it will return nil.
So your message basically looked like this:
CGPoint location = [recognizer locationInView:nil];
This will return the location in the window, which is also a valid CGPoint that tells you where you tapped the screen.
Since this didn't work, I guess [self distance:location to:centre] does something that only works with coordinates relative to the view. Maybe it's related to rotation, because the coordinates of the window don't rotate when you rotate the device.
Without knowing your code I'm not sure what the problem is, but it probably doesn't matter.
Just replace [recognizer.view superview] with recognizer.view.
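In other words, the first line of the handler simply becomes:
CGPoint location = [recognizer locationInView:recognizer.view];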
Refer to the link below; you may find your answer there. It's an example of gesture recognition.
http://www.techotopia.com/index.php/An_iPhone_iOS_6_Gesture_Recognition_Tutorial

How to enable a certain part of an image to be clicked and display content using Objective-C?

Hi, I am trying to implement an application which allows the user to click on a certain part of an image, for example a fish (fins, tail), and navigate to another xib page showing content for that part. Does anyone know how to make regions of an image clickable and pinpoint the various parts for clicking? Please help! Thanks a lot!
P.S. I have found out how to click on the image and get the coordinates of the click. How do I use those coordinates in an if statement? For example:
if (location == i)
{
    // go to this page
}
else if (location == a)
{
    // go to this page
}
If you have a static image, the simplest way to achieve this is to add a UIButton of type custom over each tappable part of the image. Set its background color and everything to clearColor so it appears transparent.
On the touchUpInside event of that button, you can redirect to the other view. A sketch follows below.
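A minimal sketch of that idea; the frame values and the finsTapped: selector are placeholders for your own layout and action method:
// Invisible button over the "fins" region of the image (frame is hypothetical)
UIButton *finsButton = [UIButton buttonWithType:UIButtonTypeCustom];
finsButton.frame = CGRectMake(40.0, 60.0, 80.0, 50.0);
finsButton.backgroundColor = [UIColor clearColor];
[finsButton addTarget:self
               action:@selector(finsTapped:)
     forControlEvents:UIControlEventTouchUpInside];
[self.view addSubview:finsButton];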
Or try it like this; maybe it helps you:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint lastLocation = [touch locationInView:self.view];
    if (![[touch view] isKindOfClass:[UIImageView class]]) {
        if (lastLocation.x > 103 && lastLocation.y < 269) { // place your location points here
            // keep your condition based on the image view and push the view here
        }
    }
}

Objective-C – Making several views respond to the same touch event?

As you all know, in Cocoa Touch the GUI is made up of several UIViews. In my view I have a UIImageView (containing an image) aligned at x:10 y:10, then a UILabel aligned at x:30 y:10, and finally another UILabel aligned at x:50 y:10.
Now I would like all of these UIViews to respond to the same touch event. What would be the easiest way to accomplish this? Would it be to create a UIView starting at x:10 y:10 that covers all these views, and then place it on top of them (-bringSubviewToFront)?
Cheers,
Peter
I think your variant is fine! Just create a UIView on top and catch all the touch events with it.
Or you can put all your subviews (image, label, label) on one subview, mergeView, of the main view, disable user interaction on those views (image, label, label), and add a gesture recognizer (or whatever you want to catch touches) to mergeView. With this approach it is also easier to reposition your views: just reposition mergeView. Don't forget to enable user interaction on mergeView. A sketch of the setup follows below.
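A minimal sketch of that setup; imageView, label1, label2, the frame, and handleTap: are placeholders for your own views and action method:
// Container that receives touches on behalf of its children
UIView *mergeView = [[UIView alloc] initWithFrame:CGRectMake(10.0, 10.0, 200.0, 40.0)];
mergeView.userInteractionEnabled = YES;

imageView.userInteractionEnabled = NO; // the children stay passive
label1.userInteractionEnabled = NO;
label2.userInteractionEnabled = NO;
[mergeView addSubview:imageView];
[mergeView addSubview:label1];
[mergeView addSubview:label2];

UITapGestureRecognizer *tap =
    [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
[mergeView addGestureRecognizer:tap];
[self.view addSubview:mergeView];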
Alternatively, catch the touch events on the top view and explicitly pass them on to the other views you are interested in. But make sure the touch point intersects the views to which you are forwarding the touches:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[touches allObjects] objectAtIndex:0];
    CGPoint point = [touch locationInView:topView];
    if (CGRectContainsPoint(view1.frame, point)) {
        [view1 touchesBegan:[NSSet setWithObject:touch] withEvent:nil];
    }
    if (CGRectContainsPoint(view2.frame, point)) {
        [view2 touchesBegan:[NSSet setWithObject:touch] withEvent:nil];
    }
}

Problem with touchesMoved and drawRect

I have a sticky bug that I can't seem to figure out, and I think it has something to do with the way touchesMoved is implemented.
In touchesMoved, I check where the touch is (an if statement) and then call setNeedsDisplayInRect: on a 40×40 area near the touch point. In drawRect:, a black image is put down if there was a white image there before, and vice versa. At the same time as calling setNeedsDisplayInRect:, I set a boolean in an array of booleans so I can keep track of what the current image is and therefore display the opposite. (Actually, I don't always flip the image: I look at what the first touch is going to do, like switch from black to white, and then put white images on all the subsequent touches, so it's like drawing or erasing with the images.)
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint touchPoint = [touch locationInView:self];
    CGPoint lastTouchPoint = [touch previousLocationInView:self];
    touchX = touchPoint.x;
    touchY = touchPoint.y;
    // Map both points onto the 40pt grid
    int lastX = (int)floor((lastTouchPoint.x + 0.001) / 40);
    int lastY = (int)floor((lastTouchPoint.y + 0.001) / 40);
    int currentX = (int)floor((touchPoint.x + 0.001) / 40);
    int currentY = (int)floor((touchPoint.y + 0.001) / 40);
    // Only act when the touch has crossed into a different grid cell
    if (abs(currentX - lastX) >= 1 || abs(currentY - lastY) >= 1)
    {
        if ([soundArray buttonStateForRow:currentX column:currentY] == firstTouchColor)
        {
            [soundArray setButtonState:!firstTouchColor row:currentX column:currentY];
            [self setNeedsDisplayInRect:CGRectMake(currentX * 40.0, currentY * 40.0, 40.0, 40.0)];
        }
    }
}
My problem is that the boolean array seems to be out of whack with the images I'm putting down. This only happens if I drag really fast across the screen. Eventually the boolean array and the images are no longer in sync, even though I set them at the same time. Any idea what is causing this, or what I can do to fix it?
Here's my drawRect:
- (void)drawRect:(CGRect)rect {
    int col = (int)floor((touchX + 0.001) / 40);
    int row = (int)floor((touchY + 0.001) / 40);
    CGPoint origin = CGPointMake(col * 40.0, row * 40.0);
    if ([soundArray buttonStateForRow:col column:row])
        [whiteImage drawAtPoint:origin];
    else
        [blackImage drawAtPoint:origin];
}
I figured out the answer to this. touchX and touchY were instance variables, and they were getting reset in touchesMoved before each call to drawRect was complete. Therefore, if I moved fast on the screen, touchesMoved would get called, then call drawRect, then touchesMoved would get called again before drawRect had used touchX and touchY, so the drawing would get out of sync with the boolean array backend.
To solve this, I stopped using touchX and touchY in drawRect and started deriving the same point from the dirty rect being passed in from touchesMoved, along the lines of the sketch below.
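A minimal sketch of that fix, assuming the rect passed to drawRect: is the 40×40 cell rect that touchesMoved requested (UIKit may union multiple dirty rects, so a production version would iterate over every cell the rect covers):
- (void)drawRect:(CGRect)rect {
    // Derive the cell from the dirty rect itself, not from instance
    // variables that a later touch may already have overwritten
    int col = (int)(rect.origin.x / 40);
    int row = (int)(rect.origin.y / 40);
    CGPoint origin = CGPointMake(col * 40.0, row * 40.0);
    if ([soundArray buttonStateForRow:col column:row])
        [whiteImage drawAtPoint:origin];
    else
        [blackImage drawAtPoint:origin];
}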
tada!