Hi, I want to make it so that if you touch an image that I put on the view, -(void)checkcollision is called. It is called, but when I touch anything. How can I make it happen only when I touch a specified image, e.g. a pimple:
IBOutlet UIImageView *pimple;
@property (nonatomic, retain) UIImageView *pimple;
here is the touchesBegan code
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *myTouch = [[event allTouches] anyObject];
[self checkcollision];
}
-(void)checkcollision {
    if ([label.text isEqualToString:@"0"]) {
        label.text = @"1";
    }
    pimple.hidden = YES;
}
In your touchesBegan:, convert the touch location to the image view's coordinate space and test whether it lies inside its bounds:
CGPoint point = [myTouch locationInView:pimple];
if (CGRectContainsPoint(pimple.bounds, point)) {
    // ... Touch detected.
}
Alternatively you can consider gesture recognizers. You can use a tap recognizer for this case.
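For example, a minimal sketch with a tap recognizer; the handler name pimpleTapped: is just an assumption for illustration:
- (void)viewDidLoad {
    [super viewDidLoad];
    // UIImageView has userInteractionEnabled set to NO by default, so enable it:
    pimple.userInteractionEnabled = YES;
    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(pimpleTapped:)];
    [pimple addGestureRecognizer:tap];
}

- (void)pimpleTapped:(UITapGestureRecognizer *)recognizer {
    [self checkcollision]; // fires only when the pimple itself is tapped
}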
There are two options:
Subclass UIImageView and define your own touch handler.
Check the sender in - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event and compare the touched view to your UIImageView instance (see the sketch below).
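A minimal sketch of the second option, checking which view received the touch (again assuming userInteractionEnabled is YES on the image view, since UIImageView disables it by default):
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *myTouch = [[event allTouches] anyObject];
    // Only react if the touch landed on the pimple image view:
    if (myTouch.view == pimple) {
        [self checkcollision];
    }
}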
I'm a beginner with Xcode, and I have a problem with a counter in my app.
When I use a timer to count points, the player (paddle) jumps back to where it started (in the middle)
every time the timer counts a point.
So what should I do to keep the paddle where it is?
Is there some other way to count points?
It doesn't matter how the player is controlled, with swipe or g-sensor,
but in this example I control it with swipe.
This problem appears only in iOS 8.0 and later, not in iOS 7.
Here is some code:
.h file:
int ScoreNumber;
@interface ViewController : UIViewController {
IBOutlet UILabel *ScoreLabel;
NSTimer *Timer;
}
@property (nonatomic, strong) IBOutlet UIImageView *paddle;
#property (nonatomic) CGPoint paddleCenterPoint;
@end
.m file:
@implementation ViewController
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event{
UITouch *touch = [touches anyObject];
CGPoint touchLocation = [touch locationInView:self.view];
CGFloat yPoint = self.paddleCenterPoint.y;
CGPoint paddleCenter = CGPointMake(touchLocation.x, yPoint);
self.paddle.center = paddleCenter;
}
-(void)Score{
ScoreNumber = ScoreNumber +1;
ScoreLabel.text = [NSString stringWithFormat:@"%i", ScoreNumber];
}
-(void)viewDidLoad {
Timer = [NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(Score) userInfo:nil repeats:YES];
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
}
In iOS 8, autolayout is king, and setting center doesn't change the layout constraints. This has nothing to do with the timer. It has to do with changing the label's text. When that happens, it triggers a layout of the whole view, and that reasserts the constraints (via layoutIfNeeded). Even though you're probably not setting any constraints in IB, Xcode will automatically insert them during the build.
There are several solutions. First, you could insert the paddle programmatically in viewDidLoad, which would bypass the constraints system. For this specific problem, that's probably ok (and it might even be how I'd do it personally). Something like this:
@interface ViewController ()
@property (nonatomic, strong) UIView *paddle;
@end
@implementation ViewController
- (void)viewDidLoad {
    [super viewDidLoad];
    self.paddle = [[UIView alloc] initWithFrame:CGRectMake(100, 100, 200, 50)];
    self.paddle.backgroundColor = [UIColor blueColor];
    [self.view addSubview:self.paddle];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint touchLocation = [touch locationInView:self.view];
self.paddle.center = CGPointMake(touchLocation.x, self.paddle.center.y);
}
@end
But how can we solve this without throwing away constraints? Well, we could just modify the constraint. Add your paddle view in IB, and add constraints for the left, bottom, width, and height. Then create an IB outlet for the horizontal and width constraints.
Edit the Horizontal Space Constraint to remove "Relative to margin" from both the first and second items:
Now, you can modify the constraint rather than the center:
@interface ViewController ()
@property (nonatomic, weak) IBOutlet UIView *paddle;
@property (weak, nonatomic) IBOutlet NSLayoutConstraint *paddleHorizontalConstraint;
@property (weak, nonatomic) IBOutlet NSLayoutConstraint *paddleWidthConstraint;
@end
@implementation ViewController
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint touchLocation = [touch locationInView:self.view];
CGFloat width = self.paddleWidthConstraint.constant;
self.paddleHorizontalConstraint.constant = touchLocation.x - width/2;
}
@end
You need to read the width from the constraint because the frame or bounds may not be correct until viewDidLayoutSubviews is called. But you don't want to update the constraint there, since that would force another layout.
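If you'd rather not keep an outlet to the width constraint, here is a sketch of an alternative under the same setup: cache the width once layout is valid (paddleWidth is a hypothetical CGFloat property, not part of the answer above):
// Auto Layout has produced a valid frame by the time this is called.
- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    self.paddleWidth = CGRectGetWidth(self.paddle.bounds); // hypothetical property
}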
I am developing a very simple application in Objective-C.
In the app, the user can change the position of a label by dragging on the screen.
Tap and drag the screen, and the label moves up and down to match the position of the finger.
When you release your finger, the label's coordinate information is set as its text.
But the label position is reset the moment its text is changed.
// IBOutlet UILabel *label
- (void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
CGPoint touch = [[touches anyObject] locationInView:self.view];
label.center = CGPointMake(label.center.x, touch.y);
}
- (void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
CGPoint touch = [[touches anyObject] locationInView:self.view];
label.center = CGPointMake(label.center.x, touch.y);
}
- (void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
CGPoint touch = [[touches anyObject] locationInView:self.view];
label.text = [NSString stringWithFormat:@"y = %f", touch.y];
}
In the touchesEnded event I only change the text of the label,
but its position is reset.
I tried changing the touchesEnded event as follows, but it hasn't solved the problem.
- (void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
CGPoint touch = [[touches anyObject] locationInView:self.view];
label.text = [NSString stringWithFormat:@"y = %f", touch.y];
label.center = CGPointMake(label.center.x, touch.y); // add this line
}
I want to resolve this weird behavior without unchecking "Use auto layout".
I'd like to keep using auto layout.
I have 4 screenshots for my app.
The first image is a Storyboard.
I have a label with auto layout constraints.
The second image is a screenshot right after launching the app.
The third image is when a user drags the screen,
and the label moves down to match the finger.
The fourth image is right after releasing your finger,
and the label text has changed.
You cannot use auto layout and change the center / frame of things; those are opposites. Do one or the other - not both.
So, you don't have to turn off auto layout, but if you don't, then you must use auto layout, and auto layout only, to position things.
When you move the label, do not change its center - change its constraints. Or at least, having changed its center, change its constraints to match.
Otherwise, when layout occurs, the constraints will put it back where you have told them to put it. And layout does occur when you change the text, and at many other times.
Solution by OP.
A constraint can be connected as an IBOutlet from the Storyboard,
just like a label or any other IBOutlet.
// ViewController.h
#import <UIKit/UIKit.h>
@interface ViewController : UIViewController {
IBOutlet UILabel *label;
IBOutlet NSLayoutConstraint *lcLabelTop;
}
@end
// ViewController.m
- (void)viewDidLoad {
[super viewDidLoad];
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panAction:)];
[self.view addGestureRecognizer:pan];
}
- (void)panAction : (UIPanGestureRecognizer *)sender
{
CGPoint pan = [sender translationInView:self.view];
lcLabelTop.constant += pan.y;
if (sender.state == UIGestureRecognizerStateEnded) {
label.text = [NSString stringWithFormat:@"y = %f", lcLabelTop.constant];
}
[sender setTranslation:CGPointZero inView:self.view];
}
I'm trying to write a simple program that takes 5 images and allows you to drag them from the bottom of the screen and snap them onto 5 other images at the top in any order you like. I have subclassed UIImageView with a new class and added touchesBegan, touchesMoved, and touchesEnded. Then on my main view controller I place the images and set their class to the new subclass I created. The movement works great; in fact, I am NSLogging the coordinates in the custom class.
The problem is that I'm trying to figure out how to get that CGPoint info from touchesEnded out of the custom class and have it call a method in my main view controller, which holds the image objects, so I can test whether the image is over another image and move it to be centered on the image it's over (snap onto it).
Here is the code in my custom class .m file. My view controller is basically empty, with a xib file containing 10 image views, 5 of which are set to this class and 5 of which are normal image views.
#import "MoveImage.h"
CGPoint EndPoint;
@implementation MoveImage
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
startPoint = [[touches anyObject] locationInView:self];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
CGPoint newPoint = [[touches anyObject] locationInView:self.superview];
newPoint.x -= startPoint.x;
newPoint.y -= startPoint.y;
CGRect frm = [self frame];
frm.origin = newPoint;
[self setFrame:frm];
}
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event{
UITouch *end = [[event allTouches] anyObject];
EndPoint = [end locationInView:self];
NSLog(#"end ponts x : %f y : %f", EndPoint.x, EndPoint.y);
}
@end
You could write a MoveImageDelegate protocol which your app controller would then implement. This protocol would have methods that your MoveImage class can call to get its information.
In your header file (if you don't know how to write a protocol):
@protocol MoveImageDelegate <TypeOfObjectsThatCanImplementIt> // usually NSObject
// methods
@end
Then you can declare an instance variable of type id, but still have the compiler recognize that that variable responds to your delegate selectors (so you can code without those annoying warnings).
Something like:
@interface MoveImage : UIImageView
{
id <MoveImageDelegate> _delegate;
}
#property (assign) id <MoveImageDelegate> delegate;
Lastly:
@implementation MoveImage
@synthesize delegate = _delegate;
And your delegate is complete.
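To close the loop, here is a rough sketch of how the delegate could be declared and called; the method name moveImage:didEndAtPoint: and the targetImageView outlet are assumptions for illustration, not part of the original code:
// In MoveImage.h, a possible protocol method:
@class MoveImage;
@protocol MoveImageDelegate <NSObject>
- (void)moveImage:(MoveImage *)imageView didEndAtPoint:(CGPoint)point;
@end

// In MoveImage.m, notify the delegate when the drag ends
// (superview coordinates, so the point is comparable to sibling frames):
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint endPoint = [[touches anyObject] locationInView:self.superview];
    [self.delegate moveImage:self didEndAtPoint:endPoint];
}

// In the view controller, which sets itself as each MoveImage's delegate:
- (void)moveImage:(MoveImage *)imageView didEndAtPoint:(CGPoint)point {
    if (CGRectContainsPoint(self.targetImageView.frame, point)) {
        imageView.center = self.targetImageView.center; // snap onto it
    }
}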
I'm trying to link two gestures one after another: UILongPressGestureRecognizer, then UIPanGestureRecognizer.
I want to detect the long press, then allow the pan gesture to be recognized.
I've subclassed UIPanGestureRecognizer and added a panEnabled BOOL ivar. In initWithTarget:action: I set panEnabled to NO.
In touchesMoved I check whether it is enabled, and call super's touchesMoved if it is.
In my long press gesture handler, I loop through the view's gestures until I find my subclassed gesture and then set panEnabled to YES.
It seems like it is working, though it's as if the original pan gesture recognizer is not functioning properly and not setting the proper states. I know that if you subclass UIGestureRecognizer you need to maintain the state yourself, but I would think that if you subclass UIPanGestureRecognizer and call super in all the touches methods, the state would be set in there.
Here is my subclass .h file:
#import <UIKit/UIKit.h>
#import <UIKit/UIGestureRecognizerSubclass.h>
@interface IoUISEListPanGestureRecognizer : UIPanGestureRecognizer {
int IoUISEdebug;
BOOL panEnabled;
}
- (id)initWithTarget:(id)target action:(SEL)action;
@property (nonatomic, assign) int IoUISEdebug;
@property (nonatomic, assign) BOOL panEnabled;
@end
Here is the subclass .m file:
#import "IoUISEListPanGestureRecognizer.h"
@implementation IoUISEListPanGestureRecognizer
@synthesize IoUISEdebug;
@synthesize panEnabled;
- (id)initWithTarget:(id)target action:(SEL)action {
    self = [super initWithTarget:target action:action];
    if (self) {
        panEnabled = NO;
    }
    return self;
}
- (void)ignoreTouch:(UITouch*)touch forEvent:(UIEvent*)event {
[super ignoreTouch:touch forEvent:event];
}
-(void)reset {
[super reset];
panEnabled = NO;
}
- (BOOL)canPreventGestureRecognizer:(UIGestureRecognizer *)preventedGestureRecognizer {
return [super canPreventGestureRecognizer:preventedGestureRecognizer];
}
- (BOOL)canBePreventedByGestureRecognizer:(UIGestureRecognizer *)preventingGestureRecognizer{
return [super canBePreventedByGestureRecognizer:preventingGestureRecognizer];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{
[super touchesBegan:touches withEvent:event];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event{
if (panEnabled) {
[super touchesMoved:touches withEvent:event];
}
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
[super touchesEnded:touches withEvent:event];
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event{
[super touchesCancelled:touches withEvent:event];
}
@end
If you make a BOOL called canPan and include the following delegate methods, you can have both a standard UILongPressGestureRecognizer and UIPanGestureRecognizer attached to the same view. In the selector that gets called when the long press gesture is recognized, change canPan to YES. You might want to disable the long press once it has been recognized and re-enable it when the pan finishes. Don't forget to assign the delegate properties on the gesture recognizers.
- (BOOL)gestureRecognizerShouldBegin:(UIGestureRecognizer *)gestureRecognizer{
if (!canPan && [gestureRecognizer isKindOfClass:[UIPanGestureRecognizer class]]) {
return NO;
}
return YES;
}
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer{
return YES;
}
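For completeness, a sketch of the handlers that answer describes (the selector names and the longPress property are assumptions for illustration):
- (void)handleLongPress:(UILongPressGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        canPan = YES; // now gestureRecognizerShouldBegin: will let the pan start
        recognizer.enabled = NO; // optionally disable the long press while panning
    }
}

- (void)handlePan:(UIPanGestureRecognizer *)recognizer {
    // ... move your view here ...
    if (recognizer.state == UIGestureRecognizerStateEnded ||
        recognizer.state == UIGestureRecognizerStateCancelled) {
        canPan = NO;
        self.longPress.enabled = YES; // re-enable the long press when the pan finishes
    }
}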
I have a UIPickerView that gets faded out to 20% alpha when not in use. I want the user to be able to touch the picker and have it fade back in.
I can get it to work if I put a touchesBegan method on the main View, but this only works when the user touches the View. I tried sub-classing UIPickerView and having a touchesBegan in there, but it didn't work.
I'm guessing it's something to do with the Responder chain, but can't seem to work it out.
I've been searching for a solution to this problem for over a week. I'm answering even though your question is over a year old, hoping this helps others.
Sorry if my language is not very technical, but I'm pretty new to Objective-C and iPhone development.
Subclassing UIPickerView is the right way to do it. But you have to override the - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event method. This is the method called whenever you touch the screen, and it returns the view that will react to the touch, in other words the view whose touchesBegan:withEvent: method will be called.
The UIPickerView has 9 subviews! In the UIPickerView class implementation, - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event won't return self (this means the touchesBegan:withEvent: you write in the subclass won't be called) but will return a subview, specifically the view at index 4 (an undocumented subclass called UIPickerTable).
The trick is to make the - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event method return self, so you have control over the touchesBegan:withEvent:, touchesMoved:withEvent: and touchesEnded:withEvent: methods.
In these methods, in order to keep the standard functionality of the UIPickerView, you MUST remember to call them again on the UIPickerTable subview.
I hope this makes sense. I can't write code now; as soon as I'm at home I will edit this answer and add some code.
Here is some code that does what you want:
@interface TouchDetectionView : UIPickerView {
}
- (UIView *)getNextResponderView:(NSSet *)touches withEvent:(UIEvent *)event;
@end
@implementation TouchDetectionView
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UIView * hitTestView = [self getNextResponderView:touches withEvent:event];
[hitTestView touchesBegan:touches withEvent:event];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
UIView * hitTestView = [self getNextResponderView:touches withEvent:event];
[hitTestView touchesMoved:touches withEvent:event];
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
UIView * hitTestView = [self getNextResponderView:touches withEvent:event];
[hitTestView touchesEnded:touches withEvent:event];
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
UIView * hitTestView = [self getNextResponderView:touches withEvent:event];
[hitTestView touchesCancelled:touches withEvent:event];
}
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
return self;
}
- (UIView *)getNextResponderView:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch * touch = [touches anyObject];
CGPoint point = [touch locationInView:self];
UIView * hitTestView = [super hitTest:point withEvent:event];
return ( hitTestView == self ) ? nil : hitTestView;
}
Both of the above answers were very helpful, but I have a UIPickerView nested within a UIScrollView. I'm also doing continual rendering elsewhere on-screen while the GUI is present. The problem is that the UIPickerView doesn't update fully when: a non-selected row is tapped, the picker is moved so that two rows straddle the selection area, or a row is dragged but the finger slides outside of the UIPickerView. Then it's not until the UIScrollView is moved that the picker instantly updates. This result is ugly.
The problem's cause: my continual rendering was keeping the UIPickerView's animation from getting the CPU cycles it needed to finish, and hence to show the correct, current selection. My solution, which works, was this: in the UIPickerView's touchesEnded:withEvent:, execute something to pause my rendering for a short while. Here's the code:
#import "SubUIPickerView.h"
@implementation SubUIPickerView
- (void) touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
[pickerTable touchesBegan:touches withEvent:event];
}
- (void) touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event
{
[pickerTable touchesMoved:touches withEvent:event];
}
- (void) touchesEnded:(NSSet*)touches withEvent:(UIEvent*)event
{
[singleton set_secondsPauseRendering:0.5f]; // <-- my code to pause rendering
[pickerTable touchesEnded:touches withEvent:event];
}
- (void) touchesCancelled:(NSSet*)touches withEvent:(UIEvent*)event
{
[pickerTable touchesCancelled:touches withEvent:event];
}
- (UIView*) hitTest:(CGPoint)point withEvent:(UIEvent*)event
{
if (CGRectContainsPoint(self.bounds, point))
{
if (pickerTable == nil)
{
int nSubviews = self.subviews.count;
for (int i = 0; i < nSubviews; ++i)
{
UIView* view = (UIView*) [self.subviews objectAtIndex:i];
if ([view isKindOfClass:NSClassFromString(@"UIPickerTable")])
{
pickerTable = (UIPickerTable*) view;
break;
}
}
}
return self; // i.e., *WE* will respond to the hit and pass it to UIPickerTable, above.
}
return [super hitTest:point withEvent:event];
}
@end
and then the header, SubUIPickerView.h:
@class UIPickerTable;
@interface SubUIPickerView : UIPickerView
{
UIPickerTable* pickerTable;
}
@end
Like I said, this works. Rendering pauses for an additional 1/2 second (it already pauses when you slide the UIScrollView), allowing the UIPickerView animation to finish. Using NSClassFromString() means you're not using any undocumented APIs. Messing with the responder chain was not necessary. Thanks to checcco and Tylerc230 for helping me come up with my own solution!
Set canCancelContentTouches and delaysContentTouches of the parent view to NO; that worked for me.
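For example, if the picker is nested in a scroll view (a sketch assuming a scrollView outlet):
// Stop the scroll view from claiming or delaying touches meant for the picker.
self.scrollView.canCancelContentTouches = NO;
self.scrollView.delaysContentTouches = NO;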
Couldn't find a recent, easier solution to this, so I rigged something to solve my problem. I created an invisible button that stretches over the picker view and connected that button with a "Touch Down" event. Now any functionality can be added; I happened to need a random-selection timer to be invalidated when someone touches the picker view. Not elegant, but it works.
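A sketch of that rig (the names pickerTouched:, randomSelectionTimer, and pickerView are assumptions; the button is laid over the picker in IB with no title or background):
// Wired to the invisible button's "Touch Down" event in Interface Builder.
- (IBAction)pickerTouched:(UIButton *)sender {
    [self.randomSelectionTimer invalidate]; // hypothetical timer property
    self.pickerView.alpha = 1.0;            // e.g. fade the picker back in
}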