I must be missing something obvious here but ...
UIControl has a method
- (void)addTarget:(id)target action:(SEL)action forControlEvents:(UIControlEvents)controlEvents
which lets you add an action to be called when any of the given control events occur. UIControlEvents is a bitmask of events which tell you whether a touch went down, came up inside, was dragged, etc. There are about 16 of them; you OR them together, and your action gets called when any of them occurs.
The selector can have one of the following signatures
- (void)action
- (void)action:(id)sender
- (void)action:(id)sender forEvent:(UIEvent *)event
None of those tell you what the control event bitmask was. The UIEvent is something slightly different: it relates to the actual touch event and doesn't (I think) contain the UIControlEvents. The sender (the UIControl) doesn't have a way to find the control events either.
I'd like to have one method which deals with a number of control events, as I have some common code regardless of which event or events happened, but I still need to know what the UIControlEvents were for some specific processing.
Am I missing a way to find out what UIControlEvents were used when the action was called, or do I really have to separate my code into
- (void)actionWithUIControlEventX;
- (void)actionWithUIControlEventY;
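For concreteness, here's roughly what I mean; commonAction:forEvent: and myControl are just placeholder names:

// Option 1: one action for several OR'd events; the common code lives in one
// place, but the action cannot tell which UIControlEvents fired.
[myControl addTarget:self
              action:@selector(commonAction:forEvent:)
    forControlEvents:UIControlEventTouchDown | UIControlEventTouchUpInside];

// Option 2: a separate action per event; the event is implied by which
// selector is called, at the cost of duplicating or funnelling the common code.
[myControl addTarget:self action:@selector(actionWithUIControlEventX)
    forControlEvents:UIControlEventTouchDown];
[myControl addTarget:self action:@selector(actionWithUIControlEventY)
    forControlEvents:UIControlEventTouchUpInside];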
I encountered the same problem and came up with a solution. It's not amazingly pretty, but it works quite well. It is a UIControl category that stores the last UIControlEvent fired in its own tag property. You can get it from the link below. For further reference, here's the doc comment from my category, with a more detailed description of what's going on.
Hopefully this helps! Cheers/J
UIControl+CaptureUIControlEvents
Gist at: http://gist.github.com/513796
PROBLEM: upon firing, UIControlEvents are not passed into the target action
assigned to the particular event. This would be useful in order to have only
one action that switches based on the UIControlEvent fired.
SOLUTION: add a way to store the UIControlEvent triggered in the UIEvent.
PROBLEM: But we cannot override private APIs, so:
(WORSE) SOLUTION: have the UIControl store the UIControlEvent last fired.
The UIControl documentation states that:
When a user touches the control in a way that corresponds to one or more
specified events, UIControl sends itself sendActionsForControlEvents:.
This results in UIControl sending the action to UIApplication in a
sendAction:to:from:forEvent: message.
One would think that sendActionsForControlEvents: can be overridden (or
subclassed) to store the flag, but it is not so. It seems that
sendActionsForControlEvents: is mainly there for clients to trigger events
programmatically.
Instead, I had to set up a scheme that registers an action for each control
event that one wants to track. I decided not to track all the events (or in
all UIControls) for performance and ease of use.
USAGE EXAMPLE:
On UIControl setup:
UIControlEvents capture = UIControlEventTouchDown;
capture |= UIControlEventTouchUpInside;
capture |= UIControlEventTouchUpOutside;
[myControl captureEvents:capture];
[myControl addTarget:self action:@selector(touch:) forControlEvents:capture];
And the target action:
- (void)touch:(UIControl *)sender {
    UIColor *color = [UIColor clearColor];
    switch (sender.tag) {
        case UIControlEventTouchDown:      color = [UIColor redColor];  break;
        case UIControlEventTouchUpInside:  color = [UIColor blueColor]; break;
        case UIControlEventTouchUpOutside: color = [UIColor redColor];  break;
    }
    sender.backgroundColor = color;
}
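For reference, the core of such a category might look roughly like this. This is a minimal sketch rather than the gist's actual code: the sketch_ selector names are placeholders, only three events are handled, and it assumes the capture action is registered (and therefore fired) before the client's own action.

@implementation UIControl (CaptureUIControlEvents)

- (void)captureEvents:(UIControlEvents)events {
    // Register one internal action per tracked control event.
    if (events & UIControlEventTouchDown) {
        [self addTarget:self action:@selector(sketch_touchDown:)
       forControlEvents:UIControlEventTouchDown];
    }
    if (events & UIControlEventTouchUpInside) {
        [self addTarget:self action:@selector(sketch_touchUpInside:)
       forControlEvents:UIControlEventTouchUpInside];
    }
    if (events & UIControlEventTouchUpOutside) {
        [self addTarget:self action:@selector(sketch_touchUpOutside:)
       forControlEvents:UIControlEventTouchUpOutside];
    }
}

// Each internal action records the control event that fired in the tag property.
- (void)sketch_touchDown:(id)sender      { self.tag = UIControlEventTouchDown; }
- (void)sketch_touchUpInside:(id)sender  { self.tag = UIControlEventTouchUpInside; }
- (void)sketch_touchUpOutside:(id)sender { self.tag = UIControlEventTouchUpOutside; }

@end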
When you create your UIControl, set a value for the tag property. Then in your action function, you can determine the tag of the UIControl that called it using [sender tag]. Here's an example:
- (void)viewDidLoad {
    UIButton *button1 = [[UIButton alloc] initWithFrame:CGRectMake(0.0, 0.0, 100.0, 100.0)];
    button1.tag = 42;
    [button1 addTarget:self action:@selector(actionWithUIControlEvent:) forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:button1];

    UIButton *button2 = [[UIButton alloc] initWithFrame:CGRectMake(100.0, 0.0, 100.0, 100.0)];
    button2.tag = 43;
    [button2 addTarget:self action:@selector(actionWithUIControlEvent:) forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:button2];
}
- (void)actionWithUIControlEvent:(id)sender {
    switch ([sender tag]) {
        case 42:
            // Respond to button 1
            break;
        case 43:
            // Respond to button 2
            break;
        default:
            break;
    }
}
I've been searching online for this answer, and every single post skips over the part about where to actually write the code for an action. I have a simple interactive UIButton, and if I could just see a template of code that says "write code for action here", that would be super helpful! (It's for iPad, iOS 7.)
This is as far as I can get...
UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
[button addTarget:self
           action:@selector(aMethod:)
 forControlEvents:UIControlEventTouchDown];
[button setTitle:@"Show View" forState:UIControlStateNormal];
button.frame = CGRectMake(80.0, 210.0, 160.0, 40.0);
[view addSubview:button];
I think I understand that this is how to set up a potential action, but where do I write the actual code for the action itself?
I want to expand a bit on what has been answered here already. Both responses are correct, but I want to explain why and how this all works.
[button addTarget:self
           action:@selector(aMethod:)
 forControlEvents:UIControlEventTouchDown];
The first thing to look at is the target.
The target is an instance of a class, any class. The only requirement is that it has to implement the action.
The action is the method you wish to invoke when the user presses the button.
@selector(aMethod:) is basically a method signature. Because Objective-C is a dynamic language, aMethod: does not need to exist at compile time, but your program will crash at runtime if it does not exist when the action fires.
So, putting this all together: whenever the user presses this button, the system will invoke the action method on the target instance.
As for the method itself, it can look like this:
- (void) aMethod:(id)sender { }
You would put the action-related code in a method in that class named aMethod:
- (void)aMethod:(id)sender {
// your code for the action goes here
}
You might also want to use UIControlEventTouchUpInside for the control event instead.
Since you've set the target as self, the aMethod: method should be added to the same class.
- (IBAction)aMethod:(id)sender
{
// do something here
}
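Putting the two pieces together in one view controller, a minimal sketch (the frame, title, and log message are arbitrary):

- (void)viewDidLoad {
    [super viewDidLoad];
    UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    button.frame = CGRectMake(80.0, 210.0, 160.0, 40.0);
    [button setTitle:@"Show View" forState:UIControlStateNormal];
    // target = self, so aMethod: must be implemented in this same class
    [button addTarget:self
               action:@selector(aMethod:)
     forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:button];
}

// Invoked when the button is tapped; the tapped button arrives as sender.
- (void)aMethod:(id)sender {
    NSLog(@"button tapped");
}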
I often have a need to fire a sequence of events as a result of holding down a button. Think of a + button that increments a field: tapping it should increment the value by 1, but tapping and holding should increment it by 1 every second until the button is released. Another example is the scrubbing function when holding down the backwards or forwards button in an audio-player-type app.
I usually resort to the following strategy:
On touchDownInside I set up a repeating timer with my desired interval.
On touchUpInside I invalidate and release the timer.
But for every such button I need a separate timer instance variable, two target-action registrations, and two method implementations. (This assumes I'm writing a generic class and don't want to impose restrictions on the maximum number of simultaneous touches.)
Is there a more elegant way to solve this that I'm missing?
Register the events for every button by:
[button addTarget:self action:@selector(touchDown:withEvent:) forControlEvents:UIControlEventTouchDown];
[button addTarget:self action:@selector(touchUpInside:withEvent:) forControlEvents:UIControlEventTouchUpInside];
For each button, set the tag attribute:
button.tag = 1; // 2, 3, 4 ... etc
In the handler, do whatever you need. Identify a button by the tag:
- (IBAction)touchDown:(UIButton *)button withEvent:(UIEvent *)event
{
    NSLog(@"%ld", (long)button.tag);
}
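To avoid a separate timer instance variable per button, one option is to key the timers by control. A minimal sketch, assuming a hypothetical registerHoldButton: helper, a one-second repeat interval, and ARC:

@interface HoldButtonController : UIViewController
@property (nonatomic, strong) NSMapTable<UIControl *, NSTimer *> *timersByControl;
@end

@implementation HoldButtonController

- (void)registerHoldButton:(UIButton *)button {
    if (!self.timersByControl) {
        self.timersByControl = [NSMapTable weakToStrongObjectsMapTable];
    }
    [button addTarget:self action:@selector(holdBegan:)
     forControlEvents:UIControlEventTouchDown];
    [button addTarget:self action:@selector(holdEnded:)
     forControlEvents:UIControlEventTouchUpInside |
                      UIControlEventTouchUpOutside |
                      UIControlEventTouchCancel];
}

- (void)holdBegan:(UIControl *)sender {
    // Start a repeating timer for this particular control.
    NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                      target:self
                                                    selector:@selector(tick:)
                                                    userInfo:sender
                                                     repeats:YES];
    [self.timersByControl setObject:timer forKey:sender];
}

- (void)holdEnded:(UIControl *)sender {
    // Stop and forget the timer for this control.
    [[self.timersByControl objectForKey:sender] invalidate];
    [self.timersByControl removeObjectForKey:sender];
}

- (void)tick:(NSTimer *)timer {
    UIControl *sender = timer.userInfo;
    // Increment the field (or scrub) for the control identified by sender.tag.
    NSLog(@"holding button with tag %ld", (long)sender.tag);
}

@end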
I suggest UILongPressGestureRecognizer.
UILongPressGestureRecognizer *longPress = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(addOpenInService:)];
longPress.delegate = self;
longPress.minimumPressDuration = 0.7;
[aView addGestureRecognizer:longPress];
[longPress release];
longPress = nil;
When the gesture fires, you will get the call in:
- (void)addOpenInService:(UILongPressGestureRecognizer *)objRecognizer
{
    // Do something
}
Likewise you can use UITapGestureRecognizer for recognizing user taps.
Hope this helps. :)
I have an image (clubcow.png) whose name comes from an NSArray of strings and is used to make a picture; faceDowncow represents the picture. I was wondering how to make a button that will move faceDowncow to position (100, 100) when the button is tapped. Any tips will be appreciated. There is more to this code; I just posted the important parts to give an idea of what is going on.
cardKeys = [[NSArray alloc] initWithObjects:@"clubcow", nil];
currentName = [NSMutableString stringWithFormat:@"%@.png", [cowsShuffled objectAtIndex:currentcow]];
faceDowncow = (UIImageView *)[self.view viewWithTag:1];
faceDowncow.userInteractionEnabled = YES;
I would start by creating a UIButton and adding it to your view controller's view.
UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
button.frame = CGRectMake(0, 0, 100, 20);
[button setTitle:@"Tap Me" forState:UIControlStateNormal];
[button addTarget:self action:@selector(animateImage:)
 forControlEvents:UIControlEventTouchUpInside];
[self.view addSubview:button];
Then, link this button to a function that will animate your faceDowncow object. You could add your faceDowncow as a property of the view controller so the following function can easily reference it:
- (void)animateImage:(UIButton *)sender {
    [UIView animateWithDuration:0.2
                     animations:^{
                         // change the origin of the frame
                         self.faceDowncow.frame = CGRectMake(100, 100, self.faceDowncow.frame.size.width, self.faceDowncow.frame.size.height);
                     }
                     completion:^(BOOL finished) {
                         // do something after the animation
                     }];
}
First of all, it looks like this code is from a view controller subclass, and the cow is a subview. In that case, you should probably have a property for it rather than obtaining it by its tag all the time. If it's instantiated in a storyboard scene/nib then you can hook up an outlet to a property/ivar in your subclass fairly easily.
The easiest way to do what you want is to create the button and use target-action so that when it is tapped, it calls a method in your view controller. In the method body, obtain a reference to your cow and set its frame property, like so:
[faceDowncow setFrame: CGRectMake(100,100,faceDowncow.bounds.size.width,faceDowncow.bounds.size.height)];
If you don't know how target-action works, I suggest reading Apple's documentation on the matter. It's as simple as getting a button, calling one method to tell it which events should make a certain method get called, and then implementing that method.
I have a UISlider on screen, and I need to be able to detect when the user stops touching it. (so I can fade some elements away).
I have tried using:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
but this did not work when ending touches on a slider.
You can detect when a touch ends using two control events; try
[slider addTarget:self action:@selector(touchEnded:)
 forControlEvents:UIControlEventTouchUpInside];
or
[slider addTarget:self action:@selector(touchEnded:)
 forControlEvents:UIControlEventTouchUpOutside];
If you want to detect both kinds of touch-up event, use
[slider addTarget:self action:@selector(touchEnded:)
 forControlEvents:UIControlEventTouchUpInside | UIControlEventTouchUpOutside];
Instead of using touchesEnded: (which shouldn't be used for this purpose anyway), attach an action to the UISlider's UIControlEventValueChanged event and set the continuous property of the UISlider to NO, so the event will fire when the user finishes selecting a value.
mySlider.continuous = NO;
[mySlider addTarget:self
             action:@selector(myMethodThatFadesObjects)
   forControlEvents:UIControlEventValueChanged];
I couldn't get anything to capture both the start and end of the touches, but upon RTFD-ing, I came up with something that will do both.
@IBAction func sliderAction(_ sender: UISlider, forEvent event: UIEvent) {
    if let touchEvent = event.allTouches?.first {
        switch touchEvent.phase {
        case .began:
            print("touches began")
            sliderTouchBegan()
        case .ended:
            print("touches ended")
            sliderTouchEnded()
        default:
            delegate?.sliderValueUpdated(sender.value)
        }
    }
}
sliderTouchBegan() and sliderTouchEnded() are just methods I wrote that handle animations when the touch begins and when it ends. If it's neither a begin nor an end, the default case runs and the slider value update is passed along.
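For anyone doing the same thing in Objective-C, the equivalent might look roughly like this; it's a sketch, and sliderTouchBegan / sliderTouchEnded are hypothetical helpers mirroring the Swift ones above:

// registration, e.g. in viewDidLoad
[slider addTarget:self
           action:@selector(sliderAction:forEvent:)
 forControlEvents:UIControlEventValueChanged];

- (IBAction)sliderAction:(UISlider *)sender forEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    switch (touch.phase) {
        case UITouchPhaseBegan:
            [self sliderTouchBegan];
            break;
        case UITouchPhaseEnded:
            [self sliderTouchEnded];
            break;
        default:
            // an intermediate value change; update whatever tracks the value
            break;
    }
}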
In one of my iPhone projects, I have three views that you can move around by touching and dragging. However, I want to stop the user from moving two views at the same time, by using two fingers. I have therefore tried to experiment with UIView.exclusiveTouch, without any success.
To understand how the property works, I created a brand new project, with the following code in the view controller:
- (void)loadView {
    self.view = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)];

    UIButton *a = [UIButton buttonWithType:UIButtonTypeInfoDark];
    [a addTarget:self action:@selector(hej:) forControlEvents:UIControlEventTouchUpInside];
    a.center = CGPointMake(50, 50);
    a.multipleTouchEnabled = YES;

    UIButton *b = [UIButton buttonWithType:UIButtonTypeInfoDark];
    [b addTarget:self action:@selector(hej:) forControlEvents:UIControlEventTouchUpInside];
    b.center = CGPointMake(200, 50);
    b.multipleTouchEnabled = YES;

    a.exclusiveTouch = YES;

    [self.view addSubview:a];
    [self.view addSubview:b];
}
- (void)hej:(id)sender
{
    NSLog(@"hej: %@", sender);
}
When running this, hej: gets called, with different senders, when pressing either of the buttons, even though one of them has exclusiveTouch set to YES. I've tried commenting out the multipleTouchEnabled lines, to no avail. Can somebody explain to me what I'm missing here?
Thanks,
Eli
From The iPhone OS Programming Guide:
Restricting event delivery to a single view:
By default, a view’s exclusiveTouch property is set to NO. If you set
the property to YES, you mark the view so that, if it is tracking
touches, it is the only view in the window that is tracking touches.
Other views in the window cannot receive those touches. However, a
view that is marked “exclusive touch” does not receive touches that
are associated with other views in the same window. If a finger
contacts an exclusive-touch view, then that touch is delivered only if
that view is the only view tracking a finger in that window. If a
finger touches a non-exclusive view, then that touch is delivered only
if there is not another finger tracking in an exclusive-touch view.
It states that the exclusive touch property does NOT affect touches outside the frame of the view.
To handle this in the past, I've used the main view to track ALL touches on screen instead of letting each subview track touches. The best way is to do:
if (CGRectContainsPoint(thesubviewIcareAbout.frame, theLocationOfTheTouch)) {
    // the subview has been touched, do what you want
}
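For context, that check would typically live in a touches handler on the containing view or view controller. A minimal sketch, assuming thesubviewIcareAbout is an ivar and that the subview itself doesn't intercept the touch (for example, its userInteractionEnabled is NO), so the containing view receives it:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint theLocationOfTheTouch = [[touches anyObject] locationInView:self.view];
    if (CGRectContainsPoint(thesubviewIcareAbout.frame, theLocationOfTheTouch)) {
        // the subview has been touched, do what you want
    }
}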
I was encountering an issue like this where taps on my UIButtons were getting passed through to a tap gesture recognizer that I had attached to self.view, even though I was setting isExclusiveTouch to true on my UIButtons. After reviewing the material here, I decided to put some code in my tap gesture handler that checks whether the tap location is contained in any UIButton's frame and whether that frame is also visible on screen. If both of those conditions are true, then the UIButton will already have handled the tap, and the event triggered in my gesture recognizer can be ignored as a pass-through. My logic loops over all subviews, checking whether each one is a UIButton, and then checking whether the tap was in that view and the view is visible.
@objc func singleTapped(tap: UITapGestureRecognizer)
{
    anyControlsBreakpoint()
    let tapPoint = tap.location(in: self.view)
    // Prevent taps inside of buttons from passing through to the singleTapped logic
    for subview in self.view.subviews
    {
        if subview is UIButton {
            if pointIsInFrameAndThatFrameIsVisible(view: subview, point: tapPoint)
            {
                return // Completely ignore pass-through events that were already handled by a UIButton
            }
        }
    }
    // ... normal single-tap handling continues here
}
Below is the code that checks whether the point was inside a visible button. Note that I hide my buttons by setting their alpha to zero; if you are using the isHidden property, your logic might need to check for that instead.
func pointIsInFrameAndThatFrameIsVisible(view: UIView, point: CGPoint) -> Bool
{
    return view.frame.contains(point) && view.alpha == 1
}