My problem is the following:
I wish to make a sprite visible once a button is pressed. Other forums I visited suggested I use visible = false, but that didn't work; others said I should use self.visible = false, but that didn't work either.
I connected the pressed signal of the button to the sprite but the problem comes when I have to change the visibility.
What should I use that will work?
Either setting the 'visible' property to true/false or using the hide()/show() methods should work. There's no apparent reason for either of them 'not working', other than perhaps calling the methods or setting the property on the wrong object.
Try again in a minimal test scene: a scene whose root (and only node) is a Sprite, with any texture (like the default icon.png) on it. Attach a built-in script to the node and, in the _ready() function, try hiding it with 'hide()' or 'visible = false'.
I'd like to listen in to the text change event for an NSTextField, so I looked at the docs and saw a textDidChange event. So, I connect my text field to a delegate and implement the method, run the app, and nothing happens. I also tried textDidEndEditing, but to no avail.
What are these methods for, exactly? How can one trigger them? Changing the text and tabbing out of the field or pressing enter doesn't do anything. After a bit of googling I found a controlTextDidChange method (in the NSControl parent class of NSTextField) so I implemented it, and it worked (though it had to be an override func, not just a plain func).
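For reference, here's the shape of what ended up working, in Objective-C terms (a minimal sketch; my Swift version is the same method, just declared as an override func):

// Assumes the text field's delegate outlet is connected in Interface Builder,
// or set in code via textField.delegate = self.
- (void)controlTextDidChange: (NSNotification *)notification
{
    NSTextField *field = notification.object;   // fires on every keystroke
    NSLog(@"text is now: %@", field.stringValue);
}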
Handling the text change event in .NET would be a cinch: just switch to the events panel, double-click on "changed", and lo and behold, it creates a method stub in which I can handle the event.
Obviously, I'm an Xcode newb comparatively. What's the typical way to go about handling events in Xcode/Swift? Did I go about it the wrong way, and is there a better/easier way?
Thanks.
You know that effect when you drag out an item from the dock and that cloud drag cursor appears and when you let go it disappears with a poof effect? Similarly, in Xcode when you drag a breakpoint outside the line number gutter the same happens.
I would like to implement the same effect in my application but can't find the right way.
I have an NSImageView descendant that implements the NSDraggingSource and NSDraggingDestination protocols. I have several instances of this view which allow dragging their content between one another (a copy operation takes place in this scenario, but that's only relevant to show that I have drag and drop implemented and fully working for standard tasks).
Now, when I drag an image out of its view to anywhere (except another view instance), I want the delete operation to take place on drop. However, the drag operation is fully controlled by the target view. I could manage to make my own views respond the way I want (even though this would be a lot of work), but it fails completely if I'm dragging outside my application.
If I could get the delete drag operation, however, I could handle this easily:
- (void)draggedImage: (NSImage *)image
             endedAt: (NSPoint)screenPoint
           operation: (NSDragOperation)operation
{
    if (operation == NSDragOperationDelete) {
        NSRect rect = [self.window convertRectToScreen: [self convertRect: self.frame fromView: nil]];
        NSShowAnimationEffect(NSAnimationEffectPoof, rect.origin, self.bounds.size, nil, nil, NULL);
    }
}
I already tried to set the delete cursor like this:
- (void)draggingSession: (NSDraggingSession *)session
           movedToPoint: (NSPoint)screenPoint
{
    if (!NSPointInRect(screenPoint, self.window.frame)) {
        [[NSCursor disappearingItemCursor] set];
    }
}
(For simplicity this is for the entire window at the moment.) This works as long as I don't hit the desktop or a Finder window; there it starts flickering, probably because the Finder concurrently sets its own drag cursor. It has no effect at all when I hit the Dock. This also happens when I define my own pasteboard data type.
Additionally, any other drop-enabled view in my application (e.g. NSTextView) will still accept my drag data, which I don't want to happen (I'm writing an NSURL with a custom scheme to the dragging pasteboard).
Update:
I've come a few steps further. As Peter already indicated, it is essential to handle draggingSession:sourceOperationMaskForDraggingContext:, which looks like this in my code:
- (NSDragOperation)draggingSession: (NSDraggingSession *)session
  sourceOperationMaskForDraggingContext: (NSDraggingContext)context
{
    switch (context) {
        case NSDraggingContextOutsideApplication:
            return NSDragOperationDelete;

        case NSDraggingContextWithinApplication:
        default:
            return NSDragOperationDelete | NSDragOperationMove;
    }
}
This solves two problems: 1) outside the application the drag operation is not accepted at all, and 2) it keeps all standard views from accepting this operation too (because NSOutlineView, NSTextView, etc. don't handle the given drag operations). Additionally, I created my own pasteboard data type, but this doesn't seem to be necessary. Still, it's cleaner to have one.
Unfortunately, dropping outside of my NSImageView descendant (both within and outside the application) does not give me NSDragOperationDelete in draggedImage:endedAt:operation: (quoted above) but NSDragOperationNone. Additionally, the drag cursor when moving the mouse outside the application is the not-allowed one, not the disappearing-item one. So, if someone could solve these two things I'd accept it as the answer to my question.
There may be a less hacky way to do this, but I can think of one possibility: once the drag begins, create a transparent, borderless window the size of the desktop to be a dummy drag destination. You may need to call -setIgnoresMouseEvents: with NO to allow it to receive the drop even though it's transparent. You'll also have to set its window level above the menu bar (NSMainMenuWindowLevel + 1) to make sure that drags to the menu bar or Dock are still intercepted by your window.
As a drag destination, this window will have to check if one of your image views is under the cursor. You can use +[NSWindow windowNumberAtPoint:belowWindowWithWindowNumber:] to find the window below your transparent overlay window which is under the cursor. Then use -[NSApplication windowWithWindowNumber:] to determine if it's one of your app's windows and, if so, call -[NSView hitTest:] on its content view (converting the cursor coordinates as appropriate) to find the view. You can then forward NSDraggingDestination methods to that view as desired.
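A rough sketch of the overlay-window setup (window configuration only; MyCustomDragType is a placeholder for your custom pasteboard type):

NSWindow *overlay = [[NSWindow alloc] initWithContentRect: [[NSScreen mainScreen] frame]
                                                styleMask: NSBorderlessWindowMask
                                                  backing: NSBackingStoreBuffered
                                                    defer: NO];
overlay.backgroundColor = [NSColor clearColor];
overlay.opaque = NO;
[overlay setIgnoresMouseEvents: NO];        // must receive the drop despite being transparent
overlay.level = NSMainMenuWindowLevel + 1;  // above the menu bar and the Dock
[overlay.contentView registerForDraggedTypes: [NSArray arrayWithObject: MyCustomDragType]];
[overlay orderFrontRegardless];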
My guess is that NSDragOperationDelete only concerns drag/drops targeting the dock's Trash, and nothing else.
NSDragOperationGeneric should be a better fit.
Be sure not to mix methods; if you're going the 10.7 route, prefer:
-(void)draggingSession:(NSDraggingSession *)session endedAtPoint:(NSPoint)screenPoint operation:(NSDragOperation)operation
I just finished implementing something very similar to what you've described. Your code is quite similar to mine with a few exceptions:
Where you use draggedImage:endedAt:operation:, I'm using draggingSession:endedAtPoint:operation:. Also, in this method, I do not check the operation since all of my operations are set to generic. This is also where I perform the actual delete, so I only show the poof animation if the delete succeeds.
In your draggingSession:movedToPoint:, you may also want to set the session's animatesToStartingPositionsOnCancelOrFail to NO when the point is outside the window (this is also where you set the disappearingItemCursor) and to YES otherwise. This adds a final touch: once the delete operation completes, the dragged image doesn't rebound back to its originating source location.
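Put together, the relevant part looks roughly like this (a sketch, with the same simplified window-frame test as in the question):

- (void)draggingSession: (NSDraggingSession *)session
           movedToPoint: (NSPoint)screenPoint
{
    BOOL outside = !NSPointInRect(screenPoint, self.window.frame);
    session.animatesToStartingPositionsOnCancelOrFail = !outside; // no rebound after a successful delete
    if (outside) {
        [[NSCursor disappearingItemCursor] set];
    } else {
        [[NSCursor arrowCursor] set];
    }
}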
As for why you are not seeing the proper cursors, my guess is that you are using a pasteboard type that other things (Finder, etc.) are willing to accept. I know you said you created your own pasteboard data types, but I would double-check that they are being used. When I made my own types, I gained control over the cursor. At least, I've not found anything that contends for my custom type.
OS X 10.7 or better:
- (void)draggingEnded: (id<NSDraggingInfo>)aInfo
{
    /*! Delete the current leaf if it is no longer represented in the subviews. */
    RETURN_IF ( , NSNotFound != [self.visibleLeafs indexOfObject: iSelectedControl.representedObject] );

    iIgnoreLeafsObservation = YES;
    [self.leafs removeObject: iSelectedControl.representedObject];
    iIgnoreLeafsObservation = NO;
}
This assumes that you have visually removed the object already. You could also set a flag during draggingEntered: and draggingExited: to indicate whether the drag session was last inside or outside of self.
In this program, leafs is the actual observed collection, while visibleLeafs is a simulated collection of the representedObject of each visible control. Other NSDraggingDestination methods present what will happen visually before it actually happens.
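A sketch of the flag variant (the ivar and the helper are illustrative, named in the same style as the code above):

// Hypothetical ivar in the view: BOOL iDragIsInside;
- (NSDragOperation)draggingEntered: (id<NSDraggingInfo>)aInfo
{
    iDragIsInside = YES;
    return NSDragOperationGeneric;
}

- (void)draggingExited: (id<NSDraggingInfo>)aInfo
{
    iDragIsInside = NO;
}

- (void)draggingEnded: (id<NSDraggingInfo>)aInfo
{
    if (!iDragIsInside) {
        [self deleteSelectedLeaf]; // hypothetical helper doing the removal shown above
    }
}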
You can use the following call to make the poof appear at the mouse location:
NSShowAnimationEffect(NSAnimationEffectPoof, [NSEvent mouseLocation], NSZeroSize, NULL, NULL, NULL);
See the Application Kit Functions Reference for further information.
So I have a few properties that I'm using in some sample code I'm playing with, notably the "tag" property of the UIView class. Now I set this property, and if I NSLog it or set up control statements based on the value of tag, I can see that the value I set is there and is being acted upon as expected.
However, if I hover the mouse over .tag to see which value is there, I get nothing at all from Xcode: no pop-up showing the value. So then I go to the auto/local/all window and try "Add Expression..." (it seems that's the only way to set up a traditional "watch" variable; if there is another way, please let me know). Anyhow, I put my object.tag into the "watch" window and it's blank. No value. It isn't zero; it's just nothing, as if it didn't exist.
Of course if I hover the mouse over the "object" part of "object.tag" then I get a pop up for the object with the disclosure triangle, which I expand, then I go looking for "_tag" (which appears to be the underlying instance variable).
So what is so difficult about this? Why isn't the tag property viewable during debug by simply hovering over it? Is this something to do with properties in Xcode dev?
I'm running Xcode 4.3.2
The tag property, like any other Objective-C property, is syntactic sugar. In fact, properties are implemented as accessor methods, which, in turn, are translated to objc_msgSend() function calls. This machinery is nothing like accessing a struct field.
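To illustrate what that sugar expands to, all three of these lines do the same thing (a sketch; view stands for any UIView):

#import <objc/message.h>

NSInteger a = view.tag;   // dot syntax: syntactic sugar...
NSInteger b = [view tag]; // ...for this accessor call...
NSInteger c = ((NSInteger (*)(id, SEL))objc_msgSend)(view, @selector(tag)); // ...which compiles down to this message send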
The debugger can show any field in a struct basically because it doesn't require any special knowledge and doesn't have any consequences. Only the struct definition is needed. Getting the value of an Objective-C property, on the other hand, requires executing code in the process context. You can do that manually in the debugger console, but the debugger just won't do this automatically.
I think this is still theoretically possible in isolated cases, but incredibly hard. Consider a case where executing an accessor method changes the object's internal state. For example, calling -[UIViewController view] (accessing its view property) results in loading the view. There may also be delegate methods called, etc. In such cases, hovering the mouse over the property in the IDE would alter the execution state of the process and thus make debugging itself a joke.
I'm trying to make my sprite blink, but it just disappears. I have searched Google, but I can't find a solution. Here's what I'm doing:
CCBlink * blinker = [CCBlink actionWithDuration: 0.5 blinks: 1];
[player runAction: blinker];
This method is called when two of my sprites collide. When the collision takes place, I want the 'player' sprite to blink for a few seconds. At the moment, when the sprites collide, the 'player' sprite becomes invisible. Thanks.
CCBlink seems to work by toggling the visibility of your sprite on and off a given number of times within the duration you gave it. Depending on the duration you set, you might sometimes end up in the "off" visibility state at the end of the action (very buggy, yes; I ran into that too), which isn't quite desired.
Two suggestions:
(1) Play around with the number of blinks.
(2) Always force the sprite to be visible at the end of the blink:
Add [CCShow action] to the end of your blink action. You can string both actions together in a CCSequence.
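Roughly like this (a sketch against the cocos2d-iphone API):

CCBlink *blink = [CCBlink actionWithDuration: 0.5 blinks: 3];
CCSequence *blinkThenShow = [CCSequence actions: blink, [CCShow action], nil]; // forces visibility back on
[player runAction: blinkThenShow];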
Verify that when (and where) you process 'onCollision' types of events you do not remove the sprite from its parent.
Blink action is buggy. I always use the following to guarantee that the object remains visible at the end of the animation:
Sequence* action = Sequence::create(Blink::create(BLINK_DURATION, BLINK_TIMES), Show::create(), NULL);
UPDATE: With the blush of shame I discovered that the order had nothing to do with the speed of tapping. I was calling the visual code before the super touchesEnded:withEvent: call, which was why, if you tapped really fast, the display never got a chance to draw the highlighted state before being dismissed again. Because the offending code blocked the main thread for just a few milliseconds, the highlighted state would stay visible until the main thread unblocked again, whereas if you tapped really fast, it looked like nothing happened at all. Moving the super call up to the top of the overridden method fixed it all. Sorry, if any moderator sees this post it can be deleted. shame
This problem must have been asked a thousand times on SO, yet I can't find an explanation that matches my specific issue.
I have a UIButton subclass with a custom design. Of course the design is custom enough that I can't just use the regular setSomething:forControlState: methods. I need a different background color on touch, for one, and some icons that need to flash.
To implement these view changes, I (counter-intuitively) put the display code in (A) touchesBegan:withEvent: and (Z) touchesEnded:withEvent:, before calling their respective super methods. It feels weird, but it works as intended, or so it seemed at first.
After addTarget:action:forControlEvents: was used to bind UIControlEventTouchUpInside to the method (X) itemTapped:, I would expect these methods to always fire in the order (A)(X)(Z). However, if you tap the screen really fast (or click the mouse in the simulator), they fire in the order (A)(Z)(X), where (A) and (Z) follow each other in such rapid succession that the whole visual feedback for tapping is invisible. This is unwanted behavior, and it also can't be the way to go, since so many apps need similar behavior, right?
So my question to you is: what am I doing wrong? One guess is that the visual appearance of the buttons shouldn't be manipulated in touchesBegan:withEvent: and touchesEnded:withEvent:, but then where? Or am I missing some other well-known fact?
Thanks for the nudge,
Eric-Paul.
I don't know why the order is different, but here are two suggestions to help deal with it.
What visual changes are you making to the button? If it's things like changing title/image/background image, you can do all this by modifying the highlighted state of the button. You can set a few properties like title and background image per-state. When the user's finger is down on the button, the highlighted state is turned on, so any changes you make to this state will be visible at this time. Do note that if you're making use of the selected state on the button, then you'll need to also set up the visual appearance for UIControlStateHighlighted|UIControlStateSelected, otherwise it will default back to inheriting from Normal when both highlighted & selected are on.
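For example (a sketch; the image variables stand in for your own assets):

[button setTitle: @"Tap me" forState: UIControlStateNormal];
[button setBackgroundImage: normalImage forState: UIControlStateNormal];
[button setBackgroundImage: pressedImage forState: UIControlStateHighlighted];
// If you also use the selected state, cover the combined state explicitly:
[button setBackgroundImage: pressedSelectedImage
                  forState: UIControlStateHighlighted | UIControlStateSelected];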
The other suggestion is to ditch touchesBegan:withEvent: and touchesEnded:withEvent: and switch over to using the methods inherited from UIControl, namely beginTrackingWithTouch:withEvent: and endTrackingWithTouch:withEvent:. You may also want to implement continueTrackingWithTouch:withEvent: and use the touchInside property to turn off your visual tweaks if the touch leaves the control.
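In the subclass that might look something like this (a sketch; applyPressedLook and applyNormalLook are hypothetical helpers containing your custom drawing changes):

- (BOOL)beginTrackingWithTouch: (UITouch *)touch withEvent: (UIEvent *)event
{
    [self applyPressedLook];
    return [super beginTrackingWithTouch: touch withEvent: event];
}

- (BOOL)continueTrackingWithTouch: (UITouch *)touch withEvent: (UIEvent *)event
{
    // touchInside is inherited from UIControl; drop the pressed look if the finger leaves.
    if (self.touchInside) {
        [self applyPressedLook];
    } else {
        [self applyNormalLook];
    }
    return [super continueTrackingWithTouch: touch withEvent: event];
}

- (void)endTrackingWithTouch: (UITouch *)touch withEvent: (UIEvent *)event
{
    [self applyNormalLook];
    [super endTrackingWithTouch: touch withEvent: event];
}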