So I'm working on an issue when trying to do some simple animation using CAKeyframeAnimation, and I believe my problem is more related to not fully understanding how NSWindow, NSView, and CALayer work together.
I have two main objects in question: MyContainerWindow (an NSWindow subclass) and MyMovableView (an NSView subclass). My goal is to animate MyMovableView back and forth across the screen while maintaining the ability to click on anything through MyContainerWindow, unless you are clicking wherever MyMovableView currently is. I am able to accomplish the first part fine by calling -addAnimation:forKey: on myMovableView.layer, and everything is great except I can't click through MyContainerWindow. I could make the window smaller, but then the animation would be clipped to the bounds of the window.
Important points:
1) MyContainerWindow is initialized with initWithContentRect:[[NSScreen mainScreen] frame] styleMask:NSBorderlessWindowMask backing:NSBackingStoreBuffered defer:NO
2) I call setWantsLayer:YES on MyMovableView
3) MyContainerWindow is clear, and I want it to behave as if there wasn't a window at all, but I need it so I have a larger canvas to animate on.
Is there something obvious I'm missing to be able to click through an NSWindow?
Thanks in advance!
My solution in this scenario was actually to use:
[self setIgnoresMouseEvents:YES];
I was originally hoping to retain mouse events on the specific CALayer I'm animating, but after some further research I understand that comes at the cost of custom-drawing everything from scratch, which is not ideal for this particular project.
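For completeness, here's a minimal sketch of the whole setup as I understand it; the exact init details are assumptions based on the points above, not verbatim code from the project:

#import <Cocoa/Cocoa.h>

@interface MyContainerWindow : NSWindow
@end

@implementation MyContainerWindow
- (id)init
{
    self = [super initWithContentRect:[[NSScreen mainScreen] frame]
                            styleMask:NSBorderlessWindowMask
                              backing:NSBackingStoreBuffered
                                defer:NO];
    if (self) {
        [self setOpaque:NO];
        [self setBackgroundColor:[NSColor clearColor]];
        // The key line: mouse events fall through to whatever is behind the window.
        [self setIgnoresMouseEvents:YES];
    }
    return self;
}
@end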
Related
I am working on an app which works a bit like MSPaint (something to draw lines, etc.).
I have a white UIView, basically where the user draws. On top of this UIView I set up a UIImage, which is gray, with 0.4 alpha. I want this UIImage to be used as blotting paper. The idea is to disable touches in this area when the user rests the palm of his hand on it, so it's more comfortable to draw (multitouch is disabled, and with this "blotting paper" you won't accidentally draw something with your palm...).
Even if I bring the UIImage to the front, on top of the view, and even if I disable user interaction on this UIImage, it is still possible to draw on the UIView behind the UIImage (kind of strange!).
I do not understand what's happening: it seems that the image is transparent, and that the UIView "behind" is still active, even though it's overlaid by the UIImage?!
Any help/indication/idea would be much appreciated! Thank you :-)
Have you set the "userInteractionEnabled" property of the UIImage to "NO"?
You may actually want to do the opposite. When you disable user interaction or touches, the view basically becomes invisible to touches, and they are passed on to the next view.
In your case you do want userInteractionEnabled set to YES, because you want the view to catch those touches.
You have to disable the user interaction on the UIImageView, not the UIImage, and it should work.
Edit:
Or you could be sneaky and just add an empty view over it. Use the same frame size so it overlaps perfectly, and that's it. You'll be able to see everything you need, and since it's not a subview of the drawing view there will be no interaction: touches will get registered but won't have any effect. :P
No better ideas unless you post some of your code...
OK, so I managed to do what I wanted to! YAY!
I have 3 different classes:
StrokesViewController (UIViewController) - the view controller.
StrokesView (UIView) - the view where the user draws the strokes.
BlottingPaper (UIView) - the blotting paper.
I have a XIB file "linked" to all three.
I created a new class, called "BlottingPaper", a subclass of UIView. The .h and .m files are actually empty (I do #import <Foundation/Foundation.h>).
User interaction is enabled on BlottingPaper.
I do not use exclusiveTouch on this class.
In the XIB file, I just put a view on top of StrokesView and set its class to BlottingPaper (modifying the alpha as I want, blablabla...).
And that's it! When I put the palm of my hand on it, it doesn't draw anything in the area where my hand is, but I can still draw with my finger on the rest of the StrokesView!
In addition to Dancreek's response, you should be setting buvard.userInteractionEnabled = YES; so that it captures interaction.
You should also set buvard.exclusiveTouch = YES; so that buvard is the only view which will receive touch events.
When you remove buvard you should set buvard.exclusiveTouch = NO; so that other views will regain their ability to receive touches.
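Putting the thread's advice together, here is a minimal programmatic sketch of the same idea; the frame and the strokesView name are assumptions for illustration, not code from the thread:

// BlottingPaper is an empty UIView subclass, as described above.
BlottingPaper *buvard = [[BlottingPaper alloc] initWithFrame:CGRectMake(0, 280, 320, 200)]; // assumed frame
buvard.backgroundColor = [UIColor grayColor];
buvard.alpha = 0.4;
buvard.userInteractionEnabled = YES; // catch touches instead of letting them pass through
buvard.exclusiveTouch = YES;         // while buvard is tracking a touch, other views get none
[strokesView.superview addSubview:buvard]; // added last, so it sits on top of the drawing view

// When removing it later:
buvard.exclusiveTouch = NO;
[buvard removeFromSuperview];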
In my application, I have a UIViewController with a subclassed UIView (and several other elements) inside of it. Inside of the UIView, called DrawView, in my drawRect: method, I draw a table grid type thing, and plot an array of CGPoints on the grid. When the user taps on the screen, it calls touchesBegan:withEvent: and checks to find the closest point on the grid to the touch, adds a point to the array that the drawRect: method draws points from, and calls [self setNeedsDisplay]. As the user moves their finger around the screen, it checks to see if the point changed from the last location, and updates the point and calls [self setNeedsDisplay] as necessary.
This works great in the Simulator. However, when run on a real iPhone, it is very slow; when you move your finger around, it lags in drawing the dot. I have read that running the calculations for where to place the points on a different thread can improve performance. Does anyone know from experience whether that's true? Any other suggestions to reduce lag?
Any other suggestions to reduce lag?
Yes. Don't use -drawRect:. The reason why is long and complicated, but basically, when UIKit sees that you've implemented -drawRect: in your UIView subclass, rendering goes through the really slow software-based rendering path. When you draw with CALayer objects and composite views, you can get hardware-accelerated graphics, which can make your app FAR more performant.
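For example, here is a minimal sketch of the layer-based approach for the dot that tracks the finger; dotLayer and the grid-snapping helper are illustrative names, not from the question:

#import <QuartzCore/QuartzCore.h>

// One-time setup, e.g. in the view's initializer (keep dotLayer around as an ivar):
CALayer *dotLayer = [CALayer layer];
dotLayer.bounds = CGRectMake(0, 0, 10, 10);
dotLayer.cornerRadius = 5.0; // round dot
dotLayer.backgroundColor = [[UIColor redColor] CGColor];
[self.layer addSublayer:dotLayer];

// In touchesMoved:withEvent:, just move the layer; no redraw of the whole view.
CGPoint p = [[touches anyObject] locationInView:self];
[CATransaction begin];
[CATransaction setDisableActions:YES]; // skip the implicit animation so tracking is instant
dotLayer.position = [self closestGridPointTo:p]; // hypothetical snapping helper
[CATransaction end];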
I'm currently using UISplitViewController as my app's rootViewController. To present progress dialogs I use presentModalViewController, but for one reason and another I'm not entirely happy with it and want to do my own modal thing.
If one of my modal views is supposed to be shown, I want to add another subview to my app's main window. This subview is going to be managed by its own UIViewController subclass to make it rotate properly and do all the stuff I need.
Is this design approach okay, or will I run into issues with UISplitViewController (it is very special in so many ways and seems to be easily offended if not treated right! :-))?
Is it a problem to have two UIViewControllers next to each other?
Please don't discuss "why don't you use presentModalViewController then?".
You may be ok with UISplitViewController if you do it properly, as presentModalViewController does something similar.
One alternative you should look into is UIPopoverController and UIViewController's modalInPopover property.
Also, you say you aren't happy with presentModalViewController; perhaps if you say what was wrong with it, we can help you work around whatever issues you had. This is the exact case it seems to be meant for.
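If the popover route fits, a minimal sketch looks like this; ProgressViewController and someButton are illustrative names, not from the question:

ProgressViewController *progressVC = [[ProgressViewController alloc] init];
progressVC.modalInPopover = YES; // taps outside won't dismiss the popover while this is set

UIPopoverController *popover =
    [[UIPopoverController alloc] initWithContentViewController:progressVC];
[popover presentPopoverFromRect:someButton.frame
                         inView:self.view
       permittedArrowDirections:UIPopoverArrowDirectionAny
                       animated:YES];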
The scenario is that I have a UIViewController containing multiple "InteractiveUIImageViews" (a subclass of UIImageView), each containing their own UIImage. In InteractiveUIImageView I have implemented methods for touchesBegan, touchesMoved and touchesEnded to handle their movement and behaviour on screen. Certain objects of this type will be set as 'containers' (think recycle bin), with the objective being that when one image is dragged onto it, it will be removed from the screen and placed inside it to be potentially retrieved later.
My current thinking would be to call a new method in UIViewController from the touchesEnded method of my InteractiveUIImageView but being new to all this I'm not really sure how to go about doing that (e.g. calling a method from the 'parent') or indeed if this is the best way to achieve what I want to do.
Thanks in advance for any advice.
I'm afraid your question is (to me at least) a bit unclear. I gather that you are trying to drag a UIImage around a scene and drop it in drop locations.
What is unclear is your class hierarchy. I believe that you are going about it the wrong way. You shouldn't have to subclass UIImageView at all.
Instead I would urge you to let the UIViewController manage the movement of the images. When you touch down on an image, you also touch down on its parent (containing) view.
What you have to do is reposition the UIImageView (all handled by the UIViewController) as you drag the image across the screen. When you let go, check whether your finger was inside your drop zone on touch-up.
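A minimal sketch of that controller-driven approach could look like the following; draggedView, dropZone, and the hit-testing details are assumptions, not code from the thread (note that image views need userInteractionEnabled = YES to be hit-test targets):

@interface DragViewController : UIViewController
@property (nonatomic, strong) UIImageView *draggedView;
@property (nonatomic, strong) UIView *dropZone; // the 'container' / recycle bin
@end

@implementation DragViewController

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self.view];
    UIView *hit = [self.view hitTest:p withEvent:event];
    if ([hit isKindOfClass:[UIImageView class]]) {
        self.draggedView = (UIImageView *)hit; // start dragging the touched image
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // Reposition the image view under the finger.
    self.draggedView.center = [[touches anyObject] locationInView:self.view];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // On touch-up, check whether the image landed in the drop zone.
    if (self.draggedView &&
        CGRectContainsPoint(self.dropZone.frame, self.draggedView.center)) {
        [self.draggedView removeFromSuperview]; // "stored" in the container
    }
    self.draggedView = nil;
}

@end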
Afternoon. I have a UIImageView that I programmatically add to the window. In fact I have multiple UIImageViews added this way, and when I click on any specific UIImageView I want it to become 'top dog', so to say, and be drawn over all the other objects on the screen, basically like the priority drawing that MS Windows does with its windows. I've scoured all the built-in options for UIImageViews when it comes to layering, but I cannot seem to find any! I know it exists, because in Interface Builder there is a command for sending a view to back/front. How do I access these programmatically?
Edit:
Also, I fear that I might have to access the order in which the subviews are pushed onto the 'subview stack' and manually move them around to achieve the result I want. If so, how would I go about doing this?
Edit 2:
Perhaps these are the methods I'm looking for?
bringSubviewToFront:
sendSubviewToBack:
exchangeSubviewAtIndex:withSubviewAtIndex:
Do these allow for easy index shuffling?
The UIView class has bringSubviewToFront: and sendSubviewToBack: for changing subviews' z-order (see the "Managing the View Hierarchy" section in the class reference for more).
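As a minimal sketch, inside a UIImageView subclass (with userInteractionEnabled set to YES, since it defaults to NO for image views) the touched view can promote itself; this is illustrative, not code from the thread:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Bring the tapped image view above its siblings.
    [self.superview bringSubviewToFront:self];
    [super touchesBegan:touches withEvent:event];
}

// Swapping two subviews by index is also possible:
// [containerView exchangeSubviewAtIndex:0 withSubviewAtIndex:1];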