Currently in my project I have a simple NSImage which is drawn in my view's drawRect: method.
I need to change the NSImage when the user switches between Spaces, so I set the new image and call setNeedsDisplay: when NSWorkspaceActiveSpaceDidChangeNotification fires.
The problem is that the drawRect: method seems to be called too late; in other words, the previous image appears briefly before the new one. Is there any way to run the drawRect: method while the transition between Spaces is running, or any other way to accomplish the desired effect?
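For reference, here is roughly what the current setup looks like; a sketch only, where MySpaceAwareView and its image property are made-up names:

    #import <Cocoa/Cocoa.h>

    // Sketch of a custom view whose image swaps when the active Space changes.
    @interface MySpaceAwareView : NSView
    @property (strong) NSImage *image;
    @end

    @implementation MySpaceAwareView

    - (void)awakeFromNib {
        [super awakeFromNib];
        // Space-change notifications are posted on NSWorkspace's own
        // notification center, not the default NSNotificationCenter.
        [[[NSWorkspace sharedWorkspace] notificationCenter]
            addObserver:self
               selector:@selector(activeSpaceDidChange:)
                   name:NSWorkspaceActiveSpaceDidChangeNotification
                 object:nil];
    }

    - (void)activeSpaceDidChange:(NSNotification *)note {
        self.image = [NSImage imageNamed:@"AlternateImage"]; // hypothetical asset
        [self setNeedsDisplay:YES];
    }

    - (void)drawRect:(NSRect)dirtyRect {
        [self.image drawInRect:self.bounds];
    }

    - (void)dealloc {
        [[[NSWorkspace sharedWorkspace] notificationCenter] removeObserver:self];
    }

    @end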
Thanks
After upgrading my project to iOS 6, I realized that auto layout only takes effect by viewDidAppear, and most of my code expects the view's frame to be available in viewDidLoad. This limitation renders the really nice auto layout feature almost useless for me. Are there any suggestions to help me use auto layout?
For example, sometimes the developer needs to adjust information about a subview based on where auto layout chooses to place that particular subview. The subview's final location cannot be ascertained by the developer until AFTER the user has already seen it. The user should not see these information adjustments but be presented the final results all at once.
More specifically: What if I want to change an image in a view based on where auto-layout places that view? I cannot query that location and then change the image without the user seeing that happen.
As a general rule, the view's frame/bounds should never be relied on in viewDidLoad.
The viewDidLoad method only gets called once the view has been created, either programmatically or via a .nib/.xib file. At this point, the view has not been laid out, only loaded into memory.
You should always do your view layout in either viewWillAppear or viewDidAppear as these methods are called once the view has been prepared for presentation.
As a test, if you simply NSLog(@"frame: %@", NSStringFromCGRect(self.view.frame)); in both your viewDidLoad and viewWillAppear methods, you will see that only the latter method reports the actual view size in relation to any other elements wrapped around your view (such as a UINavigationBar or UITabBar).
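A minimal sketch of that test:

    - (void)viewDidLoad {
        [super viewDidLoad];
        // The frame here still reflects the nib, not the final layout.
        NSLog(@"viewDidLoad frame: %@", NSStringFromCGRect(self.view.frame));
    }

    - (void)viewWillAppear:(BOOL)animated {
        [super viewWillAppear:animated];
        // By now the frame accounts for surrounding bars and the actual screen.
        NSLog(@"viewWillAppear frame: %@", NSStringFromCGRect(self.view.frame));
    }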
As noted by @charshep in a comment, calling view.layoutIfNeeded() in viewWillAppear can do the trick.
Quote of his original comment:
I had trouble getting a table view to appear at the correct scroll position when pushing it [...] because layout wasn't occurring until after viewWillAppear. That meant the scroll calculation was occurring before the correct size was set so the result was off. What worked for me was calling layoutIfNeeded followed by the code to set the scroll position in viewWillAppear.
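A minimal sketch of that approach; the table view and saved offset are hypothetical examples, not code from the comment:

    - (void)viewWillAppear:(BOOL)animated {
        [super viewWillAppear:animated];
        // Force a layout pass now so frames are final before we scroll.
        [self.view layoutIfNeeded];
        [self.tableView setContentOffset:self.savedOffset animated:NO];
    }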
I am working on an app which works like MSPaint (something to draw lines, etc.).
I have a white UIView, basically where the user draws. On top of this UIView I set up a UIImage, which is gray, with a 0.4 alpha. I want this UIImage to be used as blotting paper. The idea is to disable touch when the user puts the palm of his hand on this area, so it's more comfortable to draw (multitouch is disabled, and with this "blotting paper" you won't draw something accidentally with your palm...).
Even if I bring the UIImage to the front, on top of the view, and even if I disable user interaction on this UIImage, it is still possible to draw on the UIView behind the UIImage (kind of strange!).
I do not understand what's happening: it seems that the image is transparent and that the UIView "behind" is still active, even though it's overlaid by the UIImage?!
Any help/indication/idea would be much appreciated! Thank you :-)
Have you set the "userInteractionEnabled" property of the UIImage to "NO"?
You may actually want to do the opposite. When you disable user interaction or touches, the view basically becomes invisible to touches and they are passed on to the next view.
In your case you do want userInteractionEnabled because you want the view to catch those touches.
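A sketch of that idea, with made-up names (blotter, drawingView):

    // An overlay that catches touches so the drawing view beneath never
    // sees them. UIImageView has userInteractionEnabled set to NO by
    // default, so it must be switched on explicitly.
    UIImageView *blotter = [[UIImageView alloc] initWithFrame:drawingView.frame];
    blotter.image = [UIImage imageNamed:@"GrayBlotter"]; // hypothetical asset
    blotter.alpha = 0.4;
    blotter.userInteractionEnabled = YES;
    [drawingView.superview addSubview:blotter];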
You have to disable the user interaction on the UIImageView, not the UIImage, and it should work.
Edit:
Or you could be sneaky and just add an empty view over it. Use the same frame size so it overlaps perfectly and that's it. You'll be able to see everything you need, and since it's not a subview of it there will be no interaction; touches will get registered but won't have any effect. :P
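A quick sketch of that trick (cover and drawingView are placeholder names):

    // An empty view laid over the drawing area; it receives the touches
    // so the drawing view below never does.
    UIView *cover = [[UIView alloc] initWithFrame:drawingView.frame];
    cover.backgroundColor = [UIColor clearColor];
    [drawingView.superview addSubview:cover];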
No better ideas unless you post some of your code...
OK, so I managed to do what I wanted to! YAY!
I have 3 different classes:
StrokesViewController (UIViewController) - the view controller.
StrokesView (UIView) - the view where the user draws the strokes.
BlottingPaper (UIView) - the blotting paper.
I have a XIB file "linked" to all three.
I created this new class, called "BlottingPaper", a subclass of UIView. The .h and .m files are actually empty (apart from #import <Foundation/Foundation.h>).
User interaction is enabled on BlottingPaper.
I do not use exclusiveTouch on this class.
In the XIB file, I just put a view on top of StrokesView and link it to BlottingPaper (modify the alpha as I want, blablabla...).
And that's it! When I put the palm of my hand on it, it doesn't draw anything in the area where my hand is, but I can still draw with my finger on the rest of the StrokesView!
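For reference, the whole BlottingPaper class described above is essentially empty; a sketch:

    // BlottingPaper.h - an empty UIView subclass. With user interaction
    // enabled, it swallows touches so the StrokesView underneath never
    // sees the palm resting on it.
    #import <UIKit/UIKit.h>

    @interface BlottingPaper : UIView
    @end

    // BlottingPaper.m
    #import "BlottingPaper.h"

    @implementation BlottingPaper
    @end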
In addition to Dancreek's response, you should be setting buvard.userInteractionEnabled = YES; so that it captures interaction.
You should also set buvard.exclusiveTouch = YES; so that buvard is the only view which will receive touch events.
When you remove buvard you should set buvard.exclusiveTouch = NO; so that other views will regain their ability to receive touches.
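A sketch of that lifecycle, where buvard is the blotting-paper view and strokesView is a placeholder for the drawing view:

    // When showing the blotting paper:
    buvard.userInteractionEnabled = YES;
    buvard.exclusiveTouch = YES;   // only buvard receives touches while visible
    [strokesView.superview addSubview:buvard];

    // When removing it:
    buvard.exclusiveTouch = NO;    // let other views receive touches again
    [buvard removeFromSuperview];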
In the world of delegates and detectors, I'm having trouble finding a way to detect whether an animation I am playing has finished. Once the animation is done, I want the UIImageView to fade out and remove itself from its superview.
I am using the common animationImages property of the UIImageView.
Right now, I have a once-per-second looped checker looking at the UIImageView's isAnimating value to determine if it's still playing.
But this feels a bit excessive, so does anyone know a smarter method of doing this, possibly by subclassing UIImageView if that's what it takes?
If you are using UIImageView's animationImages property, then you might need to follow Eugene's solution.
Immediately after calling the UIImageView's startAnimating method, you start an NSTimer which fires after animationDuration. In the timer's callback, you can hide the image view.
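A minimal sketch of that approach, with the fade-out-and-remove behavior from the question in the timer callback (imageView is a placeholder property, and animationDuration is assumed to be set explicitly):

    // Right after configuring animationImages:
    [self.imageView startAnimating];
    [NSTimer scheduledTimerWithTimeInterval:self.imageView.animationDuration
                                     target:self
                                   selector:@selector(animationDidFinish:)
                                   userInfo:nil
                                    repeats:NO];

    // Fired once the animation has run its course.
    - (void)animationDidFinish:(NSTimer *)timer {
        [UIView animateWithDuration:0.3 animations:^{
            self.imageView.alpha = 0.0;
        } completion:^(BOOL finished) {
            [self.imageView removeFromSuperview];
        }];
    }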
If you are setting your UIImageView's animationDuration property, why don't you try creating a timer which will fire after that same duration and set your UIImageView hidden in some method?
The scenario is that I have a UIViewController containing multiple "InteractiveUIImageViews" (inherited from UIImageView), each containing their own UIImage. In InteractiveUIImageView I have implemented the touchesBegan, touchesMoved and touchesEnded methods to handle their movement and behaviour on screen. Certain objects of this type will be set as 'containers' (think recycle bin), with the objective being that when one image is dragged onto it, it will be removed from the screen and placed inside it to be potentially retrieved later.
My current thinking would be to call a new method in UIViewController from the touchesEnded method of my InteractiveUIImageView, but being new to all this, I'm not really sure how to go about doing that (e.g. calling a method from the 'parent'), or indeed if this is the best way to achieve what I want to do.
Thanks in advance for any advice.
I'm afraid your question is (to me at least) a bit unclear. I gather that you are trying to drag a UIImage around a scene and drop it in drop-locations.
What is unclear is your class hierarchy. I believe that you are going about it the wrong way: you shouldn't have to subclass UIImageView at all.
Instead I would urge you to let the UIViewController manage the movement of the images. When you touch down on an image, you also touch down on its parent (containing) view.
What you have to do then is reposition the UIImageView (all handled by the UIViewController) as you drag the image across the screen. When you let go, you check whether your finger was inside your drop zone on touch-up.
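A sketch of that approach in the view controller, with hypothetical names (draggedImageView, dropZoneView):

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        // Follow the finger with the image view.
        CGPoint p = [[touches anyObject] locationInView:self.view];
        self.draggedImageView.center = p;
    }

    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
        CGPoint p = [[touches anyObject] locationInView:self.view];
        if (CGRectContainsPoint(self.dropZoneView.frame, p)) {
            // Dropped on the 'container': take it off screen, stash it away.
            [self.draggedImageView removeFromSuperview];
        }
    }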
Currently, any time I manually move a UIImage (via handling the touchesMoved event) the last thing I call in that event is [self setNeedsDisplay], which effectively redraws the entire view.
My images are also being animated, so every time a frame of animation changes, I have to call setNeedsDisplay.
I find this to be horrific, since I don't expect the iPhone/Cocoa to be able to perform such frequent screen redraws very quickly.
Is there an optimal, more efficient way that I could be doing this?
Perhaps somehow telling Cocoa to update only a particular region of the screen (the rect region of the image)?
setNeedsDisplayInRect: does exactly what you need.
See documentation at developer.apple.com
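A sketch of its use for the case described above; imageRect is a hypothetical property tracking the image's current frame:

    - (void)moveImageToRect:(CGRect)newRect {
        // Redraw only where the image was plus where it is now,
        // instead of invalidating the whole view.
        CGRect dirty = CGRectUnion(self.imageRect, newRect);
        self.imageRect = newRect;
        [self setNeedsDisplayInRect:dirty];
    }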