I ran into this problem: I have 27 UIViews lying on top of each other with exactly the same frame, although visually they appear to lie next to each other. Visual representation:
So the frame for all of them is the big yellow square. Each one is masked using a CAShapeLayer (the colored triangles). I add a UIPanGestureRecognizer to all of them (not sure this is the best way to do it), and I want to determine which one the user panned on. Of course, only the top one is accessible, and it is always the view reported by the gesture recognizer. How do I determine which of the UIViews the finger was panning over? A for loop seems expensive, too. Or am I approaching this wrong?
This is the code for creating all of them and handling the pan gesture:
Gist to code
The sender.view.tag is always 27 though, of course... No clue how to solve this one, guys! Anyone have a clue?
Figured it out!
I moved everything into a custom UIView: in drawRect I draw all the triangles using CAShapeLayers and add a UIPanGestureRecognizer to the view itself, and in the gesture handler you can loop over the layers and check with CGPathContainsPoint whether a layer's path contains the gesture point.
Solution code snippet
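In case the gist ever disappears, here is a minimal sketch of the handler; the triangleLayers property and the handlePan: name are my own placeholders, not taken from the original code:

// In the custom view; "triangleLayers" is a hypothetical NSArray of the 27 CAShapeLayers.
- (void)handlePan:(UIPanGestureRecognizer *)gesture {
    CGPoint point = [gesture locationInView:self];
    for (NSUInteger i = 0; i < self.triangleLayers.count; i++) {
        CAShapeLayer *layer = self.triangleLayers[i];
        // CGPathContainsPoint tests whether this triangle's path contains the touch point.
        if (CGPathContainsPoint(layer.path, NULL, point, NO)) {
            NSLog(@"Pan is over triangle %lu", (unsigned long)i);
            break;
        }
    }
}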
Hope this helps someone!
Related
I've scoured the internet and I cannot seem to find any help on this. I want an image to perform a wipe animation. By that I mean I would like the image itself to fade in from left to right (not move from left to right, but fade in, as in reveal itself; I hope that's not a terrible description). I've found material on how to transition the image with a wipe, but it's outdated, and I don't want to transition the image, I want to straight up fade it in. If anybody could help me with this I would be incredibly grateful. Thank you so much, and let me know if I can clarify anything!
The first thing that came to my mind is to use a separate UIImageView with an image something like this: http://www.creativecow.net/articles/ahearn_luke/creating_clouds/images/gradient.jpg (but transparent-to-white).
So, once you want to fade out your image, you add a new UIImageView as a subview, with a frame about three times the width of the original image you want to hide. Once you add it as a subview, visually nothing changes, because the leading part of the hovering UIImageView is transparent. But then you animate the frame (origin.x) of the hovering UIImageView so that it moves to the left. While this happens, the bottom image view is gradually hidden from one side. After the animation ends, you remove/hide both UIImageViews.
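A rough sketch of that idea. I'm assuming the image to hide is self.imageView, that it sits on a white background, and that gradient.png fades from transparent on the left to white on the right; all of those names are mine, not from the answer:

// Hovering gradient image view, three times the width of the image to hide.
CGRect imageFrame = self.imageView.frame;
UIImageView *overlay = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"gradient.png"]];
overlay.frame = CGRectMake(imageFrame.origin.x, imageFrame.origin.y,
                           imageFrame.size.width * 3.0, imageFrame.size.height);
[self.view addSubview:overlay];   // nothing changes visually yet: the transparent part covers the image

[UIView animateWithDuration:1.0 animations:^{
    // Slide the overlay to the left so its white part gradually covers the image.
    CGRect frame = overlay.frame;
    frame.origin.x -= imageFrame.size.width * 2.0;
    overlay.frame = frame;
} completion:^(BOOL finished) {
    [overlay removeFromSuperview];
    self.imageView.hidden = YES;
}];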
Not the best solution, but if you are stuck and still want the effect, it's at least an option. Good luck!
I am developing an iOS 6 app. I have a UIViewController whose view needs a fixed orientation (portrait mode). But when the phone is rotated, one control on that view needs to be moved and rotated (so that it is always in the upper left corner and its text remains readable).
I am achieving this by shifting the control (a UIView) using the frame property of my control (it is a custom view, more on that later), and then using CGAffineTransformMakeRotation() afterwards, since I know it's not advisable to use the frame after rotating a view. Everything is fine so far, but here's the thing: that custom view has three UIButtons of type UIButtonTypeCustom as its subviews. Because I rotated the view but cannot rotate the buttons inside it (they are not squares), I need to rotate the buttons' titleLabels for the text to be readable in the new device orientation.
But it doesn't work very well. The text is rotated, as I intended, but it is clipped by the titleLabel, because the titleLabel has the wrong frame. I checked this by applying a border to the label. So I need to change the titleLabel's frame, right? But how can I do that? I tried setting it using [titleLabel setFrame:frameThatFits]; but to no avail (frameThatFits is a CGRect I created). Also, calling [button.titleLabel sizeToFit]; has no effect that I can see.
I am using [button setTitle:title forState:UIControlStateNormal]; to set the title.
TL;DR: I'm trying to change the frame/bounds of a UIButton's titleLabel after rotating it using an affine transformation. Any help?
Thanks.
PS: I can supply code if needed, but I wouldn't know what to show you. Tell me what you need and I'll post it.
OK, first of all, thanks to everyone who tried to help. I'm posting an alternative solution to my problem; although it doesn't really address the problem of changing the titleLabel's dimensions, it results in the proper display of my view controller.
It turns out using the frame is a bad idea. I initially used the frame to reposition the view and figured this couldn't be a problem because I only ever applied transformations afterwards, but I was wrong. Because, obviously, I then tried to change the titleLabel's frame AFTER the rotation. And that didn't work.
So the way to go here is to use the center property and the bounds of the view consistently throughout the code. That results in properly rotated buttons that don't need any fidgeting afterwards.
My takeaway is that I will never again use the frame property outside of an NSLog statement. But why [button sizeToFit]; wouldn't yield any results is still beyond me. If I ever figure it out, I might post it here.
EDIT:
@ZevEisenberg nailed it with this comment:
“Warning: If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.” So you are right to use the center and bounds here, but if you do not have a transform, the frame is perfectly safe to use.
NEXT EDIT:
Here's how I ended up repositioning the buttons:
-(CGPoint)centerForView:(UIView *)view{
    // calculate a suitable position for the view,
    // depending on the current orientation and the device type (iPhone 4S/5, etc.)
    CGPoint point = CGPointZero;   // actual calculation omitted in the original post
    return point;
}
Then, as a reaction to the device orientation change notification, I apply CGAffineTransformIdentity to all the views, reposition them using my centerForView: shown above, and apply the correct rotation transformation to the view. I do this for all the subviews every time the device rotates, like so:
-(void)setRightRotationTransformations{
    [self resetAllTransformations];
    self.someSubview.transform = CGAffineTransformRotate(self.someSubview.transform, -M_PI_2);
}
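For completeness, a sketch of how this might be wired up to the notification; the selector name and the orientation check are mine, not from the original code:

// e.g. in viewDidLoad
[[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(deviceOrientationDidChange:)
                                             name:UIDeviceOrientationDidChangeNotification
                                           object:nil];

- (void)deviceOrientationDidChange:(NSNotification *)notification {
    // Reposition while the transform is identity, then re-apply the rotation.
    [self resetAllTransformations];
    self.someSubview.center = [self centerForView:self.someSubview];
    if ([UIDevice currentDevice].orientation == UIDeviceOrientationLandscapeLeft) {
        [self setRightRotationTransformations];
    }
    // ...handle the other orientations the same way
}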
In my case, the following hack works:
Set the line break mode to Word Wrap
Add an extra line to the title (even for a one-line title)
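In code (rather than Interface Builder) that hack would look roughly like this; the numberOfLines line is my own addition and may not be needed in every setup:

// Word-wrap the title label and pad the title with an extra (empty) line.
button.titleLabel.lineBreakMode = NSLineBreakByWordWrapping;
button.titleLabel.numberOfLines = 0;   // my assumption: allow the extra line to be laid out
[button setTitle:@"Title\n " forState:UIControlStateNormal];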
I am creating a simple drawing app for practice. The user drags his/her finger along the screen and it colors in a 100px x 100px square.
I currently achieve this by creating a new colored UIView where the user taps, and that is working. But after a little time spent coloring in, there is substantial lag, which I believe is down to there being too many UIViews as subviews of the main view.
How can I, and others who similarly create UIViews while dragging a finger, reduce the lag to nothing at all, no matter how many UIViews there are? I also suspect this is an impossible task, so how else can someone like me color a square of the size stated above in the main view as a finger is dragged along the screen?
I know this may seem like a specific question, but I believe it could help others understand how to reduce lag when there is a very large number of UIViews and a less performance-hungry option is available.
One approach is to draw each square into an image and display that image, rather than keeping a UIView around for each square.
If your drawing is simple enough, though, you can use OpenGL to do this, which is much faster. You should look at Apple's GLPaint sample code, which shows how to do this in OpenGL.
If your drawing is too complex for OpenGL, you could create, for example, a CGBitmapContext and draw each square into that context when the user drags their finger. Whenever you draw a new square into that bitmap, you can turn the bitmap into an image (via CGBitmapContextCreateImage) and display that image in a UIImageView.
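A hedged sketch of that bitmap approach; the context size, self.drawingContext (a CGContextRef property), self.imageView, and cellOrigin are placeholders of mine:

// Create the bitmap context once (e.g. in viewDidLoad); 320x480 is an arbitrary size here.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
self.drawingContext = CGBitmapContextCreate(NULL, 320, 480, 8, 0, colorSpace,
                                            kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// On every pan update: fill the touched 100x100 cell...
// (note: Core Graphics' y-axis is flipped relative to UIKit, so you may need to convert)
CGContextSetFillColorWithColor(self.drawingContext, [UIColor redColor].CGColor);
CGContextFillRect(self.drawingContext, CGRectMake(cellOrigin.x, cellOrigin.y, 100, 100));

// ...then turn the bitmap into an image and show it.
CGImageRef cgImage = CGBitmapContextCreateImage(self.drawingContext);
self.imageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);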
There are two things that come to my mind:
1- Use the Instruments tool to check whether you are leaking any memory.
2- If you are just coloring the views, then instead of creating images for each of them, either set the backgroundColor property of the UIView or override the drawRect method to do custom drawing.
I think what you are looking for is the drawRect: method of UIView. You could create your custom UIView (you probably have that already), override the drawRect: method, and do your drawing there. You will have to save your drawn squares in an array or another container and call setNeedsDisplay whenever the array's contents change.
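A minimal sketch of that suggestion; DrawingView, the squares array, and addSquareAtPoint: are my own names, not from the answer:

@interface DrawingView : UIView
@property (nonatomic, strong) NSMutableArray *squares;   // boxed CGRects
@end

@implementation DrawingView

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        _squares = [NSMutableArray array];
    }
    return self;
}

// Called from the pan/touch handler with the touched location.
- (void)addSquareAtPoint:(CGPoint)point {
    [self.squares addObject:[NSValue valueWithCGRect:CGRectMake(point.x, point.y, 100, 100)]];
    [self setNeedsDisplay];   // redraw with the new square included
}

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
    for (NSValue *value in self.squares) {
        CGContextFillRect(context, [value CGRectValue]);
    }
}

@end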
I have two different UIImageViews. I'd like to make the top UIImageView blend in using the Screen blend mode with the bottom UIImageView.
I know about the CALayer property compositingFilter, and I know that it doesn't work on iOS. I've searched a lot for solutions, and I've found that one should subclass UIView and override drawRect.
I've tried setting the context in drawRect to the screen blend mode, but it still draws each of the images normally. Perhaps I am doing something wrong, or the approach should be different. Maybe I need OpenGL or CALayer to achieve this. Could someone assist?
Unfortunately, there is no way to do a non-composite blend between UIViews on iOS. UIKit doesn't provide the functionality, and as you've already noted, CALayer can't do it either.
In general, implementing -drawRect in a UIView won't help you. You're drawing into an empty bitmap -- it doesn't contain the bits of the views behind it, since those might change at any time (any view or layer might be animated). CA fundamentally assumes that layers' contents should be independent of each other.
You could try the following in your -drawRect: (a rough sketch follows these steps):
create an image context
capture the views under your view using -[CALayer renderInContext:] for each
create an image from the image context
draw that image into your view
set the blend mode and draw on top of that
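Here is a hedged sketch of those steps, where self.backgroundView and self.topImage stand in for the view behind you and the image to blend; both names are mine:

- (void)drawRect:(CGRect)rect {
    // Steps 1-3: snapshot the view behind us into an image.
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0);
    [self.backgroundView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Step 4: draw the snapshot into our own context.
    [snapshot drawInRect:self.bounds];

    // Step 5: draw our image on top using the screen blend mode.
    [self.topImage drawInRect:self.bounds blendMode:kCGBlendModeScreen alpha:1.0];
}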
But that will be slow and fragile, and won't work if you animate any of the views. I wouldn't recommend it.
If you really need to do this, you're going to have to switch your whole scene to render with OpenGL, where you've got more freedom.
Couldn't find anything on the net about this and wondered if anyone on SO has a solution.
I have an NSView with several subviews that are centered by removing the left and right anchor points. When I resize my view, programmatically or with the mouse, to a width smaller than the subviews, it pushes them off center. Has anyone come across this before, and do you have a solution?
EDIT: I want to be able to resize my view to zero width. The reason is that the view is actually part of a split view, and I have hooked up a button to 'collapse' it. When it collapses, all of the subviews are pushed off-center, and they aren't re-centered when the view is resized again (effectively un-collapsing it).
I have solved my problem now and thought I would share in case anyone comes across this issue in the future.
No amount of playing with autosizing options or view layouts in Interface Builder seemed to stop my subviews from being moved off center. I did manage to find this link here, and from that page, the advice:
Springs and struts, as currently implemented, are really no good for anything but keeping either one or both sides of a view "stuck" to the nearest edge. Any sort of centering behavior, division of gained/lost area between multiple views, etc. has to be done by hand.
Based on this, I overrode my view's setFrame: method and manually laid out my subviews using their setFrame: methods. This works great and gives me the results I'm looking for.
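Roughly what that looked like; the horizontal-centering math is just an illustration of "laying out by hand", not the original code:

- (void)setFrame:(NSRect)frameRect {
    [super setFrame:frameRect];

    // Re-center every subview horizontally in the new frame by hand.
    for (NSView *subview in [self subviews]) {
        NSRect subviewFrame = [subview frame];
        subviewFrame.origin.x = NSMidX([self bounds]) - NSWidth(subviewFrame) / 2.0;
        [subview setFrame:subviewFrame];
    }
}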
There is the same issue when using NSSplitView. Resizing one subview to be smaller than its own subviews makes sense here, e.g. having small charts in the upper subview and an RSS reader in the lower subview.
If you want to show only the RSS reader in the lower subview, you can "hide" the upper subview, but after resizing the upper subview the NSImageViews are not laid out the same as in the beginning. Check this nib/Xcode project and the following screenshot to see this behaviour.
The only workaround is to override the resize behaviour to stop the subview from getting smaller.
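One way to impose that constraint without overriding the resize machinery is the NSSplitView delegate; a sketch, where the 200-point minimum is an arbitrary value of mine:

// NSSplitViewDelegate method: keep the divider from pushing the upper subview below 200 points.
- (CGFloat)splitView:(NSSplitView *)splitView
constrainMinCoordinate:(CGFloat)proposedMinimumPosition
         ofSubviewAt:(NSInteger)dividerIndex {
    if (dividerIndex == 0) {
        return MAX(proposedMinimumPosition, 200.0);
    }
    return proposedMinimumPosition;
}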