I originally set up some conditions using CGRectIntersectsRect for collision detection, which worked fine. In the grand scheme of things, though, I only need part of the view to be detected.
So originally, within the view controller, it was comparing two UIViews.
Now what I need to do is collision detection between subviews of two different UIViews, where both UIViews are contained in a view whose view controller does the logic.
My code is no longer working, and I suspect CGRectIntersectsRect only compares frames within the same coordinate space? I'll keep digging to confirm this.
Are there any ways around this? Is it possible, for example, to get the x and y position of the subview relative to the main view that's performing the logic?
You'll need to use UIView's convertRect:toView: or convertRect:fromView: (or the point equivalents) to get them in the same coordinate space.
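For example, a minimal sketch, assuming subviewA and subviewB live under different superviews and containerView is the common ancestor running the logic (all names hypothetical):

CGRect rectA = [subviewA.superview convertRect:subviewA.frame toView:containerView];
CGRect rectB = [subviewB.superview convertRect:subviewB.frame toView:containerView];
// Both rects are now in containerView's coordinate space, so this comparison is meaningful:
BOOL touching = CGRectIntersectsRect(rectA, rectB);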
I would like to implement a new CATransition.
For example, let's say I would like to do this effect: an n-by-m array of two-sided layers that have a portion of the source image on one side and a portion of the destination image on the other. Each layer would rotate around its x axis by 180 degrees to reveal the other side. Can such logic (animated CALayers) be used for a CATransition?
I don't think you can do this. The only way to customize CATransition beyond the built-in types and subtypes is by using a CIFilter (from iOS 5.0 or OS X 10.5). And the only way to create a custom CIFilter, beyond chaining and configuring built-in filters, is, according to the docs:
You can subclass CIFilter in order to create:
A filter by chaining together two or more built-in Core Image filters (iOS and OS X)
A custom filter that uses an image processing kernel that you write (OS X only)
So doing this based on CALayer animations is not possible.
On OS X this should be manageable with some math work, but not directly using CALayer animations. On iOS it is simply not possible.
Instead of doing a CATransition you could achieve the same effect by using layers directly, but obviously it would not integrate as easily in your UI code.
I have written a transition effect like you describe, but it was not a CATransition. Instead, I created my own method to handle view transitions, and invoke that.
My transition cut the two views into vertical slices, and then ran an animation that rotates the slices around their Y axes, starting with the left-most slice and working to the right, which creates a cool-looking cascade effect. It took quite a bit of work with CAAnimationGroup objects, plus using beginTime on each strip's animation to stagger its start. That one transition animation took about five pages of code.
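For flavor, here's a minimal sketch of the slicing-and-staggering idea (hypothetical names: it assumes snapshot is a CGImageRef of the outgoing view and container is the hosting CALayer; this is nowhere near the full five pages):

#import <QuartzCore/QuartzCore.h>

const NSInteger kSliceCount = 10;
CGFloat sliceWidth = container.bounds.size.width / kSliceCount;
CGFloat imageSliceWidth = CGImageGetWidth(snapshot) / (CGFloat)kSliceCount;
for (NSInteger i = 0; i < kSliceCount; i++) {
    // Crop the i-th vertical strip out of the snapshot image.
    CGRect crop = CGRectMake(i * imageSliceWidth, 0,
                             imageSliceWidth, CGImageGetHeight(snapshot));
    CGImageRef strip = CGImageCreateWithImageInRect(snapshot, crop);

    CALayer *slice = [CALayer layer];
    slice.frame = CGRectMake(i * sliceWidth, 0,
                             sliceWidth, container.bounds.size.height);
    slice.contents = (__bridge id)strip;
    CGImageRelease(strip); // the layer retains its contents
    [container addSublayer:slice];

    // Rotate each slice 180 degrees around its Y axis, staggered left to right.
    CABasicAnimation *flip = [CABasicAnimation animationWithKeyPath:@"transform.rotation.y"];
    flip.toValue = @(M_PI);
    flip.duration = 0.4;
    flip.beginTime = CACurrentMediaTime() + i * 0.05; // the stagger
    flip.fillMode = kCAFillModeForwards;
    flip.removedOnCompletion = NO;
    [slice addAnimation:flip forKey:@"flip"];
}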
I would really love it if someone could tell me how to tell whether two objects (an image or a button) have touched. I know how to make them draggable, but not how to tell whether they have touched and to do something when they touch!
Thanks!
If you never rotate the objects, you can use CoreGraphics functions.
BOOL objectsTouch = CGRectIntersectsRect(object1.frame, object2.frame);
This requires, of course, that the two objects are in the same superview. Otherwise you have to convert the frames first, using the view's conversion methods (convertRect:toView: and friends).
The classical approach is to compute a minimal circle that encompasses each object, then calculate the distance between the circle centers (Pythagorean theorem) and see whether it's less than R(object1's circle) + R(object2's circle). If it's less, you have to get down and dirty with bitmaps or some other scheme, but if it's greater you can assume the objects don't touch.
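A quick sketch of that test (hypothetical helper; assumes UIKit and that both views share a superview, and uses half of each view's bounding-box diagonal as the enclosing circle's radius):

#import <UIKit/UIKit.h>

static BOOL circlesMayTouch(UIView *a, UIView *b) {
    // Radius of the smallest circle enclosing each view's bounds.
    CGFloat radiusA = hypot(a.bounds.size.width, a.bounds.size.height) / 2.0;
    CGFloat radiusB = hypot(b.bounds.size.width, b.bounds.size.height) / 2.0;
    // Distance between the centers (Pythagorean theorem).
    CGFloat distance = hypot(b.center.x - a.center.x, b.center.y - a.center.y);
    return distance < radiusA + radiusB; // NO means they definitely don't touch
}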
I'm currently programming an app that deals with line graphs. I have three "dots" that allow the user to move the whole line or rotate it: one for the former and two for the latter. The problem I'm having, though, is that sometimes the dots end up overlapping each other. If I try to touch one of the dots that rotate the graph when that happens, it just moves the whole graph itself, as if I'm touching the moving dot. What I want to know is whether there is a simple way to calculate whether one UIImageView is overlapping another. Any help is appreciated.
Assuming they have the same superview:
CGRectIntersectsRect(imageView1.frame, imageView2.frame)
If they have different superviews, you may need to use -convertRect:toView: and friends.
I have two UIButtons on the base that, on a click of a third UIButton, move to specific positions (these positions are generated by a complex algorithm). To move them, I am using animateWithDuration:delay:options:. When I was making a different application, I was moving UIButtons randomly around the screen using NSTimers, so detecting a collision was easy with a simple CGRectIntersectsRect. I have two options:
1. Is it possible to detect their collision with each other if I'm moving them using animateWithDuration?
2. If I use NSTimers, I would be able to detect collisions, but in that case how do I move them to a particular position on the screen?
Any help would be appreciated!
Check out this link: detect collision of two moving buttons in iPhone
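One common technique (an assumption here, not necessarily what the linked answer describes): during a UIView animation the view's frame already holds the final value, but the layer's presentationLayer reflects the in-flight position, so you can poll that from an NSTimer or CADisplayLink. button1 and button2 are hypothetical names:

- (void)checkCollision {  // fired repeatedly by the timer while animating
    // presentationLayer approximates the layer as currently displayed mid-animation
    CALayer *p1 = button1.layer.presentationLayer;
    CALayer *p2 = button2.layer.presentationLayer;
    if (CGRectIntersectsRect(p1.frame, p2.frame)) {
        // Collision mid-flight: react here (e.g. cancel the animations).
    }
}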
I am working through Aaron Hillegass' Cocoa Programming for Mac OS X and am doing the challenge for Chapter 18. Basically, the challenge is to write an app that can draw ovals using your mouse, and then, additionally, add saving/loading and undo support. I'm trying to think of a good class design for this app that follows MVC. Here's what I had in mind:
Have an NSView subclass that represents an oval (say, JBOval) that I can use to easily draw an oval.
Have a main view (JBDrawingView) that holds JBOvals and draws them.
The thing is that I wasn't sure how to add archiving. Should I archive each JBOval? I think this would work, but archiving an NSView doesn't seem very efficient. Any ideas on a better class design?
Thanks.
Have an NSView subclass that represents an oval (say, JBOval) that I can use to easily draw an oval.
That doesn't sound very MVC. “JBOval” sounds like a model class to me.
Have a main view (JBDrawingView) that holds JBOvals and draws them.
I do like this part.
My suggestion is to have each model object (JBOval, etc.) able to create a Bézier path representing itself. The JBDrawingView (and you should come up with a better name for that, as all views draw by definition) should ask each model object for its Bézier path, fill settings, and stroke settings, and draw the object accordingly.
This keeps the knowledge of how to draw (the path, line width, colors, etc.) in the various shape classes where they belong, while also keeping the actual drawing code in the view layer where it belongs.
The answer to where to put archiving code should be intuitively obvious from this point.
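To make the split concrete, here's a hypothetical sketch (not from the book): the model object owns its geometry, drawing attributes, Bézier path, and archiving, while the view only iterates and draws.

#import <Cocoa/Cocoa.h>

@interface JBOval : NSObject <NSCoding>
@property (nonatomic) NSRect ovalBounds;          // the oval's bounding rectangle
@property (nonatomic, copy) NSColor *fillColor;   // fill setting
- (NSBezierPath *)bezierPath;
@end

@implementation JBOval
- (NSBezierPath *)bezierPath {
    return [NSBezierPath bezierPathWithOvalInRect:self.ovalBounds];
}
- (void)encodeWithCoder:(NSCoder *)coder {
    [coder encodeRect:self.ovalBounds forKey:@"ovalBounds"];
    [coder encodeObject:self.fillColor forKey:@"fillColor"];
}
- (instancetype)initWithCoder:(NSCoder *)coder {
    if ((self = [super init])) {
        _ovalBounds = [coder decodeRectForKey:@"ovalBounds"];
        _fillColor = [[coder decodeObjectForKey:@"fillColor"] copy];
    }
    return self;
}
@end

// In the drawing view, the models do not draw themselves; the view asks
// each one for its path and settings:
- (void)drawRect:(NSRect)dirtyRect {
    for (JBOval *oval in self.ovals) {  // self.ovals: the view's model array
        [oval.fillColor set];
        [[oval bezierPath] fill];
    }
}

Since each oval is now a lightweight NSObject, archiving the document is just a matter of running the ovals array through NSKeyedArchiver.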
Having a whole NSView for each oval seems rather heavyweight to me. I would descend them from NSObject instead and just have them draw to the current view.
They could also know how to archive themselves, although at that point you'd probably want to think about pulling them out of the view and thinking of them more as part of your model.
Your JBOval views would each be responsible for drawing themselves (basically drawing an oval path and filling it, within their bounds), but JBDrawingView would be responsible for mousing and dragging (and thereby sizing and positioning the JBOvals, which would be its subviews). The drawingView would do no drawing itself.
As far as archiving goes, you could have a model class represent each oval (storing, say, its bounding rectangle, or whatever other dimensions you choose to describe each oval with). You could archive and unarchive these models to recreate your views.
Finally, I use the JB prefix too, so … :P at you.