CGRectContainsPoint using shape as opposed to frame - objective-c

So we have our detection here
- (void)checkInFOVWithPlayer:(Player *)player andEnemy:(Player *)enemy {
    SKNode *fovNode = [player childNodeWithName:player.playersFOVName];
    SKNode *node = [self childNodeWithName:@"enemy"];
    CGPoint newPosition = [self convertPoint:node.position toNode:fovNode.parent];
    if (CGRectContainsPoint(fovNode.frame, newPosition)) {
        [self playerAimAtEnemy:enemy withPlayer:player];
    }
}
And our implementation for the field of vision
SKShapeNode *fov = [SKShapeNode node];
UIBezierPath *fovPath = [[UIBezierPath alloc] init];
[fovPath moveToPoint:CGPointMake(0, 0)];
[fovPath addLineToPoint:CGPointMake(fovOpposite * -1, fovDistance)];
[fovPath addLineToPoint:CGPointMake(fovOpposite, fovDistance)];
[fovPath addLineToPoint:CGPointMake(0, 0)];
fov.path = fovPath.CGPath;
fov.lineWidth = 1.0;
fov.strokeColor = [UIColor clearColor];
fov.antialiased = NO;
fov.fillColor = [UIColor greenColor];
fov.alpha = 0.2;
fov.name = @"playerFOV";
[_playerImage addChild:fov];
Now, this works. However, the detection region for the "field of vision" is not actually the boundary of the Bezier path; it is the bounding CGRect of the node's frame.
So the detection fires even when the enemy is outside the visible field of vision.
Is there an easy fix for this? I'd rather not go down the physics-body route if I don't need to.

Finally I figured out what was required.
I had to redraw the path and optimise it for each frame; after that,
if (CGPathContainsPoint(fovPath.CGPath, nil, newPosition, NO)) {}
ended my problems.
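For a triangular FOV like the one above, the containment test that CGPathContainsPoint performs can be sketched in plain C with a sign-of-cross-products check. This is a minimal sketch; the `Pt` struct and function names here are hypothetical, not part of Core Graphics:

```c
#include <stdbool.h>

typedef struct { double x, y; } Pt;

/* Sign of the cross product (b - a) x (p - a): tells which side of
 * the edge a->b the point p lies on. */
static double side(Pt p, Pt a, Pt b) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

/* A point is inside the triangle when it lies on the same side of
 * all three edges (all three cross products share a sign). */
bool pointInTriangle(Pt p, Pt a, Pt b, Pt c) {
    double d1 = side(p, a, b);
    double d2 = side(p, b, c);
    double d3 = side(p, c, a);
    bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(hasNeg && hasPos);
}
```

For the FOV triangle with apex (0, 0) and base corners (-fovOpposite, fovDistance) and (fovOpposite, fovDistance), a point straight ahead at half the distance tests as inside, while one behind the apex does not.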

Related

how to smoothly move object on ellipse shape view

Consider this image
How would I go about drawing a custom UIView that is literally just an ellipse? Would I just override the drawRect: method? And can someone show me the code for dragging the red ball along the ellipse path?
Drawing the ball can be done in a custom drawRect if you want, or you could use CAShapeLayer:
UIView *ball = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
ball.userInteractionEnabled = YES;
CAShapeLayer *ballLayer = [CAShapeLayer layer];
ballLayer.path = [UIBezierPath bezierPathWithArcCenter:CGPointMake(50, 50) radius:48 startAngle:0 endAngle:M_PI * 2.0 clockwise:YES].CGPath;
ballLayer.strokeColor = [UIColor blackColor].CGColor;
ballLayer.lineWidth = 0.5;
ballLayer.fillColor = [UIColor redColor].CGColor;
ballLayer.shadowColor = [UIColor blackColor].CGColor;
ballLayer.shadowRadius = 2;
ballLayer.shadowOpacity = 0.75;
ballLayer.shadowOffset = CGSizeZero;
[ball.layer addSublayer:ballLayer];
[self.view addSubview:ball];
Creating the ellipse could be done in a similar fashion.
Dragging the ball could be done with a gesture.
UIGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
[ball addGestureRecognizer:pan];
Where:
- (void)handlePan:(UIPanGestureRecognizer *)gesture {
    CGPoint translate = [gesture translationInView:gesture.view];
    if (gesture.state == UIGestureRecognizerStateChanged) {
        gesture.view.transform = CGAffineTransformMakeTranslation(translate.x, translate.y);
    } else if (gesture.state == UIGestureRecognizerStateEnded) {
        gesture.view.center = CGPointMake(gesture.view.center.x + translate.x, gesture.view.center.y + translate.y);
        gesture.view.transform = CGAffineTransformIdentity;
    }
}
You presumably would want to constrain the ball's movement to the ellipse; you'd then just adjust the translate coordinates accordingly. But hopefully this illustrates how to create a gesture recognizer and move a UIView with it.

AVVideoCompositionCoreAnimationTool not adding all CALayers

Okay, this one has me completely stumped. I'm happy to post other code if you need it, but I think this is enough. I cannot for the life of me figure out why things are going wrong. I'm adding CALayers, which contain images, to a composition using AVVideoCompositionCoreAnimationTool. I create an NSArray of all the annotations (see the interface below) I want to add and then add them to the animation layer with an enumerator. No matter how many annotations are in the array, as far as I can tell, the only ones that end up in the outputted video are the ones added by the last loop iteration. Can someone spot what I'm missing?
Here's the interface for the annotations
@interface Annotation : NSObject // <NSCoding>
@property float time;
@property AnnotationType type;
@property CALayer *startLayer;
@property CALayer *typeLayer;
@property CALayer *detailLayer;
+ (Annotation *)annotationAtTime:(float)time ofType:(AnnotationType)type;
- (NSString *)annotationString;
@end
And here's the message that creates the video composition with the animation.
- (AVMutableVideoComposition *)createCompositionForMovie:(AVAsset *)movie fromAnnotations:(NSArray *)annotations {
    AVMutableVideoComposition *videoComposition = nil;
    if (annotations) {
        //CALayer *imageLayer = [self layerOfImageNamed:@"Ring.png"];
        //imageLayer.opacity = 0.0;
        //[imageLayer setMasksToBounds:YES];
        Annotation *ann;
        NSEnumerator *enumerator = [annotations objectEnumerator];
        CALayer *animationLayer = [CALayer layer];
        animationLayer.frame = CGRectMake(0, 0, movie.naturalSize.width, movie.naturalSize.height);
        CALayer *videoLayer = [CALayer layer];
        videoLayer.frame = CGRectMake(0, 0, movie.naturalSize.width, movie.naturalSize.height);
        [animationLayer addSublayer:videoLayer];
        // TODO: Consider amalgamating this message and scaleVideoTrackTime:fromAnnotations
        // Single loop instead of two and sharing of the offset variables
        while (ann = (Annotation *)[enumerator nextObject]) {
            CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"opacity"];
            animation.duration = 3; // TODO: Three seconds is the currently hard-coded display length for an annotation, should this be a configurable option after the demo?
            animation.repeatCount = 0;
            animation.autoreverses = NO;
            animation.removedOnCompletion = NO;
            animation.fromValue = [NSNumber numberWithFloat:1.0];
            animation.toValue = [NSNumber numberWithFloat:1.0];
            animation.beginTime = time;
            // animation.beginTime = AVCoreAnimationBeginTimeAtZero;
            ann.startLayer.opacity = 0.0;
            ann.startLayer.masksToBounds = YES;
            [ann.startLayer addAnimation:animation forKey:@"animateOpacity"];
            [animationLayer addSublayer:ann.startLayer];
            ann.typeLayer.opacity = 0.0;
            ann.typeLayer.masksToBounds = YES;
            [ann.typeLayer addAnimation:animation forKey:@"animateOpacity"];
            [animationLayer addSublayer:ann.typeLayer];
            ann.detailLayer.opacity = 0.0;
            ann.detailLayer.masksToBounds = YES;
            [ann.detailLayer addAnimation:animation forKey:@"animateOpacity"];
            [animationLayer addSublayer:ann.detailLayer];
        }
        videoComposition = [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:movie];
        videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:animationLayer];
    }
    return videoComposition;
}
I want to stress that the video outputs correctly and the layers do appear at the right time, just not ALL of them. Very confused and would greatly appreciate your help.
So I was fiddling around trying to figure out what might have caused this, and it turns out it was caused by the layers' hidden property being set to YES. By setting it to NO, all the layers appeared, but then they never went away. So I had to change the animation's autoreverses property to YES and halve the duration.
So I changed the code to this:
while (ann = (Annotation *)[enumerator nextObject]) {
    CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"opacity"];
    animation.duration = 1.5; // TODO: Three seconds is the currently hard-coded display length for an annotation, should this be a configurable option after the demo?
    animation.repeatCount = 0;
    animation.autoreverses = YES; // This runs the animation forwards then backwards, doubling its effective length; that's why a 3-second display period uses 1.5 as the duration
    animation.removedOnCompletion = NO;
    animation.fromValue = [NSNumber numberWithFloat:1.0];
    animation.toValue = [NSNumber numberWithFloat:1.0];
    animation.beginTime = time;
    // animation.beginTime = AVCoreAnimationBeginTimeAtZero;
    ann.startLayer.hidden = NO;
    ann.startLayer.opacity = 0.0;
    ann.startLayer.masksToBounds = YES;
    [ann.startLayer addAnimation:animation forKey:@"animateOpacity"];
    [animationLayer addSublayer:ann.startLayer];
    ann.typeLayer.hidden = NO;
    ann.typeLayer.opacity = 0.0;
    ann.typeLayer.masksToBounds = YES;
    [ann.typeLayer addAnimation:animation forKey:@"animateOpacity"];
    [animationLayer addSublayer:ann.typeLayer];
    ann.detailLayer.hidden = NO;
    ann.detailLayer.opacity = 0.0;
    ann.detailLayer.masksToBounds = YES;
    [ann.detailLayer addAnimation:animation forKey:@"animateOpacity"];
    [animationLayer addSublayer:ann.detailLayer];
}
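The duration arithmetic behind that change can be made explicit with a small helper (a hypothetical function, just to show why the 3-second window becomes a 1.5-second duration):

```c
/* With autoreverses, Core Animation plays the animation forwards and
 * then backwards, so the layer is affected for twice the stated
 * duration.  To keep an annotation visible for `visibleSeconds`, the
 * animation's duration must be halved when autoreverses is enabled. */
double requiredDuration(double visibleSeconds, int autoreverses) {
    return autoreverses ? visibleSeconds / 2.0 : visibleSeconds;
}
```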

CABasicAnimation not working

I am trying to animate a circular stroke (from another SO answer) in a Cocoa-based application.
However, it's not working, and animationDidStop:finished: is called immediately with the finished flag as NO. Why is this happening? Any pointers on how I can find out why the finished flag is NO?
Here is the code I use:
Note: quartzPath and NSColorToCGColor are from categories on NSColor and NSBezierPath.
- (IBAction)animateCircle:(id)sender {
    int radius = 10;
    circle = [CAShapeLayer layer];
    // Make a circular shape
    circle.path = [[NSBezierPath bezierPathWithOvalInRect:NSMakeRect(10, 10, 2.0 * radius, 2.0 * radius)] quartzPath];
    // Center the shape in self.view
    circle.position = CGPointMake(CGRectGetMidX(self.vcForCellView.view.frame) - radius,
                                  CGRectGetMidY(self.vcForCellView.view.frame) - radius);
    // Configure the appearance of the circle
    circle.fillColor = [NSColor NSColorToCGColor:[NSColor clearColor]];
    circle.strokeColor = [NSColor NSColorToCGColor:[NSColor blackColor]];
    circle.lineWidth = 5;
    // Add to parent layer
    [self.vcForCellView.view.layer addSublayer:circle];
    // Configure animation
    CABasicAnimation *drawAnimation = [CABasicAnimation animationWithKeyPath:@"strokeEnd"];
    drawAnimation.duration = 2.0; // Animate over 2 seconds
    drawAnimation.repeatCount = 1.0; // Animate only once..
    drawAnimation.removedOnCompletion = NO; // Remain stroked after the animation..
    // Animate from no part of the stroke being drawn to the entire stroke being drawn
    drawAnimation.fromValue = [NSNumber numberWithFloat:0.0f];
    drawAnimation.toValue = [NSNumber numberWithFloat:1.0f];
    // Experiment with timing to get the appearance to look the way you want
    drawAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseIn];
    drawAnimation.delegate = self;
    // Add the animation to the circle
    [circle addAnimation:drawAnimation forKey:@"drawCircleAnimation"];
}
Also the following code:
- (void)animationDidStop:(CAAnimation *)anim finished:(BOOL)flag {
    CALayer *layer = [anim valueForKey:@"parentLayer"];
    NSLog(@"%@", layer);
    NSLog(@"%@", flag ? @"YES" : @"NO");
}
gives the following output:
2013-07-02 19:28:14.760 TicTacToe[16338:707] (null)
2013-07-02 19:28:14.761 TicTacToe[16338:707] NO
UPDATE:
The animationDidStop:finished: method gets called with finished as YES after I backed the view with a layer ([view setWantsLayer:YES]). But still nothing is shown on screen.
Here is the code for getting CGPathRef from NSBezierPath:
- (CGPathRef)quartzPath
{
    int i;
    NSInteger numElements;

    // Need to begin a path here.
    CGPathRef immutablePath = NULL;

    // Then draw the path elements.
    numElements = [self elementCount];
    if (numElements > 0)
    {
        CGMutablePathRef path = CGPathCreateMutable();
        NSPoint points[3];
        BOOL didClosePath = YES;

        for (i = 0; i < numElements; i++)
        {
            switch ([self elementAtIndex:i associatedPoints:points])
            {
                case NSMoveToBezierPathElement:
                    CGPathMoveToPoint(path, NULL, points[0].x, points[0].y);
                    break;
                case NSLineToBezierPathElement:
                    CGPathAddLineToPoint(path, NULL, points[0].x, points[0].y);
                    didClosePath = NO;
                    break;
                case NSCurveToBezierPathElement:
                    CGPathAddCurveToPoint(path, NULL, points[0].x, points[0].y,
                                          points[1].x, points[1].y,
                                          points[2].x, points[2].y);
                    didClosePath = NO;
                    break;
                case NSClosePathBezierPathElement:
                    CGPathCloseSubpath(path);
                    didClosePath = YES;
                    break;
            }
        }

        // Be sure the path is closed or Quartz may not do valid hit detection.
        if (!didClosePath)
            CGPathCloseSubpath(path);

        immutablePath = CGPathCreateCopy(path);
        CGPathRelease(path);
    }
    return immutablePath;
}
And I'm using the 10.7 SDK, so I can't use the built-in CGPath conversion added to NSBezierPath in newer SDKs.
By default views on OS X don't have layers attached to them so the likely problem is that vcForCellView.view.layer is nil. This means that the shape layer never gets added to the layer hierarchy so when the animation is added to the shape layer it is immediately cancelled (as seen by finished being NO).
You can tell your view that it should be backed by a layer using: [myView setWantsLayer:YES];

CALayer shadow while scrolling UITableView

I have added a shadow to a UITableView (which covers the bottom third of the screen - see attached screenshot) using the following method in a UIView category:
- (void)addShadow {
    UIBezierPath *path = [UIBezierPath bezierPathWithRect:self.bounds];
    self.layer.masksToBounds = NO;
    self.layer.shadowColor = [UIColor blackColor].CGColor;
    self.layer.shadowOpacity = 1;
    self.layer.shadowOffset = CGSizeMake(-5, -5);
    self.layer.shadowRadius = 20;
    self.layer.shadowPath = path.CGPath;
    self.layer.shouldRasterize = YES;
}
It appears as expected, but when I scroll up, the shadow scrolls up too. Also, the table scrolls beyond its upper bound. Can you suggest what is wrong here? If I comment out self.layer.masksToBounds = NO;, the shadow disappears, but the table scrolling behaves as expected. Hence, the problem probably lies somewhere around masksToBounds.
I solved it by putting an identical view underneath, just for the shadow. Not a clean solution, hence I am still open to answers. My code is as follows:
- (UIView *)addShadow {
    UIView *backView = [[UIView alloc] initWithFrame:self.frame];
    UIBezierPath *path = [UIBezierPath bezierPathWithRect:backView.bounds];
    backView.layer.masksToBounds = NO;
    backView.layer.shadowColor = [UIColor blackColor].CGColor;
    backView.layer.shadowOpacity = 1;
    backView.layer.shadowOffset = CGSizeMake(-5, -5);
    backView.layer.shadowRadius = 20;
    backView.layer.shadowPath = path.CGPath;
    backView.layer.shouldRasterize = YES;
    [self.superview addSubview:backView];
    [self.superview bringSubviewToFront:self];
    return backView;
}

- (void)removeShadow {
    self.layer.masksToBounds = YES;
    self.layer.shadowColor = nil;
    self.layer.shadowOpacity = 0;
    self.layer.shadowOffset = CGSizeMake(0, 0);
    self.layer.shadowRadius = 0;
}

Core Animation Layers not displaying under Snow Leopard

I have recently converted one of the views in my OS X App to be layer-hosted and all is working well under Mountain Lion, however one of my testers is complaining that the layers aren't showing under Snow Leopard. I have written a small test app to perform further tests (source code here), and this test app also doesn't work under 10.6.
Here is the main body of code that sets-up the layers:
- (id)initWithFrame:(NSRect)frameRect
{
    NSLog(@"initWithFrame");
    self = [super initWithFrame:frameRect];
    if (self != nil)
    {
        srand((unsigned)time(NULL));

        _rootLayer = [[CALayer alloc] init];
        _rootLayer.delegate = self;
        _rootLayer.anchorPoint = CGPointMake(0.0, 0.0);
        _rootLayer.frame = NSRectToCGRect([self bounds]);
        _rootLayer.needsDisplayOnBoundsChange = NO;
        _rootLayer.masksToBounds = YES;
        self.layer = _rootLayer;
        self.wantsLayer = YES;

        _backgroundLayer = [[CALayer alloc] init];
        _backgroundLayer.delegate = self;
        _backgroundLayer.anchorPoint = CGPointMake(0.5, 0.5);
        _backgroundLayer.frame = CGRectInset(NSRectToCGRect([self bounds]), BACKGROUND_INSET, BACKGROUND_INSET);
        _backgroundLayer.cornerRadius = 5.0;
        _backgroundLayer.needsDisplayOnBoundsChange = NO;
        _backgroundLayer.masksToBounds = YES;
        [_rootLayer addSublayer:_backgroundLayer];

        _mouseLayer = [self _createOtherLayer];
        _mouseLayer.opacity = 0.5;
        for (unsigned i = 0; i < NUM_OTHER_LAYERS; i++)
            _otherLayers[i] = [self _createOtherLayer];
        [_backgroundLayer addSublayer:_mouseLayer];

        [_rootLayer setNeedsDisplay];
        [_backgroundLayer setNeedsDisplay];
        [self _positionOtherLayersInRect:frameRect];

        _trackingArea = nil;
        [self updateTrackingAreas];
    }
    return self;
}
And here is the method that creates the other layers:
- (CALayer *)_createOtherLayer
{
    CALayer *layer = [[CALayer alloc] init];
    layer.delegate = self;
    layer.anchorPoint = CGPointMake(0.5, 0.5);
    layer.bounds = CGRectMake(0.0, 0.0, 64.0, 64.0);
    layer.position = CGPointMake(0.0, 0.0);
    layer.needsDisplayOnBoundsChange = NO;
    layer.masksToBounds = YES;
    layer.shadowColor = CGColorGetConstantColor(kCGColorBlack);
    layer.shadowOffset = CGSizeMake(2.0, -2.0);
    layer.shadowRadius = 2.0;
    layer.shadowOpacity = 1.0;
    [_backgroundLayer addSublayer:layer];
    [layer setNeedsDisplay];
    return layer;
}
Can anyone suggest why these layers don't work under 10.6?
Have you tried moving the code in initWithFrame: into awakeFromNib? Setting up layers in initWithFrame: is a common enough mistake that causes them to get screwed up. In this question the problem was that the layers were set up in initWithFrame:, but since nibs are marked by default as not needing layers, they were wiped out immediately afterwards. Move the code to awakeFromNib, and instead of using the passed frame use self.frame, and see if that fixes the problem. At the very least it shouldn't be any worse (it still works fine on my Mac running Lion after moving the code to awakeFromNib, so it didn't break anything), and it may just be the solution you're looking for.
Hopefully this works, or you find another solution soon. Have a good day. :)
What happens if you change:
CALayer *layer = [[CALayer alloc] init];
to:
CALayer *layer = [CALayer layer];
Not sure why it would make a difference, but worth a shot maybe. Also, have you tried using insertSublayer:atIndex: instead of addSublayer:?