I am trying to animate a circular stroke (adapted from another SO answer) in a Cocoa-based application.
However, it's not working: animationDidStop:finished: is called immediately with the finished flag set to NO. Why is this happening? Any pointers on how I can find out why the finished flag is NO?
Here is the code I use:
Note: quartzPath and NSColorToCGColor are from categories on NSColor and NSBezierPath.
- (IBAction)animateCircle:(id)sender {
    int radius = 10;
    circle = [CAShapeLayer layer];
    // Make a circular shape
    circle.path = [[NSBezierPath bezierPathWithOvalInRect:NSMakeRect(10, 10, 2.0*radius, 2.0*radius)] quartzPath];
    // Center the shape in self.view
    circle.position = CGPointMake(CGRectGetMidX(self.vcForCellView.view.frame)-radius,
                                  CGRectGetMidY(self.vcForCellView.view.frame)-radius);
    // Configure the appearance of the circle
    circle.fillColor = [NSColor NSColorToCGColor:[NSColor clearColor]];
    circle.strokeColor = [NSColor NSColorToCGColor:[NSColor blackColor]];
    circle.lineWidth = 5;
    // Add to parent layer
    [self.vcForCellView.view.layer addSublayer:circle];

    // Configure animation
    CABasicAnimation *drawAnimation = [CABasicAnimation animationWithKeyPath:@"strokeEnd"];
    drawAnimation.duration = 2.0; // Animate over 2 seconds
    drawAnimation.repeatCount = 1.0; // Animate only once..
    drawAnimation.removedOnCompletion = NO; // Remain stroked after the animation..
    // Animate from no part of the stroke being drawn to the entire stroke being drawn
    drawAnimation.fromValue = [NSNumber numberWithFloat:0.0f];
    drawAnimation.toValue = [NSNumber numberWithFloat:1.0f];
    // Experiment with timing to get the appearance to look the way you want
    drawAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseIn];
    drawAnimation.delegate = self;
    // Add the animation to the circle
    [circle addAnimation:drawAnimation forKey:@"drawCircleAnimation"];
}
Also the following code:
-(void)animationDidStop:(CAAnimation *)anim finished:(BOOL)flag {
    CALayer *layer = [anim valueForKey:@"parentLayer"];
    NSLog(@"%@", layer);
    NSLog(@"%@", flag ? @"YES" : @"NO");
}
gives the following output:
2013-07-02 19:28:14.760 TicTacToe[16338:707] (null)
2013-07-02 19:28:14.761 TicTacToe[16338:707] NO
UPDATE:
The animationDidStop:... method now gets called with finished as YES, after I backed the view with a layer ([view setWantsLayer:YES]). But still nothing is shown on screen.
Here is the code for getting CGPathRef from NSBezierPath:
- (CGPathRef)quartzPath
{
    int i;
    NSInteger numElements;

    // Need to begin a path here.
    CGPathRef immutablePath = NULL;

    // Then draw the path elements.
    numElements = [self elementCount];
    if (numElements > 0)
    {
        CGMutablePathRef path = CGPathCreateMutable();
        NSPoint points[3];
        BOOL didClosePath = YES;

        for (i = 0; i < numElements; i++)
        {
            switch ([self elementAtIndex:i associatedPoints:points])
            {
                case NSMoveToBezierPathElement:
                    CGPathMoveToPoint(path, NULL, points[0].x, points[0].y);
                    break;

                case NSLineToBezierPathElement:
                    CGPathAddLineToPoint(path, NULL, points[0].x, points[0].y);
                    didClosePath = NO;
                    break;

                case NSCurveToBezierPathElement:
                    CGPathAddCurveToPoint(path, NULL, points[0].x, points[0].y,
                                          points[1].x, points[1].y,
                                          points[2].x, points[2].y);
                    didClosePath = NO;
                    break;

                case NSClosePathBezierPathElement:
                    CGPathCloseSubpath(path);
                    didClosePath = YES;
                    break;
            }
        }

        // Be sure the path is closed or Quartz may not do valid hit detection.
        if (!didClosePath)
            CGPathCloseSubpath(path);

        immutablePath = CGPathCreateCopy(path);
        CGPathRelease(path);
    }

    return immutablePath;
}
And I'm using the 10.7 SDK, so I can't use the built-in CGPath method that newer SDKs add to NSBezierPath.
By default, views on OS X don't have layers attached to them, so the likely problem is that vcForCellView.view.layer is nil. This means the shape layer never gets added to the layer hierarchy, so when the animation is added to the shape layer it is immediately cancelled (which is why finished is NO).
You can tell your view that it should be backed by a layer using [myView setWantsLayer:YES];
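For example, a minimal sketch (assuming this runs before animateCircle: is invoked; awakeFromNib is just one convenient place):

- (void)awakeFromNib
{
    // Opt the view into layer backing before adding sublayers; otherwise
    // view.layer is nil, and an animation added to an orphaned layer is
    // cancelled immediately (finished == NO).
    [self.vcForCellView.view setWantsLayer:YES];
}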
So we have our detection here
-(void)checkInFOVWithPlayer:(Player *)player andEnemy:(Player *)enemy {
    SKNode *fovNode = [player childNodeWithName:player.playersFOVName];
    SKNode *node = [self childNodeWithName:@"enemy"];
    CGPoint newPosition = [self convertPoint:node.position toNode:fovNode.parent];
    if (CGRectContainsPoint(fovNode.frame, newPosition)) {
        [self playerAimAtEnemy:enemy withPlayer:player];
    }
}
And our implementation for the field of vision
SKShapeNode *fov = [SKShapeNode node];
UIBezierPath *fovPath = [[UIBezierPath alloc] init];
[fovPath moveToPoint:CGPointMake(0, 0)];
[fovPath addLineToPoint:CGPointMake(fovOpposite * -1, fovDistance)];
[fovPath addLineToPoint:CGPointMake(fovOpposite, fovDistance)];
[fovPath addLineToPoint:CGPointMake(0, 0)];
fov.path = fovPath.CGPath;
fov.lineWidth = 1.0;
fov.strokeColor = [UIColor clearColor];
fov.antialiased = NO;
fov.fillColor = [UIColor greenColor];
fov.alpha = 0.2;
fov.name = @"playerFOV";
[_playerImage addChild:fov];
Now, this works. However, the detection range for the "field of vision" is not actually the boundary of the Bézier path; it's in fact the bounding CGRect (the node's frame) that the path produces.
So the detection fires even when the enemy is outside the visual field of vision.
I'm curious whether there's an easy fix for this, as I don't really want to go down the physics-body route if I don't need to.
Finally I figured out what was required.
I had to redraw the path and optimise it for each frame; after that,
if (CGPathContainsPoint(fovPath.CGPath, NULL, newPosition, NO)) {}
ended my problems.
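For illustration, here is how the detection method might look with that change (a sketch only: self.fovPath is a hypothetical property holding the UIBezierPath built above):

-(void)checkInFOVWithPlayer:(Player *)player andEnemy:(Player *)enemy {
    SKNode *fovNode = [player childNodeWithName:player.playersFOVName];
    SKNode *node = [self childNodeWithName:@"enemy"];
    // Convert into the FOV node's own coordinate space: the path's points
    // are relative to the node itself, unlike frame, which lives in the
    // parent's coordinates.
    CGPoint newPosition = [self convertPoint:node.position toNode:fovNode];
    if (CGPathContainsPoint(self.fovPath.CGPath, NULL, newPosition, NO)) {
        [self playerAimAtEnemy:enemy withPlayer:player];
    }
}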
Okay, this one has me completely stumped. I'm happy to post other code if you need it, but I think this is enough. I cannot for the life of me figure out why things are going wrong. I'm adding CALayers, which contain images, to a composition using AVVideoCompositionCoreAnimationTool. I create an NSArray of all the annotations (see interface below) I want to add, and then add them to the animation layer with an enumerator. No matter how many annotations are in the array (as far as I can tell), the only ones that end up in the output video are the ones added by the last loop iteration. Can someone spot what I'm missing?
Here's the interface for the annotations
@interface Annotation : NSObject // <NSCoding>

@property float time;
@property AnnotationType type;
@property CALayer *startLayer;
@property CALayer *typeLayer;
@property CALayer *detailLayer;

+ (Annotation *)annotationAtTime:(float)time ofType:(AnnotationType)type;
- (NSString *)annotationString;

@end
And here's the message that creates the video composition with the animation.
- (AVMutableVideoComposition *)createCompositionForMovie:(AVAsset *)movie fromAnnotations:(NSArray *)annotations {
    AVMutableVideoComposition *videoComposition = nil;

    if (annotations) {
        //CALayer *imageLayer = [self layerOfImageNamed:@"Ring.png"];
        //imageLayer.opacity = 0.0;
        //[imageLayer setMasksToBounds:YES];

        Annotation *ann;
        NSEnumerator *enumerator = [annotations objectEnumerator];

        CALayer *animationLayer = [CALayer layer];
        animationLayer.frame = CGRectMake(0, 0, movie.naturalSize.width, movie.naturalSize.height);

        CALayer *videoLayer = [CALayer layer];
        videoLayer.frame = CGRectMake(0, 0, movie.naturalSize.width, movie.naturalSize.height);
        [animationLayer addSublayer:videoLayer];

        // TODO: Consider amalgamating this message and scaleVideoTrackTime:fromAnnotations
        //       Single loop instead of two and sharing of the offset variables
        while (ann = (Annotation *)[enumerator nextObject]) {
            CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"opacity"];
            animation.duration = 3; // TODO: Three seconds is the currently hard-coded display length for an annotation, should this be a configurable option after the demo?
            animation.repeatCount = 0;
            animation.autoreverses = NO;
            animation.removedOnCompletion = NO;
            animation.fromValue = [NSNumber numberWithFloat:1.0];
            animation.toValue = [NSNumber numberWithFloat:1.0];
            animation.beginTime = time;
            // animation.beginTime = AVCoreAnimationBeginTimeAtZero;

            ann.startLayer.opacity = 0.0;
            ann.startLayer.masksToBounds = YES;
            [ann.startLayer addAnimation:animation forKey:@"animateOpacity"];
            [animationLayer addSublayer:ann.startLayer];

            ann.typeLayer.opacity = 0.0;
            ann.typeLayer.masksToBounds = YES;
            [ann.typeLayer addAnimation:animation forKey:@"animateOpacity"];
            [animationLayer addSublayer:ann.typeLayer];

            ann.detailLayer.opacity = 0.0;
            ann.detailLayer.masksToBounds = YES;
            [ann.detailLayer addAnimation:animation forKey:@"animateOpacity"];
            [animationLayer addSublayer:ann.detailLayer];
        }

        videoComposition = [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:movie];
        videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:animationLayer];
    }

    return videoComposition;
}
I want to stress that the video outputs correctly and I do get the layers appearing at the right time, just not ALL the layers. Very confused and would greatly appreciate your help.
So I was fiddling around trying to figure out what might have caused this, and it turns out it was caused by the layers' hidden property being set to YES. By setting it to NO, all the layers appeared, but then they never went away. So I had to change the animation's autoreverses property to YES and halve the duration.
So I changed the code to this:
while (ann = (Annotation *)[enumerator nextObject]) {
    CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"opacity"];
    animation.duration = 1.5; // TODO: Three seconds is the currently hard-coded display length for an annotation, should this be a configurable option after the demo?
    animation.repeatCount = 0;
    animation.autoreverses = YES; // This causes the animation to run forwards then backwards, doubling the duration; that's why a 3-second period uses 1.5 as the duration
    animation.removedOnCompletion = NO;
    animation.fromValue = [NSNumber numberWithFloat:1.0];
    animation.toValue = [NSNumber numberWithFloat:1.0];
    animation.beginTime = time;
    // animation.beginTime = AVCoreAnimationBeginTimeAtZero;

    ann.startLayer.hidden = NO;
    ann.startLayer.opacity = 0.0;
    ann.startLayer.masksToBounds = YES;
    [ann.startLayer addAnimation:animation forKey:@"animateOpacity"];
    [animationLayer addSublayer:ann.startLayer];

    ann.typeLayer.hidden = NO;
    ann.typeLayer.opacity = 0.0;
    ann.typeLayer.masksToBounds = YES;
    [ann.typeLayer addAnimation:animation forKey:@"animateOpacity"];
    [animationLayer addSublayer:ann.typeLayer];

    ann.detailLayer.hidden = NO;
    ann.detailLayer.opacity = 0.0;
    ann.detailLayer.masksToBounds = YES;
    [ann.detailLayer addAnimation:animation forKey:@"animateOpacity"];
    [animationLayer addSublayer:ann.detailLayer];
}
How can I hide my UIImageView after the animation has completed? My animation works fine when I segue into the view. I want to be able to hide the image after the animation is complete. How might I do that?
-(void) animate
{
    NSLog(@"Animate");
    CGPoint startPoint = [pickerCircle center];
    CGPoint endPoint = [pickerButton1 center];

    CGMutablePathRef thePath = CGPathCreateMutable();
    CGPathMoveToPoint(thePath, NULL, startPoint.x, startPoint.y);
    CGPathAddLineToPoint(thePath, NULL, endPoint.x, endPoint.y);

    CAKeyframeAnimation *animation = [CAKeyframeAnimation animationWithKeyPath:@"position"];
    animation.duration = 3.f;
    animation.path = thePath;
    animation.repeatCount = 2;
    animation.removedOnCompletion = YES;
    CGPathRelease(thePath); // the animation retains the path, so release our reference to avoid a leak

    [pickerCircle.layer addAnimation:animation forKey:@"position"];
    pickerCircle.layer.position = endPoint;
}
Set the animation's delegate property to self like so:
animation.delegate = self;
And then you can use:
-(void)animationDidStop:(CAAnimation *)anim finished:(BOOL)flag {
    // hide your UIImageView here, e.g.:
    pickerCircle.hidden = YES;
}
It will be called when the animation is done.
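If the same delegate handles several animations, one way to know which view to hide is to tag the animation via key-value coding, the same valueForKey: trick used in the first question above (the key name here is arbitrary, my own illustration):

// when configuring the animation:
[animation setValue:pickerCircle forKey:@"targetView"];

// in the delegate:
-(void)animationDidStop:(CAAnimation *)anim finished:(BOOL)flag {
    UIView *viewToHide = [anim valueForKey:@"targetView"];
    viewToHide.hidden = YES;
}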
Did you try completion? Here, in example 4.2, there is an example of showing and hiding a view using a completion block.
I don't know animations well, but I think a completion block should work.
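A minimal sketch of the completion approach (my own illustration, assuming a plain UIView animation is acceptable in place of the CAKeyframeAnimation; pickerCircle and endPoint come from the question):

[UIView animateWithDuration:3.0
                 animations:^{
                     pickerCircle.center = endPoint; // move to the end position
                 }
                 completion:^(BOOL finished) {
                     pickerCircle.hidden = YES; // hide once the animation finishes
                 }];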
I've been unsuccessful at animating a flashing stroke on a CAShapeLayer using the answer from this previous thread, and after many searches I can find no other examples of animating the stroke using CABasicAnimation.
What I want to do is have the stroke of my CAShapeLayer pulse between two colors. Using CABasicAnimation for opacity works fine, but the [CABasicAnimation animationWithKeyPath:@"strokeColor"] eludes me, and I'd appreciate any advice on how to implement it successfully.
CABasicAnimation *strokeAnim = [CABasicAnimation animationWithKeyPath:@"strokeColor"];
strokeAnim.fromValue = (id)[UIColor blackColor].CGColor;
strokeAnim.toValue = (id)[UIColor purpleColor].CGColor;
strokeAnim.duration = 1.0;
strokeAnim.repeatCount = 0.0;
strokeAnim.autoreverses = NO;
[shapeLayer addAnimation:strokeAnim forKey:@"animateStrokeColor"];

// CABasicAnimation *opacityAnimation = [CABasicAnimation animationWithKeyPath:@"opacity"];
// opacityAnimation.fromValue = [NSNumber numberWithFloat:0.0];
// opacityAnimation.toValue = [NSNumber numberWithFloat:1.0];
// opacityAnimation.duration = 1.0;
// opacityAnimation.repeatCount = 0.0;
// opacityAnimation.autoreverses = NO;
// [shapeLayer addAnimation:opacityAnimation forKey:@"animateOpacity"];
Uncommenting the opacity animation results in an expected opacity fade. The stroke animation produces no effect. An implicit strokeColor change animates as expected, but I would like documented confirmation that strokeColor can be explicitly animated using CABasicAnimation.
Update: The specific problem was that shapeLayer.path was NULL. Correcting that fixed the problem.
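For illustration, the correction amounted to assigning a path before adding the animation, along the lines of the answer below:

shapeLayer.path = [UIBezierPath bezierPathWithOvalInRect:self.view.bounds].CGPath; // any non-NULL CGPath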
The code below works great for me. What is the lineWidth of your shapeLayer stroke path? Could that be the issue?
- (void)viewDidLoad
{
    [super viewDidLoad];

    UIBezierPath *circle = [UIBezierPath bezierPathWithOvalInRect:self.view.bounds];

    CAShapeLayer *shapeLayer = [CAShapeLayer layer];
    shapeLayer.path = circle.CGPath;
    shapeLayer.strokeColor = [UIColor blueColor].CGColor;
    [shapeLayer setLineWidth:15.0];
    [self.view.layer addSublayer:shapeLayer];

    CABasicAnimation *strokeAnim = [CABasicAnimation animationWithKeyPath:@"strokeColor"];
    strokeAnim.fromValue = (id)[UIColor redColor].CGColor;
    strokeAnim.toValue = (id)[UIColor greenColor].CGColor;
    strokeAnim.duration = 3.0;
    strokeAnim.repeatCount = 0;
    strokeAnim.autoreverses = YES;
    [shapeLayer addAnimation:strokeAnim forKey:@"animateStrokeColor"];
}
Let me know if it works for you...
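One follow-up note (my own addition, not part of the original answer): since the goal was a pulsing stroke, letting the animation repeat indefinitely should give a continuous pulse between the two colors:

strokeAnim.autoreverses = YES;       // fade back to the original color each cycle
strokeAnim.repeatCount = HUGE_VALF;  // repeat indefinitely for a continuous pulse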
I've been trying to display an NSImage on a CALayer. Then I realised I apparently need to convert it to a CGImage first, then display it...
I have this code, which doesn't seem to be working:
CALayer *layer = [CALayer layer];

NSImage *finderIcon = [[NSWorkspace sharedWorkspace] iconForFileType:NSFileTypeForHFSTypeCode(kFinderIcon)];
[finderIcon setSize:(NSSize){ 128.0f, 128.0f }];

CGImageSourceRef source;
// Note: finderIcon is an NSImage, not CFData, so this cast is not valid
source = CGImageSourceCreateWithData((CFDataRef)finderIcon, NULL);
CGImageRef finalIcon = CGImageSourceCreateImageAtIndex(source, 0, NULL);

layer.bounds = CGRectMake(128.0f, 128.0f, 4, 4);
layer.position = CGPointMake(128.0f, 128.0f);
layer.contents = finalIcon;

// Insert the layer into the root layer
[mainLayer addSublayer:layer];
Why? How can I get this to work?
From the comments: Actually, if you're on 10.6, you can also just set the CALayer's contents to an NSImage rather than a CGImageRef...
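That suggestion is just a one-liner (assuming 10.6+, reusing the finderIcon from the question):

layer.contents = finderIcon; // on 10.6+, CALayer accepts an NSImage directly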
If you're on OS X 10.6 or later, take a look at NSImage's CGImageForProposedRect:context:hints: method.
If you're not, I've got this in a category on NSImage:
-(CGImageRef)CGImage
{
    CGContextRef bitmapCtx = CGBitmapContextCreate(NULL /*data - pass NULL to let CG allocate the memory*/,
                                                   [self size].width,
                                                   [self size].height,
                                                   8 /*bitsPerComponent*/,
                                                   0 /*bytesPerRow - CG will calculate it for you if it's allocating the data. This might get padded out a bit for better alignment*/,
                                                   [[NSColorSpace genericRGBColorSpace] CGColorSpace],
                                                   kCGBitmapByteOrder32Host|kCGImageAlphaPremultipliedFirst);

    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:bitmapCtx flipped:NO]];
    [self drawInRect:NSMakeRect(0, 0, [self size].width, [self size].height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    CGImageRef cgImage = CGBitmapContextCreateImage(bitmapCtx);
    CGContextRelease(bitmapCtx);

    return (CGImageRef)[(id)cgImage autorelease];
}
I think I wrote this myself. But it's entirely possible that I ripped it off from somewhere else like Stack Overflow. It's an older personal project and I don't really remember.
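Usage would then look something like this (a sketch, assuming the category is in scope and reusing the finderIcon from the question):

layer.contents = (id)[finderIcon CGImage]; // the category returns an autoreleased CGImageRef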
Here's some code which may help you - I sure hope the formatting of this does not get all messed up like it appears it will - all I can offer is that this works for me.
// -------------------------------------------------------------------------------------
- (void)awakeFromNib
{
    // setup our main window 'contentWindow' to use layers
    [[contentWindow contentView] setWantsLayer:YES]; // NSWindow*

    // create a root layer to contain all of our layers
    CALayer *root = [[contentWindow contentView] layer];

    // use constraint layout to allow sublayers to center themselves
    root.layoutManager = [CAConstraintLayoutManager layoutManager];

    // create a new layer which will contain ALL our sublayers
    // -------------------------------------------------------
    mContainer = [CALayer layer];
    mContainer.bounds = root.bounds;
    mContainer.frame = root.frame;
    mContainer.position = CGPointMake(root.bounds.size.width * 0.5,
                                      root.bounds.size.height * 0.5);

    // insert layer on the bottom of the stack so it is behind the controls
    [root insertSublayer:mContainer atIndex:0];

    // make it resize when its superlayer does
    root.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;

    // make it resize when its superlayer does
    mContainer.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;
}
// -------------------------------------------------------------------------------------
- (void)loadMyImage:(NSString *)path
                  n:(NSInteger)num
                  x:(NSInteger)xpos
                  y:(NSInteger)ypos
                  h:(NSInteger)hgt
                  w:(NSInteger)wid
                  b:(NSString *)blendstr
{
#ifdef __DEBUG_LOGGING__
    NSLog(@"loadMyImage - ENTER [%@] num[%d] x[%d] y[%d] h[%d] w[%d] b[%@]",
          path, num, xpos, ypos, hgt, wid, blendstr);
#endif

    NSInteger xoffset = ((wid / 2) + xpos); // use CORNER versus CENTER for location
    NSInteger yoffset = ((hgt / 2) + ypos);

    CIFilter *filter = nil;
    CGRect cgrect = CGRectMake((CGFloat)xoffset, (CGFloat)yoffset,
                               (CGFloat)wid, (CGFloat)hgt);

    if (nil != blendstr) // would be equivalent to @"CIMultiplyBlendMode" or similar
    {
        filter = [CIFilter filterWithName:blendstr];
    }

    // read image file via supplied path
    NSImage *theimage = [[NSImage alloc] initWithContentsOfFile:path];
    if (nil != theimage)
    {
        [self setMyImageLayer:[CALayer layer]];   // create layer
        myImageLayer.frame = cgrect;              // locate & size image
        myImageLayer.compositingFilter = filter;  // nil is OK if no filter
        [myImageLayer setContents:(id)theimage];  // deposit image into layer

        // add new layer into our main layer [see awakeFromNib above]
        [mContainer insertSublayer:myImageLayer atIndex:0];
        [theimage release];
    }
    else
    {
        NSLog(@"ERROR loadMyImage - no such image [%@]", path);
    }
}
+ (CGImageRef)getCachedImage:(NSString *)imageName
{
    // CGImageForProposedRect:context:hints: expects an NSGraphicsContext,
    // not its underlying graphicsPort
    NSGraphicsContext *context = [NSGraphicsContext currentContext];
    NSImage *img = [NSImage imageNamed:imageName];
    NSRect rect = NSMakeRect(0, 0, [img size].width, [img size].height);
    return [img CGImageForProposedRect:&rect context:context hints:NULL];
}

+ (CGImageRef)getImage:(NSString *)imageName withExtension:(NSString *)extension
{
    NSGraphicsContext *context = [NSGraphicsContext currentContext];
    NSString *imagePath = [[NSBundle mainBundle] pathForResource:imageName ofType:extension];
    NSImage *img = [[NSImage alloc] initWithContentsOfFile:imagePath];
    NSRect rect = NSMakeRect(0, 0, [img size].width, [img size].height);
    CGImageRef imgRef = [img CGImageForProposedRect:&rect context:context hints:NULL];
    [img release];
    return imgRef;
}
then you can set it:
yourLayer.contents = (id)[self getCachedImage:@"myImage.png"];
or
yourLayer.contents = (id)[self getImage:@"myImage" withExtension:@"png"];