CABasicAnimation - Infinite scrolling of an image on retina device is garbled - objective-c

In a similar post, I found a solution to animating a scrolling image infinitely:
Animate infinite scrolling of an image in a seamless loop
While this solution works beautifully, there seems to be an issue when running on a retina device. In particular, I am running this project on an iPad only. A non-retina iPad 2 scrolls the image without any problems, but run on a retina iPad 3 or 4, the image is a mess! It's difficult to describe, but the best I can say is that it is garbled. Pixels are stretched every which way. It resembles a Jackson Pollock painting.
Screen shot:
http://imgur.com/Q7n08kv
I tested this using a non-retina image (non-@2x) and a retina version (@2x). The image is large: full screen (landscape) and 4 panels wide (4096 x 768). I played around with smaller images but got the same result.
Is there a limitation with the scrolling functionality of CABasicAnimation that would affect retina devices? Here is the code I am using (as contributed by rob mayoff):
UIImage *crawlImage = [UIImage imageNamed:@"CrawlBackground.png"];
UIColor *crawlPattern = [UIColor colorWithPatternImage:crawlImage];
self.crawlLayer = [CALayer layer];
self.crawlLayer.backgroundColor = crawlPattern.CGColor;
self.crawlLayer.transform = CATransform3DMakeScale(1, -1, 1);
self.crawlLayer.anchorPoint = CGPointMake(0, 1);
self.crawlLayer.frame = CGRectMake(0, 0, crawlImage.size.width + 1024, crawlImage.size.height);
[self.backgroundCrawl.layer addSublayer:self.crawlLayer];
self.backgroundCrawl.layer.zPosition = 0;
CGPoint startPoint = CGPointZero;
CGPoint endPoint = CGPointMake(-crawlImage.size.width, 0);
self.crawlLayerAnimation = [CABasicAnimation animationWithKeyPath:@"position"];
self.crawlLayerAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear];
self.crawlLayerAnimation.fromValue = [NSValue valueWithCGPoint:startPoint];
self.crawlLayerAnimation.toValue = [NSValue valueWithCGPoint:endPoint];
self.crawlLayerAnimation.repeatCount = HUGE_VALF;
self.crawlLayerAnimation.duration = 30; // nn seconds to complete the cycle

self.crawlLayer = [CALayer layer];
As soon as you make a layer you should set its contentsScale:
self.crawlLayer.contentsScale = [[UIScreen mainScreen] scale];
Otherwise it may not display correctly on double-resolution screens.

self.crawlLayer.contents = (id)crawlImage.CGImage;
or:
self.crawlLayer.contents = (__bridge id)([crawlImage CGImage]);
This should fix it, allowing large images to scroll correctly.

Related

CGAffineTransformMakeRotation bug when masking

I have a bug when masking a rotated image. The bug wasn't present on iOS 8 (even when built with the iOS 9 SDK), and it still isn't present on non-retina iOS 9 iPads (iPad 2). I don't have any more retina iPads still on iOS 8, and in the meantime I've also updated the build to support both 32-bit and 64-bit architectures. The point is, I can't be sure whether the bug came with the update to iOS 9 or with the change to the dual-architecture build setting.
The nature of the bug is this: I'm iterating through a series of images, rotating each by an angle determined by the number of segments I want to get out of the picture, and masking the picture on each rotation to get a different part of it. Everything works fine EXCEPT when the source image is rotated by M_PI, 2*M_PI, 3*M_PI or 4*M_PI (or multiples thereof, i.e. 5*M_PI, 6*M_PI etc.). When it's rotated by 1-3 times M_PI, the resulting images are incorrectly masked: the parts that should be transparent end up black. When it's rotated by 4*M_PI, the masking produces a nil image, crashing the application in the last step (where I add the resulting image to an array).
I'm using CIImage for rotation, and CGImage masking for performance reasons (this combination showed best performance), so I would prefer keeping this choice of methods if at all possible.
UPDATE: The code shown below is run on a background thread.
Here is the code snippet:
CIContext *context = [CIContext contextWithOptions:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:NO] forKey:kCIContextUseSoftwareRenderer]];
for (int i = 0; i < [src count]; i++) {
    // for every image in the source array
    CIImage *newImage;
    if (IS_RETINA) {
        newImage = [CIImage imageWithCGImage:[[src objectAtIndex:i] CGImage]];
    } else {
        CIImage *tImage = [CIImage imageWithCGImage:[[src objectAtIndex:i] CGImage]];
        newImage = [tImage imageByApplyingTransform:CGAffineTransformMakeScale(0.5, 0.5)];
    }
    float angle = angleOff * M_PI / 180.0;
    // angleOff is just the initial angle offset, if I want to start cutting from a different start point
    for (int j = 0; j < nSegments; j++) {
        // for the given number of circle slices (segments)
        CIImage *ciResult = [newImage imageByApplyingTransform:CGAffineTransformMakeRotation(angle + ((2*M_PI / (2 * nSegments)) + (2*M_PI / nSegments) * j))];
        // the way the angle is calculated is specific to the application of the image later. In any case, the bug happens when the resulting angle is M_PI, 2*M_PI, 3*M_PI or 4*M_PI
        CGPoint center = CGPointMake([ciResult extent].origin.x + [ciResult extent].size.width/2, [ciResult extent].origin.y + [ciResult extent].size.width/2);
        for (int k = 0; k < [src count]; k++) {
            // this iteration is also specific; it has to do with how much of the image is masked in each pass, but the bug happens regardless of it
            CGSize dim = [[masks objectAtIndex:k] size];
            if (IS_RETINA && (floor(NSFoundationVersionNumber) > NSFoundationVersionNumber_iOS_7_1)) {
                dim = CGSizeMake(dim.width*2, dim.height*2);
            } // this correction was needed after iOS 7 was introduced, not sure why :)
            CGRect newSize = CGRectMake(center.x - dim.width/2, center.y + ((circRadius + orbitWidth*(k+1))*scale - dim.height), dim.width, dim.height);
            // the calculation of the new size is specific to the application; I don't find it relevant.
            CGImageRef imageRef = [context createCGImage:ciResult fromRect:newSize];
            CGImageRef maskRef = [[masks objectAtIndex:k] CGImage];
            CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                               CGImageGetHeight(maskRef),
                                               CGImageGetBitsPerComponent(maskRef),
                                               CGImageGetBitsPerPixel(maskRef),
                                               CGImageGetBytesPerRow(maskRef),
                                               CGImageGetDataProvider(maskRef),
                                               NULL,
                                               YES);
            CGImageRef masked = CGImageCreateWithMask(imageRef, mask);
            UIImage *result = [UIImage imageWithCGImage:masked];
            CGImageRelease(imageRef);
            CGImageRelease(masked);
            CGImageRelease(mask);
            [temps addObject:result];
        }
    }
}
I would be eternally grateful for any tips anyone might have. This bug has me puzzled beyond words :) .

Fade background content to new background

I want to be able to fade my background from my image bg1 to bg2.
Right now I'm trying to animate it by...
scene.background.contents = @"bg1";
[CATransaction begin];
CABasicAnimation *displayBackground2 = [CABasicAnimation animation];
displayBackground2.keyPath = @"contents";
displayBackground2.toValue = @"bg2";
displayBackground2.duration = 5.0;
[scene.background addAnimation:displayBackground2 forKey:@"contents"];
[CATransaction commit];
However I get this error...
[SCNKit ERROR] contents is not an animatable path (from <SCNMaterialProperty: 0x170149530 | contents=bg1>)
Apple's API says that scene.background.contents is animatable, but I can't figure out how to animate it.
Here's an answer if you wish to use the UIImageView solution.
Set the image of bg1 and bg2:
// Declare bg1 and bg2 in the header file
CGFloat imageHeight = self.view.bounds.size.height;
CGFloat proportionalWidth = (backgroundImage.size.height / imageHeight) * self.view.bounds.size.width;
// Set the height and width to the height and width of the screen, respectively
self.bg1 = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, imageHeight, proportionalWidth)];
[self.bg1 setImage:[UIImage imageNamed:@"the image"]];
[self.view addSubview:self.bg1];
Change opacity:
[UIView animateWithDuration:duration
                 animations:^{ [self.bg1 setAlpha:0]; }];
Pretty straight forward from there. To call the methods at delayed times use:
[self performSelector:@selector(methodName) withObject:nil afterDelay:delay];
As far as I can tell, there is no way to animate a SCNScene's background's contents property.
However you can make the SCNView's background a clear color, and remove any contents from scene.background.contents then structure your view hierarchy to look like this.
UIView
|
|_UIImageView *backgroundOne
|
|_UIImageView *backgroundTwo
|
|_SCNView *gameView
After that you can animate the UIImageViews as needed with
[UIView animateWithDuration:time animations:animation_block];

Physics Bodies are being offset from their node upon creation

I have been working on a puzzle game that uses a customizable grid with levels that are preset and created when the scene inits. They are comprised of static blocks scattered around the grid and a movable player controlled block. The gravity of the world is controllable and the block falls in that direction (up, down, left, right). I am using SKContactDelegate to track when the player block touches a block or the edge of the grid and stops it in place.
The problems I am having involve the physics bodies of the blocks and grid edge.
I am using bodyWithRectOfSize for the blocks and bodyWithEdgeLoopFromRect for the grid border.
The physics bodies of 1x1 blocks placed in the grid and of the player's 1x1 block are normal (as they should be). However, for larger blocks, e.g. 1x5, the bodies are shifted down on the y axis for no apparent reason. Also, depending on the grid size, the grid edge can be offset by a seemingly random amount. Note: only the bodies are offset; the nodes themselves are in the right place.
This is the code for creating the blocks; this one works fine
(cellsize is the size of each grid space)
Basic 1x1 block
-(SKShapeNode *)basic {
    CGRect blockRect = CGRectMake(10, 10, self.cellSize, self.cellSize);
    UIBezierPath *blockPath = [UIBezierPath bezierPathWithRoundedRect:blockRect cornerRadius:8];
    SKShapeNode *blockNode = [SKShapeNode node];
    blockNode.path = blockPath.CGPath;
    blockNode.fillColor = [UIColor blackColor];
    blockNode.lineWidth = 0;
    blockNode.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:blockRect.size];
    blockNode.physicsBody.categoryBitMask = blockCategory;
    blockNode.physicsBody.dynamic = NO;
    return blockNode;
}
and custom blocks (offset down on the y axis)
-(SKShapeNode *)basicWithWidth:(int)width WithHeight:(int)height {
    CGRect blockRect = CGRectMake(10, 10, self.cellSize * width, self.cellSize * height);
    UIBezierPath *blockPath = [UIBezierPath bezierPathWithRoundedRect:blockRect cornerRadius:8];
    SKShapeNode *blockNode = [self basic];
    blockNode.path = blockPath.CGPath;
    blockNode.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:blockRect.size];
    blockNode.physicsBody.categoryBitMask = blockCategory;
    blockNode.physicsBody.dynamic = NO;
    return blockNode;
}
And here is the grid edge
SKShapeNode *edge = [SKShapeNode node];
CGRect edgeRect = CGRectMake(10, 10, 260, 260);
UIBezierPath *edgeShape = [UIBezierPath bezierPathWithRoundedRect:edgeRect cornerRadius:8];
edge.path = edgeShape.CGPath;
edge.lineWidth = 0;
edge.fillColor = [UIColor grayColor];
edge.physicsBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:edgeRect];
edge.physicsBody.categoryBitMask = boardCategory;
[self addChild:edge];
Note: The edge and blocks are children to a "board" node that is a child of the game scene
Thank you for reading this.
When you use SKPhysicsBody's bodyWithRectangleOfSize:, it is centered on its node's origin. What you want to do in your case is center them on the center of the rect that defines your node. To do that, use bodyWithRectangleOfSize:center: instead.
For example, you define your block with the rect:
CGRect blockRect = CGRectMake(10, 10, self.cellSize, self.cellSize);
So, you'll want your physicsBody centered on the center of that rect. You easily get the center of a CGRect with:
CGPoint center = CGPointMake(CGRectGetMidX(blockRect), CGRectGetMidY(blockRect));
To create the physicsBody centered there, use:
blockNode.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:blockRect.size center:center];
Note: bodyWithRectangleOfSize:center: is only available in iOS 7.1 and later
iOS 7.0 method
For iOS 7.0, you can use bodyWithPolygonFromPath: instead; it just takes an additional line of code:
CGPathRef blockPath = CGPathCreateWithRect(blockRect, NULL);
blockNode.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:blockPath];
CGPathRelease(blockPath); // the body copies the path, so release our reference

UIImageView cropped being displayed wrong on device

I have an image and I am cropping part of it. The problem is that it is displayed correctly in the simulator, but on the device it is much more zoomed in. It's quite a big difference. What am I doing wrong? (The first image is from the simulator and the second from the iPhone device.)
// create bounds and initialise default image
CGRect imageSizeRectangle = CGRectMake(0, 0, 300, 300);
UIImage *df_Image = [UIImage imageNamed:@"no_selection.png"];
self.imageView = [[UIImageView alloc] initWithFrame:imageSizeRectangle];
[imageView setImage:df_Image];
[self.view addSubview:imageView];
//crop image
CGRect test = CGRectMake(0, 0, 150,150);
CGImageRef imageRef = CGImageCreateWithImageInRect([photo.image CGImage], test);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
The problem here is that retina devices are 2x the size of normal devices. You could check if the device is retina or not with the following method;
+ (BOOL)iPhoneRetina {
    return ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)] && [UIScreen mainScreen].scale == 2.0);
}
And increase/decrease the size of your rect according to the BOOL value returned.
Note: displayLinkWithTarget:selector: is just an arbitrary method that exists in iOS 4.0+ but not in earlier versions; it's only used here to detect the OS version, so you don't need to pay much attention to it.
Edit*
CGRect rect;
if([self iPhoneRetina]){rect = CGRectMake(0,0,300,300);}//Retina
else{rect = CGRectMake(0,0,150,150);}//Non retina
//Then the rest of your code
If you want to simplify your code, you can use:
CGRectMake(0, 0, [UIScreen mainScreen].scale * 150, [UIScreen mainScreen].scale * 150)

Lag with CALayer when double tap on the home button

When I put shadows etc. on a CALayer, my app lags when I double tap the home button to see the running tasks. I don't have any other lag, just when I double tap.
I call this method 20 times to place 20 images:
- (UIView *)createImage:(CGFloat)posX posY:(CGFloat)posY imgName:(NSString *)imgName
{
    UIView *myView = [[UIView alloc] init];
    CALayer *sublayer = [CALayer layer];
    sublayer.backgroundColor = [UIColor blueColor].CGColor;
    sublayer.shadowOffset = CGSizeMake(0, 3);
    sublayer.shadowRadius = 5.0;
    sublayer.shadowColor = [UIColor blackColor].CGColor;
    sublayer.shadowOpacity = 0.8;
    sublayer.frame = CGRectMake(posX, posY, 65, 65);
    sublayer.borderColor = [UIColor blackColor].CGColor;
    sublayer.borderWidth = 2.0;
    sublayer.cornerRadius = 10.0;
    CALayer *imageLayer = [CALayer layer];
    imageLayer.frame = sublayer.bounds;
    imageLayer.cornerRadius = 10.0;
    imageLayer.contents = (id)[UIImage imageNamed:imgName].CGImage;
    imageLayer.masksToBounds = YES;
    [sublayer addSublayer:imageLayer];
    [myView.layer addSublayer:sublayer];
    return myView;
}
I have commented out all my code except this, so I'm sure the lag comes from here. I've also checked with the Allocations instrument and my app never exceeded 1 MB. When I just put the images without the shadow etc., everything works fine.
Try setting a shadowPath on the layer as well. It will need to be a rounded rect, since you've got rounded corners on your layer.
Without a shadowPath, CALayer has to calculate where it is drawing and where to put the shadow, and that has a big effect on animation performance.
Another way to improve performance with CALayers is to set the shouldRasterize property to YES. This stores the layer contents as a bitmap and prevents it having to re-render everything.