UIImageView won't animate with UIImages of CGImage - objective-c

I have a UIImageView that I am trying to animate with a set of UIImages created by flipping other UIImages. Here's my code:
turtle = [[UIImageView alloc] initWithFrame:CGRectMake(self.view.frame.size.width - 200,
                                                       self.view.frame.size.height - sand.frame.size.height - turtle.frame.size.height - 10 - heightOfKeyboard,
                                                       100, 100)];
[flippedTurtleArray addObject:[UIImage imageWithCGImage:[UIImage imageNamed:@"Turtle1.png"].CGImage scale:1 orientation:UIImageOrientationDownMirrored]];
[flippedTurtleArray addObject:[UIImage imageWithCGImage:[UIImage imageNamed:@"Turtle2.png"].CGImage scale:1 orientation:UIImageOrientationDownMirrored]];
[flippedTurtleArray addObject:[UIImage imageWithCGImage:[UIImage imageNamed:@"Turtle3.png"].CGImage scale:1 orientation:UIImageOrientationDownMirrored]];
[flippedTurtleArray addObject:[UIImage imageWithCGImage:[UIImage imageNamed:@"Turtle2.png"].CGImage scale:1 orientation:UIImageOrientationDownMirrored]];
[self.view addSubview:turtle];
The problem is that when I try to animate it from the array of flipped images, it shows the originals, not the flipped ones (i.e., when I do this):
turtle.animationImages = flippedTurtleArray;
turtle.animationDuration = 0.8f;
turtle.animationRepeatCount = 0;
[turtle startAnimating];
the original, non-flipped images are shown.
However, if I do this:
turtle.image = [flippedTurtleArray objectAtIndex:1];
the flipped image is shown. I thought maybe you can't do the animation with CGImage-backed images, but I couldn't find anyone else who has had the same problem. Any ideas?
Thanks,
Sam

I would look at the view's underlying CALayer, and specifically the transform property of that layer.
The transform property is animatable and would allow you to apply an arbitrary transform to your original image without having to manage your intermediate flipped turtle array.
From the CALayer class reference:
transform
The transform applied to the layer's contents. Animatable.
@property CATransform3D transform
Discussion
This property is set to the identity transform by default. Any transformations you apply to the layer occur relative to the layer's anchor point.
Availability: Available in iOS 2.0 and later.
Declared in: CALayer.h
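For example, here is a minimal sketch of that idea, assuming turtle is the UIImageView from the question and QuartzCore is imported; this is an illustration, not the poster's code:
// Flip the image view itself by scaling its layer -1 along Y,
// so the animation images can stay in their original orientation.
turtle.layer.transform = CATransform3DMakeScale(1.0, -1.0, 1.0);
turtle.animationImages = @[[UIImage imageNamed:@"Turtle1.png"],
                           [UIImage imageNamed:@"Turtle2.png"],
                           [UIImage imageNamed:@"Turtle3.png"],
                           [UIImage imageNamed:@"Turtle2.png"]];
turtle.animationDuration = 0.8f;
turtle.animationRepeatCount = 0;
[turtle startAnimating];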
If you have other reasons for wanting the array, then you may want to render the flipped image into a separate image, and pass that into your array.
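A hedged sketch of that approach, reusing the image names and flippedTurtleArray from the question; it bakes the vertical flip into new bitmaps so the result no longer depends on the UIImage orientation flag:
for (NSString *name in @[@"Turtle1.png", @"Turtle2.png", @"Turtle3.png", @"Turtle2.png"]) {
    UIImage *original = [UIImage imageNamed:name];
    UIGraphicsBeginImageContextWithOptions(original.size, NO, original.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Mirror the drawing vertically, then draw the original image.
    CGContextTranslateCTM(ctx, 0, original.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    [original drawInRect:CGRectMake(0, 0, original.size.width, original.size.height)];
    [flippedTurtleArray addObject:UIGraphicsGetImageFromCurrentImageContext()];
    UIGraphicsEndImageContext();
}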
Similarly, if you can target iOS 7, take a look at SpriteKit; it may make the whole task much simpler.
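A tiny hedged SpriteKit sketch of the same idea (the texture names are assumed, and the node still needs to be added to a scene):
SKSpriteNode *turtleNode = [SKSpriteNode spriteNodeWithImageNamed:@"Turtle1"];
turtleNode.yScale = -1.0;   // flip vertically instead of flipping each texture
NSArray *frames = @[[SKTexture textureWithImageNamed:@"Turtle1"],
                    [SKTexture textureWithImageNamed:@"Turtle2"],
                    [SKTexture textureWithImageNamed:@"Turtle3"],
                    [SKTexture textureWithImageNamed:@"Turtle2"]];
SKAction *animation = [SKAction animateWithTextures:frames timePerFrame:0.2];
[turtleNode runAction:[SKAction repeatActionForever:animation]];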

Rotate NSImage around its vertical axis

I am trying to rotate an NSImage around the vertical axis and save the result. The main purpose is to generate an array of frames of the same image at different rotation angles; I need that array so I can modify each frame separately.
So far I have been trying to use CATransform3DRotate to make a single rotated NSImage, but I am not able to apply this transform to the NSImage in any way. I tried applying CATransform3DRotate to a newly created CALayer and even to the layer of a specially created view, but the best I can get is the same picture as before.
CATransform3D t = CATransform3DRotate(CATransform3DIdentity, (60 * M_PI / 180), 0, 1, 0);
CALayer *layer = [CALayer layer];
[layer setFrame: imageRect];
[layer setContents:image];
[layer setTransform:t];
NSImage * newImage = [[NSImage alloc] initWithSize:layer.bounds.size];
[newImage lockFocus];
[layer renderInContext:[NSGraphicsContext currentContext].CGContext];
[newImage unlockFocus];
This code results in newImage being the same non-rotated image. I tried some other ways too, but none of them succeeded.
Maybe there is an easier way to achieve my aim (generating the array of rotated frames of the same NSImage), but I have no idea what it is.
An NSImage object cannot be rotated by itself. You need to rotate a view or layer and then save the contents of the view/layer as an image.
See Saving an NSView to a png file? for how to save a view as an image.
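For reference, a minimal sketch of that save step, assuming rotatedView is an NSView whose own drawing already reflects the rotation (the names here are placeholders, not from the question):
// Capture what the view draws and write it out as a PNG.
// Note: this captures the view's own drawing; a CATransform3D applied purely
// at the layer level may not show up in this snapshot.
NSBitmapImageRep *rep = [rotatedView bitmapImageRepForCachingDisplayInRect:rotatedView.bounds];
[rotatedView cacheDisplayInRect:rotatedView.bounds toBitmapImageRep:rep];
NSData *pngData = [rep representationUsingType:NSPNGFileType properties:@{}];
[pngData writeToFile:@"/tmp/frame.png" atomically:YES];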

NSImageView setImageScaling not working correctly

I'm having problems scaling an image in an NSImageView correctly. When the image is loaded, it doesn't fill the entire NSImageView. I tried NSScaleToFit, and it completely stretched the image. Since I don't know the dimensions of the photo in advance (it changes depending on which image is used), I can't just set the NSImageView to a fixed size and leave it there. What I want is to fill the entire NSImageView with the image, proportionally, even if some of the image is cut off.
This is the code that I'm using:
NSString *image_path = [applicationSupportDirectory stringByAppendingPathComponent:desktop_name];
NSImage *imageFromBundle = [[NSImage alloc] initWithContentsOfFile:image_path];
[ViewImage setImageScaling:NSScaleProportionally];
[ViewImage setImage: imageFromBundle];
I had the same problem. I wanted the image to be scaled to fill while keeping the aspect ratio of the original image. Strangely, this is not as simple as it seems and does not come out of the box with NSImageView. I also wanted the NSImageView to scale nicely while it resizes with its superview(s). I made a drop-in NSImageView subclass you can find on GitHub: KPCScaleToFillNSImageView
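If you would rather not pull in a dependency, a rough sketch of the same idea is to do the aspect-fill math in a small NSView subclass (the class name and property below are made up for illustration):
@interface ScaleToFillImageView : NSView
@property (nonatomic, strong) NSImage *image;
@end

@implementation ScaleToFillImageView
- (void)drawRect:(NSRect)dirtyRect
{
    if (self.image == nil) return;
    NSSize imageSize = self.image.size;
    // Scale so the image covers the whole view, then center it;
    // anything outside the bounds is clipped.
    CGFloat scale = MAX(NSWidth(self.bounds) / imageSize.width,
                        NSHeight(self.bounds) / imageSize.height);
    NSSize scaled = NSMakeSize(imageSize.width * scale, imageSize.height * scale);
    NSRect target = NSMakeRect((NSWidth(self.bounds) - scaled.width) / 2.0,
                               (NSHeight(self.bounds) - scaled.height) / 2.0,
                               scaled.width, scaled.height);
    [self.image drawInRect:target
                  fromRect:NSZeroRect
                 operation:NSCompositeSourceOver
                  fraction:1.0];
}
@end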
I don't think something like this is built-in; you'll likely have to transform the image yourself.
It is relatively straightforward though; you can see an example implementation here.
I made my own solution, since the ones that were built-in didn't work.
NSRect frame = [MyWindow frame];
if (abs(windowHeight - [imageFromBundle size].width) > abs(windowWidth - [imageFromBundle size].height)) {
    frame.size.height = windowWidth;
    frame.size.width = 1 + windowWidth * (([imageFromBundle size].width / [imageFromBundle size].height - (10 * pow([imageFromBundle size].width, 2)) / (pow([imageFromBundle size].height, 2) * windowHeight)));
} else {
    frame.size.width = windowHeight;
    frame.size.height = ((windowHeight / [imageFromBundle size].width) * [imageFromBundle size].height);
}
[MyWindow setFrame:frame
           display:YES
           animate:YES];

Changing sprite texture from a CCSpriteBatchNode

I am trying to change the texture of a sprite that I create from a SpriteBatchNode.
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA4444];
spritesBgNode = [CCSpriteBatchNode batchNodeWithFile:@"playingCards.pvr.ccz"];
[self addChild:spritesBgNode];
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"playingCards.plist"];
I searched and found examples that use
Texture2D *texture = [[Texture2D alloc] initWithImage:myUIImage];
[sprite setTexture: texture];
So my question is: how do I get the image from my batchNode file? Or do I use another method to get a reference to the image in my playingCards.pvr.ccz file?
UPDATE
First off, thanks for the response. I now have mySprite showing the image of a King, using the code snippet you provided. But I want to change the sprite's texture to display the back of the card (so it can be played face up or down), and I have both images inside the CCSpriteBatchNode.
But as you point out, "You can't get the image from a batchNode", so I can't use [[Texture2D alloc] initWithImage:myUIImage].
So how do I go about changing the sprite's image from face up to face down?
Thanks
If you want to display the images in your .pvr.ccz file on the screen, then add the following code:
CCSprite *mySprite = [CCSprite spriteWithSpriteFrameName:@"name of sprite frame"];
[spritesBgNode addChild:mySprite];
Basically, to display parts of your batchNode, you need to add a sprite to it. The name of the sprite frame is in the .plist file you added to the FrameCache.
You can't get the image from a batchNode. UIImage is the iPhone API type of image, not cocos2d. In cocos2d, initWithImage:(UIImage*)image is provided for convenience.
If you use [[Texture2D alloc] initWithImage:myUIImage], the UIImage is used to create an NSData object, and [texture initWithData: data] is called internally. The image isn't stored for later use.
Update
The sprite works as a 'view to a batchNode' in this case. To view a different part of the batch node, change the frame of your sprite.
[mySprite setDisplayFrame:
    [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"back of card"]];
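For example, a small hedged helper along those lines (the frame names are placeholders for whatever is actually in playingCards.plist):
// Sketch only: swap the displayed frame of an existing sprite in the batch node.
- (void)showCard:(CCSprite *)card faceUp:(BOOL)faceUp
{
    NSString *frameName = faceUp ? @"king_of_spades.png" : @"card_back.png"; // placeholder names
    CCSpriteFrame *frame = [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:frameName];
    [card setDisplayFrame:frame];
}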

Odd problem with NSImage -lockFocusFlipped:

I'm using NSImage's -lockFocusFlipped: method to do some drawing into an image. My code looks like this:
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(256, 256)];
[image lockFocusFlipped:YES];
NSShadow *shadow = [[NSShadow alloc] init];
[shadow setShadowColor:[NSColor blackColor]];
[shadow setShadowBlurRadius:6.0];
[shadow setShadowOffset:NSMakeSize(0, 3)];
[shadow set];
NSRect shapeRect = NSMakeRect(0, 0, 256, 100);
[[NSColor redColor] set];
NSRectFill(shapeRect);
[image unlockFocus];
This code works to a certain point. I can confirm that the context is indeed flipped because [[NSGraphicsContext currentContext] isFlipped] returns YES, and also because shapeRect is drawn at the right position (using the top left corner as the origin). That said, the NSShadow does not seem to respect the flipped status of the context. Setting the shadow offset to (0, 3) should move the shadow down when the context is flipped, but it actually moves it up (which is what would happen in a standard non-flipped context).
This problem seems specific to -lockFocusFlipped, because when I'm drawing using this same code into a CALayer with a flipped coordinate system, the shadow is drawn just fine (respecting the flip). Documentation on -lockFocusFlipped also seems to be quite vague. This is all it says in the NSImage class documentation:
Prepares the image to receive drawing commands using the specified flipped state.
And I also found this note in the Snow Leopard AppKit Release Notes:
There are cases, for example drawing directly via NSLayoutManager, that require a flipped context. To cover this case, we add
- (void)lockFocusFlipped:(BOOL)flipped;
This doesn't alter the state of the image itself, only the context on which focus is locked. It means that (0,0) is at the top left and positive along the Y-axis is down in the locked context.
None of the docs seem to explain NSShadow's behaviour in this case. And through further testing, NSGradient does not seem to respect the flipped state of the drawing context used by NSImage either.
Any insight is greatly appreciated :-)
From the NSShadow class reference:
Shadows are always drawn in the default user coordinate space, regardless of any transformations applied to that space. This means that rotations, translations and other transformations of the current transformation matrix (the CTM) do not affect the resulting shadow.
And that's what flipping ultimately is: a translation up, then a scale of -1 along the Y axis.
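Given that, one hedged workaround (an illustration, not something documented by Apple) is to negate the Y offset yourself when the destination context is flipped:
// Sketch: compensate for NSShadow ignoring the flipped CTM.
BOOL flipped = [[NSGraphicsContext currentContext] isFlipped];
NSShadow *shadow = [[NSShadow alloc] init];
[shadow setShadowColor:[NSColor blackColor]];
[shadow setShadowBlurRadius:6.0];
[shadow setShadowOffset:NSMakeSize(0, flipped ? -3 : 3)];
[shadow set];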
There's no such statement for NSGradient, so I'd suggest filing a bug about that one.

Does anyone know how I can get the "real" size of the UIImageView instead of creating a frame

I was hoping there was some sort of function that could give me the real size of my picture:
CGRect myImageRect = CGRectMake(0.0f, 0.0f, 320.0, 210.0f); // 234
UIImageView *myImage = [[UIImageView alloc] initWithFrame:myImageRect];
[myImage setImage:[UIImage imageNamed:@"start.png"]];
And then read the size back with something like this:
CGFloat imageViewHeight = [myImage frame].size.height;
so that I could get the real size instead of having to define it as shown above.
I don't really get what you're asking, but here's a shot:
If you have a UIImage, and you want to know its size, you can ask it for its -[UIImage size] property.
However, if you want to create a UIImageView that's the same size as a UIImage, then you can just use -[UIImageView initWithImage:], which will automatically set the frame of the UIImageView to correspond to the dimensions of the image.
If, however, you're just looking to change the dimensions of a currently existing view, there's really no easy way to do that without messing around with the view's frame. You could maybe apply an affine transform to scale it, but it's easier to manipulate the frame.
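A quick hedged sketch of the first two points, reusing the image name from the question:
UIImage *startImage = [UIImage imageNamed:@"start.png"];
CGSize realSize = startImage.size;                        // the image's own size in points

// Let the image view size itself from the image instead of a hard-coded rect.
UIImageView *myImage = [[UIImageView alloc] initWithImage:startImage];
CGFloat imageViewHeight = CGRectGetHeight(myImage.frame); // equals realSize.height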
It looks like you're asking how to add the image to the image view without first creating a frame. If this is the case, you can do the following:
UIImageView *myImage = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"start.png"]];
As far as I understood, you are looking for the size of the image used in the UIImageView object. There is no function built into UIImageView for that, but you can do it this way:
NSString *imageName = [myImage image].accessibilityIdentifier; // Get the image's name
UIImage *img = [UIImage imageNamed:imageName];                 // Create an image object with that name
CGSize size = img.size;                                        // Get the size of the image
Hope this helps.