How to animate an image onto the screen - Objective-C

I am new to Objective-C and I want to let a screenshot made with
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
...
UIGraphicsBeginImageContext(imageSize);
...
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
"fly in" (rotating and stoping in random angle) inside an own image container, looking like a small photo (shadow, white border). how could I realize that?

Since you have a vague question - and your code shows only how you are getting the screen image - here's a general answer.
Put the image on a CALayer (as the layer's contents).
Add a border or a shadow or whatever other chrome you need to that layer (or to another layer underneath it).
Use Core Animation to animate the layer to whatever position or state you want it to end up in.
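Here is a minimal sketch of those three steps. It assumes QuartzCore is linked (and #import <QuartzCore/QuartzCore.h>), that this runs inside a view controller method where image is the screenshot from the question, and that the sizes, angles, and durations are just illustrative:

// 1. Put the screenshot on a layer, with a white border and a shadow.
CALayer *photoLayer = [CALayer layer];
photoLayer.bounds = CGRectMake(0, 0, image.size.width / 3, image.size.height / 3);
photoLayer.contents = (id)image.CGImage;   // use (__bridge id) under ARC
photoLayer.contentsGravity = kCAGravityResize;
photoLayer.borderColor = [UIColor whiteColor].CGColor;
photoLayer.borderWidth = 6;
photoLayer.shadowColor = [UIColor blackColor].CGColor;
photoLayer.shadowOpacity = 0.6;
photoLayer.shadowOffset = CGSizeMake(0, 3);
[self.view.layer addSublayer:photoLayer];

// 2. Set the final (model) state: centered, at a random angle of -30..30 degrees.
CGFloat angle = ((CGFloat)arc4random_uniform(61) - 30) * M_PI / 180.0;
[CATransaction begin];
[CATransaction setDisableActions:YES];
photoLayer.position = CGPointMake(CGRectGetMidX(self.view.bounds), CGRectGetMidY(self.view.bounds));
photoLayer.transform = CATransform3DMakeRotation(angle, 0, 0, 1);
[CATransaction commit];

// 3. Animate from off screen into that final state, spinning on the way in.
CABasicAnimation *move = [CABasicAnimation animationWithKeyPath:@"position"];
move.fromValue = [NSValue valueWithCGPoint:CGPointMake(-200, -200)];

CABasicAnimation *spin = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
spin.fromValue = [NSNumber numberWithDouble:angle - 4 * M_PI]; // two full turns before settling

CAAnimationGroup *flyIn = [CAAnimationGroup animation];
flyIn.animations = [NSArray arrayWithObjects:move, spin, nil];
flyIn.duration = 0.8;
flyIn.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseOut];
[photoLayer addAnimation:flyIn forKey:@"flyIn"];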

You can accomplish it with OpenGL ES.
Here is a great project which provides a generic base OpenGL ES view-transition class:
https://github.com/epatel/EPGLTransitionView
There are several predefined transitions you can use as examples: https://github.com/epatel/EPGLTransitionView/tree/master/src
Demo3Transition actually shows how to rotate the screenshot
https://github.com/epatel/EPGLTransitionView/blob/master/src/Demo3Transition.m
You can see the available transitions in action if you launch the DemoProject.

Related

Drag And Swap Two UIScrollViews (Hover & Swap Effect)

I am creating a photo editor in my app and working on a multiple picture layout option.
Currently, when the user takes a picture, I display their image in a UIImageView nested in a UIScrollView. The reason I nest it in a UIScrollView is so I can pan and zoom.
In the situation where the user selects duo layout mode, the app puts 2 images side by side (so 2 UIScrollViews).
If they hold a finger on one UIScrollView I want to raise the UIScrollView up a tad and give it a drop shadow.
I have a long-winded way of doing this, but I was curious whether there is anything in the native libraries that will create this hover effect for a dragged view automatically.
Thanks
No, there isn't any preexisting functionality for this. You can add a shadow and animate a "lift" motion yourself with something like this:
// define dragged somewhere as the UIView (or subclass) you want to lift
dragged.layer.shadowColor = [UIColor blackColor].CGColor;
dragged.layer.shadowOffset = CGSizeMake(-2, 2);
dragged.layer.masksToBounds = NO;
dragged.layer.shadowRadius = 4;
dragged.layer.shadowOpacity = .7;
[UIView animateWithDuration:.2
                      delay:0
                    options:UIViewAnimationOptionCurveEaseInOut
                 animations:^{ dragged.transform = CGAffineTransformMakeScale(1.2, 1.2); }
                 completion:nil];
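If you want this to happen while the user is holding a finger down, one rough sketch is to attach a UILongPressGestureRecognizer to each scroll view and lift/drop it in the handler; the handler name and the drop-on-release behavior below are illustrative, not anything built in:

- (void)handleLongPress:(UILongPressGestureRecognizer *)gesture
{
    UIView *dragged = gesture.view; // the scroll view the recognizer is attached to

    if (gesture.state == UIGestureRecognizerStateBegan) {
        // Lift: add the shadow and scale up, as in the snippet above.
        dragged.layer.shadowColor = [UIColor blackColor].CGColor;
        dragged.layer.shadowOffset = CGSizeMake(-2, 2);
        dragged.layer.masksToBounds = NO;
        dragged.layer.shadowRadius = 4;
        dragged.layer.shadowOpacity = .7;
        [UIView animateWithDuration:.2 animations:^{
            dragged.transform = CGAffineTransformMakeScale(1.2, 1.2);
        }];
    } else if (gesture.state == UIGestureRecognizerStateEnded ||
               gesture.state == UIGestureRecognizerStateCancelled) {
        // Drop: shadow properties are not UIView-animatable, so just switch the
        // shadow off, then animate the scale back to normal.
        dragged.layer.shadowOpacity = 0;
        [UIView animateWithDuration:.2 animations:^{
            dragged.transform = CGAffineTransformIdentity;
        }];
    }
}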

Blending two images and drawing resized image from two UIImageViews

I have two UIImageViews, one called imageView and the other called subView (which is a subview of imageView).
I want to blend the images in these views together, with the user able to change the alpha of the blend with a pan. My code works, but right now it is slow because the image is redrawn every time the pan gesture moves. Is there a faster/more efficient way of doing this?
BONUS Q: I want to allow my subView image to be drawn zoomed in. Currently I've set my subView to UIViewContentModeCenter, however I can't seem to draw a zoomed-in part of my image with this content mode. Is there any way around this?
My drawRect: implementation:
- (void)drawRect:(CGRect)rect
{
    float xCenter = self.center.x - self.currentImage1.size.width/2.0;
    float yCenter = self.center.y - self.currentImage1.size.height/2.0;

    subView.alpha = self.blendAmount; // Customize the opacity of the top image.

    UIGraphicsBeginImageContext(self.currentImage1.size);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(c, kCGBlendModeColorBurn);
    [imageView.layer renderInContext:c];
    self.blendedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [self.blendedImage drawAtPoint:CGPointMake(xCenter, yCenter)];
}
You need to use the GPU for image processing, which is far faster than the CPU (what you're doing right now).
You can use the Core Image framework, which is very fast and easy to use but requires iOS 5, or you can use OpenGL directly, but that requires experience and some knowledge of OpenGL shading.
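As a rough sketch of the Core Image route, here is the same color-burn blend with the top image faded by the pan-controlled amount. This assumes iOS 5+; currentImage1, currentImage2, and ciContext are placeholder properties standing in for your two source UIImages and a reusable CIContext (e.g. [CIContext contextWithOptions:nil]), which you should create once rather than on every pan update:

- (UIImage *)blendedImageWithAmount:(CGFloat)blendAmount
{
    CIImage *bottom = [CIImage imageWithCGImage:self.currentImage1.CGImage];
    CIImage *top    = [CIImage imageWithCGImage:self.currentImage2.CGImage];

    // Fade the top image: scale every (premultiplied) channel by blendAmount.
    CIFilter *fade = [CIFilter filterWithName:@"CIColorMatrix"];
    [fade setValue:top forKey:kCIInputImageKey];
    [fade setValue:[CIVector vectorWithX:blendAmount Y:0 Z:0 W:0] forKey:@"inputRVector"];
    [fade setValue:[CIVector vectorWithX:0 Y:blendAmount Z:0 W:0] forKey:@"inputGVector"];
    [fade setValue:[CIVector vectorWithX:0 Y:0 Z:blendAmount W:0] forKey:@"inputBVector"];
    [fade setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:blendAmount] forKey:@"inputAVector"];

    // Color-burn the faded top image over the bottom one, as in the drawRect: above.
    CIFilter *burn = [CIFilter filterWithName:@"CIColorBurnBlendMode"];
    [burn setValue:fade.outputImage forKey:kCIInputImageKey];
    [burn setValue:bottom forKey:kCIInputBackgroundImageKey];

    CGImageRef cg = [self.ciContext createCGImage:burn.outputImage fromRect:[bottom extent]];
    UIImage *blended = [UIImage imageWithCGImage:cg];
    CGImageRelease(cg);
    return blended;
}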

Zooming image after rotation

I have an image inside a UIScrollView which I can zoom in and out.
I have a button that lets the user rotate the image 90 degrees:
- (void)RotateImage {
    CGAffineTransform rotateTrans = CGAffineTransformMakeRotation(-90.0 / 180.0 * 3.14);
    BaseImg.transform = rotateTrans;
}
After the image is rotated I cannot zoom in and out; the image jumps around on the screen and goes back to the unrotated state.
What am I doing wrong? Code examples would be great!
Thanks :)
UIScrollView likes to take over the transforms of the views it contains. There are two solutions:
Rotate the image without changing the containing view.
Create a UIView subclass that displays an image within a sublayer.
To rotate the image, see How to Rotate a UIImage 90 degrees?. If you're always and only doing 90-degree rotations, see Peter Sarnowski's answer there. To adapt it to what you're doing here, assuming that BaseImg is a UIImageView:
- (void)rotateImage
{
    UIImage *sourceImage = [baseImg image];
    UIImage *rotatedImage = [UIImage imageWithCGImage:[sourceImage CGImage]
                                                scale:1.0
                                          orientation:UIImageOrientationRight];
    [baseImg setImage:rotatedImage];
}
This will only rotate once. To have rotateImage work repeatedly, read the image's existing imageOrientation property and move on to the next value in clockwise or anticlockwise order, as sketched below.
If the image is not square, you may also need to resize baseImg to reflect its new aspect ratio.
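A minimal sketch of that repeated rotation, as a replacement rotateImage that cycles clockwise. It assumes baseImg is the UIImageView from above and only handles the four upright orientations:

- (void)rotateImage
{
    UIImage *source = [baseImg image];
    UIImageOrientation next;
    switch (source.imageOrientation) {
        case UIImageOrientationUp:    next = UIImageOrientationRight; break; // 0   -> 90
        case UIImageOrientationRight: next = UIImageOrientationDown;  break; // 90  -> 180
        case UIImageOrientationDown:  next = UIImageOrientationLeft;  break; // 180 -> 270
        default:                      next = UIImageOrientationUp;    break; // 270 -> 0
    }
    UIImage *rotated = [UIImage imageWithCGImage:[source CGImage]
                                           scale:source.scale
                                     orientation:next];
    [baseImg setImage:rotated];
}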
To create a UIView subclass, have it store a CALayer as a sublayer of the view's layer. Store the image in the sublayer and transform the sublayer at will. This is faster, and allows arbitrary rotation, but you need to calculate your own scaling to prevent the rotated image from going outside the view bounds.
To simply rotate the image your code is fine, but to combine zooming in and out with rotation you need to retain the view's transforms. Here is a sample project that may help:
https://github.com/elc/iCodeBlogDemoPhotoBoard

Drawing on top of UIImageView to make the image transparent

I am working on an iPhone app in which I need to make a portion of an image transparent by setting its alpha level to 0 as the user moves a finger around on the image. If you happen to know the App Store application iSteam: the user should be able to move a finger around on a top image, revealing the background image underneath.
Currently I am using two UIImageViews: one that holds the background image and another, on top of it, that holds a darker image. The user should be able to draw random curves on this darker image, which makes that part of the background image show through. I am not able to figure out how to make the top image, held by the topmost of the two UIImageViews, transparent.
Any idea on this? Also, what should I use for this: Quartz or OpenGL? I am a newbie to iPhone app development and have absolutely no idea about these APIs, so some guidance from the experts would really help me get ahead with iPhone SDK development.
The UIImageView has a layer, which you can refer to as its layer and talk to once you've linked your project against QuartzCore. As the user moves a finger, clip a clear shape in an opaque-color-filled graphics context the same size as the UIImageView, turn that into a CGImage, and set it as the contents of a CALayer (again, the same size as the UIImageView). Then set that layer as the UIImageView's layer.mask. Wherever the mask is clear, it punches a transparent hole in the layer, which means the view, which means the image the UIImageView is showing. (If that doesn't work because the UIImageView doesn't like you interfering with its layer, you can use a superview of the UIImageView instead.)
EDIT (next day) - Here's sample code for a layer's delegate that punches a circular hole in the center:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)c {
    CGRect r = CGContextGetClipBoundingBox(c);
    CGRect r2 = CGRectInset(r, r.size.width/2.0 - 10, r.size.height/2.0 - 10);
    UIImage *maskim;
    {
        // Build the mask image: opaque (black) everywhere except a clear circle
        // in the middle, thanks to the even-odd clip of rect + ellipse.
        UIGraphicsBeginImageContextWithOptions(r.size, NO, 0);
        CGContextRef cc = UIGraphicsGetCurrentContext();
        CGContextAddEllipseInRect(cc, r2);
        CGContextAddRect(cc, r);
        CGContextEOClip(cc);
        CGContextSetFillColorWithColor(cc, [UIColor blackColor].CGColor);
        CGContextFillRect(cc, r);
        maskim = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    // Use that image as the layer's mask: clear areas punch holes in the layer.
    CALayer *mask = [CALayer layer];
    mask.frame = r;
    mask.contents = (id)maskim.CGImage;
    layer.mask = mask;
}
So, if that layer is a view's layer, and if the UIImageView is that view's subview, a hole is punched in the UIImageView.
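To drive this from the user's finger instead of a fixed circle, one rough approach is to rebuild the mask image from a stroked path on every move. The names topImageView and scratchPath are placeholders; scratchPath is a UIBezierPath property you start with moveToPoint: in touchesBegan:withEvent:. Rebuilding the mask on every touchesMoved: is fine for a sketch, though a real app may want to throttle it:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self.topImageView];
    [self.scratchPath addLineToPoint:p];

    CGRect bounds = self.topImageView.bounds;
    UIGraphicsBeginImageContextWithOptions(bounds.size, NO, 0);
    CGContextRef c = UIGraphicsGetCurrentContext();

    // Opaque (black) everywhere, then clear along the finger's path.
    CGContextSetFillColorWithColor(c, [UIColor blackColor].CGColor);
    CGContextFillRect(c, bounds);
    CGContextSetBlendMode(c, kCGBlendModeClear);
    CGContextSetLineWidth(c, 40);
    CGContextSetLineCap(c, kCGLineCapRound);
    CGContextSetLineJoin(c, kCGLineJoinRound);
    CGContextAddPath(c, self.scratchPath.CGPath);
    CGContextStrokePath(c);

    UIImage *maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clear areas of the mask punch holes in the dark top image, exactly as above.
    CALayer *mask = [CALayer layer];
    mask.frame = bounds;
    mask.contents = (id)maskImage.CGImage;
    self.topImageView.layer.mask = mask;
}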

How do I clip or change alpha of an image (pixels) in Quartz?

I'm working on an iPhone app where there are two image views, and when you touch the top one, wherever you tapped, the bottom one shows through instead.
Basically what I want to do is cut an ellipse/rounded rect out of an image. To do this I was thinking of either clipping the image or setting the alpha of the pixels in that rect to zero. I am new to Quartz 2D programming, so I am not sure how to do this.
Assuming I have:
UIImageView *topImage;
UIImageView *bottomImage;
How do I delete a CGRect/ellipse/rounded rect from these images?
This is kind of like those lottery tickets that you have to scratch off to reveal if you won.
I would generally try to make a mask from a path (here containing a rounded rectangle) and then mask the image with it, as demonstrated in the Apple docs. One of the benefits of this is that for hit testing all you need to do is call CGPathContainsPoint with the point that was touched (i.e. it tests whether the touch was in the visible area of the image).
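A rough sketch of that idea, cutting a rounded rect out of the top image so the bottom one shows through; the method name, frame, and radius are just illustrative:

- (UIImage *)image:(UIImage *)image withHoleInRect:(CGRect)frame radius:(CGFloat)radius
{
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Even-odd clip: everything except the rounded rect stays drawable,
    // so the rect itself ends up fully transparent.
    UIBezierPath *hole = [UIBezierPath bezierPathWithRoundedRect:frame cornerRadius:radius];
    CGContextAddRect(ctx, CGRectMake(0, 0, image.size.width, image.size.height));
    CGContextAddPath(ctx, hole.CGPath);
    CGContextEOClip(ctx);

    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

For hit testing, calling CGPathContainsPoint with the same rounded-rect path then tells you whether a touch landed inside the cut-out region.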
I tried this code:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect frame = CGRectMake(100, 100, 100, 100);
CGPathRef roundedRectPath = [self newPathForRoundedRect:frame radius:5];
CGContextAddPath(ctx, roundedRectPath);
CGContextClip (ctx);
CGPathRelease(roundedRectPath);
(Together with the rounded rect path function you sent)
This is on a white view, and beneath the view there is a gray window, so I thought this would just show gray instead of white inside the CGRect frame, but it didn't do anything...