I have the following code in viewWillAppear in a modally presented UIViewController.
I am including UIImage+ImageEffects.h to blur the background image in this example.
- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];

    // grab an image of our parent view
    UIView *parentView = self.presentingViewController.view;
    UIImage *parentViewImage = [self takeSnapshotOfView:parentView];

    // BLUR THE IMAGE
    UIImage *blurredImage = [self blurWithImageEffects:parentViewImage];

    // insert an image view with a picture of the parent view at the back of our view's subview stack...
    UIImageView *imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    imageView.image = blurredImage;
    [self.view insertSubview:imageView atIndex:0];
}
[EDIT] Adding Blur Methods
- (UIImage *)takeSnapshotOfView:(UIView *)view
{
    // render at a reduced size so the blur has fewer pixels to process
    CGFloat reductionFactor = 1.5;
    CGSize reducedSize = CGSizeMake(view.frame.size.width / reductionFactor,
                                    view.frame.size.height / reductionFactor);
    UIGraphicsBeginImageContext(reducedSize);
    [view drawViewHierarchyInRect:CGRectMake(0, 0, reducedSize.width, reducedSize.height)
               afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
- (UIImage *)blurWithImageEffects:(UIImage *)image
{
    return [image applyBlurWithRadius:10
                            tintColor:[UIColor colorWithWhite:1 alpha:0.2]
                saturationDeltaFactor:1.5
                            maskImage:nil];
}
The code works fine and the background shows up blurred, but it is noticeably slow on an iPad 3 with iOS 8. When the button that displays this view controller is tapped, there is a pause before the view controller slides up from the bottom. If I remove the blur, the view controller slides up faster.
I tried putting the code in viewDidAppear, but then a white background is visible for a few seconds before the blur appears; on the other hand, with viewDidAppear the view controller slides up immediately.
Reducing the applyBlurWithRadius value does not seem to reduce the time it takes to apply the blur.
Is there any way I can make it run faster?
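One mitigation worth sketching here (an assumption, not from the answers below): take the snapshot in viewWillAppear as before, but show the sharp snapshot immediately and move the expensive blur to a background queue, cross-fading once it finishes. This assumes the UIImage+ImageEffects category is safe to call off the main thread (it only uses Core Graphics and vImage):

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];

    // the snapshot itself must happen on the main thread
    UIView *parentView = self.presentingViewController.view;
    UIImage *parentViewImage = [self takeSnapshotOfView:parentView];

    // show the un-blurred snapshot right away so there is no white flash
    UIImageView *imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    imageView.image = parentViewImage;
    [self.view insertSubview:imageView atIndex:0];

    // blur off the main thread, then fade the blurred version in
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *blurredImage = [self blurWithImageEffects:parentViewImage];
        dispatch_async(dispatch_get_main_queue(), ^{
            [UIView transitionWithView:imageView
                              duration:0.25
                               options:UIViewAnimationOptionTransitionCrossDissolve
                            animations:^{ imageView.image = blurredImage; }
                            completion:nil];
        });
    });
}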
GPUImage might solve your issue. It's fast and doesn't put too much load on the processor.
Link to the framework: https://github.com/BradLarson/GPUImage
Links on using GPUImage for blurring:
http://blog.bubbly.net/tag/gpu-image/
http://www.raywenderlich.com/60968/ios-7-blur-effects-gpuimage
However, the reference below should give you the basic idea.
First, prepare the view that is to be blurred:
- (NSData *)PhotoForBlurring:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 1.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSData *data = UIImagePNGRepresentation(image);
    // write to a writable directory; a bare file name will not work on iOS
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"aName.png"];
    [data writeToFile:path atomically:YES];
    return data;
}
- (void)sharingScrapPage
{
    GPUImageiOSBlurFilter *blur = [[GPUImageiOSBlurFilter alloc] init];
    blur.blurRadiusInPixels = 2.0;
    blur.downsampling = 6.0;

    [blurview removeFromSuperview];
    blurview = [[UIImageView alloc] initWithFrame:getRectDisplay(0, 0, 480, 320)]; // landscape iPhone 4S

    UIImage *imageForBlurring = [UIImage imageWithData:[self PhotoForBlurring:self.view]];
    UIImage *blurImage = [blur imageByFilteringImage:imageForBlurring];
    blurview.image = blurImage;
    [self.view addSubview:blurview];

    [progressHUD removeFromSuperview];
    progressHUD = [[ProgressHUD alloc] initWithFrame:getRectDisplay(190, 110, 100, 90)];
    [blurview addSubview:progressHUD];
}
I want to convert a UIView to a UIImage. The view is in the background, and when I checked the view hierarchy it shows up as white space; it probably never gets loaded.
What I want to do is load the view into memory and convert it into an image.
You just need to initialize your view with the required frame and then pass it to the code below as targetView. Initialize the view before you call the screen-capture method, like this:
UIView *targetView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 568)]; // initialization
UIImage *capturedImage = [self captureScreen:targetView]; // call the capture method
- (UIImage *)captureScreen:(UIView *)targetView
{
    UIView *captureView = targetView;

    // render the view's layer into a bitmap context
    UIGraphicsBeginImageContextWithOptions(captureView.bounds.size, captureView.opaque, 0.0);
    [captureView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // redraw the screenshot into a second, cropped context
    CGRect cropRect = CGRectMake(0, 0, captureView.frame.size.width, captureView.frame.size.height);
    UIGraphicsBeginImageContextWithOptions(cropRect.size, captureView.opaque, 1.2f);
    [screenshot drawInRect:cropRect];
    UIImage *customScreenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return customScreenShot;
}
Use this:
- (UIImage *)snapShot:(UIView *)myView
{
    UIGraphicsBeginImageContext(myView.frame.size);
    [myView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
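For example (a hypothetical call site):

UIImage *viewImage = [self snapShot:self.view];
UIImageView *snapshotView = [[UIImageView alloc] initWithImage:viewImage];
[self.view addSubview:snapshotView]; // display the captured snapshot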
I am working on an iOS app. I need to crop an image that is generated from a PDF.
Sometimes the image resolution can be very large.
I use the following code to generate the cropped image. My problem is that memory usage keeps growing and is never released.
- (UIImage *)croppedImageWithFrame:(CGRect)frame angle:(NSInteger)angle
{
    UIImage *croppedImage = nil;
    CGPoint drawPoint = CGPointZero;
    UIGraphicsBeginImageContextWithOptions(frame.size, YES, self.scale);
    {
        CGContextRef context = UIGraphicsGetCurrentContext();

        // To conserve memory in not needing to completely re-render the image re-rotated,
        // map the image to a view and then use Core Animation to manipulate its rotation
        if (angle != 0) {
            UIImageView *imageView = [[UIImageView alloc] initWithImage:self];
            imageView.layer.minificationFilter = kCAFilterNearest;
            imageView.layer.magnificationFilter = kCAFilterNearest;
            imageView.transform = CGAffineTransformRotate(CGAffineTransformIdentity, angle * (M_PI / 180.0f));
            CGRect rotatedRect = CGRectApplyAffineTransform(imageView.bounds, imageView.transform);
            UIView *containerView = [[UIView alloc] initWithFrame:(CGRect){CGPointZero, rotatedRect.size}];
            [containerView addSubview:imageView];
            imageView.center = containerView.center;
            CGContextTranslateCTM(context, -frame.origin.x, -frame.origin.y);
            [containerView.layer renderInContext:context];
        }
        else {
            CGContextTranslateCTM(context, -frame.origin.x, -frame.origin.y);
            [self drawAtPoint:drawPoint];
        }
        croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    }
    UIGraphicsEndImageContext();
    return croppedImage;
}
When I debug, it burns 100 MB on this line:
UIGraphicsBeginImageContextWithOptions(frame.size, YES, self.scale);
Then, when it runs into the following line, it burns another 150 MB:
[self drawAtPoint:drawPoint];
When it reaches this line, it releases 100 MB:
UIGraphicsEndImageContext();
After the method finishes, the remaining 150 MB is never released.
I thought UIGraphicsEndImageContext() would release all 250 MB. Why doesn't it?
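One likely explanation (an assumption; this thread doesn't include a confirmed answer): drawAtPoint: and UIGraphicsGetImageFromCurrentImageContext() return autoreleased objects, so their backing memory is reclaimed only when the surrounding autorelease pool drains, not when the method returns. Wrapping each crop in an explicit pool usually flattens the peak:

// Sketch: drain autoreleased bitmaps as soon as each crop is done.
// `sourceImage`, `cropRect`, and `outputPath` are hypothetical names.
@autoreleasepool {
    UIImage *cropped = [sourceImage croppedImageWithFrame:cropRect angle:0];
    [UIImagePNGRepresentation(cropped) writeToFile:outputPath atomically:YES];
} // everything autoreleased above, including the intermediate bitmaps, is released here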
I have 2 images of the same size, and I need to make a picture in which the 2nd is layered above the 1st.
I have a UIImage class extension with the following method:
+ (UIImage *)imageFrom2Images:(UIImage *)img1 with:(UIImage *)img2
{
    UIGraphicsBeginImageContext(img1.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIGraphicsPushContext(context);
    [img1 drawAtPoint:CGPointMake(0, 0)];
    [img2 drawAtPoint:CGPointMake(0, 0)];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsPopContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}
I've tried to save the resulting image to the Photo Album, but it raises an error and doesn't work, so it seems this method is wrong.

UIImage *t1 = [self.mw.imageFIFO lastObject];
UIImage *test = [UIImage imageFrom2Images:self.imageView.image with:t1];
UIImageWriteToSavedPhotosAlbum(test, self,
                               @selector(image:didFinishSavingWithError:contextInfo:), nil);
The error message I am getting is:
Error Domain=ALAssetsLibraryErrorDomain Code=-3304 "Failed to encode image for saved photos."
UserInfo=0x9398960 {NSUnderlyingError=0x935e760 "Failed to encode image for saved photos.",
NSLocalizedDescription=Failed to encode image for saved photos.}
You should never need to get the current graphics context and then push it - you're just duplicating the top of the context stack. It would be nice if it still worked, but it doesn't. Remove the calls to UIGraphicsPushContext and UIGraphicsPopContext, and it works as intended.
+ (UIImage *)imageFrom2Images:(UIImage *)img1 with:(UIImage *)img2
{
    UIGraphicsBeginImageContext(img1.size);
    [img1 drawAtPoint:CGPointMake(0, 0)];
    [img2 drawAtPoint:CGPointMake(0, 0)];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}
To detect these problems in future, check the output from imageFrom2Images. As originally implemented, it returned nil, so it's not surprising that the following calls didn't know what to do. (If it had returned an image object, the next step would be to display it inside a UIImageView, to make sure it's the correct image.)
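For instance, a quick sanity check along those lines (a sketch; names taken from the question):

UIImage *test = [UIImage imageFrom2Images:self.imageView.image with:t1];
NSAssert(test != nil, @"imageFrom2Images returned nil");
UIImageView *debugView = [[UIImageView alloc] initWithImage:test];
[self.view addSubview:debugView]; // eyeball the composite before saving it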
This method should do it:
-(UIImage *)drawFirstImage:(UIImage*)firstImage afterSecondImage:(UIImage *)secondImage
{
float finalWidth=MAX(firstImage.size.width,secondImage.size.width);
float finalHeight=firstImage.size.height + secondImage.size.height;
CGSize finalSize=CGSizeMake(finalWidth, finalHeight);
UIGraphicsBeginImageContext(finalSize);
[firstImage drawInRect:CGRectMake(0, 0, firstImage.size.width, firstImage.size.height)];
[secondImage drawInRect:CGRectMake(0, firstImage.size.height, secondImage.size.width, secondImage.size.height)];
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultImage;
}
You can use it as:
UIImage *resultImage = [self drawFirstImage:image1 afterSecondImage:image2];
Here is a category implementation:
UIImage+MyExtensions.h
#import <UIKit/UIKit.h>

@interface UIImage (MyExtensions)
- (UIImage *)attachImageBelow:(UIImage *)secondImage;
@end
UIImage+MyExtensions.m
#import "UIImage+MyExtensions.h"

@implementation UIImage (MyExtensions)

- (UIImage *)attachImageBelow:(UIImage *)secondImage
{
    float finalWidth = MAX(self.size.width, secondImage.size.width);
    float finalHeight = self.size.height + secondImage.size.height;
    CGSize finalSize = CGSizeMake(finalWidth, finalHeight);

    UIGraphicsBeginImageContext(finalSize);
    [self drawInRect:CGRectMake(0, 0, self.size.width, self.size.height)];
    [secondImage drawInRect:CGRectMake(0, self.size.height, secondImage.size.width, secondImage.size.height)];
    UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return resultImage;
}

@end
You can use it like this:
#import "UIImage+MyExtensions.h"
UIImage *resultImage = [image1 attachImageBelow:image2];
My iPad app has a navigation view in which I show screenshots of the different pages, and because I want to show more than one screenshot at once I scale the container down to around 24% of the original screenshot size (1024x768).
- (void)loadView
{
    // get landscape screen frame
    CGRect screenFrame = [UIScreen mainScreen].bounds;
    CGRect landscapeFrame = CGRectMake(0, 0, screenFrame.size.height, screenFrame.size.width);
    UIView *view = [[UIView alloc] initWithFrame:landscapeFrame];
    view.backgroundColor = [UIColor grayColor];
    self.view = view;

    // add container view for 2 images
    CGRect startFrame = CGRectMake(-landscapeFrame.size.width/2, 0, landscapeFrame.size.width*2, landscapeFrame.size.height);
    container = [[UIView alloc] initWithFrame:startFrame];
    container.backgroundColor = [UIColor whiteColor];

    // add image 1 (1024x768)
    UIImage *img1 = [UIImage imageNamed:@"01.jpeg"];
    UIImageView *img1View = [[UIImageView alloc] initWithImage:img1];
    [container addSubview:img1View];

    // add image 2 (1024x768)
    UIImage *img2 = [UIImage imageNamed:@"02.jpeg"];
    UIImageView *img2View = [[UIImageView alloc] initWithImage:img2];

    // move img2 to the right of img1
    CGRect newFrame = img2View.frame;
    newFrame.origin.x = 1024.0;
    img2View.frame = newFrame;
    [container addSubview:img2View];

    // scale to 24%
    container.transform = CGAffineTransformMakeScale(0.24, 0.24);
    [self.view addSubview:container];
}
But when I scale images containing small text, it looks something like this:
I have to use the big screenshots because if a user taps an image it should scale to 100% and be crisp and clear.
Is there a way to scale the images smoothly (on the fly) without ruining performance?
It would be enough to have two versions: the full-pixel one and a second one for the 24% view.
The reason the scaled-down image looks crappy is that it's being scaled in OpenGL, which uses fast-but-low-quality linear interpolation.

As you probably know, UIView is built on top of CALayer, which is in turn a sort of wrapper for OpenGL textures. Because the contents of the layer reside in the video card, CALayer can do all of its magic on the GPU, independent of whether the CPU is busy loading a web site, blocked on disk access, or whatever. I mention this only because it's useful to pay attention to what's actually in the textures inside your layers.

In your case, the UIImageView's layer has the full 1024x768 bitmap image on its texture, and that isn't affected by the container's transform: the CALayer inside the UIImageView doesn't see that it's going to be (let's see..) 246x185 on-screen and re-scale its bitmap; it just lets OpenGL do its thing and scale down the bitmap every time it updates the display.
To get better scaling, we'll need to do it in CoreGraphics instead of OpenGL. Here's one way to do it:
- (UIImage *)scaleImage:(UIImage *)image by:(float)scale
{
    CGSize size = CGSizeMake(image.size.width * scale, image.size.height * scale);
    UIGraphicsBeginImageContextWithOptions(size, YES, 0.0);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageCopy;
}
- (void)loadView
{
    // get landscape screen frame
    CGRect screenFrame = [UIScreen mainScreen].bounds;
    CGRect landscapeFrame = CGRectMake(0, 0, screenFrame.size.height, screenFrame.size.width);
    UIView *view = [[UIView alloc] initWithFrame:landscapeFrame];
    view.backgroundColor = [UIColor grayColor];
    self.view = view;

    // add container view for 2 images
    CGRect startFrame = CGRectMake(-landscapeFrame.size.width/2, 0, landscapeFrame.size.width*2, landscapeFrame.size.height);
    container = [[UIView alloc] initWithFrame:startFrame];
    container.backgroundColor = [UIColor whiteColor];

    // add image 1 (1024x768)
    UIImage *img1 = [UIImage imageNamed:@"01.png"];
    img1View = [[TapImageView alloc] initWithFrame:CGRectMake(0, 0, 1024, 768)];
    img1View.userInteractionEnabled = YES; // important!
    img1View.image = [self scaleImage:img1 by:0.24];
    [container addSubview:img1View];

    // add image 2 (1024x768)
    UIImage *img2 = [UIImage imageNamed:@"02.png"];
    img2View = [[TapImageView alloc] initWithFrame:CGRectMake(1024, 0, 1024, 768)];
    img2View.userInteractionEnabled = YES;
    img2View.image = [self scaleImage:img2 by:0.24];
    [container addSubview:img2View];

    // scale to 24% and layout subviews
    zoomed = YES;
    container.transform = CGAffineTransformMakeScale(0.24, 0.24);
    [self.view addSubview:container];
}
- (void)viewTapped:(id)sender
{
    zoomed = !zoomed;
    [UIView animateWithDuration:0.5 animations:^
    {
        if ( zoomed )
        {
            container.transform = CGAffineTransformMakeScale(0.24, 0.24);
        }
        else
        {
            // swap in the full-resolution images before zooming up
            img1View.image = [UIImage imageNamed:@"01.png"];
            img2View.image = [UIImage imageNamed:@"02.png"];
            container.transform = CGAffineTransformMakeScale(1.0, 1.0);
        }
    }
    completion:^(BOOL finished)
    {
        if ( zoomed )
        {
            // swap back to the pre-scaled images once the zoom-out finishes
            UIImage *img1 = [UIImage imageNamed:@"01.png"];
            img1View.image = [self scaleImage:img1 by:0.24];
            UIImage *img2 = [UIImage imageNamed:@"02.png"];
            img2View.image = [self scaleImage:img2 by:0.24];
        }
    }];
}
And here's TapImageView, a UIImageView subclass that tells us when it's been tapped by sending an action up the responder chain:
@interface TapImageView : UIImageView
@end

@implementation TapImageView

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [[UIApplication sharedApplication] sendAction:@selector(viewTapped:) to:nil from:self forEvent:event];
}

@end
Instead of scaling the container and all of its subviews, create a UIImageView from the contents of the container and set its frame to 24% of the original size.
UIGraphicsBeginImageContext(container.bounds.size);
[container.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *containerImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

UIImageView *containerImageView = [[UIImageView alloc] initWithImage:containerImage];
CGRect containerFrame = startFrame;
containerFrame.size.width *= 0.24;
containerFrame.size.height *= 0.24;
containerImageView.frame = containerFrame;
[self.view addSubview:containerImageView];
I have been using this method to convert a UIView into a UIImage, i.e. a screen snapshot of a view:

@interface UIView (Extended)
- (UIImage *)imageByRenderingView;
@end

@implementation UIView (Extended)

- (UIImage *)imageByRenderingView
{
    UIGraphicsBeginImageContext(self.bounds.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}

@end
To use it, I do this -
UIImage *currImage = [self.view imageByRenderingView];
This gives an image of the entire UIView. Now I want 2 images: one of the top half of the UIView and one of the bottom half. How do I do that?
You can split your UIImage in two by using this code:
CGImageRef topOfImageCG =
    CGImageCreateWithImageInRect(currImage.CGImage,
                                 CGRectMake(0,
                                            0,
                                            currImage.size.width,
                                            currImage.size.height / 2.0));
UIImage *topOfImage = [UIImage imageWithCGImage:topOfImageCG];
CGImageRelease(topOfImageCG);
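The bottom half works the same way, with the crop rect shifted down by half the height (a sketch; note that CGImageCreateWithImageInRect works in pixel coordinates, so on Retina screens you may need to multiply the rect by currImage.scale):

CGRect bottomRect = CGRectMake(0,
                               currImage.size.height / 2.0,
                               currImage.size.width,
                               currImage.size.height / 2.0);
CGImageRef bottomOfImageCG = CGImageCreateWithImageInRect(currImage.CGImage, bottomRect);
UIImage *bottomOfImage = [UIImage imageWithCGImage:bottomOfImageCG];
CGImageRelease(bottomOfImageCG);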