Best practice to draw a clear image in UIScrollView - objective-c

I have a UIScrollView and I'm displaying an image in it this way:
- (UIImage *)pageControlImageWithIndex:(NSInteger)index {
    NSString *imagePath = [NSHomeDirectory() stringByAppendingFormat:@"/Documents/t%d.jpg", index];
    NSFileHandle *fileHandle = [NSFileHandle fileHandleForReadingAtPath:imagePath];
    return [UIImage imageWithData:[fileHandle readDataToEndOfFile]];
}
UIImage *image = [self pageControlImageWithIndex:pageNumber];
UIGraphicsBeginImageContext(CGSizeMake(320.0, 164));
[image drawInRect:CGRectMake(0, 0, 320.0, 164.0)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.view.backgroundColor = [UIColor colorWithPatternImage:newImage];
but the image appears with some blur,
while displaying the same image this way in another view (inside a UIImageView)
NSFileHandle *fileHandle = [NSFileHandle fileHandleForReadingAtPath:[NSHomeDirectory() stringByAppendingFormat:@"/Documents/t%d.jpg", topicNum]];
[topicPicImageView setImage:[UIImage imageWithData:[fileHandle readDataToEndOfFile]]];
produces the quality I want.
How can I get the same quality for the image in the UIScrollView?
The image dimensions are 640 * 360.
Both image containers are 320 * 164.

UIGraphicsBeginImageContext will create a context with a scale of 1.0, or non-retina. But you most likely want to create a context that matches the scale of your device. Use the new UIGraphicsBeginImageContextWithOptions(CGSize size, BOOL opaque, CGFloat scale) function.
Replace
UIGraphicsBeginImageContext(CGSizeMake(320.0, 164));
with
UIGraphicsBeginImageContextWithOptions(CGSizeMake(320.0, 164), YES, 0.0);
Passing 0.0 as the scale parameter means "use the device's main screen scale". If your image has transparency, you want to pass NO as the second (opaque) parameter.
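Putting it together with the snippet from the question, the drawing code would look something like this (a sketch; pageNumber and pageControlImageWithIndex: come from the question):
UIImage *image = [self pageControlImageWithIndex:pageNumber];
// Passing 0.0 for scale makes the bitmap match the main screen's scale,
// so a 320x164-point context is 640x328 pixels on a retina device.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(320.0, 164.0), YES, 0.0);
[image drawInRect:CGRectMake(0, 0, 320.0, 164.0)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.view.backgroundColor = [UIColor colorWithPatternImage:newImage];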

Showing only a portion of the original image in a UIImageView

How can I show only a portion of the original image in a UIImageView?
This question may be familiar and old, but I'm asking again because I could not find a workable idea in those answers.
Many said to set image.contentMode = UIViewContentModeCenter; (but it's not working).
I need roughly a rectangle containing the center of the original image. How do I get this?
I can make this work when I display a static image in my app and set the content mode of the UIImageView to Aspect Fill.
But this doesn't work when I display an image from a URL using NSData.
Adding my code:
NSString *weburl = @"http://topnews.in/files/Sachin-Tendulkar_15.jpg";
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(10, 50, 108, 86)];
NSURL *url = [NSURL URLWithString:weburl];
NSData *data = [NSData dataWithContentsOfURL:url];
imageView.image = [UIImage imageWithData:data];
[self.view addSubview:imageView];
If you added the UIImageView from a XIB, you can find a "Mode" property there and set the different modes from Interface Builder. Or programmatically, you can set the mode with:
imageView.contentMode = UIViewContentModeCenter;
imageView.clipsToBounds = YES;
Try this:
self.imageView.layer.contentsRect = CGRectMake(0.25, 0.25, 0.5, 0.5);
self.imageView will display the middle part of the image. You can calculate the required CGRect values yourself.
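contentsRect is in unit coordinates (fractions of the image from 0.0 to 1.0), so a crop can be converted like this (a sketch; cropRect is a hypothetical value):
UIImage *image = self.imageView.image;
CGRect cropRect = CGRectMake(50, 40, 108, 86); // hypothetical crop, in the image's coordinates
self.imageView.layer.contentsRect = CGRectMake(cropRect.origin.x / image.size.width,
                                               cropRect.origin.y / image.size.height,
                                               cropRect.size.width / image.size.width,
                                               cropRect.size.height / image.size.height);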
For this kind of output, you need to crop the image to your requirements.
The cropping code below can be used:
- (UIImage *)cropImageFromTop:(UIImage *)image
{
    // Cut 12 pixels off the top (the rect is in the CGImage's pixel coordinates)
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], CGRectMake(0, 12, image.size.width, image.size.height - 12));
    UIImage *cropimage = [[[UIImage alloc] initWithCGImage:imageRef] autorelease];
    CGImageRelease(imageRef);
    return cropimage;
}
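Note that CGImageCreateWithImageInRect takes the rect in the CGImage's pixel coordinates, which only matches image.size for 1x images. A hypothetical call site:
UIImage *cropped = [self cropImageFromTop:imageView.image];
imageView.image = cropped;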
You can scale the image, then add it to the UIImageView and center it. The code to scale the image is:
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Then set the center of that image view and add it to the view. I hope this works.
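Hypothetical usage, scaling to the container size from the question and centering the image view in its superview (originalImage is an assumed variable):
UIImage *scaled = [self imageWithImage:originalImage scaledToSize:CGSizeMake(108, 86)];
UIImageView *imageView = [[UIImageView alloc] initWithImage:scaled];
imageView.center = self.view.center; // center the image view in its superview
[self.view addSubview:imageView];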

How to crop an image into polygon shape in iOS?

I want to crop an image which is in a UIImageView into an arbitrary shape.
You set the clipping path and voila:
// Load image thumbnail
NSString *imageName = [self.picArray objectAtIndex:indexPath.row];
UIImage *image = [UIImage imageNamed:imageName];
CGSize imageSize = image.size;
CGRect imageRect = CGRectMake(0, 0, imageSize.width, imageSize.height);
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
// Create the clipping path and add it
UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:imageRect cornerRadius:5.0f];
[path addClip];
[image drawInRect:imageRect];
UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This code loads an image and creates a path by rounding the rectangle; the result is that the final image has been clipped, i.e. given rounded corners. roundedImage is the result.
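The same draw-and-clip pattern works for an arbitrary polygon: build the UIBezierPath from line segments instead of a rounded rect. A sketch using a hypothetical triangle (image, imageSize, and imageRect are as above):
UIBezierPath *polygon = [UIBezierPath bezierPath];
[polygon moveToPoint:CGPointMake(imageSize.width / 2.0, 0)];
[polygon addLineToPoint:CGPointMake(imageSize.width, imageSize.height)];
[polygon addLineToPoint:CGPointMake(0, imageSize.height)];
[polygon closePath];
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
[polygon addClip];
[image drawInRect:imageRect];
UIImage *polygonImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();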
You can use a CGImageMask.
A sample exists in the class QuartzMaskingView of Apple's QuartzDemo.

Get the rotated image from a UIImageView

I created an application in which I can rotate, resize, and translate an image using gestures. Then I need to get the image from the UIImageView. I found this piece of code somewhere on Stack Overflow. Although a similar question is answered here, it requires the angle as input. The same person posted a better solution elsewhere, which I'm using. But it has a problem: often it returns a blank image, or a truncated image (often cut off from the top). So there is something wrong with the code and it requires some changes. My problem is that I'm new to Core Graphics and badly stuck on this.
UIGraphicsBeginImageContext(imgView.image.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform transform = imgView.transform;
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0, 0, imgView.image.size.width, imgView.image.size.height), imgView.image.CGImage);
UIImage *newRotatedImage = [UIImage imageWithCGImage: CGBitmapContextCreateImage(context)];
UIGraphicsEndImageContext();
EDIT 1.1
Thanks for the sample code, but again it has a problem. Let me explain in more detail: I'm using gestures for scaling, translating, and resizing the image via the image view, so all this data is saved in the transform property of the image view. I found another method in Core Image, so I changed my code to:
CGRect bounds = CGRectMake(0, 0, imgTop.size.width, imgTop.size.height);
CIImage *ciImage = [[CIImage alloc] initWithCGImage:imageView.image.CGImage options:nil];
CGAffineTransform transform = imgView.transform;
ciImage = [ciImage imageByApplyingTransform:transform];
return [UIImage imageWithCIImage:ciImage] ;
Now I'm getting a squeezed, wrongly sized, mirrored image. Sorry to disturb you again. Can you guide me on how to get the proper image using the image view's transform in Core Image?
CIImage *ciImage = [[CIImage alloc] initWithCGImage:fximage.CGImage options:nil];
CGAffineTransform transform = fxobj.transform;
float angle = atan2(transform.b, transform.a);
transform = CGAffineTransformRotate(transform, - 2 * angle);
ciImage = [ciImage imageByApplyingTransform:transform];
UIImage *screenfxImage = [UIImage imageWithCIImage:ciImage];
Do remember to add the line transform = CGAffineTransformRotate(transform, -2 * angle); because the rotation direction is opposite.
I created an Objective-C class just for this sort of thing. You can check it out on GitHub: ANImageBitmapRep. Here's how you would do rotation:
ANImageBitmapRep * ibr = [myImage image];
[ibr rotate:anAngle];
UIImage * rotated = [ibr image];
Note that here, anAngle is in radians.
Here is the link to the documentation:
http://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html
Sample code to rotate an image:
CIImage *inputImage = [[CIImage alloc] initWithImage:currentImage];
CIFilter *controlsFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[controlsFilter setValue:inputImage forKey:kCIInputImageKey];
[controlsFilter setValue:[NSNumber numberWithFloat:slider.value] forKey:@"inputAngle"];
CIImage *displayImage = controlsFilter.outputImage;
UIImage *finalImage = [UIImage imageWithCIImage:displayImage];
CIContext *context = [CIContext contextWithOptions:nil];
if (displayImage == nil || finalImage == nil) {
    // We did not get an output image. Let's display the original image itself.
    photoEditView.image = currentImage;
}
else {
    CGImageRef imageRef = [context createCGImage:displayImage fromRect:displayImage.extent];
    photoEditView.image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
}
context = nil;
[inputImage release];
I created a sample app to do this (minus the scaling part) in Objective-C. If anybody is interested, you can download it here: https://github.com/gene-code/coregraphics-drawing/tree/master/coregraphics-drawing/test

How to darken a PNG?

I should point out that drawing and rendering in Objective-C is my weakness. Now, here's my problem.
I want to add a 'Day/Night' feature to my game. It has lots of objects on a map. Every object is a UIView containing some data in variables and some UIImageViews: the sprite, and, for some of the objects, a hidden ring (used to show selection).
I want to be able to darken the content of the UIView, but I can't figure out how. The sprite is a PNG with transparency. So far I've only managed to add a black rectangle behind the sprite, using this:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextSetRGBFillColor(ctx, 0, 0, 0, 0.5);
CGContextFillRect(ctx, rect);
CGContextRestoreGState(ctx);
From what I've read, this should be done in the drawRect: method. Help, please!
If you want to understand better my scenario, the App where I'm trying to do this is called 'Kipos', at the App Store.
Floris497's approach is a good strategy for blanket darkening of more than one image at a time (probably more what you're after in this case). But here's a general-purpose method for generating darker UIImages (while respecting alpha pixels):
+ (UIImage *)darkenImage:(UIImage *)image toLevel:(CGFloat)level
{
    // Create a temporary view to act as a darkening layer
    CGRect frame = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
    UIView *tempView = [[UIView alloc] initWithFrame:frame];
    tempView.backgroundColor = [UIColor blackColor];
    tempView.alpha = level;

    // Draw the image into a new graphics context
    UIGraphicsBeginImageContext(frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [image drawInRect:frame];

    // Flip the context vertically so we can draw the dark layer via a mask that
    // aligns with the image's alpha pixels (Quartz uses flipped coordinates)
    CGContextTranslateCTM(context, 0, frame.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextClipToMask(context, frame, image.CGImage);
    [tempView.layer renderInContext:context];

    // Produce a new image from this context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *toReturn = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    UIGraphicsEndImageContext();
    [tempView release];
    return toReturn;
}
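Hypothetical usage, assuming the method is declared on a helper class called ImageUtils (spriteImage and spriteImageView are assumed names):
UIImage *darkened = [ImageUtils darkenImage:spriteImage toLevel:0.5];
spriteImageView.image = darkened; // 0.5 = 50% black overlay on the opaque pixels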
The best way would be to add a Core Image filter to the layer to darken it. You could use CIExposureAdjust.
CIFilter *filter = [CIFilter filterWithName:@"CIExposureAdjust"];
[filter setDefaults];
[filter setValue:[NSNumber numberWithFloat:-2.0] forKey:@"inputEV"];
view.layer.filters = [NSArray arrayWithObject:filter];
Here is how to do it:
// inputEV controls the exposure; the lower, the darker (e.g. -1 -> dark)
- (UIImage *)adjustImage:(UIImage *)image exposure:(float)inputEV
{
    CIImage *inputImage = [[CIImage alloc] initWithCGImage:[image CGImage]];
    UIImageOrientation originalOrientation = image.imageOrientation;
    CIFilter *adjustmentFilter = [CIFilter filterWithName:@"CIExposureAdjust"];
    [adjustmentFilter setDefaults];
    [adjustmentFilter setValue:inputImage forKey:@"inputImage"];
    [adjustmentFilter setValue:[NSNumber numberWithFloat:inputEV] forKey:@"inputEV"];
    CIImage *outputImage = [adjustmentFilter valueForKey:@"outputImage"];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef imgRef = [context createCGImage:outputImage fromRect:outputImage.extent];
    UIImage *img = [[UIImage alloc] initWithCGImage:imgRef scale:1.0 orientation:originalOrientation];
    CGImageRelease(imgRef);
    return img;
}
Remember to import:
#import <QuartzCore/QuartzCore.h>
And add CoreGraphics and CoreImage frameworks to your project.
Tested on iPhone 3GS with iOS 5.1
CIFilter is available starting from iOS 5.0.
Draw a UIView (a black one) over it and set "User Interaction Enabled" to NO.
Hope you can do something with this.
Then use this to make it dark:
[UIView animateWithDuration:2
                 animations:^{ nightView.alpha = 0.4; }
                 completion:^(BOOL finished){ NSLog(@"done making it dark"); }];
To make it light:
[UIView animateWithDuration:2
                 animations:^{ nightView.alpha = 0.0; }
                 completion:^(BOOL finished){ NSLog(@"done making it light again"); }];

Saving UIImage with UIImagePNGRepresentation not preserving my layer modification

I'm loading an image from NSData. The original image was created as a JPG. When I attempt to set a corner radius and save to disk, I'm losing the changes I made. I'm saving to disk as a PNG, as I presumed an alpha channel would be created on the layers, which needs to be preserved.
UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageWithData:data]];
[imageView.layer setMasksToBounds:YES];
[imageView.layer setCornerRadius:10.0f];
data = UIImagePNGRepresentation(imageView.image);
// save data to file with .png extension
The issue is that when I reload the image from the file system back into a UIImage, the corner radius is not visible.
A CGContext can take a "screenshot":
UIGraphicsBeginImageContext(self.frame.size);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Now I use the "screenshot"
NSString *path = [NSHomeDirectory() stringByAppendingString:@"/image.jpg"];
if ([UIImageJPEGRepresentation(image, 1) writeToFile:path atomically:YES]) {
    NSLog(@"save ok");
}
else {
    NSLog(@"save failed");
}
You're setting the corner radius on the image view, not the image. The PNG representation is from the image, not the image view, so the corner radius is lost.
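A minimal sketch of the fix, combining that observation with the screenshot approach above: render the image view's layer (which applies the corner radius) into a context, and save the rendered image instead (imageView is from the question):
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 0.0);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *rounded = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
data = UIImagePNGRepresentation(rounded); // alpha in the corners is preserved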