I should point out that drawing and rendering in Objective-C is my weakness. Now, here's my problem.
I want to add a 'Day/Night' feature to my game. It has lots of objects on a map. Every object is a UIView containing some data in variables and some UIImageViews: the sprite, and some of the objects have a hidden ring (used to show selection).
I want to be able to darken the content of the UIView, but I can't figure out how. The sprite is a PNG with transparency. So far I've only managed to add a black rectangle behind the sprite, using this:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextSetRGBFillColor(ctx, 0, 0, 0, 0.5);
CGContextFillRect(ctx, rect);
CGContextRestoreGState(ctx);
As I've read, this should be done in the drawRect: method. Help, please!
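In case it helps, here is a minimal sketch of where that snippet lives; my object is a UIView subclass (note that drawRect: draws into the view's own layer, which sits behind its UIImageView subviews, which is why the rectangle ends up behind the sprite):
- (void)drawRect:(CGRect)rect
{
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextSetRGBFillColor(ctx, 0, 0, 0, 0.5); // 50%-opaque black
CGContextFillRect(ctx, rect);
CGContextRestoreGState(ctx);
}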
If you want to understand my scenario better, the app where I'm trying to do this is called 'Kipos', on the App Store.
Floris497's approach is a good strategy for blanket-darkening more than one image at a time (probably more what you're after in this case). But here's a general-purpose method to generate darker UIImages (while respecting alpha pixels):
+ (UIImage *)darkenImage:(UIImage *)image toLevel:(CGFloat)level
{
// Create a temporary view to act as a darkening layer
CGRect frame = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
UIView *tempView = [[UIView alloc] initWithFrame:frame];
tempView.backgroundColor = [UIColor blackColor];
tempView.alpha = level;
// Draw the image into a new graphics context
UIGraphicsBeginImageContext(frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[image drawInRect:frame];
// Flip the context vertically so we can draw the dark layer via a mask that
// aligns with the image's alpha pixels (Quartz uses flipped coordinates)
CGContextTranslateCTM(context, 0, frame.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextClipToMask(context, frame, image.CGImage);
[tempView.layer renderInContext:context];
// Produce a new image from this context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage *toReturn = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
UIGraphicsEndImageContext();
[tempView release];
return toReturn;
}
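A possible call site, assuming the method above is added in a UIImage category (the view name and the 0.5 level are illustrative):
// Swap the day sprite for a half-darkened one at nightfall
spriteImageView.image = [UIImage darkenImage:spriteImageView.image toLevel:0.5];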
The best way would be to add a Core Image filter to the layer to darken it. You could use CIExposureAdjust.
CIFilter *filter = [CIFilter filterWithName:@"CIExposureAdjust"];
[filter setDefaults];
[filter setValue:[NSNumber numberWithFloat:-2.0] forKey:@"inputEV"];
view.layer.filters = [NSArray arrayWithObject:filter];
Here is how to do it:
// inputEV controls the exposure; the lower, the darker (e.g. -1 -> dark)
- (UIImage *)adjustImage:(UIImage *)image exposure:(float)inputEV
{
CIImage *inputImage = [[CIImage alloc] initWithCGImage:[image CGImage]];
UIImageOrientation originalOrientation = image.imageOrientation;
CIFilter *adjustmentFilter = [CIFilter filterWithName:@"CIExposureAdjust"];
[adjustmentFilter setDefaults];
[adjustmentFilter setValue:inputImage forKey:@"inputImage"];
[adjustmentFilter setValue:[NSNumber numberWithFloat:inputEV] forKey:@"inputEV"]; // use the parameter rather than a hard-coded -1.0
CIImage *outputImage = [adjustmentFilter valueForKey:@"outputImage"];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef imgRef = [context createCGImage:outputImage fromRect:outputImage.extent];
UIImage *img = [[UIImage alloc] initWithCGImage:imgRef scale:1.0 orientation:originalOrientation];
CGImageRelease(imgRef);
return img;
}
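A possible call site (a sketch; spriteView and the -1.0 EV value are illustrative):
UIImage *nightSprite = [self adjustImage:spriteView.image exposure:-1.0f];
spriteView.image = nightSprite;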
Remember to import:
#import <QuartzCore/QuartzCore.h>
And add the CoreGraphics and CoreImage frameworks to your project.
Tested on iPhone 3GS with iOS 5.1
CIFilter is available starting from iOS 5.0.
Another option: draw a black UIView over the whole map and set its userInteractionEnabled to NO so touches pass through to the map.
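A minimal setup sketch, assuming nightView is an instance variable and the map fills self.view (the names are illustrative):
nightView = [[UIView alloc] initWithFrame:self.view.bounds];
nightView.backgroundColor = [UIColor blackColor];
nightView.alpha = 0.0; // fully transparent by day
nightView.userInteractionEnabled = NO; // let touches reach the map
[self.view addSubview:nightView];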
Then use this to make it dark:
[UIView animateWithDuration:2
animations:^{ nightView.alpha = 0.4; }
completion:^(BOOL finished){ NSLog(@"done making it dark"); }];
And to make it light again:
[UIView animateWithDuration:2
animations:^{ nightView.alpha = 0.0; }
completion:^(BOOL finished){ NSLog(@"done making it light again"); }];
Hope you can do something with this.
Related
I am working on an iOS app. I need to crop an image that is generated from a PDF.
Sometimes the image resolution can be very large.
I use the following code to generate the cropped image. My problem is that memory usage keeps increasing and is never released.
- (UIImage *)croppedImageWithFrame:(CGRect)frame angle:(NSInteger)angle{
UIImage *croppedImage = nil;
CGPoint drawPoint = CGPointZero;
UIGraphicsBeginImageContextWithOptions(frame.size, YES, self.scale);
{
CGContextRef context = UIGraphicsGetCurrentContext();
//To conserve memory by not completely re-rendering the rotated image,
//map the image to a view and then use Core Animation to handle the rotation
if (angle != 0) {
UIImageView *imageView = [[UIImageView alloc] initWithImage:self];
imageView.layer.minificationFilter = kCAFilterNearest;
imageView.layer.magnificationFilter = kCAFilterNearest;
imageView.transform = CGAffineTransformRotate(CGAffineTransformIdentity, angle * (M_PI/180.0f));
CGRect rotatedRect = CGRectApplyAffineTransform(imageView.bounds, imageView.transform);
UIView *containerView = [[UIView alloc] initWithFrame:(CGRect){CGPointZero, rotatedRect.size}];
[containerView addSubview:imageView];
imageView.center = containerView.center;
CGContextTranslateCTM(context, -frame.origin.x, -frame.origin.y);
[containerView.layer renderInContext:context];
}
else {
CGContextTranslateCTM(context, -frame.origin.x, -frame.origin.y);
[self drawAtPoint:drawPoint];
}
croppedImage = UIGraphicsGetImageFromCurrentImageContext();
}
UIGraphicsEndImageContext();
return croppedImage;
}
When I debug, it burns 100 MB at this line:
UIGraphicsBeginImageContextWithOptions(frame.size, YES, self.scale);
Then, when it runs the following line, it burns another 150 MB:
[self drawAtPoint:drawPoint];
When it reaches this line, 100 MB is released:
UIGraphicsEndImageContext();
After it's done, the 150 MB is never released.
I thought UIGraphicsEndImageContext() should release all 250 MB. Why doesn't it?
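One thing worth checking (an assumption about the cause, not a confirmed diagnosis): UIGraphicsGetImageFromCurrentImageContext() returns an autoreleased image, so its bitmap can stay alive until the enclosing autorelease pool drains. A sketch of wrapping the call site in an explicit pool; pdfImage and frame are illustrative names:
@autoreleasepool {
UIImage *cropped = [pdfImage croppedImageWithFrame:frame angle:0];
// ...use or persist the result here...
} // autoreleased bitmaps created during drawing become eligible for release here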
In Objective-C, I make a circle shape programmatically with the following code:
+(UIImage *)makeRoundedImage:(CGSize) size backgroundColor:(UIColor *) backgroundColor cornerRadius:(int) cornerRadius
{
UIImage *bgImage = [self imageWithColor:backgroundColor andSize:size];
CALayer *imageLayer = [CALayer layer];
imageLayer.frame = CGRectMake(0, 0, size.width, size.height);
imageLayer.contents = (id) bgImage.CGImage;
imageLayer.masksToBounds = YES;
imageLayer.cornerRadius = cornerRadius;
UIGraphicsBeginImageContext(bgImage.size);
[imageLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return roundedImage;
}
The imageWithColor method is as follows:
+(UIImage *)imageWithColor:(UIColor *)color andSize:(CGSize)size
{
//quick fix, or pop up CG invalid context 0x0 bug
if(size.width == 0) size.width = 1;
if(size.height == 0) size.height = 1;
//---quick fix
UIImage *img = nil;
CGRect rect = CGRectMake(0, 0, size.width, size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillRect(context, rect);
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
Then I used it to create a pure-color circle image, but I found that the circle is not perfectly round. As an example, see the following code:
CGSize size = CGSizeMake(diameter, diameter);
int r = ceil((float)diameter/2.0);
UIImage *imageNormal = [self makeRoundedImage:size backgroundColor:backgroundColor cornerRadius:r];
[slider setThumbImage:imageNormal forState:UIControlStateNormal];
First I created a circle image, then I set the image as the thumb of a UISlider. But it shows up as in the picture below:
You can see the circle is not an exact circle. I'm thinking it's probably caused by a screen-resolution issue, because if I use an image resource for the thumb, I need to add an @2x version. Does anybody know the reason? Thanks in advance.
Updated on 8th Aug 2015.
Further to this question and the answer from @Noah Witherspoon, the blurry-edge issue has been solved. But the circle still looks cut off. I used the following code:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f, radius*2.0f);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rect);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
And the circle looks like this:
You can see the edge has been clipped.
I changed the code to the following:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f+4, radius*2.0f+4);
CGRect rectmin = CGRectMake(2.0f, 2.0f, radius*2, radius*2);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rectmin);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
You can see the circle looks better (at the top and bottom edges):
I made the fill rect smaller, and the edge looks better, but I don't think it's a nice solution. Still, does anybody know why this happens?
From your screenshot it looks like you do actually have a circular image, but its scale is wrong—it’s not Retina—so it looks blurry and not-circular. The key thing is that instead of using UIGraphicsBeginImageContext which defaults to a scale of 1.0 (as compared to your screen, which is at a scale of 2.0 or 3.0), you should be using UIGraphicsBeginImageContextWithOptions. Also, you don’t need to make a layer or a view to draw a circle in an image context.
+ (UIImage *)makeCircleImageWithDiameter:(CGFloat)diameter color:(UIColor *)color {
UIGraphicsBeginImageContextWithOptions(CGSizeMake(diameter, diameter), NO, 0 /* scale (0 means “use the current screen’s scale”) */);
[color setFill];
CGContextFillEllipseInRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, diameter, diameter));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
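A possible call site for the slider thumb from the question, assuming the method above lives in a UIImage category (the diameter and color are illustrative):
UIImage *thumb = [UIImage makeCircleImageWithDiameter:30.0 color:[UIColor redColor]];
[slider setThumbImage:thumb forState:UIControlStateNormal];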
If you want to get a circle every time, try this:
- (UIImage *)makeCircularImage:(CGSize)size backgroundColor:(UIColor *)backgroundColor {
CGSize squareSize = CGSizeMake((size.width > size.height) ? size.width : size.height,
(size.width > size.height) ? size.width : size.height);
UIView *circleView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, squareSize.width, squareSize.height)];
circleView.layer.cornerRadius = circleView.frame.size.height * 0.5f;
circleView.backgroundColor = backgroundColor;
circleView.opaque = NO;
UIGraphicsBeginImageContextWithOptions(circleView.bounds.size, circleView.opaque, 0.0);
[circleView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
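A hypothetical call: even a non-square size yields a circle, because the method squares the frame off the larger dimension:
UIImage *dot = [self makeCircularImage:CGSizeMake(20.0, 30.0) backgroundColor:[UIColor blueColor]];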
I created an application in which I can rotate, resize, and translate an image using gestures. Then I need to get the image out of the UIImageView. I found this part of the code somewhere on Stack Overflow. A similar question is answered here, but it requires the angle as input. The same person wrote a better solution somewhere else, which I'm using, but it has a problem: it often returns a blank image, or a truncated image (often cut off at the top). So there is something wrong with the code and it requires some changes. I'm new to Core Graphics and badly stuck on this problem.
UIGraphicsBeginImageContext(imgView.image.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform transform = imgView.transform;
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, CGRectMake(0, 0, imgView.image.size.width, imgView.image.size.height), imgView.image.CGImage);
CGImageRef rotatedRef = CGBitmapContextCreateImage(context); // create, wrap, then release to avoid leaking the CGImage
UIImage *newRotatedImage = [UIImage imageWithCGImage:rotatedRef];
CGImageRelease(rotatedRef);
UIGraphicsEndImageContext();
EDIT 1.1
Thanks for the sample code, but it still has a problem. Let me explain in more detail: I'm using gestures for scaling, translating, and resizing the image via the image view, so all of this state is stored in the image view's transform property. I found another method in Core Image, so I changed my code to:
CGRect bounds = CGRectMake(0, 0, imgTop.size.width, imgTop.size.height);
CIImage *ciImage = [[CIImage alloc] initWithCGImage:imageView.image.CGImage options:nil];
CGAffineTransform transform = imgView.transform;
ciImage = [ciImage imageByApplyingTransform:transform];
return [UIImage imageWithCIImage:ciImage];
Now I'm getting a squeezed, mirrored image at the wrong size. Sorry to disturb you again. Can you guide me on how to get the proper image using the image view's transform in Core Image?
CIImage *ciImage = [[CIImage alloc] initWithCGImage:fximage.CGImage options:nil];
CGAffineTransform transform = fxobj.transform;
float angle = atan2(transform.b, transform.a);
transform = CGAffineTransformRotate(transform, - 2 * angle);
ciImage = [ciImage imageByApplyingTransform:transform];
UIImage *screenfxImage = [UIImage imageWithCIImage:ciImage];
Do remember to add the line transform = CGAffineTransformRotate(transform, - 2 * angle); because the rotation direction is opposite.
I created an Objective-C class just for this sort of thing. You can check it out on GitHub: ANImageBitmapRep. Here's how you would do rotation:
ANImageBitmapRep * ibr = [myImage image];
[ibr rotate:anAngle];
UIImage * rotated = [ibr image];
Note that here, anAngle is in radians.
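For example, to rotate by 45 degrees you would convert first (a one-line sketch using the names above):
CGFloat anAngle = 45.0f * (M_PI / 180.0f); // degrees to radians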
Here is the link to the documentation:
http://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html
Sample code to rotate an image:
CIImage *inputImage = [[CIImage alloc] initWithImage:currentImage];
CIFilter *controlsFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[controlsFilter setValue:inputImage forKey:kCIInputImageKey];
// CIAffineTransform expects a transform under @"inputTransform" (it has no inputAngle key),
// so build a rotation from the slider value
[controlsFilter setValue:[NSValue valueWithCGAffineTransform:CGAffineTransformMakeRotation(slider.value)] forKey:@"inputTransform"];
CIImage *displayImage = controlsFilter.outputImage;
UIImage *finalImage = [UIImage imageWithCIImage:displayImage];
CIContext *context = [CIContext contextWithOptions:nil];
if (displayImage == nil || finalImage == nil) {
// We did not get output image. Let's display the original image itself.
photoEditView.image = currentImage;
}
else {
CGImageRef imageRef = [context createCGImage:displayImage fromRect:displayImage.extent];
photoEditView.image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
}
context = nil;
[inputImage release];
I created a sample app to do this (minus the scaling part) in Objective-C. If anybody is interested, you can download it here: https://github.com/gene-code/coregraphics-drawing/tree/master/coregraphics-drawing/test
I'm using BCTabBarController in my app, and I'm trying to customize it so that it uses Core Graphics to highlight the images automatically, so that I don't need four copies of each image (Retina, Retina-selected, legacy, legacy-selected).
User Ephraim posted a great starting point for this, but it returns legacy-sized images. I've played with some of the settings, but I'm not very familiar with Core Graphics, so I'm shooting in the dark.
Ephraim's code:
- (UIImage *) imageWithBackgroundColor:(UIColor *)bgColor
shadeAlpha1:(CGFloat)alpha1
shadeAlpha2:(CGFloat)alpha2
shadeAlpha3:(CGFloat)alpha3
shadowColor:(UIColor *)shadowColor
shadowOffset:(CGSize)shadowOffset
shadowBlur:(CGFloat)shadowBlur {
UIImage *image = self;
CGColorRef cgColor = [bgColor CGColor];
CGColorRef cgShadowColor = [shadowColor CGColor];
CGFloat components[16] = {1,1,1,alpha1,1,1,1,alpha1,1,1,1,alpha2,1,1,1,alpha3};
CGFloat locations[4] = {0,0.5,0.6,1};
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGGradientRef colorGradient = CGGradientCreateWithColorComponents(colorSpace, components, locations, (size_t)4);
CGRect contextRect;
contextRect.origin.x = 0.0f;
contextRect.origin.y = 0.0f;
contextRect.size = [image size];
//contextRect.size = CGSizeMake([image size].width+5,[image size].height+5);
// Retrieve source image and begin image context
UIImage *itemImage = image;
CGSize itemImageSize = [itemImage size];
CGPoint itemImagePosition;
itemImagePosition.x = ceilf((contextRect.size.width - itemImageSize.width) / 2);
itemImagePosition.y = ceilf((contextRect.size.height - itemImageSize.height) / 2);
UIGraphicsBeginImageContext(contextRect.size);
CGContextRef c = UIGraphicsGetCurrentContext();
// Setup shadow
CGContextSetShadowWithColor(c, shadowOffset, shadowBlur, cgShadowColor);
// Setup transparency layer and clip to mask
CGContextBeginTransparencyLayer(c, NULL);
CGContextScaleCTM(c, 1.0, -1.0);
CGContextClipToMask(c, CGRectMake(itemImagePosition.x, -itemImagePosition.y, itemImageSize.width, -itemImageSize.height), [itemImage CGImage]);
// Fill and end the transparency layer
CGContextSetFillColorWithColor(c, cgColor);
contextRect.size.height = -contextRect.size.height;
CGContextFillRect(c, contextRect);
CGContextDrawLinearGradient(c, colorGradient,CGPointZero,CGPointMake(contextRect.size.width*1.0/4.0,contextRect.size.height),0);
CGContextEndTransparencyLayer(c);
//CGPointMake(contextRect.size.width*3.0/4.0, 0)
// Set selected image and end context
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGColorSpaceRelease(colorSpace);
CGGradientRelease(colorGradient);
return resultImage;
}
To implement this code, I've added a category to UIImage in my project, and then made the following changes to BCTab.h:
- (id)initWithIconImageName:(NSString *)imageName {
if (self = [super init]) {
self.adjustsImageWhenHighlighted = NO;
self.background = [UIImage imageNamed:@"BCTabBarController.bundle/tab-background.png"];
self.rightBorder = [UIImage imageNamed:@"BCTabBarController.bundle/tab-right-border.png"];
self.backgroundColor = [UIColor clearColor];
// NSString *selectedName = [NSString stringWithFormat:@"%@-selected.%@",
// [imageName stringByDeletingPathExtension],
// [imageName pathExtension]];
UIImage *defImage = [UIImage imageNamed:imageName];
[self setImage:[defImage imageWithBackgroundColor:[UIColor lightGrayColor] shadeAlpha1:0.4 shadeAlpha2:0.0 shadeAlpha3:0.6 shadowColor:[UIColor blackColor] shadowOffset:CGSizeMake(0.0, -1.0f) shadowBlur:3.0] forState:UIControlStateNormal];
[self setImage:[defImage imageWithBackgroundColor:[UIColor redColor] shadeAlpha1:0.4 shadeAlpha2:0.0 shadeAlpha3:0.6 shadowColor:[UIColor blackColor] shadowOffset:CGSizeMake(0.0, -1.0f) shadowBlur:3.0] forState:UIControlStateSelected];
}
return self;
}
How can I make Ephraim's code work correctly with Retina displays?
After digging around the internet, a Google search led me back to Stack Overflow. I found this answer to this question, which discusses a different function that should be used to set the scale factor of the image graphics context when it is initialized.
UIGraphicsBeginImageContext(contextRect.size); needs to be changed to UIGraphicsBeginImageContextWithOptions(contextRect.size, NO, scale);, where scale is the scale value you want to use. I grabbed it from [[UIScreen mainScreen] scale].
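A sketch of the fix in context (the surrounding names are from Ephraim's method above; passing 0.0 for the scale also works and means "use the main screen's scale"):
CGFloat scale = [[UIScreen mainScreen] scale];
UIGraphicsBeginImageContextWithOptions(contextRect.size, NO, scale);
CGContextRef c = UIGraphicsGetCurrentContext();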
I am using QREncoder library found here: https://github.com/jverkoey/ObjQREncoder
Basically, I looked at the example code by this author, and when he creates the QR code it comes out perfectly, with no pixelation. The image the library provides is 33 x 33 pixels, but he uses kCAFilterNearest to magnify it and keep it crisp (no pixelation). Here is his code:
UIImage* image = [QREncoder encode:@"http://www.google.com/"];
UIImageView* imageView = [[UIImageView alloc] initWithImage:image];
CGFloat qrSize = self.view.bounds.size.width - kPadding * 2;
imageView.frame = CGRectMake(kPadding, (self.view.bounds.size.height - qrSize) / 2,
qrSize, qrSize);
[imageView layer].magnificationFilter = kCAFilterNearest;
[self.view addSubview:imageView];
I have a UIImageView in a xib, and I am setting its image like this:
[[template imageVQRCode] setImage:[QREncoder encode:ticketNum]];
[[[template imageVQRCode] layer] setMagnificationFilter:kCAFilterNearest];
but the QR code is really blurry. In the example, it comes out crystal clear.
What am I doing wrong?
Thanks!
UPDATE: I found out that the problem isn't with scaling or anything to do with kCAFilterNearest. It has to do with generating the PNG image from the view. Here's how it looks on the device vs. how it looks when I save the UIView to its PNG representation (notice the QR codes' quality):
UPDATE 2: This is how I am generating the PNG file from the UIView:
UIGraphicsBeginImageContextWithOptions([[template view] bounds].size, YES, 0.0);
[[[template view] layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(viewImage) writeToFile:plistPath atomically:YES];
I have used the function below for editing an image.
- (UIImage *)resizedImage:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)quality
{
BOOL drawTransposed;
switch (self.imageOrientation) {
case UIImageOrientationLeft:
case UIImageOrientationLeftMirrored:
case UIImageOrientationRight:
case UIImageOrientationRightMirrored:
drawTransposed = YES;
break;
default:
drawTransposed = NO;
}
return [self resizedImage:newSize
transform:[self transformForOrientation:newSize]
drawTransposed:drawTransposed
interpolationQuality:quality];
}
- (UIImage *)resizedImage:(CGSize)newSize
transform:(CGAffineTransform)transform
drawTransposed:(BOOL)transpose
interpolationQuality:(CGInterpolationQuality)quality
{
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
CGImageRef imageRef = self.CGImage;
// Build a context that's the same dimensions as the new size
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGImageAlphaInfo alphaInfo = (CGImageAlphaInfo)(bitmapInfo & kCGBitmapAlphaInfoMask); // compare the alpha bits, not the whole CGBitmapInfo
if ((alphaInfo == kCGImageAlphaLast) || (alphaInfo == kCGImageAlphaNone))
bitmapInfo = (bitmapInfo & ~kCGBitmapAlphaInfoMask) | kCGImageAlphaNoneSkipLast;
CGContextRef bitmap = CGBitmapContextCreate(NULL,
newRect.size.width,
newRect.size.height,
CGImageGetBitsPerComponent(imageRef),
0,
CGImageGetColorSpace(imageRef),
bitmapInfo);
// Rotate and/or flip the image if required by its orientation
CGContextConcatCTM(bitmap, transform);
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(bitmap, quality);
// Draw into the context; this scales the image
CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);
// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
// Clean up
CGContextRelease(bitmap);
CGImageRelease(newImageRef);
return newImage;
}
UIImageWriteToSavedPhotosAlbum([image resizedImage:CGSizeMake(300, 300) interpolationQuality:kCGInterpolationNone], nil, nil, nil);
Please see the image below and let me know if you need any help.