I am using the QREncoder library found here: https://github.com/jverkoey/ObjQREncoder
Basically, I looked at the example code by this author, and when he creates the QR code it comes out perfectly sharp. The image itself that the library provides is 33 x 33 pixels, but he uses kCAFilterNearest to magnify it while keeping it crisp (no pixelation). Here is his code:
UIImage* image = [QREncoder encode:@"http://www.google.com/"];
UIImageView* imageView = [[UIImageView alloc] initWithImage:image];
CGFloat qrSize = self.view.bounds.size.width - kPadding * 2;
imageView.frame = CGRectMake(kPadding, (self.view.bounds.size.height - qrSize) / 2,
qrSize, qrSize);
[imageView layer].magnificationFilter = kCAFilterNearest;
[self.view addSubview:imageView];
I have a UIImageView in a xib, and I am setting its image like this:
[[template imageVQRCode] setImage:[QREncoder encode:ticketNum]];
[[[template imageVQRCode] layer] setMagnificationFilter:kCAFilterNearest];
but the QR code is really blurry. In the example, it comes out crystal clear.
What am I doing wrong?
Thanks!
UPDATE: I found out that the problem isn't with scaling or anything to do with kCAFilterNearest. It has to do with generating the PNG image from the view. Here's how it looks on the device vs. how it looks when I save the UIView to its PNG representation (notice the QR code's quality):
UPDATE 2: This is how I am generating the PNG file from UIView:
UIGraphicsBeginImageContextWithOptions([[template view] bounds].size, YES, 0.0);
[[[template view] layer] renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(viewImage) writeToFile:plistPath atomically:YES];
I have used the function below for resizing the image.
- (UIImage *)resizedImage:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)quality
{
BOOL drawTransposed;
switch (self.imageOrientation) {
case UIImageOrientationLeft:
case UIImageOrientationLeftMirrored:
case UIImageOrientationRight:
case UIImageOrientationRightMirrored:
drawTransposed = YES;
break;
default:
drawTransposed = NO;
}
return [self resizedImage:newSize
transform:[self transformForOrientation:newSize]
drawTransposed:drawTransposed
interpolationQuality:quality];
}
- (UIImage *)resizedImage:(CGSize)newSize
transform:(CGAffineTransform)transform
drawTransposed:(BOOL)transpose
interpolationQuality:(CGInterpolationQuality)quality
{
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
CGImageRef imageRef = self.CGImage;
// Build a context that's the same dimensions as the new size
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
// Compare only the alpha component; CGBitmapInfo also carries byte-order flags
CGImageAlphaInfo alphaInfo = (CGImageAlphaInfo)(bitmapInfo & kCGBitmapAlphaInfoMask);
if ((alphaInfo == kCGImageAlphaLast) || (alphaInfo == kCGImageAlphaNone))
bitmapInfo = (bitmapInfo & ~kCGBitmapAlphaInfoMask) | kCGImageAlphaNoneSkipLast;
CGContextRef bitmap = CGBitmapContextCreate(NULL,
newRect.size.width,
newRect.size.height,
CGImageGetBitsPerComponent(imageRef),
0,
CGImageGetColorSpace(imageRef),
bitmapInfo);
// Rotate and/or flip the image if required by its orientation
CGContextConcatCTM(bitmap, transform);
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(bitmap, quality);
// Draw into the context; this scales the image
CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);
// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
// Clean up
CGContextRelease(bitmap);
CGImageRelease(newImageRef);
return newImage;
}
UIImageWriteToSavedPhotosAlbum([image resizedImage:CGSizeMake(300, 300) interpolationQuality:kCGInterpolationNone], nil, nil, nil);
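For reference, a hedged sketch of how this category could be applied to the QR code from the question; qrImage and the 300-point target size are my assumptions, not from the original post:
UIImage *qrImage = [QREncoder encode:ticketNum];
// kCGInterpolationNone avoids smoothing, so the QR modules stay crisp when scaled up
UIImage *sharpQR = [qrImage resizedImage:CGSizeMake(300, 300) interpolationQuality:kCGInterpolationNone];
[[template imageVQRCode] setImage:sharpQR];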
Please see the image below, and let me know if you need any further help.
In Objective-C, I make a circle shape programmatically with the following code:
+(UIImage *)makeRoundedImage:(CGSize) size backgroundColor:(UIColor *) backgroundColor cornerRadius:(int) cornerRadius
{
UIImage* bgImage = [self imageWithColor:backgroundColor andSize:size];
CALayer *imageLayer = [CALayer layer];
imageLayer.frame = CGRectMake(0, 0, size.width, size.height);
imageLayer.contents = (id) bgImage.CGImage;
imageLayer.masksToBounds = YES;
imageLayer.cornerRadius = cornerRadius;
UIGraphicsBeginImageContext(bgImage.size);
[imageLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return roundedImage;
}
The imageWithColor method is as follows:
+(UIImage *)imageWithColor:(UIColor *)color andSize:(CGSize)size
{
//quick fix, or pop up CG invalid context 0x0 bug
if(size.width == 0) size.width = 1;
if(size.height == 0) size.height = 1;
//---quick fix
UIImage *img = nil;
CGRect rect = CGRectMake(0, 0, size.width, size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context,
color.CGColor);
CGContextFillRect(context, rect);
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
Then I used it to create a solid-color circle image, but I found that the circle is not perfectly round. As an example, see the following code:
CGSize size = CGSizeMake(diameter, diameter);
int r = ceil((float)diameter/2.0);
UIImage *imageNormal = [self makeRoundedImage:size backgroundColor:backgroundColor cornerRadius:r];
[slider setThumbImage:imageNormal forState:UIControlStateNormal];
First I created a circle image, then I set it as the thumb of a UISlider. But what is shown is the picture below:
You can see the circle is not an exact circle. I'm thinking it is probably caused by a screen resolution issue, because if I use an image resource for the thumb, I need to add an @2x version. Anybody know the reason? Thanks in advance.
Updated on 8 Aug 2015.
Further to this question and the answer from @Noah Witherspoon, I found the blurry edge issue has been solved. But still, the circle looks like it is being cut off. I used the following code:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f, radius*2.0f);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rect);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
And the circle looks like:
You can see the edge has been cut off.
I changed the code as follows:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f+4, radius*2.0f+4);
CGRect rectmin = CGRectMake(2.0f, 2.0f, radius*2, radius*2);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rectmin);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
You can see the circle looks better (the top edge and the bottom edge):
I made the fill rect smaller, and the edge looks better, but I don't think it's a nice solution. Still, does anybody know why this happens?
From your screenshot it looks like you do actually have a circular image, but its scale is wrong (it's not Retina), so it looks blurry and not quite circular. The key thing is that instead of using UIGraphicsBeginImageContext, which defaults to a scale of 1.0 (as compared to your screen, which is at a scale of 2.0 or 3.0), you should be using UIGraphicsBeginImageContextWithOptions. Also, you don't need to make a layer or a view to draw a circle in an image context.
+ (UIImage *)makeCircleImageWithDiameter:(CGFloat)diameter color:(UIColor *)color {
UIGraphicsBeginImageContextWithOptions(CGSizeMake(diameter, diameter), NO, 0 /* scale (0 means “use the current screen’s scale”) */);
[color setFill];
CGContextFillEllipseInRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, diameter, diameter));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
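A possible call site, assuming the method is added to a UIImage category (the 30-point diameter is just an example):
UIImage *thumb = [UIImage makeCircleImageWithDiameter:30.0 color:[UIColor redColor]];
[slider setThumbImage:thumb forState:UIControlStateNormal];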
If you want to get a circle every time try this:
- (UIImage *)makeCircularImage:(CGSize)size backgroundColor:(UIColor *)backgroundColor {
CGSize squareSize = CGSizeMake((size.width > size.height) ? size.width : size.height,
(size.width > size.height) ? size.width : size.height);
UIView *circleView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, squareSize.width, squareSize.height)];
circleView.layer.cornerRadius = circleView.frame.size.height * 0.5f;
circleView.backgroundColor = backgroundColor;
circleView.opaque = NO;
UIGraphicsBeginImageContextWithOptions(circleView.bounds.size, circleView.opaque, 0.0);
[circleView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
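A quick usage sketch with sizes of my own choosing; note that a non-square request is squared off to the larger side before rounding:
// A 40 x 30 request comes back as a 40 x 40 circle image
UIImage *thumbImage = [self makeCircularImage:CGSizeMake(40, 30) backgroundColor:[UIColor blueColor]];
[slider setThumbImage:thumbImage forState:UIControlStateNormal];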
Here's my code:
- (void)shareButtonTapped
{
CGSize newImageSize = CGSizeMake(850.0, 850.0);
UIGraphicsBeginImageContextWithOptions(newImageSize, YES, _aImageView.image.scale);
[self.aImageView.layer renderInContext:UIGraphicsGetCurrentContext()];
[self.aImageView.image drawInRect:CGRectMake(0, 0, newImageSize.width, newImageSize.height)];
UIImage * image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// UIActivityViewController code here
}
Right now if I tap on the Mail button, for example, the shared image has the newImageSize dimensions, but my _aImageView.image is only displayed in the top left corner while the rest of the image is solid black.
How do I edit this code to scale the image to the 850 x 850 or to a high resolution size so it will look sharp when shared via Email or Social Media?
Thanks!
EDIT *
Still stuck. I feel like I am almost there.
Here's my new code:
- (void)shareButtonTapped
{
_topBarImageView.hidden = YES;
_bottomBarImageView.hidden = YES;
CGSize newImageSize = CGSizeMake(600.0, 600.0);
UIGraphicsBeginImageContextWithOptions(newImageSize, YES, _aImageView.image.scale);
[_aImageView.layer renderInContext:UIGraphicsGetCurrentContext()];
[_aLabel drawTextInRect:CGRectMake(20, 250, 280, 140)];
UIImage * image = UIGraphicsGetImageFromCurrentImageContext();
UIImage * newImage = [UIImage imageWithImage:image scaledToSize:newImageSize];
UIGraphicsEndImageContext();
_topBarImageView.hidden = NO;
_bottomBarImageView.hidden = NO;
// Share Code
}
So I'm hiding a couple of custom imageViews that are on this screen. The only thing I want shared is the aImageView and the aLabel.
Right now when I open my email for example, I can see the newImageSize, but the majority of the box is still black. I still can't figure out what I am missing.
Please try the code below:
+ (UIImage*)imageWithImage:(UIImage*)image
scaledToSize:(CGSize)newSize;
{
UIGraphicsBeginImageContext( newSize );
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
Specify your new size as (850, 850) and pass your original image, but make sure the width-to-height ratio of your original image is the same as that of the converted image.
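If the ratios differ, one option (a sketch of my own, not part of the answer above) is to compute an aspect-fit size first and pass that instead of a hard-coded 850 x 850; aspectFitSize is a hypothetical helper:
// Largest size that fits inside maxSide x maxSide while keeping the source aspect ratio
static CGSize aspectFitSize(CGSize source, CGFloat maxSide) {
    CGFloat ratio = MIN(maxSide / source.width, maxSide / source.height);
    return CGSizeMake(floor(source.width * ratio), floor(source.height * ratio));
}
// Then: [SomeClass imageWithImage:original scaledToSize:aspectFitSize(original.size, 850.0)]; (SomeClass and original are placeholders)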
Hope it works for you. Thanks.
EDIT:
For capturing a screenshot of your view, use the code below:
- (UIImage*)captureView:(UIView *)yourView {
CGRect rect = [[UIScreen mainScreen] bounds];
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[yourView.layer renderInContext:context];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
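And a hedged example of feeding its result into the share sheet (the variable names are mine):
UIImage *snapshot = [self captureView:self.aImageView];
UIActivityViewController *activityVC = [[UIActivityViewController alloc] initWithActivityItems:@[snapshot] applicationActivities:nil];
[self presentViewController:activityVC animated:YES completion:nil];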
I'm facing the following problem: I have to merge two images A and B to create a new image C as the result of the merge.
I already know how to merge two images but in this case my goal is a little bit different.
I would like image A to be the background for image B.
For instance, if image A is 500x500 and image B is 460x460, I would like image C (the result of the merge) to be 500x500, with image B (460x460) centered in it.
Thanks in advance for any help or suggestion
This is what I've done in my app, but without using UIImageView:
UIImage *bottomImage = [UIImage imageNamed:@"bottom.png"]; //background image
UIImage *image = [UIImage imageNamed:@"top.png"]; //foreground image
CGSize newSize = CGSizeMake(width, height);
UIGraphicsBeginImageContext( newSize );
// Use existing opacity as is
[bottomImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity if applicable
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
If the image already has the opacity you want, you do not need to set it (as with bottomImage); otherwise you can set it (as with image).
After this UIImage is created, you can embed it in your UIImageView.
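To get the centering the question asks for (a 460 x 460 foreground inside a 500 x 500 background), a variation worth trying is computing the foreground origin from the size difference instead of drawing both images full-size; a minimal sketch using the same bottomImage/image names:
CGSize canvasSize = bottomImage.size; // e.g. 500 x 500 background
UIGraphicsBeginImageContextWithOptions(canvasSize, NO, 0.0);
[bottomImage drawInRect:CGRectMake(0, 0, canvasSize.width, canvasSize.height)];
// Center the foreground by offsetting it by half of the size difference
CGRect centeredRect = CGRectMake((canvasSize.width - image.size.width) / 2.0,
                                 (canvasSize.height - image.size.height) / 2.0,
                                 image.size.width, image.size.height);
[image drawInRect:centeredRect blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *mergedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();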
UPDATE: Thanks to Ahmet AkkoK. For Swift (2.2) users, the blend mode constant has changed: kCGBlendModeNormal is replaced with CGBlendMode.Normal.
Hey, I composite multiple images, adding the same background with different foregrounds.
This is my code:
UIImage *bottomImage = [UIImage imageNamed:#"photo 2.JPG"]; //background image
UIImage *image = [UIImage imageNamed:#"photo 3.JPG"]; //foreground image
UIImage *image1 = [UIImage imageNamed:#"photo 4.JPG"]; //foreground image
UIImage *image2 = [UIImage imageNamed:#"photo 5.JPG"]; //foreground image
CGSize newSize = CGSizeMake(320, 480);
UIGraphicsBeginImageContext( newSize );
// Use existing opacity as is
[bottomImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity if applicable
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.4];
[image1 drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.3];
[image2 drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.2];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
resultView = [[UIImageView alloc] initWithImage:newImage];
resultView.frame = CGRectMake(0, 0,320,460);
[self.view addSubview:resultView];
Swift version
Copy/paste it into a Playground:
var bottomImage:UIImage = UIImage(named:"avatar_4.png")! //background image
var imageTop:UIImage = UIImage(named:"group_4.png")! //top image
var newSize = CGSizeMake(bottomImage.size.width, bottomImage.size.height)
UIGraphicsBeginImageContext( newSize )
bottomImage.drawInRect(CGRectMake(0,0,newSize.width,newSize.height))
// decrease top image to 36x36
imageTop.drawInRect(CGRectMake(18,18,36,36), blendMode:kCGBlendModeNormal, alpha:1.0)
var newImage:UIImage = UIGraphicsGetImageFromCurrentImageContext()
var imageData = UIImagePNGRepresentation(newImage)
To load images from the playground:
Open the playground file and create a Resources folder there
Copy the images into this folder
Just made a quick copy/paste function for those of you who want to use Srikar Appal's answer (in case the background and foreground images are of different sizes).
- (UIImage *) mergeImages:(NSString *)bgImageFileName foreGround:(NSString *)fgImageFileName {
UIImage *bottomImage = [UIImage imageNamed:bgImageFileName]; //background image
UIImage *image = [UIImage imageNamed:fgImageFileName]; //foreground image
CGSize newSize = CGSizeMake(bottomImage.size.width, bottomImage.size.height);
UIGraphicsBeginImageContext(newSize);
// Use existing opacity as is
[bottomImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity if applicable
// Change xPos, yPos if applicable
[image drawInRect:CGRectMake(11,11,image.size.width,image.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
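Usage is then a one-liner; note that the 11, 11 offset above is hard-coded, so compute it (for example with the centering math shown earlier) if your foreground size varies. The file names here are placeholders:
UIImage *merged = [self mergeImages:@"background.png" foreGround:@"badge.png"];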
in Swift:
let bottomImage = UIImage(named: "Bottom_Image.png")
let frontImage = UIImage (named: "Front_Image.png")
let size = CGSize(width: 67, height: 55)
UIGraphicsBeginImageContext(size)
let areaSize = CGRect(x: 0, y: 0, width: size.width, height: size.height)
let frontImageSize = CGRect(x: 14, y: 3, width: 40, height: 40)
bottomImage!.drawInRect(areaSize, blendMode: CGBlendMode.Normal, alpha: 1.0)
frontImage!.drawInRect(frontImageSize, blendMode: CGBlendMode.Normal, alpha: 1.0)
let newImage:UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Thanks @Srikar Appal for the iOS solution.
For anyone who is looking to merge images on OS X:
- (NSImage *) mergeImages:(NSString *)bgImageFileName foreGround:(NSString *)fgImageFileName {
NSImage *bottomImage = [NSImage imageNamed:bgImageFileName];
NSImage *overlayedImage = [NSImage imageNamed:fgImageFileName];
NSSize newSize = NSMakeSize(bottomImage.size.width, bottomImage.size.height);
NSSize overlaySize = NSMakeSize(newSize.width/2, newSize.height/2); //change the size according to your requirements
NSImage *newImage = [[NSImage alloc] initWithSize:newSize];
[newImage lockFocus];
[bottomImage drawInRect:NSMakeRect(0, 0, newSize.width, newSize.height)];
[overlayedImage drawInRect:NSMakeRect(newSize.width-overlaySize.width, 0, overlaySize.width, overlaySize.height)]; //set the position as required
[newImage unlockFocus];
return newImage;
}
You can go with another trick as described below:
Add the first image to an imageView.
Add the second image to another imageView.
Add both of the above imageViews to a single main imageView, render that main imageView's layer, and grab the combined image from the image context, as in the stripped-down sketch right after this list.
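A minimal version of the trick, with my own placeholder names (firstImage, secondImage, containerFrame):
UIImageView *firstView = [[UIImageView alloc] initWithImage:firstImage];
UIImageView *secondView = [[UIImageView alloc] initWithImage:secondImage];
UIImageView *mainImageView = [[UIImageView alloc] initWithFrame:containerFrame];
[mainImageView addSubview:firstView];
[mainImageView addSubview:secondView];
// Render the container's layer to obtain the combined image
UIGraphicsBeginImageContextWithOptions(mainImageView.bounds.size, NO, 0.0);
[mainImageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *combinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();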
Have a look at the code below:
CGRect rect= investmentDetailTblView.frame;
int rows = investmentDetailArray.count;
CGFloat heightFinal = 5;
CGRect frame1;
for (int i=0; i<rows; i++)
{
frame1 = [investmentDetailTblView rectForRowAtIndexPath:[NSIndexPath indexPathForRow:i inSection:0]];
CGFloat height = frame1.size.height;
heightFinal = heightFinal + height;
}
rect.size.height = heightFinal;
investmentDetailTblView.frame=rect;
UIImageView *imageViewTable = [[UIImageView alloc] init];
[imageViewTable setFrame:CGRectMake(0, 0, frame1.size.width, heightFinal)];
[investmentDetailTblView reloadData];
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
UIGraphicsBeginImageContextWithOptions(investmentDetailTblView.bounds.size, NO, [UIScreen mainScreen].scale);
else
UIGraphicsBeginImageContext(investmentDetailTblView.bounds.size);
[investmentDetailTblView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageViewTable.image = image; //Adding the table image to the image view.
CGRect frame=CGRectMake(0, heightFinal+5, investmentDetailTblView.frame.size.width, 20) ;
UIView *footerView=[DataStore kkrLogoView];
footerView.frame=frame;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
UIGraphicsBeginImageContextWithOptions(footerView.frame.size, NO, [UIScreen mainScreen].scale);
else
UIGraphicsBeginImageContext(footerView.frame.size);
[footerView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *kkrLogoImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *imageViewFooter = [[UIImageView alloc] init];
[imageViewFooter setFrame:CGRectMake(0, heightFinal, footerView.frame.size.width, footerView.frame.size.height)];
imageViewFooter.image = kkrLogoImage; //Adding the footer image to the image view.
UIImageView *mainImageView = [[UIImageView alloc] init];
[mainImageView setFrame:CGRectMake(0, 0, frame1.size.width, (heightFinal+footerView.frame.size.height))];
[mainImageView addSubview:imageViewTable];
[mainImageView addSubview:imageViewFooter];
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
UIGraphicsBeginImageContextWithOptions(mainImageView.frame.size, NO, [UIScreen mainScreen].scale);
else
UIGraphicsBeginImageContext(mainImageView.frame.size);
[mainImageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I'm using BCTabBarController in my app, and I'm trying to customize it so that it uses Core Graphics to highlight the images automatically, so that I don't need four copies of each image. (Retina, Retina-selected, Legacy, Legacy-selected)
User Ephraim has posted a great starting point for this, but it returns legacy-sized images. I've played with some of the settings, but I'm not very familiar with Core Graphics, so I'm shooting in the dark.
Ephraim's Code:
- (UIImage *) imageWithBackgroundColor:(UIColor *)bgColor
shadeAlpha1:(CGFloat)alpha1
shadeAlpha2:(CGFloat)alpha2
shadeAlpha3:(CGFloat)alpha3
shadowColor:(UIColor *)shadowColor
shadowOffset:(CGSize)shadowOffset
shadowBlur:(CGFloat)shadowBlur {
UIImage *image = self;
CGColorRef cgColor = [bgColor CGColor];
CGColorRef cgShadowColor = [shadowColor CGColor];
CGFloat components[16] = {1,1,1,alpha1,1,1,1,alpha1,1,1,1,alpha2,1,1,1,alpha3};
CGFloat locations[4] = {0,0.5,0.6,1};
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGGradientRef colorGradient = CGGradientCreateWithColorComponents(colorSpace, components, locations, (size_t)4);
CGRect contextRect;
contextRect.origin.x = 0.0f;
contextRect.origin.y = 0.0f;
contextRect.size = [image size];
//contextRect.size = CGSizeMake([image size].width+5,[image size].height+5);
// Retrieve source image and begin image context
UIImage *itemImage = image;
CGSize itemImageSize = [itemImage size];
CGPoint itemImagePosition;
itemImagePosition.x = ceilf((contextRect.size.width - itemImageSize.width) / 2);
itemImagePosition.y = ceilf((contextRect.size.height - itemImageSize.height) / 2);
UIGraphicsBeginImageContext(contextRect.size);
CGContextRef c = UIGraphicsGetCurrentContext();
// Setup shadow
CGContextSetShadowWithColor(c, shadowOffset, shadowBlur, cgShadowColor);
// Setup transparency layer and clip to mask
CGContextBeginTransparencyLayer(c, NULL);
CGContextScaleCTM(c, 1.0, -1.0);
CGContextClipToMask(c, CGRectMake(itemImagePosition.x, -itemImagePosition.y, itemImageSize.width, -itemImageSize.height), [itemImage CGImage]);
// Fill and end the transparency layer
CGContextSetFillColorWithColor(c, cgColor);
contextRect.size.height = -contextRect.size.height;
CGContextFillRect(c, contextRect);
CGContextDrawLinearGradient(c, colorGradient,CGPointZero,CGPointMake(contextRect.size.width*1.0/4.0,contextRect.size.height),0);
CGContextEndTransparencyLayer(c);
//CGPointMake(contextRect.size.width*3.0/4.0, 0)
// Set selected image and end context
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGColorSpaceRelease(colorSpace);
CGGradientRelease(colorGradient);
return resultImage;
}
To implement this code, I've added a category to UIImage in my project, and then made the following changes to BCTab.h:
- (id)initWithIconImageName:(NSString *)imageName {
if (self = [super init]) {
self.adjustsImageWhenHighlighted = NO;
self.background = [UIImage imageNamed:@"BCTabBarController.bundle/tab-background.png"];
self.rightBorder = [UIImage imageNamed:@"BCTabBarController.bundle/tab-right-border.png"];
self.backgroundColor = [UIColor clearColor];
// NSString *selectedName = [NSString stringWithFormat:@"%@-selected.%@",
// [imageName stringByDeletingPathExtension],
// [imageName pathExtension]];
UIImage *defImage = [UIImage imageNamed:imageName];
[self setImage:[defImage imageWithBackgroundColor:[UIColor lightGrayColor] shadeAlpha1:0.4 shadeAlpha2:0.0 shadeAlpha3:0.6 shadowColor:[UIColor blackColor] shadowOffset:CGSizeMake(0.0, -1.0f) shadowBlur:3.0] forState:UIControlStateNormal];
[self setImage:[defImage imageWithBackgroundColor:[UIColor redColor] shadeAlpha1:0.4 shadeAlpha2:0.0 shadeAlpha3:0.6 shadowColor:[UIColor blackColor] shadowOffset:CGSizeMake(0.0, -1.0f) shadowBlur:3.0] forState:UIControlStateSelected];
}
return self;
}
How can I use Ephraim's code to work correctly with Retina display?
After digging around the internet, a Google search led me back to Stack Overflow. I found an answer to this question which discusses a different method that should be used to set the scale factor of the image context when it is initialized.
UIGraphicsBeginImageContext(contextRect.size); needs to be changed to UIGraphicsBeginImageContextWithOptions(contextRect.size, NO, scale);, where "scale" is the value of the scale you want to use. I grabbed it from [[UIScreen mainScreen] scale].
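So in Ephraim's method the context setup ends up looking roughly like this:
CGFloat scale = [[UIScreen mainScreen] scale];
UIGraphicsBeginImageContextWithOptions(contextRect.size, NO, scale);
CGContextRef c = UIGraphicsGetCurrentContext();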
I want to rotate an image with a UISlider control.
I have done that with the function below:
- (void)rotateImage:(UIImageView *)image duration:(NSTimeInterval)duration
curve:(int)curve degrees:(CGFloat)degrees
{
// Setup the animation
[UIView beginAnimations:nil context:NULL];
[UIView setAnimationDuration:duration];
[UIView setAnimationCurve:curve];
[UIView setAnimationBeginsFromCurrentState:YES];
// The transform matrix
CGAffineTransform transform = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(degrees));
image.transform = transform;
// Commit the changes
[UIView commitAnimations];}
Using this function it works perfectly,
but the problem is that I cannot save a reference to the rotated image.
I have to use that rotated image for further processing.
So how can I save the image in its rotated state?
Please help me with this issue.
Thanks
I found the solution to my question.
Use the method below:
- (UIImage*)upsideDownBunny:(CGFloat)radians withImage:(UIImage*)testImage {
__block CGImageRef cgImg;
__block CGSize imgSize;
__block UIImageOrientation orientation;
dispatch_block_t createStartImgBlock = ^(void) {
// UIImages should only be accessed from the main thread
UIImage *img = testImage;
imgSize = [img size]; // this size will be pre rotated
orientation = [img imageOrientation];
cgImg = CGImageRetain([img CGImage]); // this data is not rotated
};
if([NSThread isMainThread]) {
createStartImgBlock();
} else {
dispatch_sync(dispatch_get_main_queue(), createStartImgBlock);
}
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
// in iOS4+ you can let the context allocate memory by passing NULL
CGContextRef context = CGBitmapContextCreate( NULL,
imgSize.width,
imgSize.height,
8,
imgSize.width * 4,
colorspace,
kCGImageAlphaPremultipliedLast);
// rotate so the image respects the original UIImage's orientation
switch (orientation) {
case UIImageOrientationDown:
CGContextTranslateCTM(context, imgSize.width, imgSize.height);
CGContextRotateCTM(context, -radians);
break;
case UIImageOrientationLeft:
CGContextTranslateCTM(context, 0.0, imgSize.height);
CGContextRotateCTM(context, 3.0 * -radians / 2.0);
break;
case UIImageOrientationRight:
CGContextTranslateCTM(context,imgSize.width, 0.0);
CGContextRotateCTM(context, -radians / 2.0);
break;
default:
// there are mirrored modes possible
// but they aren't generated by the iPhone's camera
break;
}
// rotate the image upside down
CGContextTranslateCTM(context, +(imgSize.width * 0.5f), +(imgSize.height * 0.5f));
CGContextRotateCTM(context, -radians);
//CGContextDrawImage( context, CGRectMake(0.0, 0.0, imgSize.width, imgSize.height), cgImg );
// Note: this rect uses imgSize.width for both dimensions, which is why this only works for square images
CGContextDrawImage(context, (CGRect){.origin.x = -imgSize.width* 0.5f , .origin.y = -imgSize.width* 0.5f , .size.width = imgSize.width, .size.height = imgSize.width}, cgImg);
// grab the new rotated image
CGContextFlush(context);
CGImageRef newCgImg = CGBitmapContextCreateImage(context);
__block UIImage *newImage;
dispatch_block_t createRotatedImgBlock = ^(void) {
// UIImages should only be accessed from the main thread
newImage = [UIImage imageWithCGImage:newCgImg];
};
if([NSThread isMainThread]) {
createRotatedImgBlock();
} else {
dispatch_sync(dispatch_get_main_queue(), createRotatedImgBlock);
}
CGColorSpaceRelease(colorspace);
CGImageRelease(newCgImg);
CGContextRelease(context);
return newImage;
}
Call this method with:
UIImage *rotated2 = [self upsideDownBunny:rotation withImage:originalImage]; // originalImage is the UIImage you want to rotate
where rotation is the slider value (0 to 360 degrees) converted to radians.
Now we can save the rotated state.
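If you also want to write the rotated image out, a minimal sketch (the file path is a placeholder):
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"rotated.png"];
[UIImagePNGRepresentation(rotated2) writeToFile:path atomically:YES];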
I tried the code above; it works, but only for a SQUARE image. Here is another working solution, which redraws the image and keeps the right width/height:
- (UIImage *) rotatedImage:(UIImage *)imageRotation and:(CGFloat) rotation
{
// Calculate Destination Size
CGAffineTransform t = CGAffineTransformMakeRotation(rotation);
CGRect sizeRect = (CGRect) {.size = imageRotation.size};
CGRect destRect = CGRectApplyAffineTransform(sizeRect, t);
CGSize destinationSize = destRect.size;
// Draw image
UIGraphicsBeginImageContext(destinationSize);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, destinationSize.width / 2.0f, destinationSize.height / 2.0f);
CGContextRotateCTM(context, rotation);
[imageRotation drawInRect:CGRectMake(-imageRotation.size.width / 2.0f, -imageRotation.size.height / 2.0f, imageRotation.size.width, imageRotation.size.height)];
// Save image
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
The rotation parameter here is given in radians. You can perform the conversion by adding this above your function (or directly in your .m file):
#ifndef M_PI
#define M_PI 3.14159265358979323846264338327950288 /* pi (already defined in math.h; this is only a fallback) */
#endif
#define DEGREES_TO_RADIANS(angle) (angle / 180.0 * M_PI)
Finally, I called this with an animation:
[UIView animateWithDuration:0.5f delay:0 options:UIViewAnimationCurveEaseIn animations:^{
//Only used for the ANIMATION of the uiimage
imageView.transform = CGAffineTransformRotate(editorPad.transform, DEGREES_TO_RADIANS(90));
}completion:^(BOOL finished) {
// Workaround : Need to set previous transformation back or the next step will turn the image more
imageView.transform = CGAffineTransformRotate(imageView.transform, DEGREES_TO_RADIANS(-90));
imageView.image = [self rotatedImage:imageView.image and:DEGREES_TO_RADIANS(90)];
}];
I hope this helps too!
Cheers.
One approach to further processing is applying more transformations in sequence, as in:
CGAffineTransform transform = CGAffineTransformScale(previousTransform, newScale, newScale);
In this case you would apply a scaling to your rotated image.
If you need to save this information in order to redo the transformation at some later point, you can simply store the rotation angle and the scaling factor (in my example) and then build the transform once again.
You could also think of storing your CGAffineTransform in an ivar of your class, or use some other mechanism.
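A small sketch of that idea; savedAngle and savedScale are assumed properties of your class:
// Store the components when the user changes them
self.savedAngle = DEGREES_TO_RADIANS(slider.value);
self.savedScale = newScale;
// ... later, rebuild the same transform from the stored values
CGAffineTransform rebuilt = CGAffineTransformMakeRotation(self.savedAngle);
rebuilt = CGAffineTransformScale(rebuilt, self.savedScale, self.savedScale);
imageView.transform = rebuilt;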
EDIT:
If by saving you mean saving to a file, you can convert your view to an image with this code (note that this first snippet uses AppKit's NSView/NSBitmapImageRep API; the PNG snippet further down is UIKit):
NSData *data;
NSBitmapImageRep *rep;
rep = [self bitmapImageRepForCachingDisplayInRect:[self frame]];
[self cacheDisplayInRect:[self frame] toBitmapImageRep:rep];
data = [rep TIFFRepresentation];
then you save the NSData to file
For PNG:
UIGraphicsBeginImageContext(self.bounds.size);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* image1 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageData = UIImagePNGRepresentation(image1);
[imageData writeToFile:filePath atomically:YES];
You can render your rotated image into another image:
UIGraphicsBeginImageContext(aCGRectToHoldYourImage);
CALayer* layer = myRotatedImageView.layer;
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
and then get the PNG data from that and save it to a file
NSData* myPNGData = UIImagePNGRepresentation(viewImage);
[myPNGData writeToFile:@"aFileName.png" atomically:YES];
Proviso, I typed this into StackOverflow, not a compiler :-)
Save the transform value of your UIImageView and apply this transform value the next time you want to use this UIImageView, or even in another UIImageView.