Capturing full screenshot with status bar in iOS programmatically - objective-c

I am using this code to capture a screenshot and to save it to the photo album.
-(void)TakeScreenshotAndSaveToPhotoAlbum
{
UIWindow *window = [UIApplication sharedApplication].keyWindow;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
UIGraphicsBeginImageContextWithOptions(window.bounds.size, NO, [UIScreen mainScreen].scale);
else
UIGraphicsBeginImageContext(window.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
}
But the problem is that whenever the screenshot is saved, the iPhone's status bar is not captured. Instead a white space appears at the bottom, like in the following image:
What am I doing wrong?

The status bar actually lives in its own UIWindow; in your code you are only rendering the view of your view controller, which does not include it.
The "official" screenshot method was here but now seems to have been removed by Apple, probably due to it being obsolete.
Under iOS 7 there is now a new method on UIScreen for getting a view holding the contents of the entire screen:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
This will give you a view which you can then manipulate on screen for various visual effects.
If you want to draw the view hierarchy into a context, you need to iterate through the windows of the application ([[UIApplication sharedApplication] windows]) and call this method on each one:
- (BOOL)drawViewHierarchyInRect:(CGRect)rect afterScreenUpdates:(BOOL)afterUpdates
You may be able to combine the two above approaches and take the snapshot view, then use the above method on the snapshot to draw it.
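For illustration, here is a minimal Swift sketch of that idea (the extension and its name are mine, not part of the original answer): iterate the application's windows and draw each one into a single image context using the iOS 7 drawing API.
import UIKit

extension UIImage {
    // Sketch only: capture every window of the app (which may include the
    // status bar window on older iOS versions) into one image.
    static func fullScreenCapture() -> UIImage? {
        let bounds = UIScreen.main.bounds
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0)
        defer { UIGraphicsEndImageContext() }
        for window in UIApplication.shared.windows {
            // Draw each window in screen coordinates so they composite correctly
            window.drawHierarchy(in: window.frame, afterScreenUpdates: true)
        }
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}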

The suggested "official" screenshot method doesn't capture the status bar (it is not in the application's window list), as tested on iOS 5.
I believe this is for security reasons, but there is no mention of it in the docs.
I suggest two options:
draw a stub status bar image from your app's resources (optionally updating the time indicator);
capture only your view, without the status bar, or trim the image afterwards (the image size will differ from the standard device resolution); the status bar frame is known from the corresponding property of the application object (see the cropping sketch below).
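As a rough sketch of the second option (my own code, not from this answer; it assumes UIApplication.statusBarFrame is still available, i.e. pre-iOS 13), the status bar strip can be cropped off an already-captured image like this:
import UIKit

// Hypothetical helper: trims the status bar area from the top of a screenshot.
func imageByTrimmingStatusBar(from image: UIImage) -> UIImage? {
    let statusBarHeight = UIApplication.shared.statusBarFrame.height
    let scale = image.scale
    // CGImage cropping works in pixels, while UIKit sizes are in points.
    let cropRect = CGRect(x: 0,
                          y: statusBarHeight * scale,
                          width: image.size.width * scale,
                          height: (image.size.height - statusBarHeight) * scale)
    guard let cropped = image.cgImage?.cropping(to: cropRect) else { return nil }
    return UIImage(cgImage: cropped, scale: scale, orientation: image.imageOrientation)
}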

Here is my code to take a screenshot and store it as NSData (inside an IBAction). With the stored NSData you can then share it, email it, or do whatever else you want.
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
{
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [window center].x, [window center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [window transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
-[window bounds].size.width * [[window layer] anchorPoint].x,
-[window bounds].size.height * [[window layer] anchorPoint].y);
// Render the layer hierarchy to the current context
[[window layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);
}
}
// Retrieve the screenshot image
UIImage *imageForEmail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *imageDataForEmail = UIImageJPEGRepresentation(imageForEmail, 1.0);
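For the emailing part, a hedged Swift sketch (the function name is mine; it assumes the data produced above is available as a Data value):
import UIKit
import MessageUI

// Sketch: attach the captured JPEG data to a mail composer.
func presentMail(with imageDataForEmail: Data,
                 from presenter: UIViewController & MFMailComposeViewControllerDelegate) {
    guard MFMailComposeViewController.canSendMail() else { return }
    let composer = MFMailComposeViewController()
    composer.mailComposeDelegate = presenter
    composer.setSubject("Screenshot")
    composer.addAttachmentData(imageDataForEmail, mimeType: "image/jpeg", fileName: "screenshot.jpg")
    presenter.present(composer, animated: true)
}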

The Objective-C answer to the above question has already been written; here is the Swift version.
For Swift 3+
Take a screenshot and then use it to display somewhere or to send over the web.
extension UIImage {
class var screenShot: UIImage? {
let imageSize = UIScreen.main.bounds.size as CGSize;
UIGraphicsBeginImageContextWithOptions(imageSize, false, 0)
guard let context = UIGraphicsGetCurrentContext() else {return nil}
for obj : AnyObject in UIApplication.shared.windows {
if let window = obj as? UIWindow {
if !window.responds(to: #selector(getter: UIWindow.screen)) || window.screen == UIScreen.main {
// so we must first apply the layer's geometry to the graphics context
context.saveGState();
// Center the context around the window's anchor point
context.translateBy(x: window.center.x, y: window.center.y);
// Apply the window's transform about the anchor point
context.concatenate(window.transform);
// Offset by the portion of the bounds left of and above the anchor point
context.translateBy(x: -window.bounds.size.width * window.layer.anchorPoint.x,
y: -window.bounds.size.height * window.layer.anchorPoint.y);
// Render the layer hierarchy to the current context
window.layer.render(in: context)
// Restore the context
context.restoreGState();
}
}
}
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
}
Usage of the above screenshot
Let's display the screenshot in a UIImageView:
yourImageView.image = UIImage.screenShot
Get image Data to save/send over web
if let img = UIImage.screenShot {
if let data = UIImagePNGRepresentation(img) {
//send this data over web or store it anywhere
}
}
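And if "store it anywhere" means a file, here is a small sketch (mine, not from the answer) writing the PNG data to a temporary location:
if let img = UIImage.screenShot, let data = UIImagePNGRepresentation(img) {
    // Hypothetical file name; temporaryDirectory requires iOS 10+.
    let url = FileManager.default.temporaryDirectory.appendingPathComponent("screenshot.png")
    do {
        try data.write(to: url)
        print("Saved screenshot to \(url)")
    } catch {
        print("Failed to write screenshot: \(error)")
    }
}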

Swift, iOS 13:
The code below (and other ways of accessing the status bar window) will now crash the app with this message:
App called -statusBar or -statusBarWindow on UIApplication: this code must be changed as there's no longer a status bar or status bar window. Use the statusBarManager object on the window scene instead.
The window scenes and their statusBarManager objects really only give us access to the status bar frame; if capturing the status bar is still possible, I am not aware how.
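For reference, reading that frame on iOS 13+ looks roughly like this (a sketch only; it gives you the frame, not a snapshot of the status bar's contents, and assumes you are inside a view controller):
// iOS 13+: the status bar frame is exposed through the window scene.
if let statusBarFrame = view.window?.windowScene?.statusBarManager?.statusBarFrame {
    print("Status bar frame: \(statusBarFrame)")
}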
Swift, iOS10-12:
The following works for me. After profiling all of the methods for capturing programmatic screenshots, this is the quickest, and it is the recommended way from Apple starting with iOS 10:
let screenshotSize = CGSize(width: UIScreen.main.bounds.width * 0.6, height: UIScreen.main.bounds.height * 0.6)
let renderer = UIGraphicsImageRenderer(size: screenshotSize)
let statusBar = UIApplication.shared.value(forKey: "statusBarWindow") as? UIWindow
let screenshot = renderer.image { _ in
UIApplication.shared.keyWindow?.drawHierarchy(in: CGRect(origin: .zero, size: screenshotSize), afterScreenUpdates: true)
statusBar?.drawHierarchy(in: CGRect(origin: .zero, size: screenshotSize), afterScreenUpdates: true)
}
You don't have to scale your screenshot size down (you can use UIScreen.main.bounds directly if you want)

Capture the full screen of iPhone, get the status bar by using KVC:
if let snapView = window.snapshotView(afterScreenUpdates: false) {
if let statusBarSnapView = (UIApplication.shared.value(forKey: "statusBar") as? UIView)?.snapshotView(afterScreenUpdates: false) {
snapView.addSubview(statusBarSnapView)
}
UIGraphicsBeginImageContextWithOptions(snapView.bounds.size, true, 0)
snapView.drawHierarchy(in: snapView.bounds, afterScreenUpdates: true)
let snapImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
}

The following works for me, capturing the status bar fine (iOS 9, Swift)
let screen = UIScreen.mainScreen()
let snapshotView = screen.snapshotViewAfterScreenUpdates(true)
UIGraphicsBeginImageContextWithOptions(snapshotView.bounds.size, true, 0)
snapshotView.drawViewHierarchyInRect(snapshotView.bounds, afterScreenUpdates: true)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

Related

Animating only the image in UIBarButtonItem

I've seen this effect in two apps and I really want to find out how to do it.
The animation is in a UIBarButtonItem, and applies only to the image. The image is a + symbol, and it rotates to an X.
If you want to see the effect, start a conversation with someone; next to the text input there's the + button for images and emoji. Or here's a video of the effect in another app; after he taps the bar button you see it rotate to an X: http://www.youtube.com/watch?v=S8JW7euuNMo.
I have found out how to do the effect, but only on a UIImageView: I have to turn off all the autoresizing, set the view mode to centered, and then apply the rotation transform. I have tried many ways of getting it to work in a bar item, and so far the best approach is to create an image view instance, set it up with the centered view mode and autoresizing off, and then use that image view as the custom view of the bar item. But when I do this, the effect works except that while it is animating, the image shifts off to the side a little bit instead of staying where it is. I've tried getting the center before the animation and setting it during the animation, but that doesn't do anything.
So the answer to this is: create an instance of the image view, set it up with no resizing and the view mode set to centered, then add the image view to a UIButton of custom type, and use the button as the custom view for the bar item.
// DEGREES_TO_RADIANS is assumed to be defined elsewhere in the project, e.g.:
// #define DEGREES_TO_RADIANS(x) ((x) * M_PI / 180.0)
- (IBAction)animate {
[UIView animateWithDuration:0.5 delay:0.0 options:UIViewAnimationOptionCurveLinear animations:^{
imageView.transform = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(45));
} completion:^(BOOL finished) {
imageView.transform = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(0));
if ([imageView.image isEqual:[UIImage imageNamed:@"Add.png"]]) {
imageView.image = [UIImage imageNamed:@"Close.png"];
}
else imageView.image = [UIImage imageNamed:@"Add.png"];
}];
}
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"Add.png"]];
imageView.autoresizingMask = UIViewAutoresizingNone;
imageView.contentMode = UIViewContentModeCenter;
UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
button.frame = CGRectMake(0, 0, 40, 40);
[button addSubview:imageView];
[button addTarget:self action:@selector(animate) forControlEvents:UIControlEventTouchUpInside];
imageView.center = button.center;
barItem = [[UIBarButtonItem alloc] initWithCustomView:button];
navItem.rightBarButtonItem = barItem;
}
I recently had to do the same thing in Swift. I created a tutorial that includes starter and final projects and goes step by step with some tips sprinkled in. The code looks like this:
@IBOutlet weak var rightBarButton: UIBarButtonItem! {
didSet {
let icon = UIImage(named: "star")
let iconSize = CGRect(origin: CGPointZero, size: icon!.size)
let iconButton = UIButton(frame: iconSize)
iconButton.setBackgroundImage(icon, forState: .Normal)
rightBarButton.customView = iconButton
rightBarButton.customView!.transform = CGAffineTransformMakeScale(0, 0)
UIView.animateWithDuration(1.0,
delay: 0.5,
usingSpringWithDamping: 0.5,
initialSpringVelocity: 10,
options: .CurveLinear,
animations: {
self.rightBarButton.customView!.transform = CGAffineTransformIdentity
},
completion: nil
)
iconButton.addTarget(self, action: "tappedRightButton", forControlEvents: .TouchUpInside)
}
}
func tappedRightButton(){
rightBarButton.customView!.transform = CGAffineTransformMakeRotation(CGFloat(M_PI * 6/5))
UIView.animateWithDuration(1.0) {
self.rightBarButton.customView!.transform = CGAffineTransformIdentity
}
}
I wanted to keep the expanded tapping size that the native UIBarButtonItem view provides (such as -initWithBarButtonSystemItem:target:action: versus -initWithCustomView:).
Here's a basic implementation of my code.
- (void)setup {
self.navigationItem.rightBarButtonItem = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(navigationBarRightAction)];
}
- (void)navigationBarRightAction {
UIView *itemView = [self.navigationItem.rightBarButtonItem performSelector:@selector(view)];
UIImageView *imageView = [itemView.subviews firstObject];
if (self.shouldRotate) {
imageView.contentMode = UIViewContentModeCenter;
imageView.autoresizingMask = UIViewAutoresizingNone;
imageView.clipsToBounds = NO;
imageView.transform = CGAffineTransformMakeRotation(M_PI_4);
} else {
imageView.transform = CGAffineTransformIdentity;
}
}
You don't have to use a button as the custom view; in fact it works with less code using a UIImageView and adding a UITapGestureRecognizer.
I hope my solution below helps someone, because I struggled with this for a long time before I got the bar button item to receive taps and work with all the features I wanted. In my case, I made an "alert bell" bar button item that jingles when there are notifications and then segues to a new table view controller when tapped.
This was my solution (Swift 5):
@IBOutlet weak var notifyBell: UIBarButtonItem!
func updateNumNotesAndAnimateBell(_ numNotes: Int) {
guard let image = UIImage(named: "alertBellFill_\(numNotes)") else { return }
let imageView = UIImageView(image: image)
notifyBell.customView = imageView
notifyBell.customView?.contentMode = .center
let tap = UITapGestureRecognizer(target: self, action: #selector(notifyBellPressed))
notifyBell.customView?.addGestureRecognizer(tap)
let scaleTransformA = CGAffineTransform(scaleX: 0.8, y: 0.8)
let rotateTransformA = CGAffineTransform(rotationAngle: 0.0)
let hybridTransformA = scaleTransformA.concatenating(rotateTransformA)
let rotateTransformB = CGAffineTransform(rotationAngle: -1*CGFloat.pi*20.0/180.0)
let hybridTransformB = scaleTransformA.concatenating(rotateTransformB)
notifyBell.customView?.transform = hybridTransformA
UIView.animate(withDuration: 3,
delay: 1,
usingSpringWithDamping: 0.1,
initialSpringVelocity: 10,
options: [.allowUserInteraction, .curveEaseInOut],
animations: {
self.notifyBell.customView?.transform = numNotes > 0 ? hybridTransformB : scaleTransformA
},
completion: nil
)
}
@objc func notifyBellPressed(_ sender: UIBarButtonItem) {
performSegue(withIdentifier: "goToNotificationsTVC", sender: self)
}
Key discoveries for me were that:
-- .allowUserInteraction must be included in the animate options, otherwise the UIBarButtonItem won't be active until the animation completes.
-- You will likely have to declare YourBarButtonItem.customView?.contentMode = .center when using CGAffineTransform(rotationAngle: ) or else it will distort your image when it tries to rotate.
-- The code above includes a scale animation and rotate animation that is different depending on how many notifications I have. With zero notifications, the image is an empty bell, else, it displays the number of notifications in the bell image. I probably could've done this with an updating label, but I had already gone the route of making separate PNGs for each so this worked nicely.

Creating retina screenshot programmatically resulting in non retina image

I am trying to take a retina screenshot programmatically and I have tried every approach found online, but I was not able to get the screenshot to be retina.
I understand the following private API:
UIGetScreenImage();
cannot be used as Apple will reject your app. However, this method returns exactly what I need (640x960 screenshot of the screen).
I have tried this method on my iPhone 4 as well as the iPhone 4 simulator on retina hardware, but the resulting image is always 320x480.
-(UIImage *)captureView
{
AppDelegate *appdelegate = [[UIApplication sharedApplication]delegate];
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
UIGraphicsBeginImageContextWithOptions(appdelegate.window.bounds.size, NO, 0.0);
else
UIGraphicsBeginImageContext(appdelegate.window.bounds.size);
[appdelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSLog(#"SIZE: %#", NSStringFromCGSize(image.size));
NSLog(#"scale: %f", [UIScreen mainScreen].scale);
return image;
}
I have also tried the Apple recommended way:
- (UIImage*)screenshot
{
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
{
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [window center].x, [window center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [window transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
-[window bounds].size.width * [[window layer] anchorPoint].x,
-[window bounds].size.height * [[window layer] anchorPoint].y);
// Render the layer hierarchy to the current context
[[window layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);
}
}
// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSLog(#"Size: %#", NSStringFromCGSize(image.size));
return image;
}
But it also returns a non retina image:
2012-12-23 19:57:45.205 PostCard[3351:707] size: {320, 480}
Is there something obvious I'm missing? How come these methods that are supposed to take a retina screenshot return non-retina screenshots?
Thanks in advance!
I don't see anything wrong in your code. Apart from image.size, have you tried logging image.scale? Is it 1 or 2? If it's 2, it is actually a retina image.
UIImage.scale represents the scale of the image. So an image with UIImage.size being 320×480 and UIImage.scale being 2 has an actual size of 640×960. From Apple's doc:
If you multiply the logical size of the image (stored in the size property) by the value in this property, you get the dimensions of the image in pixels.
It's the same idea as when you load an image into a UIImage with the @2x modifier. For example:
a.png (100×80) => size=100×80 scale=1
b@2x.png (200×160) => size=100×80 scale=2
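The same arithmetic in a quick Swift sketch (the asset name here is hypothetical):
// Pixel dimensions = logical size (in points) × scale
if let image = UIImage(named: "b") {                    // loads b@2x.png on a retina device
    let pixelWidth = image.size.width * image.scale     // 100 × 2 = 200
    let pixelHeight = image.size.height * image.scale   // 80 × 2 = 160
    print("pixels: \(pixelWidth) x \(pixelHeight), scale: \(image.scale)")
}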

Capturing Screen of QLPreviewController causing problems in iOS 6

On Apple's Technical Notes site, they posted code for screen capture. Everything works, except it does not capture files that are being previewed with a QLPreviewController under iOS 6.
I'm wondering if they are using some new OpenGL ES rendering for the file preview so that it cannot be captured? In theory, this piece of code should be able to capture anything on screen, right?
From http://developer.apple.com/library/ios/#qa/qa1703/_index.html
- (UIImage*)screenshot
{
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
{
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [window center].x, [window center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [window transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
-[window bounds].size.width * [[window layer] anchorPoint].x,
-[window bounds].size.height * [[window layer] anchorPoint].y);
// Render the layer hierarchy to the current context
[[window layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);
}
}
// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
This doesn't sound similar, but it actually fails for the same reason that this doesn't work: iOS 6 UIGestures (Tap) stops working with QLPreviewController.
Since iOS 6 the QLPreviewController is actually a completely separate app (separate process and everything)
=> so when you push it, your whole app moves to the background, including its window.
=> the QLPreviewController is never really part of your window.

iOS Capture "screenshot" of camera controller

In my app I display the camera, and I am taking screenshots of certain parts using UIGetScreenImage (I tried UIGraphicsGetImageFromCurrentImageContext and it works great for screenshots of almost any part of my app, but for the camera view it just returns a blank white image). Anyway, I fear Apple will reject my app because of UIGetScreenImage. How can I take a "screenshot" of a 50px by 50px box in the upper left corner of the camera view without using this method? I searched and all I could find was "AVCaptureSession", and I couldn't find much about what it does or whether it's even what I'm looking for. Any insight? :) Thanks guys!!!
It doesn't get much clearer than Apple's docs on how to capture the camera's view. Yes, this does involve the class AVCaptureSession.
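To give a flavour of what that involves, here is a rough, hedged sketch of an AVCaptureSession pipeline that delivers camera frames you can then crop yourself (the class and method names are mine; see Apple's docs for the full setup, including permission handling):
import AVFoundation
import CoreImage
import UIKit

// Sketch only: the supported way to get camera pixels without UIGetScreenImage.
// Cropping each delivered frame down to the 50×50 region is left to the caller.
final class CameraFrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()

    func start() throws {
        guard let device = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        if session.canAddInput(input) { session.addInput(input) }
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        if session.canAddOutput(output) { session.addOutput(output) }
        // In production, start the session off the main thread.
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Each frame arrives here as a CMSampleBuffer; convert it to a CIImage
        // and crop the region you need (e.g. a 50×50 box in the corner).
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let frame = CIImage(cvPixelBuffer: pixelBuffer)
        _ = frame // process / crop as needed
    }
}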
If you actually need a screenshot of the interface, you should take a look at the docs for that. Cut-and-paste code from the link (if this does not work, you should submit a bug report to Apple):
Update: It appears that this approach is no longer supported on newer versions of iOS. The second link is now broken as well.
- (UIImage*)screenshot
{
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
{
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [window center].x, [window center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [window transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
-[window bounds].size.width * [[window layer] anchorPoint].x,
-[window bounds].size.height * [[window layer] anchorPoint].y);
// Render the layer hierarchy to the current context
[[window layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);
}
}
// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Since iOS 7 you can use:
drawViewHierarchyInRect
UIImage *image;
UIGraphicsBeginImageContext(self.view.frame.size);
[self.view drawViewHierarchyInRect:self.view.frame afterScreenUpdates:YES];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

UIButton image rotation issue

I have a [UIButton buttonWithType:UIButtonTypeCustom] that has an image (or a background image - same problem) created by [UIImage imageWithContentsOfFile:] pointing to a JPG file taken by the camera and saved in the documents folder by the application.
If I define the image for UIControlStateNormal only, then when I touch the button the image gets darker as expected, but it also rotates either 90 degrees or 180 degrees. When I remove my finger it returns to normal.
This does not happen if I use the same image for UIControlStateHighlighted, but then I lose the touch indication (darker image).
This only happens with an image read from a file. It does not happen with [UIImage imageNamed:].
I tried saving the file in PNG format rather than as JPG. In this case the image shows up in the wrong orientation to begin with, and is not rotated again when touched. This is not a good solution anyhow because the PNG is far too large and slow to handle.
Is this a bug or am I doing something wrong?
I was not able to find a proper solution to this and I needed a quick workaround. Below is a function which, given a UIImage, returns a new image which is darkened with a dark alpha fill. The context fill commands could be replaced with other draw or fill routines to provide different types of darkening.
This is un-optimized and was made with minimal knowledge of the graphics api.
You can use this function to set the UIControlStateHighlighted state image so that at least it will be darker.
+ (UIImage *)darkenedImageWithImage:(UIImage *)sourceImage
{
UIImage * darkenedImage = nil;
if (sourceImage)
{
// drawing prep
CGImageRef source = sourceImage.CGImage;
CGRect drawRect = CGRectMake(0.f,
0.f,
sourceImage.size.width,
sourceImage.size.height);
CGContextRef context = CGBitmapContextCreate(NULL,
drawRect.size.width,
drawRect.size.height,
CGImageGetBitsPerComponent(source),
CGImageGetBytesPerRow(source),
CGImageGetColorSpace(source),
CGImageGetBitmapInfo(source)
);
// draw given image and then darken fill it
CGContextDrawImage(context, drawRect, source);
CGContextSetBlendMode(context, kCGBlendModeOverlay);
CGContextSetRGBFillColor(context, 0.f, 0.f, 0.f, 0.5f);
CGContextFillRect(context, drawRect);
// get context result
CGImageRef darkened = CGBitmapContextCreateImage(context);
CGContextRelease(context);
// convert to UIImage and preserve original orientation
darkenedImage = [UIImage imageWithCGImage:darkened
scale:1.f
orientation:sourceImage.imageOrientation];
CGImageRelease(darkened);
}
return darkenedImage;
}
To fix this you need an additional normalization function like this:
public extension UIImage {
func normalizedImage() -> UIImage! {
if self.imageOrientation == .Up {
return self
}
UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
self.drawInRect(CGRectMake(0, 0, self.size.width, self.size.height))
let normalized = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return normalized
}
}
Then you can use it like this:
self.photoButton.sd_setImageWithURL(avatarURL,
forState: .Normal,
placeholderImage: UIImage(named: "user_avatar_placeholder")) {
[weak self] (image, error, cacheType, url) in
guard let strongSelf = self else {
return
}
strongSelf.photoButton.setImage(image.normalizedImage(), forState: .Normal)
}