Image shows up blurred in UIImageView - Objective-C

I am creating an app where a user can load their schedule into the app, and it is subsequently displayed.
When the image is being edited before loading, it shows up perfectly. Once the user "chooses" the image, though, it shows up in the UIImageView blurred; the blur is obvious when slightly zoomed in.
I know that the image resolution is okay because the image displays perfectly beforehand. How can I stop it from being blurred?
I am using the basic method of zooming a UIImageView in a UIScrollView.
Here is the code I use to assign the image. zoomscroll is the UIScrollView and myschedule is the UIImageView:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    [self dismissModalViewControllerAnimated:YES];
    //Obtaining saving path
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *imagePath = [documentsDirectory stringByAppendingPathComponent:@"myschedule.png"];
    //Extracting image from the picker and saving it
    NSString *mediaType = [info objectForKey:UIImagePickerControllerMediaType];
    if ([mediaType isEqualToString:@"public.image"]) {
        UIImage *editedImage = [info objectForKey:UIImagePickerControllerEditedImage];
        NSData *webData = UIImagePNGRepresentation(editedImage);
        [webData writeToFile:imagePath atomically:YES];
        myschedule.image = editedImage;
        [zoomscroll addSubview:myschedule];
        zoomscroll.contentSize = CGSizeMake(myschedule.frame.size.width, myschedule.frame.size.height);
        zoomscroll.minimumZoomScale = 1.0;
        zoomscroll.maximumZoomScale = 4.0;
    }
}
Thank you!

iOS doesn't redraw views when it zooms them; it just scales the view up or down. The underlying implementation is basically a textured OpenGL polygon, which is why zooming is so fast. Regenerating the texture at a higher resolution is slow, so iOS doesn't do that unless you explicitly tell it to.
There are various ways you can fix this. The simplest is probably to set the contentSize of your scroll view to the actual size of the image and then zoom out initially, so that instead of zooming a small version of the image up to 400% (which results in blurring), the user is zooming back in from 25% up to 100%. Something like this:
myschedule.image = editedImage;
myschedule.frame = CGRectMake(0, 0, editedImage.size.width, editedImage.size.height);
[zoomscroll addSubview:myschedule];
zoomscroll.contentSize = editedImage.size;
zoomscroll.minimumZoomScale = 0.25;
zoomscroll.maximumZoomScale = 1.0;
zoomscroll.zoomScale = zoomscroll.minimumZoomScale;
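
Note that for pinch zooming to work at all, the scroll view also needs a delegate that returns the view to zoom. A minimal sketch, assuming your view controller is zoomscroll's delegate:
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView {
    return myschedule; // the UIImageView the scroll view should scale
}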

Related

How to make a perfect crop (without changing the quality) in Objective-C/Cocoa (OS X)

Is there any way in Objective-C/Cocoa (OS X) to crop an image without changing the quality of the image?
I am very near a solution, but there are still some differences I can detect in the color, noticeable when zooming into the text. Here is the code I am currently using:
NSImage *target = [[[NSImage alloc] initWithSize:panelRect.size] autorelease];
target.backgroundColor = [NSColor greenColor];
//start drawing on target
[target lockFocus];
[NSGraphicsContext saveGraphicsState];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationNone];
[[NSGraphicsContext currentContext] setShouldAntialias:NO];
//draw the portion of the source image on target image
[source drawInRect:NSMakeRect(0, 0, panelRect.size.width, panelRect.size.height)
          fromRect:NSMakeRect(panelRect.origin.x, source.size.height - panelRect.origin.y - panelRect.size.height, panelRect.size.width, panelRect.size.height)
         operation:NSCompositeCopy
          fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
//end drawing
[target unlockFocus];
//create an NSBitmapImageRep
NSBitmapImageRep *bmpImageRep = [[[NSBitmapImageRep alloc] initWithData:[target TIFFRepresentation]] autorelease];
//add the NSBitmapImageRep to the representation list of the target
[target addRepresentation:bmpImageRep];
//get the data from the representation
NSData *data = [bmpImageRep representationUsingType:NSJPEGFileType
                                         properties:imageProps];
NSString *filename = [NSString stringWithFormat:@"%@%@.jpg", panelImagePrefix, panelNumber];
NSLog(@"This is the filename: %@", filename);
//write the data to a file
[data writeToFile:filename atomically:NO];
Here is a zoomed-in comparison of the original and the cropped image (two screenshots: the original first, the cropped version second).
The difference is hard to see, but if you flick between them, you can notice it. You can use a colour picker to notice the difference as well. For example, the darkest pixel on the bottom row of the image is a different shade.
I also have a solution that works exactly the way I want it in iOS. Here is the code:
- (void)testMethod:(int)page forRect:(CGRect)rect {
    NSString *filePath = @"imageName";
    NSData *data = [HeavyResourceManager dataForPath:filePath]; //this just gets the image as NSData
    UIImage *image = [UIImage imageWithData:data];
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect); //crop to the rect
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:0 orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectoryPath = [paths objectAtIndex:0];
    [UIImageJPEGRepresentation(result, 1.0) writeToFile:[documentsDirectoryPath stringByAppendingPathComponent:@"output.jpg"] atomically:YES];
}
So, is there a way to crop an image in OSX so that the cropped image does not change at all? Perhaps I have to look into a different library, but I would be surprised if I could not do this with Objective-C...
Note: this is a follow-up to my previous question here.
Update: I have tried (as per the suggestion) rounding the CGRect values to whole numbers, but did not notice a difference. Here is the code I used:
[source drawInRect:NSMakeRect(0, 0, (int)panelRect.size.width, (int)panelRect.size.height)
          fromRect:NSMakeRect((int)panelRect.origin.x, (int)(source.size.height - panelRect.origin.y - panelRect.size.height), (int)panelRect.size.width, (int)panelRect.size.height)
         operation:NSCompositeCopy
          fraction:1.0];
Update: I have tried mazzaroth's code and it works if I save as PNG, but if I try to save as JPEG the image loses quality. So close, but not close enough. Still hoping for a complete answer...
Use CGImageCreateWithImageInRect.
// this chunk of code loads a jpeg image into a cgimage
// creates a second crop of the original image with CGImageCreateWithImageInRect
// writes the new cropped image to the desktop
// ensure that the xy origin of the CGRectMake call is smaller than the width or height of the original image
NSURL *originalImage = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"lockwood" ofType:@"jpg"]];
CGImageRef imageRef = NULL;
CGImageSourceRef loadRef = CGImageSourceCreateWithURL((CFURLRef)originalImage, NULL);
if (loadRef != NULL)
{
    imageRef = CGImageSourceCreateImageAtIndex(loadRef, 0, NULL);
    CFRelease(loadRef); // Release CGImageSource reference
}
CGImageRef croppedImage = CGImageCreateWithImageInRect(imageRef, CGRectMake(200., 200., 100., 100.));
CFURLRef saveUrl = (CFURLRef)[NSURL fileURLWithPath:[@"~/Desktop/lockwood-crop.jpg" stringByExpandingTildeInPath]];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(saveUrl, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(destination, croppedImage, nil);
if (!CGImageDestinationFinalize(destination)) {
    NSLog(@"Failed to write image to %@", saveUrl);
}
CFRelease(destination);
CFRelease(imageRef);
CFRelease(croppedImage);
I also made a gist:
https://gist.github.com/4259594
Try changing the drawInRect origin to 0.5, 0.5. Otherwise Quartz will distribute each pixel's color across the four adjacent pixels.
Set the color space of the target image. You might have a different color space, causing it to look slightly different.
Try the various rendering intents and see which gives the best result: perceptual versus relative colorimetric, etc. There are four options, I think.
You mention that the colors get modified when saving as JPEG versus PNG.
You can specify the compression level when saving to JPEG. Try something like 0.8 or 0.9. You can also save JPEG without compression at 1.0, but then PNG has a distinct advantage. You specify the compression level in the options dictionary for CGImageDestinationAddImage (see the sketch below).
Finally, if nothing here helps, you should open a TSI with DTS; they can certainly provide you with the guidance you seek.
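
A minimal sketch of that compression suggestion, assuming the same destination and croppedImage variables as the CGImageDestination answer above:
//options dictionary carrying the JPEG quality (0.0 to 1.0)
NSDictionary *jpegOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.9f]
                                                        forKey:(id)kCGImageDestinationLossyCompressionQuality];
//pass the options instead of nil when adding the image
CGImageDestinationAddImage(destination, croppedImage, (CFDictionaryRef)jpegOptions);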
The usual problem is that cropping sizes are floats, but image pixels are integers.
Cocoa interpolates automatically.
You need to floor, round, or ceil the sizes and coordinates to be sure they are integers.
This may help.
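
For example, one way to snap the crop rect to whole pixels before drawing (a sketch, assuming panelRect is the crop rect from the question; NSIntegralRect rounds a rect outward to integer coordinates):
NSRect alignedRect = NSIntegralRect(panelRect); //integer origin and size
//then use alignedRect everywhere panelRect was used in the drawInRect: call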
I am doing EXIF deletion from JPG files and I think I found the reason:
All losses and changes come from re-compressing your image when saving it to a file.
You may notice the change even if you just save the whole image again.
What I do is read the original JPG and re-compress it to a quality that gives an equivalent file size.

How to load many photos from a URL in the background (asynchronously)

I have this method that I use to load many images into a scroll view, but when the images load from the URL my scroll view is stuck, and I can't understand how to load them in the background so the user will not feel it.
This method is called a few times (8) in a for loop.
- (void)loadPhotosToLeftscroll {
    //getting image information from JSON
    NSMutableDictionary *photoDict;
    photoDict = [leftPhotoArray lastObject];
    //getting the photo
    NSString *photoPath = [photoDict objectForKey:@"photos_path"];
    NSLog(@"photo Path: %@", photoPath);
    NSData *imageData = [[NSData alloc] initWithContentsOfURL:[NSURL URLWithString:photoPath]];
    UIImage *image = [UIImage imageWithData:imageData];
    // use the image how you like, say, as your button background
    //calculating the height of the next photo
    UIImageView *leftImage = [leftBlockScroll.subviews lastObject];
    //allocating photoView
    UIImageView *photoView = [[UIImageView alloc] initWithFrame:CGRectMake(5, leftImage.frame.origin.y + leftImage.frame.size.height + 5, image.size.width / 2, image.size.height / 2)];
    photoView.userInteractionEnabled = YES;
    [photoView.layer setMasksToBounds:YES];
    [photoView.layer setCornerRadius:3];
    //getting items list
    NSDictionary *sh_items = [photoDict objectForKey:@"items"];
    //adding image button
    UIButton *imageOverButton = [UIButton buttonWithType:UIButtonTypeCustom];
    imageOverButton.frame = CGRectMake(0, 0, photoView.frame.size.width, photoView.frame.size.height);
    [imageOverButton addTarget:self action:@selector(LeftimagePressed:) forControlEvents:UIControlEventTouchUpInside];
    [imageOverButton setTag:[leftPhotoArray count] - 1];
    [photoView addSubview:imageOverButton];
    //adding sh button to imageView
    [self addThe_sh_signs:sh_items To_ImageView:photoView];
    //subviewing the image into the scrollView
    [self insert_Image:image toImageView:photoView in_Scroll:leftBlockScroll];
    //calculating the position of the next imageView in the scroll
    nextLeftPhotoHight = photoView.frame.size.height + photoView.frame.origin.y + 5;
    //calculating the height of the taller of the two scroll views
    leftBlockScroll.contentSize = CGSizeMake(160, [self theSizeOfScrollViewHight]);
    rightBlocScroll.contentSize = CGSizeMake(160, [self theSizeOfScrollViewHight]);
    isLoadindContant = NO;
    [self.view reloadInputViews];
    [leftBlockScroll reloadInputViews];
}
Please do not just send me to some link that tries to explain how to use asynchronous loading.
Try to explain in terms of the method you see here.
I'm here for any questions you need to ask to help me.
You will have to do it asynchronously in a proper way; I do not think there is any way around that. I subclassed UIImageView and placed many instances of it within the cells of a table (in your case, within the scroll view). The subclass objects are initialized with a URL and load their image asynchronously (with some caching so the image is not loaded every time).
This tutorial helped me a lot in the beginning:
http://www.markj.net/iphone-asynchronous-table-image/
You will just have to adapt it to your scroll view. The underlying principle remains the same.
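
As a minimal sketch of the pattern (not the tutorial's code; it reuses the photoPath and photoView names from your method): fetch the data on a background queue, then touch UIKit only on the main thread.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    //blocking network fetch moved off the main thread
    NSData *imageData = [NSData dataWithContentsOfURL:[NSURL URLWithString:photoPath]];
    UIImage *image = [UIImage imageWithData:imageData];
    dispatch_async(dispatch_get_main_queue(), ^{
        //UIKit work stays on the main thread
        photoView.image = image;
    });
});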

Using in-memory UIWebView to generate PDF in PhoneGap

I'm trying to work out how to do this.
NOTE: I'm not an experienced Objective-C developer (hence why I'm using PhoneGap in the first place).
The short of it: my UIWebView (no, not the PhoneGap one that renders the web app, but a second UIWebView created in memory and not visible) is not rendering into the PDF. I just get a blank PDF. I'll post some of my thinking and code, and hopefully someone will know what I'm doing wrong.
My starting place is that there is already a print plugin for PhoneGap here:
https://github.com/phonegap/phonegap-plugins/tree/master/iPhone/PrintPlugin
This plugin creates a UIWebView on the fly; you pass some HTML to it via JavaScript, and then it calls a print controller to do the printing.
So I borrowed some ideas from that. Then I noticed this awesome blog post on generating PDFs:
http://www.ioslearner.com/convert-html-uiwebview-pdf-iphone-ipad/
So I'm trying to combine the two into my own PhoneGap plugin for taking some HTML (from my webapp) and generating a PDF on-the-fly.
HEADER:
#import <Foundation/Foundation.h>
#import <QuartzCore/QuartzCore.h>
#ifdef PHONEGAP_FRAMEWORK
#import <PhoneGap/PGPlugin.h>
#else
#import "PGPlugin.h"
#endif
@interface ExportPlugin : PGPlugin <UIWebViewDelegate> {
    NSString *exportHTML;
}
@property (nonatomic, copy) NSString *exportHTML;
//This gets called from my HTML5 app (JavaScript):
- (void)exportPdf:(NSMutableArray *)arguments withDict:(NSMutableDictionary *)options;
@end
MAIN:
#import "ExportPlugin.h"
@interface ExportPlugin (Private)
- (void)doExport;
- (void)drawPdf;
@end

@implementation ExportPlugin
@synthesize exportHTML;

- (void)exportPdf:(NSMutableArray *)arguments withDict:(NSMutableDictionary *)options {
    NSUInteger argc = [arguments count];
    if (argc < 1) {
        return;
    }
    self.exportHTML = [arguments objectAtIndex:0];
    [self doExport];
}

int imageName = 0;
double webViewHeight = 0.0;

- (void)doExport {
    //Set the base URL to be the www directory.
    NSString *dbFilePath = [[NSBundle mainBundle] pathForResource:@"www" ofType:nil];
    NSURL *baseURL = [NSURL fileURLWithPath:dbFilePath];
    //Load custom html into a webview
    UIWebView *webViewExport = [[UIWebView alloc] init];
    webViewExport.delegate = self;
    //[webViewExport loadHTMLString:exportHTML baseURL:baseURL];
    [webViewExport loadHTMLString:@"<html><body><h1>testing</h1></body></html>" baseURL:baseURL];
}

- (BOOL)webView:(UIWebView *)theWebView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType
{
    return YES;
}

- (void)webViewDidFinishLoad:(UIWebView *)webViewExport
{
    webViewHeight = [[webViewExport stringByEvaluatingJavaScriptFromString:@"document.body.scrollHeight;"] integerValue];
    CGRect screenRect = webViewExport.frame;
    //WHY DO I HAVE TO SET THE SIZE? OTHERWISE IT IS 0
    screenRect.size.width = 768;
    screenRect.size.height = 1024;
    double currentWebViewHeight = webViewHeight;
    while (currentWebViewHeight > 0)
    {
        imageName++;
        UIGraphicsBeginImageContext(screenRect.size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        //[[UIColor blackColor] set];
        //CGContextFillRect(ctx, screenRect);
        [webViewExport.layer renderInContext:ctx];
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        NSString *pngPath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%d.png", imageName]];
        if (currentWebViewHeight < 960)
        {
            CGRect lastImageRect = CGRectMake(0, 960 - currentWebViewHeight, webViewExport.frame.size.width, currentWebViewHeight);
            CGImageRef imageRef = CGImageCreateWithImageInRect([newImage CGImage], lastImageRect);
            newImage = [UIImage imageWithCGImage:imageRef];
            CGImageRelease(imageRef);
        }
        [UIImagePNGRepresentation(newImage) writeToFile:pngPath atomically:YES];
        [webViewExport stringByEvaluatingJavaScriptFromString:@"window.scrollBy(0,960);"];
        currentWebViewHeight -= 960;
    }
    [self drawPdf];
}

- (void)drawPdf
{
    CGSize pageSize = CGSizeMake(612, webViewHeight);
    NSString *fileName = @"Demo.pdf";
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *pdfFileName = [documentsDirectory stringByAppendingPathComponent:fileName];
    UIGraphicsBeginPDFContextToFile(pdfFileName, CGRectZero, nil);
    // Mark the beginning of a new page.
    UIGraphicsBeginPDFPageWithInfo(CGRectMake(0, 0, pageSize.width, pageSize.height), nil);
    double currentHeight = 0.0;
    for (int index = 1; index <= imageName; index++)
    {
        NSString *pngPath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%d.png", index]];
        UIImage *pngImage = [UIImage imageWithContentsOfFile:pngPath];
        [pngImage drawInRect:CGRectMake(0, currentHeight, pageSize.width, pngImage.size.height)];
        currentHeight += pngImage.size.height;
    }
    UIGraphicsEndPDFContext();
}

@end
The first indication that something is not right is that, above, I have to set the UIWebView frame size:
screenRect.size.width = 768;
screenRect.size.height = 1024;
But why? The PhoneGap PrintPlugin doesn't have to do this. If I don't set it, the size is 0, and then I get lots of context errors.
And then the next problem is that the UIWebView is not rendering anything. A symptom of the first problem perhaps?
How do I go about debugging this and working out what the problem is?
UPDATE
I'm pretty sure that it may be impossible to render the UIWebView layer into the image context, unless that UIWebView is actually visible.
I'm not sure how the PhoneGap PrintPlugin works, then. It seems to render its UIWebView quite fine without it being visible.
I'm currently experimenting with rendering the actual PhoneGap UIWebView into the PDF (as opposed to my own UIWebView). But this is not ideal.
It means I have to hide all toolbars and whatnot, and then pan the UIWebView around so I capture everything outside the viewport. This is not ideal, because the user will visually see this occurring!
Point 1 above doesn't seem to work anyway, because the iPad is too slow to update the screen when you dynamically fiddle with the layout. On iPad, if you do visual things very quickly (like panning the screen around), it is too slow and just won't show them; you end up only seeing the end state. So when I take the screenshots, the screen visually hasn't panned, even though the DOM says it has. (Hope that makes sense.)
Agh, frustrating.
I've got a working solution now, but it's not ideal.
What I do is render the PhoneGap UIWebView into the PDF.
Doing this is quite tricky. I have a couple of Objective-C methods
- (void) takeScreenshot;
- (void) renderPdf;
that I call from Javascript.
Then I have to write a recursive JS algorithm that pans the screen in every direction whilst calling takeScreenshot.
In between calls to takeScreenshot I use setTimeout which gives a 20 millisecond break in the JS processing - enough time for the iPad to update the screen so the next screenshot can be taken.
It was a royal pain in the arse. Bounty is still open in case someone knows of a better way of dealing with this - I would be very curious to know!
If you want to render a UIWebView into a PDF, I think you could go for this:
1/ Use the convertRect:fromView: method implemented by your UIWebView object to get the CGRect.
2/ See the UIPrintPageRenderer class reference to make something like a print preview.
3/ Use UIGraphicsGetCurrentContext to get the CGContextRef out of it.
4/ Create the PDF from the CGRect and CGContextRef (you can use the help provided in the Apple sample code ZoomingPDFViewer for building PDFs using CGPDF).
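
As a rough sketch of steps 3 and 4 (assuming webViewExport is the in-memory web view from the question), rendering a view's layer into a PDF context looks like this:
NSMutableData *pdfData = [NSMutableData data];
UIGraphicsBeginPDFContextToData(pdfData, webViewExport.bounds, nil);
UIGraphicsBeginPDFPage();
//draw the web view's layer into the current PDF page
[webViewExport.layer renderInContext:UIGraphicsGetCurrentContext()];
UIGraphicsEndPDFContext();
//pdfData now holds the PDF bytes, ready to write to disk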
Here is how to render a PDF in a UIWebView (webPage being your UIWebView); your delegate (here self) could implement the UIWebViewDelegate protocol:
- (void)loadingPDFwithURL:(NSURL *)anURL {
    CGRect appFrame = [[UIScreen mainScreen] applicationFrame];
    appFrame.origin.y = 0;
    self.webPage = [[UIWebView alloc] initWithFrame:appFrame];
    [self.webPage setScalesPageToFit:YES];
    [self.webPage setDelegate:self];
    NSURLRequest *requestObj = [NSURLRequest requestWithURL:anURL];
    [self.webPage loadRequest:requestObj];
    [self.view addSubview:self.webPage];
}
So you're giving the URL of the PDF (if it's in memory, just write the file in your application as the URL can be a filepath).
Another possibility is to dive into CGPDF, but that gets harder.

Why does capturing images with AVFoundation give me 480x640 images when the preset is 640x480?

I have some pretty basic code to capture a still image using AVFoundation.
AVCaptureDeviceInput *newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:nil];
AVCaptureStillImageOutput *newStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
                                AVVideoCodecJPEG, AVVideoCodecKey,
                                nil];
[newStillImageOutput setOutputSettings:outputSettings];
[outputSettings release];
AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
[newCaptureSession beginConfiguration];
newCaptureSession.sessionPreset = AVCaptureSessionPreset640x480;
[newCaptureSession commitConfiguration];
if ([newCaptureSession canAddInput:newVideoInput]) {
    [newCaptureSession addInput:newVideoInput];
}
if ([newCaptureSession canAddOutput:newStillImageOutput]) {
    [newCaptureSession addOutput:newStillImageOutput];
}
self.stillImageOutput = newStillImageOutput;
self.videoInput = newVideoInput;
self.captureSession = newCaptureSession;
[newStillImageOutput release];
[newVideoInput release];
[newCaptureSession release];
My method that captures the still image is also pretty simple and prints out the orientation which is AVCaptureVideoOrientationPortrait:
- (void)captureStillImage
{
    AVCaptureConnection *stillImageConnection = [AVCamUtilities connectionWithMediaType:AVMediaTypeVideo fromConnections:[[self stillImageOutput] connections]];
    if ([stillImageConnection isVideoOrientationSupported]) {
        NSLog(@"isVideoOrientationSupported - orientation = %d", orientation);
        [stillImageConnection setVideoOrientation:orientation];
    }
    [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                         completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        ALAssetsLibraryWriteImageCompletionBlock completionBlock = ^(NSURL *assetURL, NSError *error) {
            if (error) { /* HANDLE */ }
        };
        if (imageDataSampleBuffer != NULL) {
            CFDictionaryRef exifAttachments = CMGetAttachment(imageDataSampleBuffer, kCGImagePropertyExifDictionary, NULL);
            if (exifAttachments) {
                NSLog(@"attachments: %@", exifAttachments);
            } else {
                NSLog(@"no attachments");
            }
            self.stillImageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
            self.stillImage = [UIImage imageWithData:self.stillImageData];
            UIImageWriteToSavedPhotosAlbum(self.stillImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
        }
        else
            completionBlock(nil, error);
    }];
}
So the device understands it's in portrait mode, as it should be; the EXIF attachments show me:
PixelXDimension = 640;
PixelYDimension = 480;
so it seems to know that we're in 640x480, and that means WxH (obviously...).
However, when I email the photo to myself from Apple's Photos app, I get a 480x640 image if I check the properties in Preview. This didn't make any sense to me until I dug further into the image properties and found that the image orientation is set to "6 (Rotated 90 degrees CCW)", CCW being counter-clockwise.
So looking at the image in a browser:
http://tonyamoyal.com/stuff/things_that_make_you_go_hmm/photo.JPG
we see the image rotated 90 degrees CCW, and it is 640x480.
I'm really confused by this behavior. When I take a 640x480 still image using AVFoundation, I would expect the default to have no rotated orientation: a 640x480 image oriented exactly as my eye sees it in the preview layer. Can someone explain why this is happening and how to configure the capture so that when I save my image to the server, to later display in a web view, it is not rotated 90 degrees CCW?
This happens because the orientation set in the metadata of the new image is being affected by the orientation of the AV system that creates it. The layout of the actual image data is, of course, different from the orientation mentioned in your metadata. Some image viewing programs respect the metadata orientation, some ignore it.
You can affect the metadata orientation of the AV system by calling:
AVCaptureConnection *videoConnection = ...;
if ([videoConnection isVideoOrientationSupported])
[videoConnection setVideoOrientation:AVCaptureVideoOrientationSomething];
You can affect the metadata orientation of a UIImage by calling:
UIImage *rotatedImage = [[UIImage alloc] initWithCGImage:image.CGImage scale:1.0f orientation:UIImageOrientationSomething];
But the actual data from the AVCapture system will always appear with the wider dimension as X and the narrower dimension as Y, and will appear to be oriented in LandscapeLeft.
If you want the actual data to line up with what your metadata claims, you need to modify the actual data. You can do this by writing the image out to a new image using CGContexts and AffineTransforms. Or there is an easier workaround: use the UIImage+Resize package as discussed here, and resize the image to its current size by calling:
UIImage *rotatedImage = [image resizedImage:CGSizeMake(image.size.width, image.size.height) interpolationQuality:kCGInterpolationDefault];
This will rectify the data's orientation as a side effect.
If you don't want to include the whole UIImage+Resize thing, you can check out its code and strip out the parts where the data is transformed.
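
If you do strip it down, the core is just redrawing the image: UIKit's drawInRect: honors imageOrientation, so the redrawn copy's pixel data matches what the metadata claims. A minimal sketch:
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
//the redrawn copy has UIImageOrientationUp and matching pixel data
UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();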

Who changes the image's size? SDWebImage or contentMode

I use SDWebImage to set image like this:
[self.imageView sd_setImageWithURL:topicModel.large_image]
placeholderImage:nil
options:0
progress:^(NSInteger receivedSize, NSInteger expectedSize) {
} completed:^(UIImage *image,
NSError *error,
SDImageCacheType cacheType,
NSURL *imageURL) {
}];
After this, I set the imageView's contentMode like this:
if (topicModel.isBigPicture) {
    self.imageView.contentMode = UIViewContentModeTop;
    self.imageView.clipsToBounds = YES;
} else {
    self.imageView.contentMode = UIViewContentModeScaleAspectFill;
    self.imageView.clipsToBounds = NO;
}
The size of the imageView is (355, 200), and the image is bigger than the imageView in both width and height.
As we all know, if the topicModel is a big picture, the contentMode will be UIViewContentModeTop, which keeps the image the same size and only adjusts its position.
But I found a problem: in my case, the size of the image is converted to (300, XXX), where XXX seems to be calculated as (image.realHeight * 300.0 / image.realWidth), even though I never change the size of the image in my project.
Here are my questions:
What's wrong with my code?
How do I make the image's width the same as the imageView's width?
Finally, I found the answer.
The reason is that I use the newest version of SDWebImage.
Here is the solution :-)
If you want to get the original image, you must change the code in SDWebImage/SDWebImageDecoder.m like this:
CGContextRelease(context);
//add this line
UIImage *decompressedImage = [UIImage imageWithCGImage:decompressedImageRef];
//delete this line
UIImage *decompressedImage = [UIImage imageWithCGImage:decompressedImageRef scale:image.scale orientation:image.imageOrientation];
CGImageRelease(decompressedImageRef);
return decompressedImage;
}
If you want to know more details, you can see the link below:
SDWebImage ---- Images from web-url too small when using v. 3.7.4 and above
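
If you would rather not patch the library, a hedged alternative is to redraw the downloaded image to the imageView's width yourself, inside the completed block (a sketch; assumes image is non-nil):
CGFloat targetWidth = self.imageView.bounds.size.width;
CGFloat targetHeight = image.size.height * targetWidth / image.size.width;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(targetWidth, targetHeight), NO, 0);
[image drawInRect:CGRectMake(0, 0, targetWidth, targetHeight)];
//scaled now matches the imageView's width exactly
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.imageView.image = scaled;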