Problem
I'm posting images to a server, but when I pull them back to the device, they turn sideways.
My guess is that the EXIF data attached to the image is telling the server how to orient it.
How can I edit the EXIF data on a UIImage from the camera or library before it's sent? I've seen solutions that rotate the image when it's returned, but I want a solution for before it's even posted.
A clue to what might be happening: the imageOrientation is normal when the image is posted, but it always comes back from the server as Up. Does this mean the server is stripping the EXIF data needed to determine the orientation?
Thank you.
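For reference, a common fix (a sketch, not from the original post) is to redraw the image into a fresh context before uploading. That bakes the orientation into the pixel data itself, so it no longer matters whether the server preserves the EXIF orientation flag:

// Sketch: re-render the image so its pixels are physically upright.
// After this, the EXIF orientation flag is irrelevant even if the server strips it.
- (UIImage *)normalizedImage:(UIImage *)image {
    if (image.imageOrientation == UIImageOrientationUp) return image;
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}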
I have done a lot of searching over the past couple of weeks to meet a requirement from my client: annotating a PDF. I found lots of pointers, but none of them work the way we want, hence this question.
Requirement: I want to add notes to a PDF; these notes should be embedded in the PDF as annotations/notes that are viewable on desktops as well as iPads.
We are developing an app, and this is one of the biggest hurdles we are facing right now. Can someone please help us?
You'll have to convert the PDF to some format that can be drawn on, like an image file. There might be libraries for this; if not, you can also do a screen capture. Then do whatever you need for the drawing/typing, with text boxes or CGContext. Once you're done with that, define a CGRect that encloses your final image. Then call
CGContextRef ctx = CGPDFContextCreateWithURL((__bridge CFURLRef)[NSURL fileURLWithPath:tmpFile], &mediaBox, NULL);
Where mediaBox is the CGRect that encloses your image.
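Filling in the rest of that flow, a rough sketch (tmpFile and annotatedImage are hypothetical placeholders; note that this flattens your drawing into the page content rather than creating true PDF annotation objects):

// Sketch: write an annotated image out as a one-page PDF.
NSString *tmpFile = [NSTemporaryDirectory() stringByAppendingPathComponent:@"annotated.pdf"];
CGRect mediaBox = CGRectMake(0, 0, annotatedImage.size.width, annotatedImage.size.height);
CGContextRef ctx = CGPDFContextCreateWithURL((__bridge CFURLRef)[NSURL fileURLWithPath:tmpFile], &mediaBox, NULL);
CGPDFContextBeginPage(ctx, NULL);
CGContextDrawImage(ctx, mediaBox, annotatedImage.CGImage); // draw the flattened image to fill the page
CGPDFContextEndPage(ctx);
CGPDFContextClose(ctx);
CGContextRelease(ctx);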
Just so you know, SO is not a place where you post requirements and people will code them for you. You need to show what you've done and ask a specific question. I hope I've been able to point you in a helpful direction.
I'm working on an application that handles image editing, and I'm at the point where I'm trying to integrate Twitter. So far it has worked great: I can send a tweet from within the app and attach the image the user is editing. The drawback I've noticed is that the image gets auto-compressed, which means that if the PNG the user is editing has transparency, it will no longer have transparency. This isn't good. Is there a way around this? I would like to be able to send a tweet and attach my PNG image WITH transparency, and basically keep it from being converted to a JPG once sent.
Here's the code I have so far. Very self-explanatory and straightforward.
SLComposeViewController *tweetSheet = [SLComposeViewController composeViewControllerForServiceType:SLServiceTypeTwitter];
[tweetSheet addImage:self.workingImage];
[self presentViewController:tweetSheet animated:YES completion:nil];
self.workingImage is the image the user is working on.
EDIT: I've updated the above code to work on iOS 6 and seem to have the exact same problem (which isn't too surprising, I guess). It looks like once the image is on Twitter, it is in JPG format. Is there any way to keep it in PNG format?
I'd hate to lose all of this simple code only to go down the route of using a 3rd party image hosting site.
EDIT 2: I've now converted all of my code to no longer use the alpha channel. This means I no longer care whether the image is PNG or JPEG, because all three RGB channels will always exist. Posting a tweet still compresses the image, no matter what quality the original was.
I even posted an image to Twitter from the app, let Twitter compress it, saved the compressed image, and repeated the process with that, yet Twitter compressed it again!
I'm lost on this. Will Twitter (or even Facebook) compress images no matter what? Is my only option a third-party image hosting site? I'd hate to give up the nice social features built into the iOS 6 framework just to use a third-party site...
It's a problem on Twitter's side: it compresses your image regardless. Maybe you should consider uploading the .png to your own server and then posting a link to it within the tweet.
You can also use other image hosting services.
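A minimal sketch of that server route, assuming a hypothetical endpoint (https://example.com/upload is a placeholder, and so is the response handling):

// Sketch: upload the lossless PNG bytes to your own server, then tweet a link.
NSData *pngData = UIImagePNGRepresentation(self.workingImage);
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com/upload"]];
request.HTTPMethod = @"POST";
[request setValue:@"image/png" forHTTPHeaderField:@"Content-Type"];
request.HTTPBody = pngData;
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    // On success, put the returned image URL in the tweet text instead of attaching the image.
}];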
I am trying to resize a UIImage and keep the original EXIF metadata.
However, it looks like there is no straightforward method available right now.
I tried CGImageSourceCopyPropertiesAtIndex to get the original EXIF metadata, but I have no idea how to write it into the target image.
I've also noticed that the ALAssetsLibrary has related functions like writeVideoAtPathToSavedPhotosAlbum, but I wonder whether that would be helpful for my resizing requirement.
Any clue?
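One approach (a sketch, assuming you still have the original file's bytes as originalData and the resized image as resized) is to read the properties with ImageIO and hand them to a CGImageDestination when writing out the resized image:

#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

// Sketch: copy the original metadata onto the resized image via ImageIO.
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)originalData, NULL);
NSDictionary *metadata = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);

NSMutableData *outputData = [NSMutableData data];
CGImageDestinationRef destination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)outputData, kUTTypeJPEG, 1, NULL);
// Attach the resized pixels together with the original properties (EXIF, GPS, etc.).
// You may want to update the pixel-dimension keys to match the new size first.
CGImageDestinationAddImage(destination, resized.CGImage, (__bridge CFDictionaryRef)metadata);
CGImageDestinationFinalize(destination);
CFRelease(destination);
CFRelease(source);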
I need to take some images from the iPhone/iPad photo library from within my app, store them in a Core Data entity, and display them as small thumbnail images (48x48 pixels) in a UITableViewCell and at about 80x80 pixels in a detail UIView. I've followed the Recipes sample app, where they use UIImageJPEGRepresentation(value, 0.1) to convert to NSData and store the bytes inside Core Data, and it doesn't end up taking much space, which is good. But when I retrieve the data using UIImage *uiImage = [[UIImage alloc] initWithData:value]; and display it as a thumbnail with "Aspect Fit", it looks terrible and grainy. I tried changing the quality parameter of the JPEG compression, but even setting it to 0.9 doesn't help.
Is that normal? Is there a better way to compress the image that doesn't cause so much graininess? Since I just want to show a small thumbnail, and then a slightly bigger one, I feel Core Data would be great for storing this, since it should (theoretically) also support iCloud. But if it's going to look terrible, then I'll have to reconsider.
Two things: are you resizing the image to the right size? And have you tried UIImagePNGRepresentation()? That should compress it without losing quality.
If UIImagePNGRepresentation (which is lossless) is giving you bad images, then the problem is in your image-resizing code. Core Data is just giving you back what you put in, so if you get bad images out, it's because you put bad images in.
Is one of your iPhone/iPad devices retina and the other isn't? If so, perhaps the problem is that you don't really want 48x48-pixel images; you want 48x48-point images (which means you'll need 2x images, 96x96 pixels, for retina-quality display).
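Putting those two suggestions together, a sketch (originalImage is a placeholder for the library image):

// Sketch: render the thumbnail at the right point size for the screen scale,
// so a 48x48-point cell gets 96x96 pixels on a retina device.
CGSize pointSize = CGSizeMake(48, 48);
UIGraphicsBeginImageContextWithOptions(pointSize, NO, [UIScreen mainScreen].scale);
[originalImage drawInRect:CGRectMake(0, 0, pointSize.width, pointSize.height)];
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *thumbnailData = UIImagePNGRepresentation(thumbnail); // lossless encoding for Core Data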
I've tried some code from each of these questions:
How to make one color transparent on a UIImage?
How to mask a UIImage so that white becomes transparent on iphone?
but have been unsuccessful; unfortunately, working with Core Graphics and images is not my strong suit.
How would I go about accessing a UIImage's raw data and changing the white pixels to clear?
How would I go about accessing a UIImage's raw data …?
Look at the documentation.
You'll find that there is no way to get the raw data behind a UIImage. The closest you can get is a CGImage. That will let you get its data provider, which you can ask for a copy of the raw data.
The problem with that solution is that you need to handle every possible configuration (RGBA, ARGB, RGB_, _RGB, RGB, 8-bpc, 16-bpc, etc.) that CGImage supports. That's a lot of work. If you don't do it, then someday, you'll get surprised by an image that somehow doesn't work with your code, or by an OS upgrade changing how the CGImage gets created.
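To make that caveat concrete, here is roughly what the data-provider route looks like (a sketch; uiImage is a placeholder):

// Sketch: the closest you can get to "raw data" behind a UIImage.
CGImageRef cgImage = uiImage.CGImage;
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
CGBitmapInfo info = CGImageGetBitmapInfo(cgImage);
// You must interpret `bytes` according to `info`, bits per component, and
// bytesPerRow -- exactly the per-configuration work described above.
CFRelease(pixelData);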
The CGImageCreateWithMaskingColors function, suggested on one of the other questions you linked to, is the correct solution.
One thing that's tripping you up is that the values shown in the accepted answer on that question are generally bogus: they're out of range. The Quartz 2D Programming Guide has more details, in at least two places.
I'd also argue against including that answer's createMask: method, since it doesn't do what it says it does and is barely useful at all (it's only worth having if the source image may be CMYK, but how likely is that in an iPhone app?). Skip it and create the mask image from the UIImage's CGImage directly.
That answer will probably work just fine once you fix those two problems.
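For completeness, a sketch of that approach (the 230-255 ranges are an example for "near white"; note that the function requires a source image without an alpha channel, and the ranges must be 0-255 for an 8-bit-per-component image):

// Sketch: make near-white pixels transparent with CGImageCreateWithMaskingColors.
UIImage *MaskWhiteToClear(UIImage *image) {
    const CGFloat maskingColors[6] = {230, 255, 230, 255, 230, 255}; // R, G, B min/max
    CGImageRef masked = CGImageCreateWithMaskingColors(image.CGImage, maskingColors);
    if (!masked) return nil; // e.g. the source image had an alpha channel
    UIImage *result = [UIImage imageWithCGImage:masked
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(masked);
    return result;
}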