How to change the rotation of an image after taking it in landscape mode - objective-c

I'm working on a PocketWallet application in which I take front and rear images of any type of card and then save them to a database. My problem is that when I take an image in portrait mode, the image displays with the correct rotation in my resulting view. But when I take an image in landscape mode, the image comes out rotated 90 degrees. How can I set the rotation of the image to portrait after taking it in landscape?
I'm using UIImagePickerController with the camera source to take the image.
Please help me. Thanks in advance.

If you want to find out whether your image was taken in landscape mode, take a look at the imageOrientation property of UIImage. UIImagePickerController will set that flag correctly. It sounds like your app simply needs to apply a correction based on it. This post gives you a method for doing that.
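As a minimal sketch (not taken from the linked post), one common correction is to redraw the picked UIImage into a graphics context so the pixels are baked in upright regardless of imageOrientation; the method name here is illustrative:

// Redraws the image so its pixel data matches its reported orientation,
// returning an image whose imageOrientation is UIImageOrientationUp.
- (UIImage *)normalizedImageFromImage:(UIImage *)image {
    if (image.imageOrientation == UIImageOrientationUp) {
        return image; // already upright, nothing to do
    }
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}

Running the picked image through something like this in imagePickerController:didFinishPickingMediaWithInfo: before saving it to the database should give a consistently oriented result.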

Related

Why does iPad preview use wrong image?

My question is, why does the iPad preview use a different image than the storyboard for wRegular hAny?
I'm trying to set up a universal app with a menu that will use larger buttons for iPad. In the asset catalog, I've specified the standard image size for wAny x hAny, and loaded a larger image for wRegular x hAny.
The storyboard looks fine for all size classes, with wRegular x hAny using the iPad image and everything else using the iPhone image. But the previews all use the iPhone image, including the iPad preview, despite the storyboard showing the correct images. In the screen shots below, the storyboard is shown on the left and the preview on the right.
Can someone tell me what I'm doing wrong here? I'm trying to avoid using explicit image sizes for each class - is that what I should be doing?
Any help would be greatly appreciated. I've read everything I can on using different image sizes, and still cannot figure this out.
You can't have different images by size class for the same button in Interface Builder. It looks to me like you set the image of the button to be the iPad one first in Regular/Any, then set the iPhone one afterward in Any/Any, which changed what you did in Regular/Any.
In this image, see how the Font has a + symbol to the left of it, but Image doesn't? That + is for specifying the value per size class. Since Image doesn't have it, the image is not a value that can be saved for a specific size class.
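If you really need different images per size class, one workaround (not part of the original answer; the button and asset names here are assumptions) is to swap the image in code when the trait collection changes:

- (void)traitCollectionDidChange:(UITraitCollection *)previousTraitCollection {
    [super traitCollectionDidChange:previousTraitCollection];
    // Use the larger asset when the horizontal size class is Regular (e.g. iPad).
    BOOL isRegular = self.traitCollection.horizontalSizeClass == UIUserInterfaceSizeClassRegular;
    NSString *imageName = isRegular ? @"menuButtonLarge" : @"menuButtonSmall"; // assumed asset names
    [self.menuButton setImage:[UIImage imageNamed:imageName] forState:UIControlStateNormal];
}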

Titanium Images Wrong Size

I have a Titanium app where I am using the same ImageView for a lot of different images. I am changing the image in the image view by setting its image property.
The problem is that some of the images do not show at their full size. It seems random: sometimes they show at full size and sometimes they don't.
I have width and height set to "auto" on the image view.
Has anyone come across this issue?
A potentially more reliable way would be to dynamically resize the image before handing it off to the ImageView by using Titanium.Blob.imageAsResized.
Try to integrate this into your code; you first have to load the image as a blob.
// Load the image as a blob first (here from a hypothetical file in the app's resources), then resize it
var imageBlob = Titanium.Filesystem.getFile(Titanium.Filesystem.resourcesDirectory, 'myImage.png').read();
imageView.image = imageBlob.imageAsResized(newWidth, newHeight);

iOS objective C converting coordinates from absolute image location to my view's coordinate system

I am using CIDetector to find faces in a picture.
The coordinates of the faces it returns are absolute coordinates in the image file (the image dimensions are much larger than the screen size, obviously).
I tried to use the convertRect:toView: method, but the image itself is not a UIView, so that doesn't work; I also have a few views embedded inside each other, with the image finally displayed in the innermost one.
I want to convert the bounds of the faces found in the image to the exact location of the face as it is shown on the screen in the embedded image view.
How can this be accomplished?
Thanks!
The image being shown on the phone is scaled to fit the screen with aspect fit.
The coordinates from CIDetector (Core Image) are flipped relative to UIKit coordinates. There are a bunch of tutorials out there on iOS face detection, but most of them are either incomplete or mess up the coordinates. Here's one that is correct: http://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html
One thing to note: the tutorial uses a small image, so the resulting coordinates do not have to be scaled to the on-screen (UIImageView) representation of the image. Assuming you use a photo taken with the iPad camera, you will have to scale the coordinates by the amount the source image is scaled (unless you reduce its size before running the face detection routine, which may not be a bad idea). You may also need to rotate the image to the correct orientation.
There is a routine in one of the answers here for rotating/scaling: UIImagePickerController camera preview is portrait in landscape app
And this answer has a good routine for finding the scale of an image when presented by a UIImageView using 'aspect fit': How to get the size of a scaled UIImage in UIImageView?
You will need to use the scale in order to map the CIDetector coordinates from the full size image to the scaled down image shown in a UIImageView.
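Putting those pieces together, here is a minimal sketch, assuming an aspect-fit UIImageView (self.imageView is an assumed name) that displays the same full-size UIImage that was handed to CIDetector:

// Maps a CIDetector face rect (bottom-left origin, full-image pixels)
// into the coordinate space of an aspect-fit UIImageView.
- (CGRect)viewRectForFaceBounds:(CGRect)faceBounds inImage:(UIImage *)image {
    CGSize imageSize = image.size;
    CGSize viewSize = self.imageView.bounds.size;

    // Core Image uses a bottom-left origin; flip to UIKit's top-left origin.
    CGRect flipped = faceBounds;
    flipped.origin.y = imageSize.height - faceBounds.origin.y - faceBounds.size.height;

    // Aspect-fit scale, plus the offset of the letterboxed image inside the view.
    CGFloat scale = MIN(viewSize.width / imageSize.width, viewSize.height / imageSize.height);
    CGFloat offsetX = (viewSize.width - imageSize.width * scale) / 2.0;
    CGFloat offsetY = (viewSize.height - imageSize.height * scale) / 2.0;

    return CGRectMake(flipped.origin.x * scale + offsetX,
                      flipped.origin.y * scale + offsetY,
                      flipped.size.width * scale,
                      flipped.size.height * scale);
}

The flip handles Core Image's bottom-left origin, and the scale and offsets map the full-resolution rect into the area the UIImageView actually draws.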

setBackgroundImage forBarMetrics image size?

I am fairly new in iOS programming and I am creating my first app.
I have been trying to use the following code to change the navigation bar background image (this is using the new iOS 5 method):
[self.navigationController.navigationBar setBackgroundImage:[UIImage imageNamed:@"gradientBackgroundPlain.png"] forBarMetrics:UIBarMetricsDefault];
It works fine, but my image is 640 x 88 and it looks too big for the bar. It's as if the system doesn't want to scale it; or do I need to do that manually? If I scale the image down and create a smaller one, it looks pixelated on the retina display.
Any thoughts on this?
Any help or response will be appreciated.
Thanks,
Jorge.-
Your image gradientBackgroundPlain.png should be 320x44, and you should create a second image named gradientBackgroundPlain@2x.png with a size of 640x88. Include the @2x image in your bundle, but continue to specify gradientBackgroundPlain.png as the image name. The platform automatically chooses the correct size image depending on whether a retina display is present.
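In other words, the call from the question stays the same; only the bundle gains the @2x asset:

// With gradientBackgroundPlain.png (320x44) and gradientBackgroundPlain@2x.png (640x88)
// both in the bundle, UIKit picks the @2x file automatically on retina devices.
[self.navigationController.navigationBar setBackgroundImage:[UIImage imageNamed:@"gradientBackgroundPlain.png"] forBarMetrics:UIBarMetricsDefault];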

Camera image size

I am writing a Cocoa application for Mac OS X. I'm trying to figure out how to determine the size of the image that will be captured by a camera, so I can set up a view with an aspect ratio that won't distort the image. For example, if my view is defined to be 640x360 and my camera captures images at 640x480, the displayed image looks short and fat. I'm also displaying some other layers over the image, and I need the image size to be able to scale and position the layers properly.
I won't know the type of camera that is attached until run time, so I'd like to be able to interrogate the device and get attributes like image size. Thanks for the help.
You are altering the aspect ratio of the image when you capture at 640x360 instead of 640x480 or 320x240. You are doing something similar to a resize: using the whole image but at a different size.
If you don't want to distort the image but want to use only a portion of it, you need to do a crop. Some hardware supports cropping; for other hardware you have to do it in software. Cropping uses only a portion of the original image. In your case, you would discard the bottom 120 lines (a code sketch follows the example below).
Example (from here):
The blue rectangle is the natural, or original image and the red is a crop of it.
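As a hedged sketch of the software crop described above (the idea of discarding the bottom rows is from the answer; the function itself and the 640x360 target are illustrative):

#import <CoreGraphics/CoreGraphics.h>

// Crops a captured frame to a target aspect ratio by keeping the full width and
// trimming rows from the bottom, e.g. 640x480 -> 640x360 for targetAspect = 640.0/360.0.
CGImageRef CreateCroppedImageForAspect(CGImageRef source, CGFloat targetAspect) {
    CGFloat width = CGImageGetWidth(source);
    CGFloat height = CGImageGetHeight(source);
    CGFloat croppedHeight = MIN(height, width / targetAspect);
    CGRect cropRect = CGRectMake(0, 0, width, croppedHeight);
    return CGImageCreateWithImageInRect(source, cropRect); // caller releases with CGImageRelease
}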