For photos taken in Pano mode (which can cover up to 180 degrees, depending on when you press stop), I want to load them into a pano viewer app.
But there isn't anything in the EXIF data that tells you the real field of view the photo covers. The only difference between the photos I take is the native resolution, and presumably that can change between devices.
                    Approx 180   Approx 90
                    ----------   ---------
Exif Image Width    10800        4176
Exif Image Height   2332         2462
Apparently Android writes XMP metadata into its JPEGs:
http://atterer.org/tech/android-exif-tags-xmp-pano-panorama-exiftool
Any help appreciated!
First you need to calculate the pixels per degree that you are capturing. This can be done using the vertical angle of view and the resolution.
The vertical angle of view depends on the iPhone model and the orientation (landscape or portrait). For instance, the iPhone 4 has a 55.7 x 43.2 degree angle of view (see iPhone 4 Camera Specifications - Field of View / Vertical-Horizontal Angle).
Divide the number of vertical pixels by the vertical angle of view; this gives you the pixels per degree. Then divide your panorama's horizontal number of pixels by the pixels per degree. This should give you the horizontal angle of view with enough accuracy for a good representation in a panorama viewer.
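For concreteness, here is a minimal Swift sketch of that arithmetic. The 43.2-degree vertical angle of view is the iPhone 4 landscape figure cited above, and the pixel dimensions are the "approx 180" panorama from the question; both are assumptions to replace with your own device and image values.

let verticalAngleOfView = 43.2    // degrees; iPhone 4 landscape value, device-specific assumption
let panoWidth = 10800.0           // Exif Image Width of the panorama
let panoHeight = 2332.0           // Exif Image Height of the panorama

let pixelsPerDegree = panoHeight / verticalAngleOfView
let horizontalAngleOfView = panoWidth / pixelsPerDegree
print("Estimated horizontal angle of view: \(horizontalAngleOfView) degrees")

Treat the output as an estimate: stitching crops the frame vertically, so the true figure can differ noticeably, but it should be close enough to configure a pano viewer.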
Related
I want to set a screen size for my view (making it for the iPhone 6). The problem is, I don't know whether the input is in points or pixels.
Is it 600 pixels or 600 points?
Thanks!
It is in points. On Retina devices, 1 point equals 2 pixels (or 3 pixels on @3x devices). On non-Retina devices, 1 point equals 1 pixel.
To answer your question, these are points, not pixels.
I am not sure why you want to set a fixed size only for the iPhone, but you might be interested in checking out some Auto Layout tutorials like this one. It will help you build interfaces for multiple devices at once!
As KDeogharkar said, the factor between points and pixels differs depending on the device. Usually you don't want to work with pixels directly.
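If you want to check the factor at runtime, here is a minimal sketch (written in the pre-Swift 3 API style used by the other snippet in this thread):

import UIKit

let screen = UIScreen.mainScreen()
let scale = screen.scale                        // 2.0 on @2x devices, 3.0 on @3x
let sizeInPoints = screen.bounds.size           // e.g. 375 x 667 points on an iPhone 6
let widthInPixels = sizeInPoints.width * scale  // 750 pixels on an iPhone 6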
I am creating an iPhone game in SpriteKit. After weeks of research, I am still having trouble understanding how to properly size and implement sprites for each screen size.
I understand that these suffixes determine which image to use (depending on the screen's scale factor):
@2x - 4s, 5, 6
@3x - 6+
I have read about and toyed with the different scaling modes in my view controller, but had no luck and difficulty understanding them.
If I provide a background of 750x1334 (pixels) as the @2x, it will perfectly fit the iPhone 6 but will be too big for the iPhone 5. If scaling is the answer, how would SpriteKit know I provided an image for the iPhone 5 that I want scaled up for the 6, or vice versa? Is this a build setting? The same goes for characters: I need iPhone 6 sprites to be proportionally bigger than iPhone 5 sprites.
How would I most appropriately size and scale sprites for the different devices? (It's easier to discuss this in terms of backgrounds, which should be exactly the size of the screen.)
I am expecting to create one set of sprites for each aspect ratio, using the resolution of the biggest screen size, e.g. @2x designed for the iPhone 6 and scaled down for the 5 and 4s.
The @3x, @2x and normal images are not really intended to be manipulated that way. The three images should be essentially the same image, with the @3x having exactly 3 times the pixel dimensions of the normal one, the @2x double the dimensions, and so on.
If you need to scale the scene to better fit the format of a particular device, you may need to do that scaling when you create the scene, the way Apple's sample code does:
var viewSize = self.view.bounds.size
// On iPhone/iPod touch we want to see a similar amount of the scene as on iPad.
// So, we set the size of the scene to be double the size of the view, which is
// the whole screen, 3.5- or 4-inch. This effectively scales the scene to 50%.
if UIDevice.currentDevice().userInterfaceIdiom == .Phone {
    viewSize.height *= 2
    viewSize.width *= 2
}
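From there, the usual SpriteKit lever for handling different screen formats with one scene size is the scene's scaleMode. This is a general pattern rather than part of Apple's sample above, and GameScene is a hypothetical scene subclass standing in for your own:

if let skView = self.view as? SKView {
    let scene = GameScene(size: viewSize)  // GameScene is a placeholder name
    scene.scaleMode = .AspectFill          // scale the scene to fill the view, cropping edges if the aspect differs
    skView.presentScene(scene)
}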
Is there any way to fill the player's view with the video without stretching or cropping it?
1- MPMovieScalingModeFill - it stretches the video.
2- MPMovieScalingModeAspectFill - it crops the video.
3- MPMovieScalingModeAspectFit - it shows the video in the centre with black areas around it.
No. Think about it! The MPMoviePlayerController's view does not have the same aspect ratio (height to width) as the movie. So to make it fit, either you must cut some off (of either the height or the width), or you must letterbox it (black areas) with one dimension being too small, or you must distort it.
You should do the opposite: size the MPMoviePlayerController's view to fit the aspect ratio of the movie!
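A minimal sketch of that approach, using the movie's naturalSize (which is only valid once MPMovieNaturalSizeAvailableNotification has fired); the 320-point target width is an assumption for illustration:

import MediaPlayer

func sizePlayerViewToMovie(player: MPMoviePlayerController) {
    let movieSize = player.naturalSize
    guard movieSize.width > 0 else { return }  // naturalSize is zero until the movie has loaded
    let targetWidth: CGFloat = 320             // assumed display width in points
    let targetHeight = targetWidth * movieSize.height / movieSize.width
    player.view.frame = CGRect(x: 0, y: 0, width: targetWidth, height: targetHeight)
    player.scalingMode = .AspectFit            // the view now matches the movie, so no bars or cropping
}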
I am using CIDetector to find faces in a picture.
The coordinates of the faces it returns are absolute coordinates in the image file (the image dimensions are much larger than the screen size, obviously).
I tried to use the convertRect:toView: method. The image itself is not a UIView, so the method doesn't work; I also have a few views embedded inside each other, with the image finally shown inside them.
I want to convert the bounds of the found faces in the image to the exact location of the face being shown on the screen in the embedded image.
How can this be accomplished?
Thanks!
(The image being shown on the phone is scaled to fit the screen with aspect fit.)
The coordinates from CIDetector (Core Image) are flipped relative to UIKit coordinates. There are a bunch of tutorials out there on iOS face detection, but most of them are either incomplete or mess up the coordinates. Here's one that is correct: http://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html
One thing to note: the tutorial uses a small image, so the resulting coordinates do not have to be scaled to the on-screen (UIImageView) representation of the image. Assuming you use a photo taken with the iPad camera, you will have to scale the coordinates by the amount the source image is scaled down (unless you reduce its size before running the face detection routine - maybe not a bad idea). You may also need to rotate the image to the correct orientation.
There is a routine in one of the answers here for rotating/scaling: UIImagePickerController camera preview is portrait in landscape app
And this answer has a good routine for finding the scale of an image when presented by a UIImageView using 'aspect fit': How to get the size of a scaled UIImage in UIImageView?
You will need to use the scale in order to map the CIDetector coordinates from the full-size image to the scaled-down image shown in the UIImageView.
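Putting those pieces together, here is a minimal sketch of the conversion for an aspect-fit UIImageView. The function name and parameters are illustrative, not from any framework, and it ignores image orientation (rotate the image or the rect first if the photo was not captured upright):

import UIKit

func convertFaceRect(faceRect: CGRect, imageSize: CGSize, viewSize: CGSize) -> CGRect {
    // Flip from Core Image's bottom-left origin to UIKit's top-left origin.
    var rect = faceRect
    rect.origin.y = imageSize.height - rect.origin.y - rect.size.height

    // Aspect fit scales by the smaller of the two ratios...
    let scale = min(viewSize.width / imageSize.width, viewSize.height / imageSize.height)

    // ...and centers the fitted image, leaving equal margins along one axis.
    let offsetX = (viewSize.width - imageSize.width * scale) / 2
    let offsetY = (viewSize.height - imageSize.height * scale) / 2

    return CGRect(x: rect.origin.x * scale + offsetX,
                  y: rect.origin.y * scale + offsetY,
                  width: rect.size.width * scale,
                  height: rect.size.height * scale)
}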
I let the user select a photo from the iPhone library, and I grab the UIImage.
I output the size of the image, and it says 320x480, but it doesn't seem to be, because when I draw the image on the screen using CGRectMake(0,0,320,480), it only shows the upper left portion of the image. Aren't the images much bigger than 320x480 because of the high resolution?
I'd like to scale the image to force it to be 320x480. If it is smaller than 320x480, it should not be rescaled at all. If the width is greater than 320 or the height is greater than 480, it should scale so that it gets as close to 320x480 as possible while keeping the proper proportion of width to height. So, for instance, if it scales to 320x420, that is fine, as is 280x480.
How can I do this in Objective-C?
Setting the image view's content mode like this:
myView.contentMode = UIViewContentModeScaleAspectFit;
will preserve the aspect ratio.
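If you need the UIImage itself resized (rather than just displayed with aspect fit), here is a minimal sketch of the rule described in the question: downscale only when the image exceeds the target size, preserving the aspect ratio. It is written in Swift to match the other snippets here, though the same UIGraphics calls exist in Objective-C:

import UIKit

func scaleImageToFit(image: UIImage, maxSize: CGSize) -> UIImage {
    let size = image.size
    let scale = min(maxSize.width / size.width, maxSize.height / size.height)
    if scale >= 1 { return image }  // already within the target size: leave it untouched

    let newSize = CGSize(width: size.width * scale, height: size.height * scale)
    UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)  // scale 1.0 keeps exact pixel dimensions
    image.drawInRect(CGRect(origin: CGPointZero, size: newSize))
    let scaled = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return scaled
}

For the question's 320x480 target, you would call scaleImageToFit(pickedImage, maxSize: CGSize(width: 320, height: 480)).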