Is there any way to fill the view with the video without stretching or cropping the video?
1- MPMovieScalingModeFill - it stretches the video.
2- MPMovieScalingModeAspectFill - it crops the video.
3- MPMovieScalingModeAspectFit - it shows the video in the centre with black areas around it.
No. Think about it! The MPMoviePlayerController's view does not have the same aspect ratio (ratio of height to width) as the movie. So to make the movie fit, you must either cut some of it off (in height or width), letterbox it (black areas, with one dimension being too small), or distort it.
You should do the opposite: size the MPMoviePlayerController's view to fit the aspect ratio of the movie!
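For example, here is a minimal sketch, assuming moviePlayer is your MPMoviePlayerController, its view is a subview of containerView, and the movie's naturalSize is already known (i.e. MPMovieNaturalSizeAvailableNotification has fired):

// Size the player's view to the movie's aspect ratio, as large as will
// fit inside containerView, and centre it.
CGSize movieSize = moviePlayer.naturalSize;
CGFloat scale = MIN(containerView.bounds.size.width  / movieSize.width,
                    containerView.bounds.size.height / movieSize.height);
moviePlayer.view.bounds = CGRectMake(0, 0, movieSize.width * scale, movieSize.height * scale);
moviePlayer.view.center = CGPointMake(CGRectGetMidX(containerView.bounds),
                                      CGRectGetMidY(containerView.bounds));

Once the view matches the movie's aspect ratio, any of the scaling modes will fill it without visible stretching, cropping, or black bars.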
I'm making an application that generates a 2D area (you can think of it as a drawing), with a camera hovering over it. The size of said drawing isn't known in advance, and could change greatly. After the "drawing" is generated, I want to position the camera so that the whole drawing is in view.
My original idea was to calculate the points at the top, bottom, left, and right of the drawing and have the camera move back, "zooming out" until they are all in sight, but there has to be a better way, right?
Assuming you are working in 2D (thus orthographic camera mode), you can set the camera's orthographicSize:
Camera.main.orthographicSize = height / 2F; //half of the height of the area
Then, set the aspect ratio (width / height):
Camera.main.aspect = 1F; //for example, a square area
Currently I am designing the UITabBar of my app. I created a Photoshop layout for the tab bar that is 640px wide and 84px high. Is it the right way to create one image with the size 640x84 and one with the size 320x42, and then name the larger image @2x.png?
I am struggling at this point, because when I log the width of the UITabBar it says 320.00, but I am using the iPhone 3.5-inch Retina simulator.
Any tips on how to implement the tab bar?
Yes, you should have two images: one for normal displays and one for Retina.
UIKit works with points, not pixels, so the width will always be 320.
On a Retina display one point is 2x2 pixels; on a normal display it is 1x1.
By the way, I think the tab bar images should be 320x49 for normal and 640x98 for Retina.
The Retina image should have the same name as the normal one, with @2x at the end.
Example:
normal: image.png
retina: image@2x.png
You confused "Points" with "Pixels". The Points are resolution independent. You can normally check your scale factor by calling contentScaleFactor on your UIView.
It should say 2.0 for retina, and 1.0 for non retina.
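For example, a quick check along these lines (assuming tabBar is your UITabBar):

CGFloat scale = tabBar.contentScaleFactor; // 2.0 on Retina, 1.0 otherwise
NSLog(@"tab bar width: %.0f points = %.0f pixels", tabBar.bounds.size.width, tabBar.bounds.size.width * scale);

So on the 3.5-inch Retina simulator you will still log 320 points, even though the backing store is 640 pixels wide.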
I am using CIDetector to find faces in a picture.
The coordinates of the faces it returns are absolute coordinates in the image file (the image dimensions are much larger than the screen size, obviously).
I tried to use convertRect:toView:. The image itself is not a UIView, so the method doesn't work; also, I have a few views embedded inside each other, with the image finally being shown in the innermost one.
I want to convert the bounds of the faces found in the image to the exact location of each face as it is shown on the screen in the embedded image view.
How can this be accomplished?
Thanks!
The image being shown on the phone is scaled to fit the screen with aspect fit.
The coordinates from CIDetector (Core Image) are flipped relative to UIKit coordinates. There are a bunch of tutorials out there on iOS face detection, but most of them are either incomplete or mess up the coordinates. Here's one that is correct: http://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html
One thing to note: the tutorial uses a small image, so the resulting coordinates do not have to be scaled to the on-screen (UIImageView) representation of the image. Assuming you use a photo taken with the iPad camera, you will have to scale the coordinates by the amount the source image is scaled down (unless you reduce its size before running the face detection routine, which may not be a bad idea). You may also need to rotate the image to the correct orientation.
There is a routine in one of the answers here for rotating/scaling: UIImagePickerController camera preview is portrait in landscape app
And this answer has a good routine for finding the scale of an image when presented by a UIImageView using 'aspect fit': How to get the size of a scaled UIImage in UIImageView?
You will need to use the scale in order to map the CIDetector coordinates from the full size image to the scaled down image shown in a UIImageView.
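Putting those pieces together, here is a rough sketch (the method name and parameters are hypothetical) that flips a face rectangle from Core Image coordinates into UIKit coordinates and maps it into an aspect-fit UIImageView:

// Maps faceBounds (Core Image coordinates: origin at the bottom-left of the
// full-size image) into the coordinate space of an aspect-fit UIImageView.
- (CGRect)viewRectForFaceBounds:(CGRect)faceBounds
                      imageSize:(CGSize)imageSize
                      imageView:(UIImageView *)imageView
{
    // Flip the Y axis: Core Image's origin is bottom-left, UIKit's is top-left.
    CGRect flipped = faceBounds;
    flipped.origin.y = imageSize.height - CGRectGetMaxY(faceBounds);

    // Work out the aspect-fit scale and the letterbox offsets.
    CGFloat scale = MIN(imageView.bounds.size.width  / imageSize.width,
                        imageView.bounds.size.height / imageSize.height);
    CGFloat offsetX = (imageView.bounds.size.width  - imageSize.width  * scale) / 2.0;
    CGFloat offsetY = (imageView.bounds.size.height - imageSize.height * scale) / 2.0;

    // Scale and translate the flipped rect into the image view's space.
    return CGRectMake(flipped.origin.x * scale + offsetX,
                      flipped.origin.y * scale + offsetY,
                      flipped.size.width  * scale,
                      flipped.size.height * scale);
}

The returned rect is in the image view's own coordinate system; if you need it relative to one of the enclosing views, pass it through convertRect:toView: from there.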
I let the user select a photo from the iPhone library, and I grab the UIImage.
I output the size of the image, and it says 320x480, but it doesn't seem to be, because when I draw the image on the screen using CGRectMake(0,0,320,480), it only shows the upper left portion of the image. Aren't the images much bigger than 320x480 because of the high resolution?
I'd like to scale the image to force it to be 320x480. If it is less than 320x480, it should not be rescaled at all. If the width is greater than 320 or the height is greater than 480, it should scale in a way so that it becomes as close to 320x480 as possible, but by keeping the proper proportion of width to height. So, for instance, if it scales to 320x420, that is fine, or 280x480.
How can I do this in Objective-C?
Setting the image view's content mode like this:
myView.contentMode = UIViewContentModeScaleAspectFit;
will preserve the aspect ratio.
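If you need to actually resize the UIImage rather than just control how it is displayed, here is a minimal sketch of the behaviour you describe (the method name is hypothetical): it leaves small images alone and scales larger ones down to fit within 320x480 while keeping the proportion of width to height.

// Returns the image unchanged if it already fits within maxSize; otherwise
// scales it down proportionally so both dimensions fit within maxSize.
- (UIImage *)imageWithImage:(UIImage *)image scaledToFitSize:(CGSize)maxSize
{
    CGSize size = image.size;
    if (size.width <= maxSize.width && size.height <= maxSize.height) {
        return image; // already small enough, never upscale
    }
    CGFloat scale = MIN(maxSize.width / size.width, maxSize.height / size.height);
    CGSize target = CGSizeMake(floor(size.width * scale), floor(size.height * scale));

    UIGraphicsBeginImageContextWithOptions(target, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, target.width, target.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}

Call it with CGSizeMake(320, 480); a 640x840 photo, for instance, would come back as roughly 320x420, matching your example.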
I am writing a Cocoa application for Mac OS X. I'm trying to figure out how to determine the size of the image that will be captured by a camera. I would like to know the size so I can set up a view with an aspect ratio that won't distort the image. For example, if my view is defined to be 640x360 and my camera captures images that are 640x480, the displayed image looks short and fat. I'm also displaying some other layers over the image, and I need the image size to be able to scale and position the layers properly.
I won't know the type of camera that is attached until run time, so I'd like to be able to interrogate the device and get attributes like image size. Thanks for the help...
You are altering the aspect ratio of the image when you capture at 640x360 instead of 640x480 or 320x240. You are doing something similar to a resize: using the whole image and making it a different size.
If you don't want to distort the image but only use a portion of it, you need to crop. Some hardware supports cropping; other hardware doesn't, and you have to do it in software. Cropping means using only a portion of the original image. In your case, you would discard the bottom 120 lines.
Example (from here):
The blue rectangle is the natural, or original image and the red is a crop of it.
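Done in software, a minimal sketch (assuming you already have the captured frame as a CGImageRef named fullFrame) would crop a 640x480 capture to 640x360 by discarding the bottom 120 lines, so it matches the view without distortion:

// Keep only the top 640x360 of the frame (the crop rect's origin is the
// top-left corner of the image for CGImageCreateWithImageInRect).
CGImageRef cropped = CGImageCreateWithImageInRect(fullFrame, CGRectMake(0, 0, 640, 360));
NSImage *image = [[NSImage alloc] initWithCGImage:cropped size:NSMakeSize(640, 360)];
CGImageRelease(cropped);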