When I record video, it always records in landscape mode, regardless of the real device orientation. How can I force UIImagePickerController to record in portrait orientation?
Again: UIImagePickerController is being used here, not the AVFoundation recording classes.
1) The various properties (movie source type, dimensions, duration, etc.) of MPMoviePlayerController are only available after the movie has been visually played to the user. Before that they all come back 0. I've tried various things, like forcing the system to wait a few seconds (to see if it was just a timing issue), but so far nothing has worked other than actually playing the movie. Even at that point, I believe those properties act as read-only; it's not as if I can adjust them directly. (One thing I still want to try is sketched after this list.)
2) The various CGImageSourceRef calls and routines work only on actual images, not movies, on iOS. On macOS there is more support for movies, typically going through the CV (Core Video) routines as opposed to the CI (Core Image) or CG (Core Graphics) ones. At least, all the examples I've found so far work only on macOS, and nothing I've found shows working code on iOS, which matches my result of getting nil when I attempt to use it.
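For point 1, one direction I have not fully explored yet is to skip visual playback and instead call prepareToPlay while listening for the metadata notifications. A rough sketch of what I mean (ARC assumed; probePlayer would be a strong property I add to my controller, and movieURL is the URL UIImagePickerController hands back):

    #import <MediaPlayer/MediaPlayer.h>

    // Sketch: probe a recorded movie's metadata without visually playing it.
    - (void)probeMovieAtURL:(NSURL *)movieURL
    {
        self.probePlayer = [[MPMoviePlayerController alloc] initWithContentURL:movieURL];

        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(movieDurationAvailable:)
                                                     name:MPMovieDurationAvailableNotification
                                                   object:self.probePlayer];
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(movieSizeAvailable:)
                                                     name:MPMovieNaturalSizeAvailableNotification
                                                   object:self.probePlayer];

        // Loads the asset so duration/naturalSize get populated, without presenting the player.
        [self.probePlayer prepareToPlay];
    }

    - (void)movieDurationAvailable:(NSNotification *)note
    {
        NSLog(@"duration: %f", self.probePlayer.duration);
    }

    - (void)movieSizeAvailable:(NSNotification *)note
    {
        CGSize size = self.probePlayer.naturalSize;
        NSLog(@"natural size: %.0f x %.0f", size.width, size.height);
    }

The properties would still be read-only either way; this would only let me read them earlier.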
I have an app which takes photos, crops them to fit the display, and uses the result as an overlay for the next photo. I use UIImagePickerController for that, and it works perfectly for me; however, I realized I need to take photos only in landscape orientation. As written in the UIImagePickerController class reference, it supports portrait mode only. I know there are several workarounds and it is possible to use it in landscape, but I've read that there's a risk Apple will reject my app.
On the other hand, AVFoundation looks like overkill for my needs.
Do I really need to use AVFoundation?
I'd use AVFoundation, since it's much more flexible than UIImagePickerController, and it's always a good idea to future-proof your design. It's not all that tough to set up an AVCaptureSession; just look at the demo apps and see how it's done.
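To give an idea of "not that tough", a minimal still-image session looks roughly like the sketch below. This is only a sketch: the session and stillOutput properties are ones you would declare yourself, and error handling and teardown are omitted.

    #import <AVFoundation/AVFoundation.h>
    #import <UIKit/UIKit.h>

    // Sketch: minimal photo capture with AVFoundation.
    - (void)setUpCaptureSession
    {
        self.session = [[AVCaptureSession alloc] init];
        self.session.sessionPreset = AVCaptureSessionPresetPhoto;

        AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        NSError *error = nil;
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
        if (input) [self.session addInput:input];

        self.stillOutput = [[AVCaptureStillImageOutput alloc] init];
        self.stillOutput.outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };
        [self.session addOutput:self.stillOutput];

        [self.session startRunning];
    }

    - (void)snapPhoto
    {
        AVCaptureConnection *connection = [self.stillOutput connectionWithMediaType:AVMediaTypeVideo];
        [self.stillOutput captureStillImageAsynchronouslyFromConnection:connection
            completionHandler:^(CMSampleBufferRef buffer, NSError *error) {
                if (!buffer) return;
                NSData *jpeg = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buffer];
                UIImage *photo = [UIImage imageWithData:jpeg];
                // Crop 'photo' to the display and keep it as the overlay for the next shot.
            }];
    }

From there, cropping the returned UIImage to the screen and reusing it as the overlay is plain UIKit work, and you are free to present your own landscape UI around it.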
I have a situation where I'd like to play 2 video clips back to back using an MPMoviePlayerViewController displayed using presentMoviePlayerViewControllerAnimated.
The problem is that the modal view automatically closes itself as soon as the first movie is complete.
Has anyone found a way to do this?
Three options:
You may use MPMoviePlayerController and start playback of the 2nd (Nth) item after the previous one is complete. This, however, will introduce a small gap between the videos, caused by identification and pre-buffering of the content.
You may use AVQueuePlayer; AVQueuePlayer is a subclass of AVPlayer used to play a number of items in sequence. See its class reference for more (a short sketch follows this list).
You may use AVComposition to compose, at runtime, one video out of the two (or N) you need to play back, then use AVPlayer for the playback. Note that this works only on locally stored videos and not on remote content (streaming or progressive download).
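For option 2, the basic shape is roughly the sketch below; firstURL, secondURL, hostView and the queuePlayer property are placeholders, and unlike presentMoviePlayerViewControllerAnimated: you have to provide your own presentation and dismissal.

    #import <AVFoundation/AVFoundation.h>
    #import <UIKit/UIKit.h>

    // Sketch: queue two clips and play them back to back.
    - (void)playClip:(NSURL *)firstURL thenClip:(NSURL *)secondURL inView:(UIView *)hostView
    {
        AVPlayerItem *first  = [AVPlayerItem playerItemWithURL:firstURL];
        AVPlayerItem *second = [AVPlayerItem playerItemWithURL:secondURL];

        self.queuePlayer = [AVQueuePlayer queuePlayerWithItems:[NSArray arrayWithObjects:first, second, nil]];

        // AVQueuePlayer has no built-in UI; attach an AVPlayerLayer to whatever view should host the video.
        AVPlayerLayer *layer = [AVPlayerLayer playerLayerWithPlayer:self.queuePlayer];
        layer.frame = hostView.bounds;
        [hostView.layer addSublayer:layer];

        [self.queuePlayer play];
    }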
It's not possible with MPMoviePlayerViewController alone. If the video assets are in the local file system, consider AVComposition.
I'm working on an app that applies Quartz Composer effects to QuickTime movies. Think Photo Booth, except with a QuickTime movie for the input, not a camera. Currently, I am loading a QuickTime movie as a QTMovie object, then have an NSTimer firing 30 times a second. At some point I'll switch to a CVDisplayLink, but NSTimer is okay for now. Every time the NSTimer fires, the app grabs one frame of the QuickTime movie as an NSImage and passes it to one of the QCRenderer's image inputs. This works, but is extremely slow. I've tried pulling frames from the movie in all of the formats that [QTMovie frameImageAtTime:withAttributes:error:] supports. They are all either really slow or don't work at all.
I'm assuming that the slowness is caused by moving the image data to main memory, then moving it back for QC to work on it.
Unfortunately, using QC's QuickTime movie patch is out of the question for this project, as I need more control of movie playback than that provides. So the question is, how can I move QuickTime movie images into my QCRenderer without leaving VRAM?
Check out the v002 Movie Player QCPlugin, which is open source. Out of curiosity, what extra control over playback do you need, exactly?
I am writing a simple video-messenger-like application, and therefore I need to get frames at some compromise size that fits into the available bandwidth while keeping the captured image undistorted.
To retrieve frames I am using the QTCaptureVideoPreviewOutput class, and I am successfully getting frames in the didOutputVideoFrame callback. (I need raw frames, mostly because I am using a custom encoder, so I just would like to get "raw bitmaps".)
The problem is that for these new iSight cameras I am getting huge frames.
Luckily, these frame-capture classes (QTCaptureVideoPreviewOutput) provide a setPixelBufferAttributes: method that lets me specify what kind of frames I would like to get. If I am lucky enough to guess a frame size that the camera supports, I can specify it and QTKit will switch the camera into that mode. If I am unlucky, I get a blurred image (because it was stretched or shrunk) and, most likely, one with the wrong proportions.
I have been searching through lists.apple.com and stackoverflow.com, and the answer is "Apple currently does not provide functionality to retrieve the camera's native frame sizes". Well, nothing I can do about that.
Maybe I should offer the most common frame sizes in the settings and let the user try them to see what works? But what are these common frame sizes? Where could I get a list of the frame dimensions that UVC cameras usually generate?
For testing my application I am using a UVC-compliant camera, not an iSight. I assume not every user is using an iSight either, and I am sure that even different iSight models have different frame dimensions.
Or maybe I should switch the camera to its default mode, grab a few frames, and see what sizes it generates, so at least I will have some proportions? This looks like a real hack and doesn't seem natural. And the image is most likely going to be blurred again.
Could you please help me: how have you dealt with this issue? I am sure I am not the first one to face it. Which approach would you choose?
Thank you,
James
You are right, the iSight camera produces huge frames. However, I doubt you can switch the camera to a different mode by setting pixel buffer attributes; more likely you are just setting how QTCaptureVideoPreviewOutput processes the frames it receives. Take a look at QTCaptureDecompressedVideoOutput if you have not done so yet.
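For reference, configuring the decompressed output looks roughly like the sketch below; the 640x480 size and the 2vuy pixel format are arbitrary examples, and whatever the camera natively produces will be scaled to match.

    #import <QTKit/QTKit.h>
    #import <CoreVideo/CoreVideo.h>

    // Sketch: ask QTKit to deliver frames scaled to a fixed size in a known pixel format.
    - (void)addScaledVideoOutputToSession:(QTCaptureSession *)session
    {
        QTCaptureDecompressedVideoOutput *output = [[QTCaptureDecompressedVideoOutput alloc] init];

        NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithInt:640],  (id)kCVPixelBufferWidthKey,
            [NSNumber numberWithInt:480],  (id)kCVPixelBufferHeightKey,
            [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
                                           (id)kCVPixelBufferPixelFormatTypeKey,
            nil];
        [output setPixelBufferAttributes:attributes];

        // Frames then arrive in captureOutput:didOutputVideoFrame:withSampleBuffer:fromConnection:.
        [output setDelegate:self];

        NSError *error = nil;
        [session addOutput:output error:&error];
    }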
We also use the sample buffer to get the frame size. So, I would not say it's a hack.
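Concretely, reading the delivered size in the capture delegate is just something like this (a sketch; the first few frames in the camera's default mode at least give you its native proportions):

    // Sketch: read the delivered frame's dimensions inside the QTKit capture delegate.
    - (void)captureOutput:(QTCaptureOutput *)captureOutput
      didOutputVideoFrame:(CVImageBufferRef)videoFrame
         withSampleBuffer:(QTSampleBuffer *)sampleBuffer
           fromConnection:(QTCaptureConnection *)connection
    {
        size_t width  = CVPixelBufferGetWidth(videoFrame);
        size_t height = CVPixelBufferGetHeight(videoFrame);
        NSLog(@"camera delivered %zu x %zu", width, height);
        // Use width/height (or their ratio) to pick sensible target dimensions for the encoder.
    }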
A more natural way would be to write your own QuickTime component that implements your custom encoding algorithm. In that case QuickTime would be able to use it inside QTCaptureMovieFileOutput during the capture session. It would be the proper, but also the hard, way.
In a project that I am currently working on, I have a transparent NSWindow overlaid on a QTMovieView. At certain points I slide a custom view into this child window with animation, so that it is displayed over the movie for a short period of time. The only odd behavior is that the animation is smooth on a MacBook Pro, but on a MacBook (same OS X version) there is significant flicker. The flicker only occurs on the portion of the window that has the actual QTMovie behind it.
Has anyone seen this behavior before or found a way to work around it?
The older MacBooks don't have dedicated video hardware and use shared memory, so it's probably an issue with a slow video card trying to update at 30 fps. Have you tried smaller movies to see if the issue goes away?
You may be better off with a pipeline like the one in the QTCoreVideo101 sample code from Apple. That would be a bit more work, since you'd have to take care of the animation yourself, but you would get ultimate control over what is being drawn.