Choosing the audio format in which AVFoundation audio samples are captured.
I am familiar with processing video frames coming from the iPhone camera. There, AVCaptureVideoDataOutput's videoSettings property can be used to specify the format in which the video frames should be received.
For audio, the similar class AVCaptureAudioDataOutput does not have such a property.
However, the AVAudioSettings.h header clearly shows that several audio formats exist.
How can I choose a format for the audio data? I'm basically interested in raw PCM samples at a specific bit rate.
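For reference, this is roughly the kind of linear-PCM settings dictionary I have in mind, built from the keys declared in AVAudioSettings.h (the sample rate, channel count, and bit depth are just placeholder values); what I can't find is a property on AVCaptureAudioDataOutput that would accept it:

    #import <AVFoundation/AVFoundation.h>
    #import <CoreAudio/CoreAudioTypes.h>   // kAudioFormatLinearPCM

    // Placeholder values; the keys themselves come from AVAudioSettings.h.
    NSDictionary *pcmSettings = @{
        AVFormatIDKey             : @(kAudioFormatLinearPCM),
        AVSampleRateKey           : @44100.0,
        AVNumberOfChannelsKey     : @1,
        AVLinearPCMBitDepthKey    : @16,
        AVLinearPCMIsFloatKey     : @NO,
        AVLinearPCMIsBigEndianKey : @NO
    };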
You can try OpenAL. Here is the Documentation. Best regards ;)
Related
I'm building on the Kickflip.io code base (available on GitHub), but after about 10 seconds the video gets ahead of the audio. I was able to verify that the audio plays at the correct rate, so it's definitely the video.
I've tried lowering the video frame rate to 10 fps, and this does nothing. Given all the other FFmpeg audio-out-of-sync-with-video reports out there, I'm starting to wonder if there's something in the FFmpeg CocoaPod.
My guess is that the issue is in:
https://github.com/Kickflip/kickflip-ios-sdk/blob/master/Kickflip/Outputs/Muxers/HLS/KFHLSWriter.m
I have a rendered .mov video file using the raw codec at 10 frames per second. The video shows a camera rotating around a house. If I open this file with QuickTime Player I can move around the house by dragging the mouse over the video; it's like an interactive video.
Now I want to embed this functionality in my website with JavaScript. The problem is that I want to use HTML5 video, so I have to convert the .mov file into .avi or .mp4.
The trouble is that after conversion the video lags when I drag the mouse over it, and it even lags during normal playback. How can I convert this video so that I get the same quality as the original?
Thanks in advance,
conansc
You could try using a GOP length of 1 (also known as using all I-frames). This makes it easier to play backwards. But you might need to just turn it into a series of still images, like JPEGs, and swap them to the screen as needed. Video formats are meant to be played forwards, at normal speed.
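If you end up re-encoding with AVFoundation on the Mac rather than with an external converter, the all-I-frame idea translates to compression settings along these lines for an AVAssetWriterInput (the codec choice and dimensions below are placeholders for your clip):

    #import <AVFoundation/AVFoundation.h>

    // H.264 output with a maximum key-frame interval of 1,
    // i.e. every frame is an I-frame (GOP length 1).
    NSDictionary *videoSettings = @{
        AVVideoCodecKey  : AVVideoCodecH264,
        AVVideoWidthKey  : @1280,          // placeholder dimensions
        AVVideoHeightKey : @720,
        AVVideoCompressionPropertiesKey : @{
            AVVideoMaxKeyFrameIntervalKey : @1
        }
    };

    AVAssetWriterInput *videoInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:videoSettings];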
As part of a 64-bit Objective-C program I need to determine some parameters of media files on Lion. For example, for a video file: what is the pixel aspect ratio, and is the video anamorphic? I've been searching the AVFoundation API with no luck. Any ideas on how to determine this information?
Thanks,
Barrie
Although I am no AVFoundation expert, I would start by looking at the AVVideoWidthKey and AVVideoHeightKey dictionary keys in the video settings; see http://developer.apple.com/library/IOs/#documentation/AVFoundation/Reference/AVFoundation_Constants/Reference/reference.html
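Beyond those settings keys, one way to dig the numbers out of an existing file (an untested sketch; fileURL is a placeholder, and the pixel-aspect-ratio extension will simply be missing for square-pixel material) is to compare the video track's naturalSize with the encoded dimensions in its format description:

    #import <AVFoundation/AVFoundation.h>
    #import <CoreMedia/CoreMedia.h>

    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
    AVAssetTrack *videoTrack =
        [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

    // Size after pixel-aspect-ratio / clean-aperture corrections are applied.
    CGSize displaySize = videoTrack.naturalSize;

    // Encoded pixel dimensions and the pixel-aspect-ratio extension, if any.
    // (__bridge casts assume ARC.)
    CMVideoFormatDescriptionRef desc = (__bridge CMVideoFormatDescriptionRef)
        [videoTrack.formatDescriptions objectAtIndex:0];
    CMVideoDimensions encoded = CMVideoFormatDescriptionGetDimensions(desc);
    NSDictionary *pasp = (__bridge NSDictionary *)
        CMFormatDescriptionGetExtension(desc,
            kCMFormatDescriptionExtension_PixelAspectRatio);

    // If pasp exists and its horizontal and vertical spacing differ,
    // the material is anamorphic.
    NSLog(@"display %.0fx%.0f, encoded %dx%d, pasp %@",
          displaySize.width, displaySize.height,
          (int)encoded.width, (int)encoded.height, pasp);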
I'm using the OSX QTKit sample code from here: http://bit.ly/mAaHGI
I'd like to crop the video, both on the screen and the saved file, to simulate different aspect ratios. What is the best way to do this?
It's a bit more involved than just calling a crop method, but Core Video allows you to manipulate the video stream. You can find the Core Video Programming Guide here:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/CoreVideo/CVProg_Intro/CVProg_Intro.html
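As a rough sketch of the kind of per-frame work that implies (not taken from the linked sample): each captured frame arrives as a CVImageBufferRef, which you can wrap in a CIImage and crop before drawing it or handing it to whatever writes your file. Cropping the saved movie therefore means re-encoding the cropped frames rather than passing the original buffers straight through.

    #import <QuartzCore/QuartzCore.h>   // CIImage
    #import <CoreVideo/CoreVideo.h>

    // Crop a captured frame to simulate a different aspect ratio,
    // e.g. cutting a 16:9 region out of a 4:3 buffer.
    // cropRect is expressed in the buffer's pixel coordinates.
    static CIImage *CroppedFrame(CVImageBufferRef imageBuffer, CGRect cropRect)
    {
        CIImage *frame = [CIImage imageWithCVImageBuffer:imageBuffer];
        return [frame imageByCroppingToRect:cropRect];
    }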
I want to take a QTMovie that I have and export it with the audio fading in and fading out for a predetermined amount of time. I want to do this within Cocoa as much as possible. The movie will likely only have audio in it. My research has turned up a couple of possibilities:
Use the newer Audio Context Insert APIs: http://developer.apple.com/DOCUMENTATION/QuickTime/Conceptual/QT7-2_Update_Guide/NewFeaturesChangesEnhancements/chapter_2_section_11.html. This appears to be the most modern way to accomplish this.
Use the Quicktime audio extraction APIs to pull out the audio track of the movie and process it and then put the processed audio back into the movie replacing the original audio.
Am I missing some much easier method?
QuickTime has the notion of tween tracks. A tween track is a track that allows you to modify the properties (such as the volume) of another set of tracks.
See Creating a Tween Track in the QuickTime docs for an example of how to do this with a QuickTime audio track's volume.
There is also a more complete example called qtsndtween on the Apple Developer website.
Of course, all of this requires using the QuickTime C APIs. If you can live with building a 32-bit-only application, you can get the underlying QuickTime C handles from a QTMovie, QTTrack, or QTMedia object using the quickTimeMovie, quickTimeTrack, or quickTimeMedia methods respectively.
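For reference, bridging from QTKit down to those C handles looks roughly like this (32-bit targets only, as noted above; moviePath is a placeholder). Everything from there on, including building the tween media itself, is plain QuickTime C API work:

    #import <QTKit/QTKit.h>
    #import <QuickTime/QuickTime.h>

    QTMovie *qtMovie = [QTMovie movieWithFile:moviePath error:NULL];
    QTTrack *qtTrack =
        [[qtMovie tracksOfMediaType:QTMediaTypeSound] objectAtIndex:0];

    Movie cMovie = [qtMovie quickTimeMovie];            // underlying Movie handle
    Track cTrack = [qtTrack quickTimeTrack];            // underlying Track handle
    Media cMedia = [[qtTrack media] quickTimeMedia];    // underlying Media handle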
Hopefully we'll get all the features of the Quicktime C APIs in the next version of QTKit, whenever that may be.