I want to be able to monitor audio on headphones before and during the capture of video.
I have an AVCaptureSession set up to capture video and audio.
My idea is to hook an AVCaptureAudioDataOutput instance up to the AVCaptureSession and process the CMSampleBufferRefs with a class implementing the AVCaptureAudioDataOutputSampleBufferDelegate protocol.
But I am not sure how to route the audio to the headphones from there.
What would be the most straightforward way to do this (highest-level frameworks, general approach)?
I ended up implementing this with an Audio Unit, the Remote I/O audio unit to be precise.
Apple's aurioTouch sample code provides a clear example of how to do this.
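For reference, a minimal sketch of creating and starting the Remote I/O unit; the buffering that carries PCM from the capture delegate into the render callback is assumed and not shown:

#import <AudioToolbox/AudioToolbox.h>

// Render callback: the output unit pulls audio from here. Filling ioData
// from a buffer fed by the capture delegate is assumed, not shown.
static OSStatus RenderMonitoredAudio(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData)
{
    // Copy inNumberFrames of captured PCM into ioData->mBuffers[0] here.
    return noErr;
}

- (AudioUnit)startMonitoring
{
    AudioComponentDescription desc = {0};
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioUnit ioUnit = NULL;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &ioUnit);

    // Attach the render callback to the output element (bus 0).
    AURenderCallbackStruct cb = { RenderMonitoredAudio, (__bridge void *)self };
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    AudioUnitInitialize(ioUnit);
    AudioOutputUnitStart(ioUnit);
    return ioUnit;
}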
Related
I have a problem where I'm building off the Kickflip.io base (the code is on GitHub), but after about 10+ seconds the video gets ahead of the audio. I was able to verify that the audio plays at the correct rate, so it's definitely the video.
I've tried lowering the video frame rate to 10 fps, and this does nothing. Based on all the other audio/video sync issues reported out there for FFmpeg, I'm starting to wonder if there's something in the FFmpeg CocoaPod.
My guess is that the issue is in:
https://github.com/Kickflip/kickflip-ios-sdk/blob/master/Kickflip/Outputs/Muxers/HLS/KFHLSWriter.m
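One way to narrow down where the drift originates is to log the presentation timestamps of both outputs before the buffers reach the muxer. A diagnostic sketch (my own code, not part of Kickflip); the same delegate method serves both AVCaptureVideoDataOutput and AVCaptureAudioDataOutput:

#import <AVFoundation/AVFoundation.h>

// Diagnostic sketch: log each buffer's presentation timestamp so you can
// see which clock drifts over time before the buffers hit the muxer.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    BOOL isVideo = [captureOutput isKindOfClass:[AVCaptureVideoDataOutput class]];
    NSLog(@"%@ pts: %.3f", isVideo ? @"video" : @"audio", CMTimeGetSeconds(pts));
}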
Choosing the audio format in which AVFoundation audio samples are captured.
I am familiar with processing video frames coming from the iPhone camera. There, the AVCaptureVideoDataOutput's videoSettings property can be used to specify the format in which the video frames should be received.
For audio, the similar class AVCaptureAudioDataOutput does not have such a property.
However, AVAudioSettings.h clearly shows that several audio formats exist.
How can I choose a format for the audio data? I'm basically interested in raw PCM samples at a specific bit rate.
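For what it's worth, I can already inspect the format that is actually delivered from each CMSampleBufferRef in the delegate callback (a sketch below); what I cannot find is a way to request a different one:

#import <AVFoundation/AVFoundation.h>

// Sketch: read the ASBD that describes the PCM actually being delivered.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef fmt = CMSampleBufferGetFormatDescription(sampleBuffer);
    const AudioStreamBasicDescription *asbd =
        CMAudioFormatDescriptionGetStreamBasicDescription(fmt);
    NSLog(@"rate: %.0f Hz, channels: %u, bits: %u",
          asbd->mSampleRate,
          (unsigned)asbd->mChannelsPerFrame,
          (unsigned)asbd->mBitsPerChannel);
}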
You can try OpenAL; here is the documentation. Best regards ;)
This application currently plays audio from online stations.
Basically it has 2 main features:
Play:
On click of a station name or the play button, the FM station starts playing.
Record:
On click of the record link, the recorder starts recording; clicking it again stops recording and replays the recorded audio.
The problem
The streamed audio is not being recorded clearly.
The current recorder, based on AVAudioRecorder, does capture the audio, but the result is noisy.
Could it be because the stream is played through an audio queue, while AVAudioRecorder records from the microphone? We want to record only the streaming content.
Note: AVAudioRecorder is clear when used for voice recording, but it is not good at recording streamed audio content.
For playing streaming audio, I used code from Matt Gallagher; the link is
here
Can you please suggest a better way to record streaming audio?
Is there an existing API like AVAudioRecorder, or am I doing something wrong?
Please take a look at the StreamingKit framework. It provides access to the audio data while it is streaming, so you can record it.
Play an MP3 over HTTP
STKAudioPlayer* audioPlayer = [[STKAudioPlayer alloc] init];
[audioPlayer play:@"http://www.abstractpath.com/files/audiosamples/sample.mp3"];
You can append its data to an NSMutableData instance for offline playback by using the frame filter delegate below.
Intercept PCM data just before it's played:
[audioPlayer appendFrameFilterWithName:@"MyCustomFilter" block:^(UInt32 channelsPerFrame, UInt32 bytesPerFrame, UInt32 frameCount, void* frames)
{
...
}];
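For recording, a sketch of accumulating the intercepted PCM into an NSMutableData (the recordedData variable and the filter name are my own, not part of StreamingKit):

NSMutableData *recordedData = [NSMutableData data];

[audioPlayer appendFrameFilterWithName:@"MyRecordingFilter"
                                 block:^(UInt32 channelsPerFrame, UInt32 bytesPerFrame, UInt32 frameCount, void *frames)
{
    // frames points at interleaved PCM; copy all of it into the buffer.
    [recordedData appendBytes:frames length:frameCount * bytesPerFrame];
}];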
Does anyone know of an audio equivalent of QTMovieView? Something that allows playback and scrubbing of audio files. QTMovieView without the movie...
QTMovieView itself, with its height set to [myMovieView controllerBarHeight], so that only the controller bar is visible.
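A sketch of that sizing, assuming myMovieView is an existing QTMovieView outlet:

// Shrink the view so only the playback/scrubbing controller bar shows.
NSRect frame = [myMovieView frame];
frame.size.height = [myMovieView controllerBarHeight];
[myMovieView setFrame:frame];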
I want to take a QTMovie that I have and export it with the audio fading in and fading out for a predetermined amount of time. I want to do this within Cocoa as much as possible. The movie will likely only have audio in it. My research has turned up a couple of possibilities:
Use the newer Audio Context Insert APIs: http://developer.apple.com/DOCUMENTATION/QuickTime/Conceptual/QT7-2_Update_Guide/NewFeaturesChangesEnhancements/chapter_2_section_11.html. This appears to be the most modern way to accomplish this.
Use the QuickTime audio extraction APIs to pull out the audio track of the movie, process it, and then put the processed audio back into the movie, replacing the original audio.
Am I missing some much easier method?
QuickTime has the notion of tween tracks. A tween track is a track that lets you modify the properties of another set of tracks (such as their volume).
See Creating a Tween Track in the QuickTime docs for an example of how to do this with a QuickTime audio track's volume.
There is also a more complete example called qtsndtween on the Apple Developer website.
Of course, all of this code requires using the QuickTime C APIs. If you can live with building a 32-bit-only application, you can get the underlying QuickTime C handles from a QTMovie, QTTrack, or QTMedia object using the quickTimeMovie, quickTimeTrack, or quickTimeMedia methods respectively.
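For example, a minimal sketch of dropping down from QTKit to the C layer (32-bit builds only; it assumes the movie has at least one sound track):

#import <QTKit/QTKit.h>
#import <QuickTime/QuickTime.h>

// Sketch: fetch the C-level handles for a movie and its first sound track.
void InspectSoundTrack(QTMovie *qtMovie)
{
    Movie movie = [qtMovie quickTimeMovie];
    QTTrack *qtTrack = [[qtMovie tracksOfMediaType:QTMediaTypeSound] objectAtIndex:0];
    Track track = [qtTrack quickTimeTrack];
    Media media = [[qtTrack media] quickTimeMedia];

    // From here the QuickTime C APIs apply, e.g. reading the track volume.
    NSLog(@"movie time scale: %ld, track volume: %d",
          (long)GetMovieTimeScale(movie), GetTrackVolume(track));
    (void)media; // the Media handle is what a tween track setup would target
}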
Hopefully we'll get all the features of the QuickTime C APIs in the next version of QTKit, whenever that may be.