AVAudioPlayer not playing recording made with AVAudioRecorder - objective-c

This may be a really simple problem, but: I've made a recording using AVAudioRecorder and then stopped the recorder. After stopping the AVAudioRecorder, I press another button to play the recording, but it doesn't play. The file exists, I can play it on my computer, and even the code knows it exists; there is no error, but the player refuses to play. What could be the issue?
NSError *error = nil;
if ([[NSFileManager defaultManager] fileExistsAtPath:[self.recorder.url path]]) {
    self.recordingPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:self.recorder.url error:&error];
    self.recordingPlayer.delegate = self;
    if (error) {
        NSLog(@"error: %@", [error localizedDescription]);
    } else {
        [self.recordingPlayer play];
    }
} else {
    NSLog(@"Recording file doesn't exist");
}
EDIT: I just tried it on my device and it works fine; it plays the recording. It just doesn't work on the iOS Simulator.

The problem was with my record settings: I had AVNumberOfChannelsKey set to 2, and for some reason the iOS Simulator didn't like this, while the iPhone was fine with it. Either way, my recording shouldn't have two channels in the first place, so it's a good thing I spotted it.
NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:AVAudioQualityMax], AVEncoderAudioQualityKey,
                                [NSNumber numberWithInt:16], AVEncoderBitRateKey,
                                [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                                [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                nil];

Try the code from this link; it shows how to record audio and then retrieve it from the Documents folder:
Record audio file and save locally on iPhone
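In case the link rots, here is a minimal sketch of that approach: record into the app's Documents directory, then play the same file back. The "recording.caf" filename and the local variable names are illustrative assumptions, not the linked answer's actual code.

// Minimal sketch (illustrative names): record into Documents, play back the same file.
NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                         NSUserDomainMask, YES) objectAtIndex:0];
NSURL *fileURL = [NSURL fileURLWithPath:[docsDir stringByAppendingPathComponent:@"recording.caf"]];

NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                          [NSNumber numberWithInt:kAudioFormatAppleIMA4], AVFormatIDKey,
                          [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                          [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                          nil];
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryRecord error:nil];
AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:fileURL
                                                        settings:settings
                                                           error:&error];
[recorder record];

// ... later, to retrieve and play the file from Documents:
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:nil];
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:&error];
[player play];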

Related

Creating a CVPixelBufferRef from an IDeckLinkVideoInputFrame

I'm using the BlackMagic DeckLink SDK to try to capture frames from a BM device.
I'm trying to grab the pixel data from an IDeckLinkVideoInputFrame in the DeckLinkController::VideoInputFrameArrived callback and convert it to a CVPixelBufferRef so I can write it to disk with AVFoundation's AVAssetWriterInputPixelBufferAdaptor and AVAssetWriter. The code I'm using seems to work, apart from the fact that all the frames written to disk are green. (BlackMagic's example code that generates an on-screen preview does show an image, so the device and device settings should be OK.)
The AVAssetWriter is set up as follows:
writer = [[AVAssetWriter assetWriterWithURL:destinationUrl
                                   fileType:AVFileTypeAppleM4V
                                      error:&error] retain];
if (error)
    NSLog(@"ERROR: %@", [error localizedDescription]);

NSMutableDictionary *outputSettings = [[NSMutableDictionary alloc] init];
[outputSettings setObject:AVVideoCodecH264 forKey:AVVideoCodecKey];
[outputSettings setObject:[NSNumber numberWithInt:1920] forKey:AVVideoWidthKey];
[outputSettings setObject:[NSNumber numberWithInt:1080] forKey:AVVideoHeightKey];

NSMutableDictionary *compressionProperties = [[NSMutableDictionary alloc] init];
[compressionProperties setObject:[NSNumber numberWithInt:1000000]
                          forKey:AVVideoAverageBitRateKey];
[compressionProperties setObject:[NSNumber numberWithInt:16]
                          forKey:AVVideoMaxKeyFrameIntervalKey];
[compressionProperties setObject:AVVideoProfileLevelH264Main31
                          forKey:AVVideoProfileLevelKey];
[outputSettings setObject:compressionProperties forKey:AVVideoCompressionPropertiesKey];

writerVideoInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                       outputSettings:outputSettings] retain];

NSMutableDictionary *pixBufSettings = [[NSMutableDictionary alloc] init];
[pixBufSettings setObject:[NSNumber numberWithInt:kCVPixelFormatType_422YpCbCr8_yuvs]
                   forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
[pixBufSettings setObject:[NSNumber numberWithInt:1920]
                   forKey:(NSString *)kCVPixelBufferWidthKey];
[pixBufSettings setObject:[NSNumber numberWithInt:1080]
                   forKey:(NSString *)kCVPixelBufferHeightKey];

writerVideoInput.expectsMediaDataInRealTime = YES;
writer.shouldOptimizeForNetworkUse = NO;

adaptor = [[AVAssetWriterInputPixelBufferAdaptor
               assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerVideoInput
                                          sourcePixelBufferAttributes:pixBufSettings] retain];
[writer addInput:writerVideoInput];
For reference, these output settings and compression options should be correct, but I have tried several different alternatives.
When a frame comes in from the device, I convert it to a CVPixelBufferRef as follows:
void *videoData;
int64_t frameTime;
int64_t frameDuration;
videoFrame->GetBytes(&videoData);
videoFrame->GetStreamTime(&frameTime, &frameDuration, 3000);
CMTime presentationTime = CMTimeMake(frameDuration, 3000);

CVPixelBufferRef buffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &buffer);
if (buffer)
{
    CVPixelBufferLockBaseAddress(buffer, 0);
    void *rasterData = CVPixelBufferGetBaseAddress(buffer);
    memcpy(rasterData, videoData, (videoFrame->GetRowBytes() * videoFrame->GetHeight()));
    CVPixelBufferUnlockBaseAddress(buffer, 0);

    if (![adaptor appendPixelBuffer:buffer withPresentationTime:presentationTime]) {
        NSLog(@"ERROR appending pixelbuffer: %@", writer.error);
        [writerVideoInput markAsFinished];
        if (![writer finishWriting])
            NSLog(@"ERROR finishing writing: %@", [writer.error localizedDescription]);
    }
    else {
        NSLog(@"SUCCESS");
    }
    CVPixelBufferRelease(buffer);
}
This code is appending frames to the AVAssetWriterInputPixelBufferAdaptor, but all the frames are green.
Can anybody see what I'm doing wrong here, or does anybody have experience capturing and compressing frames with AVFoundation and the BlackMagic DeckLink SDK?
When you see 'green' and are working in the YUV color space, you are seeing values of 0 in the buffer. Since the AVAssetWriter is writing frames, the odds are that 'buffer' contains values of 0. I see a couple of ways that could happen.
1) The buffer you are appending is most likely initialized with 0, so it is possible your copy is failing. In your code that could happen if (videoFrame->GetRowBytes() * videoFrame->GetHeight()) somehow evaluates to 0. It seems impossible, but I'd check it anyway.
2) CVPixelBufferGetBaseAddress is returning the wrong pointer, or the pixel buffer itself is the wrong format or possibly invalid (yet didn't crash because of safeguards in the API).
3) 'videoData' is, for whatever reason, itself full of 0. The DeckLinkCaptureDelegate returns frames with nothing in them when it doesn't like the input format (usually because the BMDDisplayMode passed to EnableVideoInput doesn't match your video source):
int flags = videoFrame->GetFlags();
if (flags & bmdFrameHasNoInputSource)
{
    // our input format doesn't match the source
}
Other than changing your source mode and trying again, a quick check would be to change the memcpy line to the following:
memset(rasterData, 0x3f, 1920*1080*2);
If you still see green frames then take a hard look at #2. If you see different colored frames, then your problem is #1 or #3 and most likely the resolution of your video input doesn't match the BMDDisplayMode that you chose.
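One additional check, not spelled out above: pixel buffers vended by the adaptor's pool can be allocated with padded bytes-per-row, in which case the question's single memcpy of GetRowBytes() * GetHeight() would misalign every row. A hedged sketch of a stride-aware, row-by-row copy, reusing the question's variable names:

// Stride-safe copy: honors the CVPixelBuffer's own bytes-per-row,
// which may be padded and so differ from the DeckLink frame's row bytes.
CVPixelBufferLockBaseAddress(buffer, 0);
uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(buffer);
uint8_t *src = (uint8_t *)videoData;
size_t dstStride = CVPixelBufferGetBytesPerRow(buffer);
size_t srcStride = (size_t)videoFrame->GetRowBytes();
size_t copyBytes = MIN(dstStride, srcStride);
for (long row = 0; row < videoFrame->GetHeight(); row++) {
    memcpy(dst + row * dstStride, src + row * srcStride, copyBytes);
}
CVPixelBufferUnlockBaseAddress(buffer, 0);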
One other thing to note: I think the line where you create the presentation time is wrong. It should probably be the following (note the change from frameDuration to frameTime; frameDuration is the length of a single frame, so using it would stamp every frame with the same presentation time):
CMTime presentationTime = CMTimeMake(frameTime, 3000);

AVAudioPlayer returns success but no sound from device

I have a very simple app that I built to test out AVAudioPlayer. I have a 16-second AIFF file that plays just fine on my Mac. I put it in a simple app and ran it on my iPhone, but there's no sound. Sending 'play' to the object returns success, and audioPlayerDidFinishPlaying comes back with success after 16 seconds. Still no sound. I've checked the sound volume on the iPhone; it's fine. I've checked that my AVAudioPlayer object has its volume set at 1.0; it does. Can anybody help?
Here's my code:
NSString *soundPath = [[NSBundle mainBundle] pathForResource:@"Soothing" ofType:@"aif"];
NSURL *playURL = [NSURL fileURLWithPath:soundPath];
NSError *error;
self.myAudioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:playURL error:&error];
NSLog(@"About to play sound");
self.myAudioPlayer.delegate = self;
[self.myAudioPlayer prepareToPlay];
BOOL success = [self.myAudioPlayer play];
if (!success) {
    NSLog(@"Failed to play file");
} else {
    NSLog(@"Looks like it succeeded");
}
OK - got it figured out. I had not configured the AVAudioSession for my app. Did something very simple as follows (copied from Apple's example code):
[[AVAudioSession sharedInstance] setDelegate:self];
NSError *setCategoryError = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:&setCategoryError];
if (setCategoryError)
    NSLog(@"Error setting category! %@", setCategoryError);
Sound now plays - plus learned a lot about how to control playback through screen locks, etc., which will be valuable.
Did you check if you get anything from the error?
if (error) {
    NSLog(@"%@", [error localizedDescription]);
}

AVAudioRecorder doesn't record while screen is locked

I've tried to overcome this for a while. I'm trying to record sound, but the AVAudioRecorder doesn't record while the screen is locked. It does continue to record once the screen is unlocked, but the audio from while the screen was locked is lost forever. I can't find anything wrong with what I'm doing:
- (void)startRecording
{
    // Begin the recording session.
    _session = [AVAudioSession sharedInstance];
    NSError *setCategoryError = nil;
    NSError *startRecordError;
    [_session setActive:YES error:&startRecordError];
    [self GKLog:[NSString stringWithFormat:@"recorder session error? :%@", startRecordError]];
    [_session setCategory:AVAudioSessionCategoryRecord error:&setCategoryError];
    if (setCategoryError) { NSLog(@"some error"); }

    // Set me as delegate.
    _session.delegate = (id <AVAudioSessionDelegate>)self;

    NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
    [recordSetting setValue:[NSNumber numberWithInt:kAudioFormatAppleIMA4] forKey:AVFormatIDKey];
    [recordSetting setValue:[NSNumber numberWithInt:8] forKey:AVEncoderBitRateKey];
    [recordSetting setValue:[NSNumber numberWithFloat:8000.0] forKey:AVSampleRateKey];
    [recordSetting setValue:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey];

    if (!self.currentPath)
    {
        NSLog(@"can't record, no path set!");
        return;
    }

    NSError *error;
    NSURL *url = [NSURL fileURLWithPath:self.currentPath];

    // Set up the recorder to use this file and record to it.
    _recorder = [[AVAudioRecorder alloc] initWithURL:url settings:recordSetting error:&error];
    [self GKLog:[NSString stringWithFormat:@" recorder:%@", _recorder]];
    _recorder.delegate = (id <AVAudioRecorderDelegate>)self;
    [_recorder prepareToRecord];

    // Start the actual recording.
    [_recorder record];
}
Any ideas, please?
OK, so the answer to my own question, which took me a long time to find, is the following: the code I posted is good, but to actually keep recording it needs to keep working in the background after the screen is locked. For this, one needs to add a UIBackgroundModes array to the app's Info.plist file and add 'audio' as one of its objects (see the fragment below). This tells the system to let the app work with audio in the background.
Here's the not-so-easy-to-find documentation. Unfortunately, Apple doesn't mention this in their documentation of the audio session categories, where they claim certain categories work in the background. Anyway, hopefully this answer will help others who run into a similar problem...
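For reference, this is roughly what the entry looks like in the Info.plist source (a minimal fragment; the rest of your plist's keys are omitted):

<!-- Info.plist fragment: declares that the app plays/records audio in the background -->
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>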
You may want to consider setting the audio session's category to AVAudioSessionCategoryRecord.
How about disabling the screen lock until you are done recording?
[UIApplication sharedApplication].idleTimerDisabled = YES;
// Do recording here
[UIApplication sharedApplication].idleTimerDisabled = NO;
Just don't forget to re-enable the screen lock when you're done!

Merge two videos without ffmpeg (Cocoa)

I've looked and looked for an answer but can't seem to find one. Lots of people have asked, but none have gotten answers. I have an app that has two video paths. Now I just want to merge them into one file that can be saved in ".mov" format. Does anyone have a clue as to how this can be done?
Note: I want to do this without installing, and obviously without using, ffmpeg.
Please, if you have time, some code would be very helpful.
First, obviously you need to make sure that the movie type is readable/playable by the QuickTime libraries.
But, assuming that's the case, the procedure is basically like this:
Get a pointer to some memory to store the data:
QTMovie *myCombinedMovie = [[QTMovie alloc] initToWritableData:[NSMutableData data] error:nil];
Next, grab the first movie that you want to use and insert it into myCombinedMovie. You can keep the parts you want combined in an array and enumerate over them to combine as many parts as you like. Also, if you wanted, you could alter the destination range to add an offset:
QTMovie *firstMovie = [QTMovie movieWithURL:firstURL error:nil];
// NOTE THAT THE 3 LINES BELOW WERE CHANGED FROM MY ORIGINAL POST!
QTTimeRange timeRange = QTMakeTimeRange(QTZeroTime, [firstMovie duration]);
QTTime insertionTime = [myCombinedMovie duration];
[myCombinedMovie insertSegmentOfMovie:firstMovie timeRange:timeRange atTime:insertionTime];
Rinse and repeat for the second movie part.
Then, output the flattened movie (flattening makes it self-contained):
NSDictionary *writeAttributes = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], QTMovieFlatten, nil]; //note that you can add a QTMovieExport key with an appropriate value here to export as a specific type
[myCombinedMovie writeToFile:destinationPath withAttributes:writeAttributes];
EDITED: I edited the above, as the insertion times were being calculated incorrectly. This way seems easier. Below is the code all together in one piece, including enumerating through an array of movies and plenty of error logging.
NSError *err = nil;
QTMovie *myCombinedMovie = [[QTMovie alloc] initToWritableData:[NSMutableData data] error:&err];
if (err)
{
    NSLog(@"Error creating myCombinedMovie: %@", [err localizedDescription]);
    return;
}

NSArray *myMovieURLs = [NSArray arrayWithObjects:
                        [NSURL fileURLWithPath:@"/path/to/the/firstmovie.mov"],
                        [NSURL fileURLWithPath:@"/path/to/the/secondmovie.mov"], nil];

for (NSURL *url in myMovieURLs)
{
    QTMovie *theMovie = [QTMovie movieWithURL:url error:&err];
    if (err) {
        NSLog(@"Error loading one of the movies: %@", [err localizedDescription]);
        return;
    }
    QTTimeRange timeRange = QTMakeTimeRange(QTZeroTime, [theMovie duration]);
    QTTime insertionTime = [myCombinedMovie duration];
    [myCombinedMovie insertSegmentOfMovie:theMovie timeRange:timeRange atTime:insertionTime];
}

NSDictionary *writeAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], QTMovieFlatten, nil];
BOOL success = [myCombinedMovie writeToFile:@"/path/to/outputmovie.mov"
                             withAttributes:writeAttributes
                                      error:&err];
if (!success)
{
    NSLog(@"Error writing movie: %@", [err localizedDescription]);
    return;
}

4 second lag in AVAudioRecorder record

I'm using AVAudioRecorder to record audio, but I'm experiencing a 4-second delay between the button press and the start of recording.
Here's my setup code:
NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithFloat:16000.0], AVSampleRateKey,
                                [NSNumber numberWithInt:kAudioFormatAppleIMA4], AVFormatIDKey,
                                [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                                [NSNumber numberWithInt:AVAudioQualityMax], AVEncoderAudioQualityKey,
                                nil];
NSError *error = nil;
audioRecorder = [[AVAudioRecorder alloc] initWithURL:soundFileURL
                                            settings:recordSettings
                                               error:&error];
if (error)
{
    NSLog(@"error: %@", [error localizedDescription]);
} else {
    NSLog(@"prepare to record");
    [audioRecorder prepareToRecord];
}
- (void)record {
    NSLog(@"Record");
    //[audioRecorder prepareToRecord];
    if (!audioRecorder.recording)
    {
        NSLog(@"Record 2");
        [audioRecorder record];
        NSLog(@"Record 3");
    }
}
record is the method called on button press. I know prepareToRecord is called implicitly via record, but I wanted to see if calling it explicitly would affect the delay at all. It does not.
Here's the console log:
2011-10-18 21:48:06.508 [2949:707] Record
2011-10-18 21:48:06.509 [2949:707] Record 2
2011-10-18 21:48:10.047 [2949:707] Record 3
There's about 3.5 seconds before it starts recording.
Are these settings too much for the iPhone? (iPhone 4). Am I initializing it wrong?
I'm seeing the same issue on the iOS5/iPhone4 combo. iOS4 and iPhone4S are fine. There seems to be an issue when using the new OS on older hardware. I've tried many combinations of settings as well.
Try setting the AVAudioSession category when you initialize the Audio Recorder:
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
I had the same issue on an iPod touch 4th generation (iOS 4, iOS 5, and jailbroken; I used several devices), and it runs differently on different devices, but I think @Charles Chase is right: it does depend on the device's iOS version you are testing on. Your recordSettings dictionary is OK, and the code also looks right except for one thing; you need to add this:
audioRecorder.delegate = self;
right after you alloc and init your recorder:
audioRecorder = [[AVAudioRecorder alloc] initWithURL:soundFileURL
                                            settings:recordSettings
                                               error:&error];
audioRecorder.delegate = self;
I had the same problem. Also, since I was switching between the Playback and Record categories, this was constantly causing long delays. But when I switched to Play and Record, it refused to use the loudspeaker on the iPhone; it would only use the earpiece speaker.
Solution:
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                        sizeof(audioRouteOverride), &audioRouteOverride);