I'm building a turn-by-turn navigation app that plays periodic, short clips of sound. Sound should play regardless of whether the screen is locked, should mix with other music playing, and should make other music duck when this audio plays.
Apple discusses the turn-by-turn use case in detail in the "WWDC 2010 session 412 Audio Development for iPhone OS part 1" video at minute 29:20. The implementation works great, but there is one problem - when the app is running, pressing the hardware volume buttons adjusts the ringer volume, not the app volume. If you want to change the app volume, you must press the volume buttons while a prompt is playing.
Apple is very specific in the video that you shouldn't leave the AVAudioSession active, but if the AVAudioSession is inactive, the volume buttons won't control the volume of my app.
Here is the code I'm using to set things up:
OSStatus propertySetError = 0;
UInt32 sessionCategory = kAudioSessionCategory_MediaPlayback;
propertySetError = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(sessionCategory), &sessionCategory);
UInt32 allowMixing = true;
propertySetError = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(allowMixing), &allowMixing);
UInt32 shouldDuck = true;
propertySetError = AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(shouldDuck), &shouldDuck);
OSStatus activationResult = AudioSessionSetActive(true);
NSError* err = nil;
_player = [[AVAudioPlayer alloc] initWithData:audioData error:&err];
_player.delegate = self;
[_player play];
And I set the session active to NO at the end, as Apple recommends:
OSStatus activationResult = AudioSessionSetActive(false);
NSAssert(activationResult == kAudioSessionNoError, @"Error deactivating audio session");
Is there something I'm missing, or do I have to go against what they recommended in the video?
In your case, you don't want to set the audio session inactive. What you need to do is use two methods, one to set the session up for playing a sound, and the other to set it up for being idle. The first method sets up a mix+duck audio mode, and the second uses a background-audio-friendly mode like ambient.
Something like this:
- (void)setActive {
UInt32 mix = 1;
UInt32 duck = 1;
NSError* errRet;
AVAudioSession* session = [AVAudioSession sharedInstance];
[session setActive:NO error:&errRet];
[session setCategory:AVAudioSessionCategoryPlayback error:&errRet];
NSAssert(errRet == nil, @"setCategory!");
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(mix), &mix);
AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(duck), &duck);
[session setActive:YES error:&errRet];
}
- (void)setIdle {
NSError* errRet;
AVAudioSession* session = [AVAudioSession sharedInstance];
[session setActive:NO error:&errRet];
[session setCategory:AVAudioSessionCategoryAmbient error:&errRet];
NSAssert(errRet == nil, @"setCategory!");
[session setActive:YES error:&errRet];
}
Then to call it:
[self setActive];
[self _playAudio:nil];
To clean up after playing:
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer*)player
successfully:(BOOL)flag {
[self setIdle];
}
To be a good citizen, your app should set the audio session inactive when it isn't navigating (i.e., performing its main function), but when it is, there is absolutely nothing wrong with keeping the audio session active and using modes to peacefully coexist with other apps. You can duplicate Apple's navigation app functionality using the code above.
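If you do want to release the session entirely when navigation ends, a minimal sketch might look like this (assuming iOS 6 or later for setActive:withOptions:error:; the navigationDidEnd hook is a hypothetical name, not part of your code):
- (void)navigationDidEnd {
    // Fully release the session so other apps regain audio focus;
    // the option tells them they may resume at full volume.
    NSError *error = nil;
    [[AVAudioSession sharedInstance] setActive:NO
                                   withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                                         error:&error];
    if (error != nil) {
        NSLog(@"Could not deactivate audio session: %@", error);
    }
}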
I'm creating an app that has a remotely-triggered alarm. Essentially, I'm trying to trigger a looping MP3 file to play (while the app is backgrounded) when a remote push notification arrives with a particular payload.
I've tried using didReceiveRemoteNotification: fetchCompletionHandler:, so that code can be run as a result of receiving a remote notification with a particular userInfo payload.
Here is my attempted didReceiveRemoteNotification: fetchCompletionHandler: from my AppDelegate.m:
- (void)application:(UIApplication *)application didReceiveRemoteNotification:(NSDictionary *)userInfo fetchCompletionHandler:(void (^)(UIBackgroundFetchResult))completionHandler
{
NSString *command = [userInfo valueForKeyPath:@"custom.a.command"];
if (command) {
UIApplicationState applicationState = [[UIApplication sharedApplication] applicationState];
if ([command isEqualToString:@"alarm"] && applicationState != UIApplicationStateActive) {
// Play alarm sound on loop until app is opened by user
NSLog(@"playing alarm.mp3");
NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"alarm" ofType:@"mp3"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
NSError *error;
self.player = nil;
self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:soundFileURL error:&error];
self.player.numberOfLoops = -1; // Infinitely loop while self.player is playing
self.player.delegate = self;
[self.player play];
}
}
completionHandler(UIBackgroundFetchResultNewData);
}
I expected the looping audio file to start playing as soon as the push notification arrived (with the app inactive or backgrounded), but it didn't. Instead, playback surprisingly began only when I then brought the app to the foreground.
What is missing from this approach, and/or can a different way work better?
You cannot start an audio session while the app is in the background. Audio sessions have to be initialized and started while the app is in the foreground. A properly initialized and running audio session can continue after the app is pushed to the background, provided another app in the foreground does not interrupt it.
Based on this, your application has to start an audio session while it is in the foreground and keep that session alive while it is in the background. When the push notification arrives, use the already-open audio session to deliver the audio.
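As a rough sketch of that approach (not your exact code; it assumes the "audio" background mode is declared in Info.plist), the session could be configured while the app is still in the foreground, for example in application:didFinishLaunchingWithOptions::
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
// Playback category so audio keeps running with the screen locked or the app backgrounded.
if (![session setCategory:AVAudioSessionCategoryPlayback error:&error]) {
    NSLog(@"Could not set audio session category: %@", error);
}
if (![session setActive:YES error:&error]) {
    NSLog(@"Could not activate audio session: %@", error);
}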
This has serious limitations since any other app, like Netflix, that uses a dedicated audio session may interrupt your app's audio session and prevent it from being able to play the MP3 when it arrives.
You may want to consider pre-packaging or downloading the sound file ahead of time and referring to it directly in the sound parameter of your push notification payload.
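For illustration, such a payload might look like the following (the alert text and file name are placeholders; note that notification sounds must ship in a supported format such as .aiff, .wav, or .caf and be 30 seconds or shorter):
{
    "aps" : {
        "alert" : "Alarm triggered",
        "sound" : "alarm.caf"
    }
}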
You may follow this tutorial to see how you can play custom sounds using push notifications: https://medium.com/@dmennis/the-3-ps-to-custom-alert-sounds-in-ios-push-notifications-9ea2a2956c11
func pushNotificationHandler(userInfo: Dictionary<AnyHashable,Any>) {
// Parse the aps payload
let apsPayload = userInfo["aps"] as! [String: AnyObject]
// Play custom push notification sound (if exists) by parsing out the "sound" key and playing the audio file specified
// For example, if the incoming payload is: { "sound":"tarzanwut.aiff" } the app will look for the tarzanwut.aiff file in the app bundle and play it
if let mySoundFile : String = apsPayload["sound"] as? String {
playSound(fileName: mySoundFile)
}
}
// Play the specified audio file with extension
func playSound(fileName: String) {
var sound: SystemSoundID = 0
if let soundURL = Bundle.main.url(forResource: fileName, withExtension: nil) {
AudioServicesCreateSystemSoundID(soundURL as CFURL, &sound)
AudioServicesPlaySystemSound(sound)
}
}
I am using the iPhone/iPad camera to get a video stream and doing recognition on the stream, but lighting changes have a negative impact on its robustness. I have tested different settings in different light and can get it to work, but what I need is for the settings to adjust at run time.
I can calculate a simple brightness check on each frame, but the camera adjusts and throws my results off. I can watch for sharp changes and run checks then, but gradual changes would throw my results off as well.
Ideally I'd like to access the camera/EXIF data for the stream and see what it registers the unfiltered brightness as. Is there a way to do this?
(I am targeting devices running iOS 5 and above.)
Thank you
This is available in iOS 4.0 and above: it's possible to get EXIF information from a CMSampleBufferRef.
//Import ImageIO & include framework in your project.
#import <ImageIO/CGImageProperties.h>
In your sample buffer delegate, toll-free bridging will get you an NSDictionary of results from Core Media's CMGetAttachment:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
NSDictionary* dict = (NSDictionary*)CMGetAttachment(sampleBuffer, kCGImagePropertyExifDictionary, NULL);
Complete code, as used in my own app:
- (void)setupAVCapture {
//-- Setup Capture Session.
_session = [[AVCaptureSession alloc] init];
[_session beginConfiguration];
//-- Set preset session size.
[_session setSessionPreset:AVCaptureSessionPreset1920x1080];
//-- Create a video device and an input from that device. Add the input to the capture session.
AVCaptureDevice * videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if(videoDevice == nil)
assert(0);
//-- Add the device to the session.
NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if(error)
assert(0);
[_session addInput:input];
//-- Create the output for the capture session.
AVCaptureVideoDataOutput * dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES]; // Probably want to set this to NO when recording
//-- Set to YUV420.
[dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]]; // Necessary for manual preview
// Set dispatch to be on the main thread so OpenGL can do things with the data
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[_session addOutput:dataOutput];
[_session commitConfiguration];
[_session startRunning];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
CFDictionaryRef metadataDict = CMCopyDictionaryOfAttachments(NULL,
sampleBuffer, kCMAttachmentMode_ShouldPropagate);
NSDictionary *metadata = [[NSMutableDictionary alloc]
initWithDictionary:(__bridge NSDictionary*)metadataDict];
CFRelease(metadataDict);
NSDictionary *exifMetadata = [[metadata
objectForKey:(NSString *)kCGImagePropertyExifDictionary] mutableCopy];
self.autoBrightness = [[exifMetadata
objectForKey:(NSString *)kCGImagePropertyExifBrightnessValue] floatValue];
float oldMin = -4.639957; // dark
float oldMax = 4.639957; // light
if (self.autoBrightness > oldMax) oldMax = self.autoBrightness; // adjust oldMax if brighter than expected oldMax
self.lumaThreshold = ((self.autoBrightness - oldMin) * ((3.0 - 1.0) / (oldMax - oldMin))) + 1.0;
NSLog(#"brightnessValue %f", self.autoBrightness);
NSLog(#"lumaThreshold %f", self.lumaThreshold);
}
The lumaThreshold variable is sent as a uniform variable to my fragment shader, which multiplies the Y sampler texture to find the ideal luminosity based on the brightness of the environment. Right now, it uses the back camera; I'll probably switch to the front camera, since I'm only changing the "brightness" of the screen to adjust for indoor/outdoor viewing, and the user's eyes are on the front of the camera (and not the back).
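For context, the hand-off to the shader might look something like this (a hypothetical sketch; the uniform and sampler names are assumptions, not from the code above):
// Objective-C side, once per frame after computing the value:
glUniform1f(_lumaThresholdUniform, self.lumaThreshold);
// Fragment shader side (GLSL), scaling the sampled Y (luma) value:
//   float y = texture2D(SamplerY, texCoord).r * lumaThreshold;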
I created a 'mirror'-like view in my app that uses the front camera to show a 'mirror' to the user. The problem I'm having is that I haven't touched this code in weeks (and it did work then), but now I'm testing it again and it's not working. The code is the same as before, there are no errors coming up, and the view in the storyboard is exactly the same as before. I have no idea what is going on, so I was hoping this site could help.
Here is my code:
if([UIImagePickerController isCameraDeviceAvailable:UIImagePickerControllerCameraDeviceFront]) {
//If the front camera is available, show the camera
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureOutput *output = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:output];
//Setup camera input
NSArray *possibleDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
//You could check for front or back camera here, but for simplicity just grab the first device
AVCaptureDevice *device = [possibleDevices objectAtIndex:1];
NSError *error = nil;
// create an input and add it to the session
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error]; //Handle errors
//set the session preset
session.sessionPreset = AVCaptureSessionPresetHigh; //Or other preset supported by the input device
[session addInput:input];
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
//Now you can add this layer to a view of your view controller
[cameraView.layer addSublayer:previewLayer];
previewLayer.frame = self.cameraView.bounds;
[session startRunning];
if ([session isRunning]) {
NSLog(#"The session is running");
}
if ([session isInterrupted]) {
NSLog(#"The session has been interupted");
}
} else {
//Tell the user they don't have a front facing camera
}
Thank you in advance.
I'm not sure if this is the problem, but there is an inconsistency between your code and the comments. The inconsistency is in the following line of code:
AVCaptureDevice *device = [possibleDevices objectAtIndex:1];
The comment above it says: "...for simplicity just grab the first device". However, the code is grabbing the second device; NSArray is indexed from 0. I believe the comment is what should be corrected, as I think you are assuming the front camera will be the second device in the array.
If you are working on the assumption that the first device is the back camera and the second device is the front camera, that is a dangerous assumption. It would be much safer and more future-proof to search possibleDevices for the device that actually is the front camera.
The following code will enumerate the list of possibleDevices and create input using the front camera.
// Find the front camera and create an input and add it to the session
AVCaptureDeviceInput* input = nil;
for(AVCaptureDevice *device in possibleDevices) {
if ([device position] == AVCaptureDevicePositionFront) {
NSError *error = nil;
input = [AVCaptureDeviceInput deviceInputWithDevice:device
error:&error]; //Handle errors
break;
}
}
Update: I have just cut and pasted the code exactly as it is in the question into a simple project and it is working fine for me. I am seeing the video from the front camera. You should probably look elsewhere for the issue. First, I would be inclined to check the cameraView and associated layers.
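For example, a quick sanity check (a hypothetical snippet, assuming the cameraView outlet from the question) would be to log the view and its layers right after adding the preview layer:
NSLog(@"cameraView: %@, bounds: %@", self.cameraView, NSStringFromCGRect(self.cameraView.bounds));
NSLog(@"sublayers: %@", self.cameraView.layer.sublayers);
If the view is nil (e.g., a disconnected outlet) or its bounds are zero-sized, the preview layer will never be visible even though the session is running.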
I'm writing an iOS App using an AudioQueue for recording. I create an input queue configured to get linear PCM, start this queue, and everything works as expected.
To manage interruptions, I implemented the delegate methods of AVAudioSession to catch the beginning and the end of an interruption. The endInterruption method looks like the following:
- (void)endInterruptionWithFlags:(NSUInteger)flags;
{
if (flags == AVAudioSessionInterruptionFlags_ShouldResume && audioQueue != 0) {
NSLog(#"Current audio session - category: '%#' mode: '%#'",
[[AVAudioSession sharedInstance] category],
[[AVAudioSession sharedInstance] mode]);
NSError *error = nil;
OSStatus errorStatus;
if ((errorStatus = AudioSessionSetActive(true)) != noErr) {
error = [self errorForAudioSessionServiceWithOSStatus:errorStatus];
NSLog(#"Could not reactivate the audio session: %#",
[error localizedDescription]);
} else {
if ((errorStatus = AudioQueueStart(audioQueue, NULL)) != noErr) {
error = [self errorForAudioQueueServiceWithOSStatus:errorStatus];
NSLog(#"Could not restart the audio queue: %#",
[error localizedDescription]);
}
}
}
// ...
}
If the app gets interrupted while it is in the foreground, everything works correctly. The problem appears if the interruption happens in the background. Activating the audio session results in the error !cat:
The specified audio session category cannot be used for the attempted audio operation. For example, you attempted to play or record audio with the audio session category set to kAudioSessionCategory_AudioProcessing.
Starting the queue without activating the session results in the error code: -12985
At that point the category is set to AVAudioSessionCategoryPlayAndRecord and the mode is AVAudioSessionModeDefault.
I couldn't find any documentation for this error message, nor any indication of whether it is possible to restart an input audio queue in the background.
Yes, it is possible, but to reactivate the session in the background, the audio session either has to have the kAudioSessionProperty_OverrideCategoryMixWithOthers property set:
OSStatus propertySetError = 0;
UInt32 allowMixing = true;
propertySetError = AudioSessionSetProperty (
kAudioSessionProperty_OverrideCategoryMixWithOthers,
sizeof (allowMixing),
&allowMixing
);
or the app has to receive remote control command events:
[[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
[self becomeFirstResponder];
If neither of those applies, there is at present no way to reactivate the session while you are in the background.
Have you made your app support backgrounding in the Info.plist? I'm not sure if recording is possible in the background, but you probably need to add "Required background modes" and then a value in that array of "App plays audio".
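In the raw Info.plist source, that entry is the UIBackgroundModes array; "App plays audio" corresponds to the "audio" value:
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>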
Update: I just checked, and recording in the background is possible.
I am creating an application in which I'm using an NSTimer and an AVAudioPlayer to play sound, but both the sound and the timer stop when the phone is in deep sleep mode. How do I solve this issue?
Here is the code to play audio:
-(void)PlayTickTickSound:(NSString*)SoundFileName
{
//Get the filename of the sound file:
NSString *path = [NSString stringWithFormat:@"%@%@",[[NSBundle mainBundle] resourcePath],[NSString stringWithFormat:@"/%@",SoundFileName]];// @"/Tick.mp3"];
//Get a URL for the sound file
NSURL *filePath = [NSURL fileURLWithPath:path isDirectory:NO];
NSError *error;
if(self.TickPlayer==nil)
{
self.TickPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:filePath error:&error];
// handle errors here.
self.TickPlayer.delegate=self;
[self.TickPlayer setNumberOfLoops:-1]; // repeat forever
[self.TickPlayer play];
}
else
{
[self.TickPlayer play];
}
}
In order to prevent an app from going to sleep when the screen is locked, you must set your audio session to the kAudioSessionCategory_MediaPlayback category.
Here's an example:
UInt32 category = kAudioSessionCategory_MediaPlayback;
OSStatus result = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
sizeof(category), &category);
if (result){
DebugLog(#"ERROR SETTING AUDIO CATEGORY!\n");
}
result = AudioSessionSetActive(true);
if (result) {
DebugLog(#"ERROR SETTING AUDIO SESSION ACTIVE!\n");
}
If you don't set the audio session category, then your app will sleep.
This will only continue to prevent the app from being put to sleep as long as you continue to play audio. If you stop playing audio and the screen is still locked, the app will go to sleep and your timers will be paused.
If you want the app to remain awake indefinitely, you'll need to play a "silent" audio file to keep it awake.
I have a code example of this here: Preventing iPhone Sleep
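As a minimal sketch of that idea (the silence.wav file name and the silencePlayer property are assumptions, not taken from the linked example):
// silencePlayer is assumed to be a strong AVAudioPlayer property on this class.
NSURL *silenceURL = [[NSBundle mainBundle] URLForResource:@"silence" withExtension:@"wav"];
NSError *error = nil;
self.silencePlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:silenceURL error:&error];
self.silencePlayer.numberOfLoops = -1; // loop indefinitely so the audio session never goes idle
[self.silencePlayer play];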