iOS: Deprecation of AudioSessionInitialize and AudioSessionSetProperty

I'm very new to Objective-C, and am trying to update some code that's about 3 years old to work with iOS 7. There are two instances of AudioSessionSetProperty and AudioSessionInitialize appearing in the code:
1:
- (void)applicationDidFinishLaunching:(UIApplication *)application {
    AudioSessionInitialize(NULL, NULL, NULL, NULL);
    [[SCListener sharedListener] listen];
    timer = [NSTimer scheduledTimerWithTimeInterval:0.5 target:self selector:@selector(tick:) userInfo:nil repeats:YES];
    // Override point for customization after app launch
    [window addSubview:viewController.view];
    [window makeKeyAndVisible];
}
And 2:
- (id)init {
    if ((self = [super init]) == nil) {
        return nil;
    }
    AudioSessionInitialize(NULL, NULL, NULL, NULL);
    Float64 rate = kSAMPLERATE;
    UInt32 size = sizeof(rate);
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareSampleRate, size, &rate);
    return self;
}
For some reason this code works on iOS 7 in the simulator but not on a device running iOS 7, and I suspect that these deprecations are the cause. I've been reading through the docs and related questions on this site, and it appears that I need to use AVAudioSession instead. I've been trying to update the code for a long time now, and I'm unsure how to properly switch over to AVAudioSession. Does anyone know how the two methods above need to look?
Side note: I've managed to hunt down an article that outlines the transition:
https://github.com/software-mariodiana/AudioBufferPlayer/wiki/Replacing-C-functions-deprecated-in-iOS-7
But I can't seem to apply this to the code above.
The code I'm trying to update is a small frequency detection app from git:
https://github.com/jkells/sc_listener
Alternatively, if someone could point me to a sample demo app that can detect frequencies on iOS devices, that would be awesome.

As you have observed, pretty much all of the old Core Audio AudioSession functions have been deprecated in favour of AVAudioSession.
AVAudioSession is a singleton object that gets initialised the first time you access it:
[AVAudioSession sharedInstance]
There is no separate initialize method, but you will want to activate the audio session:
NSError *error = nil;
BOOL activated = [[AVAudioSession sharedInstance] setActive:YES error:&error];
As regards setting the hardware sample rate using AVAudioSession, please refer to my answer here:
How can I obtain the native (hardware-supported) audio sampling rates in order to avoid internal sample rate conversion?
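Putting those pieces together, here is a minimal sketch of how the question's init method might look after the switch (kSAMPLERATE is the constant from the original code; error handling kept deliberately brief):
#import <AVFoundation/AVFoundation.h>

- (id)init {
    if ((self = [super init]) == nil) {
        return nil;
    }
    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    // Replaces AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareSampleRate, ...)
    if (![session setPreferredSampleRate:kSAMPLERATE error:&error]) {
        NSLog(@"Could not set preferred sample rate: %@", error);
    }
    // Replaces AudioSessionInitialize(); the singleton is created lazily,
    // but the session still has to be activated before audio I/O starts.
    if (![session setActive:YES error:&error]) {
        NSLog(@"Could not activate audio session: %@", error);
    }
    return self;
}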
For other comparisons between the Core Audio AudioSession and AVFoundation's AVAudioSession, here are some of my other answers on the same topic:
How Do I Route Audio to Speaker without using AudioSessionSetProperty?
use rear microphone of iphone 5
Play audio through upper (phone call) speaker
How to control hardware mic input gain/level on iPhone?

I wrote a short tutorial that discusses how to update to the new AVAudioSession objects. I posted it on GitHub: "Replacing C functions deprecated in iOS 7."

Related

How to implement a solution to change the orientation to landscape and vice versa

I'm from an RN background and pretty new to Objective-C/Swift. Although it is an RN project, a lot of the implementation was written by previous engineers in Objective-C/Swift. This particular piece was written in Objective-C and locks the screen in either portrait or landscape. The problem is that landscape mode doesn't work on iOS 16.
Upon reading through the docs I discovered that iOS 16 uses requestGeometryUpdateWithPreferences, so our code needs to be updated accordingly. I've been looking around for a solution in Objective-C, and I found a partial solution as well as a solution in Swift.
I thought of using the partial solution above with my own small addition to define deviceOrientation, but I know the code below is incorrect and won't work. What can I try next?
UIInterfaceOrientation deviceOrientation = [UIApplication sharedApplication].statusBarOrientation;
UIWindowScene *windowScene = (UIWindowScene *)[[[UIApplication sharedApplication] connectedScenes] allObjects].firstObject;
UIWindowSceneGeometryPreferencesIOS *preference = [[UIWindowSceneGeometryPreferencesIOS alloc] init];
preference.interfaceOrientations = 1 << deviceOrientation;
[windowScene requestGeometryUpdateWithPreferences:preference errorHandler:^(NSError * _Nonnull error) {
    NSLog(@"error--%@", error);
}];
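For comparison, a minimal sketch of how the iOS 16 API appears intended to be used, assuming a landscape-right lock is wanted (note that interfaceOrientations expects a UIInterfaceOrientationMask value such as UIInterfaceOrientationMaskLandscapeRight, not a raw orientation):
UIWindowScene *windowScene = (UIWindowScene *)[UIApplication sharedApplication].connectedScenes.allObjects.firstObject;
// Build the preferences with the desired orientation mask directly.
UIWindowSceneGeometryPreferencesIOS *preferences =
    [[UIWindowSceneGeometryPreferencesIOS alloc] initWithInterfaceOrientations:UIInterfaceOrientationMaskLandscapeRight];
[windowScene requestGeometryUpdateWithPreferences:preferences
                                     errorHandler:^(NSError * _Nonnull error) {
    NSLog(@"requestGeometryUpdate error: %@", error);
}];
// iOS 16 also expects view controllers to re-evaluate their supported orientations.
[windowScene.keyWindow.rootViewController setNeedsUpdateOfSupportedInterfaceOrientations];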

How to detect microphone input permission refused in iOS 7

I would like to detect when a user refused the microphone permission on my iOS application.
I only get this value when I try to record from the microphone: -120.000000 dB
But before getting this I have to set up an AVAudioSession. Is there another function?
And I got this message in the output:
Microphone input permission refused - will record only silence
Thanks.
If you are still compiling with the iOS 6.0 SDK (as I am) you have to be a bit more indirect than @Luis E. Prado, as the requestRecordPermission method doesn't exist there.
Here's how I did it. Remove the autorelease bit if you're using ARC. On iOS 6 nothing happens, and on iOS 7 either the 'microphone is enabled' message is logged or the alert pops up.
AVAudioSession *session = [AVAudioSession sharedInstance];
if ([session respondsToSelector:@selector(requestRecordPermission:)]) {
    [session performSelector:@selector(requestRecordPermission:) withObject:^(BOOL granted) {
        if (granted) {
            // Microphone enabled code
            NSLog(@"Microphone is enabled..");
        }
        else {
            // Microphone disabled code
            NSLog(@"Microphone is disabled..");
            // We're on a background thread here, so jump to the main thread to do UI work.
            dispatch_async(dispatch_get_main_queue(), ^{
                [[[[UIAlertView alloc] initWithTitle:@"Microphone Access Denied"
                                             message:@"This app requires access to your device's Microphone.\n\nPlease enable Microphone access for this app in Settings / Privacy / Microphone"
                                            delegate:nil
                                   cancelButtonTitle:@"Dismiss"
                                   otherButtonTitles:nil] autorelease] show];
            });
        }
    }];
}
EDIT: It turns out that the withObject block is executed on a background thread, so DO NOT do any UI work in there, or your app may hang. I've adjusted the code above. A client pointed this out on what was thankfully a beta release. Apologies for the mistake.
Please note that this will only work if built with Xcode 5, and not with 4.6
Add the AVFoundation Framework to your project
Then import the AVAudioSession header, from the AVFoundation framework, wherever you intend to check whether the microphone setting is enabled:
#import <AVFoundation/AVAudioSession.h>
Then simply call this method
[[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
    if (granted) {
        // Microphone enabled code
    }
    else {
        // Microphone disabled code
    }
}];
The first time this method runs, it will show the prompt to allow microphone access, and based on the user's response it will execute the completion block. From the second time onwards it will act based on the setting stored on the device.
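Side note: if you later target iOS 8 or above, here is a sketch of reading that stored setting without triggering the prompt, via the recordPermission accessor (not available on iOS 7):
switch ([[AVAudioSession sharedInstance] recordPermission]) {
    case AVAudioSessionRecordPermissionGranted:
        // Microphone enabled code
        break;
    case AVAudioSessionRecordPermissionDenied:
        // Microphone disabled code
        break;
    case AVAudioSessionRecordPermissionUndetermined:
        // The user has not been asked yet; requestRecordPermission: will prompt.
        break;
}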
Swift answer:
if AVAudioSession.sharedInstance().recordPermission() == .Denied {
    print("Microphone permission refused")
}
Or you can use a framework like PermissionScope, which makes it easy to check permissions: https://github.com/nickoneill/PermissionScope
Edit: Swift 3 answer:
import AVFoundation
...
if AVAudioSession.sharedInstance().recordPermission() == .denied {
    print("Microphone permission refused")
}
I'm not 100% certain if we're allowed to talk about iOS 7 outside of Apple's devforums, but I found the answer you're looking for there.
In short, you'll find your solution in the AVAudioSession.h header file in the SDK. And if you want to make use of it while still supporting iOS 6, make certain to use "respondsToSelector:" to check for the API availability.

Launching Facetime from your app?

I am seeing that you can launch FaceTime from your app via
[[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"facetime://tel-number"]];
I am also reading that, since there is no officially public FaceTime API, Apple will reject you.
Does anyone know if this rejection talk is true? PAIR has this feature and they have never been rejected.
This is now documented and legal:
https://developer.apple.com/library/ios/featuredarticles/iPhoneURLScheme_Reference/FacetimeLinks/FacetimeLinks.html#//apple_ref/doc/uid/TP40007899-CH2-SW1
My app got rejected for using the FaceTime URL. This is part of the response I got from Apple in the Resolution Center:
We found the following non-public API/s in your app: Specifically, your app uses the FaceTime URL scheme, which is undocumented. If you have defined methods in your source code with the same names as the above-mentioned APIs, we suggest altering your method names so that they no longer collide with Apple's private APIs to avoid your application being flagged in future submissions.
It was an update of a previously released app; the first version was accepted without any problem. Now the update has been rejected for the reason above. It seems I have to publish the app without the FaceTime thingy for now.
Edit:
It's now legal to use the FaceTime URL scheme in third-party apps.
As a general rule, if you use undocumented API calls and Apple catches you, they will reject your application. The reason is that they could change the API you are using in a new iOS update, which would cause your application to crash or misbehave. You can try to submit using the undocumented API and hope that Apple lets it through, but as I said, you run the risk of Apple changing or removing that API entirely in a future OS release.
I don't see any reason this would be rejected, especially if there's already an app that uses this functionality. The App Store Review Guidelines are the best way to determine if your app will be rejected, and I don't see anything in there that applies to your situation.
Of course, Apple can do whatever they want, so the only way to be absolutely sure it will be accepted is to submit it, but I highly doubt you will have a problem.
It is official that you can use native app URL strings for FaceTime video calls:
facetime://14085551234
facetime://user@example.com
Please refer to the link: https://developer.apple.com/library/archive/featuredarticles/iPhoneURLScheme_Reference/FacetimeLinks/FacetimeLinks.html
Though this feature is supported on all devices, you have to change the code a little bit for iOS 10.0 and above as openURL(_:) is deprecated.
https://developer.apple.com/documentation/uikit/uiapplication/1622961-openurl?language=objc
Please refer to the code below for the current API and a fallback mechanism, so that the app will not get rejected by the App Store.
- (void)callFaceTime:(NSString *)contactNumber
{
    NSURL *URL = [NSURL URLWithString:[NSString stringWithFormat:@"facetime://%@", contactNumber]];
    if (@available(iOS 10.0, *)) {
        [[UIApplication sharedApplication] openURL:URL options:@{}
                                 completionHandler:^(BOOL success)
        {
            if (success)
            {
                NSLog(@"inside success");
            }
            else
            {
                NSLog(@"error");
            }
        }];
    }
    else {
        // Fallback on earlier versions
        NSString *faceTimeUrlScheme = [@"facetime://" stringByAppendingString:contactNumber];
        NSURL *facetimeURL = [NSURL URLWithString:faceTimeUrlScheme];
        // Check whether FaceTime is available
        if ([[UIApplication sharedApplication] canOpenURL:facetimeURL])
        {
            [[UIApplication sharedApplication] openURL:facetimeURL];
        }
        else
        {
            // FaceTime not available
            NSLog(@"Facetime not available");
        }
    }
}
In contactNumber, pass either a phone number or an Apple ID:
NSString *phoneNumber = @"9999999999";
NSString *appleId = @"abc@gmail.com";
[self callFaceTime:appleId];

UIImageWriteToSavedPhotosAlbum and ALAssetsLibrary not saving an image, no error either

I am trying to save an image to the camera roll. This actually used to work wonderfully, but I had to work on other stuff, and now that I'm returning to the project to update it for iOS 6, poof, this feature no longer works at all on iOS 6.
I have tried two approaches; both fail silently without NSError objects. First, UIImageWriteToSavedPhotosAlbum:
UIImageWriteToSavedPhotosAlbum(img, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
// Callback
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
    // error == nil
}
... and the ALAssetsLibrary approach:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:[img CGImage]
                          orientation:(ALAssetOrientation)[img imageOrientation]
                      completionBlock:^(NSURL *assetURL, NSError *error)
{
    // assetURL == nil
    // error == nil
}];
Also, [ALAssetsLibrary authorizationStatus] == ALAuthorizationStatusAuthorized evaluates to true
On the simulator, the app never shows up in the Settings > Privacy > Photos section; however, on an actual iPad it does show that the app has permission to access photos. (Also, just to add: the first approach above was what I previously used; it worked on real devices and simulators alike, no problem.)
I have also tried running this on the main thread to see if that changed anything; no difference. I was running it in the background previously and it used to work fine (on both simulator and device).
Can anyone shed some light?
Figured it out... I was doing something stupid. UIImage cannot take raw pixel data; you have to first massage it into a form it can accept, with the proper metadata.
Part of the problem was that I was using Cocos2D to get a UIImage from a CCRenderTexture (getUIImageFromBuffer()), and when I switched to Cocos2D-x that function was no longer available. I was simply ignorant of the fact that UIImage objects cannot be constructed from raw pixel data; I figured it handled header information and formatting automatically.
This answer helped: iPhone - UIImage imageWithData returning nil
And this example was also helpful:
http://www.wmdeveloper.com/2010/09/create-bitmap-graphics-context-on.html?m=1
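For anyone hitting the same wall, here is a rough sketch of the idea from those links: wrap the raw RGBA pixels in a CGBitmapContext first, then build the UIImage from the resulting CGImage (ImageFromRGBAPixels is a hypothetical helper name; assumes 8-bit RGBA with premultiplied alpha):
static UIImage *ImageFromRGBAPixels(void *pixels, size_t width, size_t height) {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // A bitmap context supplies the header/metadata that raw pixel bytes lack.
    CGContextRef context = CGBitmapContextCreate(pixels, width, height,
                                                 8,           // bits per component
                                                 width * 4,   // bytes per row
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        return nil;
    }
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image; // safe to pass to UIImageWriteToSavedPhotosAlbum
}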

Mute an HTTP Live Stream in an AVPlayer

I've been trying to work out this problem for a good 48 hours now and haven't come up with anything. I have 2 AVPlayer objects playing different http live streams. Obviously, I don't want them both playing audio at the same time so I need a way to mute one of the videos.
Apple suggests this for muting an audio track playing in AVPlayer...
NSMutableArray *allAudioParams = [NSMutableArray array];
for (AVPlayerItemTrack *track in [_playerItem tracks]) {
    if ([track.assetTrack.mediaType isEqualToString:AVMediaTypeAudio]) {
        AVMutableAudioMixInputParameters *audioInputParams = [AVMutableAudioMixInputParameters audioMixInputParameters];
        [audioInputParams setVolume:0.0 atTime:CMTimeMakeWithSeconds(0, 1)];
        [audioInputParams setTrackID:[track.assetTrack trackID]];
        [allAudioParams addObject:audioInputParams];
        // Added to what Apple suggested
        [track setEnabled:NO];
    }
}
AVMutableAudioMix *audioZeroMix = [AVMutableAudioMix audioMix];
[audioZeroMix setInputParameters:allAudioParams];
[_playerItem setAudioMix:audioZeroMix];
When this didn't work (after many iterations), I found the enabled property of AVPlayerItemTrack and tried setting that to NO. Also nothing. It doesn't even register as doing anything, because when I try NSLog(@"%x", track.enabled), it still shows up as 1.
I'm at a loss and I can't think of another piece of documentation I can read and re-read to get a good answer. If anyone out there can help, that would be fantastic.
*Update: I got a hold of Apple, and according to the AVFoundation team it is impossible to mute or disable a track of an HLS video. I personally feel this is a bug, so I submitted a bug report (you should do the same to tell Apple that this is a problem). You can also try to submit a feature enhancement request via their feedback page.
New iOS 7 answer: AVPlayer now has two new properties, volume and muted. Use those!
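For example, to silence one of two players (the player property names here are placeholders):
self.backgroundPlayer.muted = YES;   // silence this stream's audio
self.foregroundPlayer.muted = NO;    // keep the other stream audible
// volume also works, e.g. for fading rather than hard-muting:
self.backgroundPlayer.volume = 0.0f;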
And here is the original answer for life before iOS 7:
I've been dealing with the same thing. We created muted streams and streams with audio. To mute or unmute you call [player replaceCurrentItemWithPlayerItem:muteStream].
I also submitted a bug report. It looks like AVPlayer has this functionality on OS X 10.7, but it hasn't made it to iOS yet.
AVAudioMix is documented not to work on URL assets here
Of course I tried it anyway, and like you I found it really doesn't work.
The best solution for this would be to embed two audio tracks in the stream itself: one with the normal audio, and the other with muted audio.
It makes more sense to do it this way rather than the way ComPuff suggested, as his way you're actually creating two separate URL streams, which is not required.
Here is the code that you could use to switch the audio tracks:
float volume = 0.0f;
AVPlayerItem *currentItem = self.player.currentItem;
NSArray *audioTracks = self.player.currentItem.tracks;
DLog(@"%@", currentItem.tracks);
NSMutableArray *allAudioParams = [NSMutableArray array];
for (AVPlayerItemTrack *track in audioTracks)
{
    if ([track.assetTrack.mediaType isEqual:AVMediaTypeAudio])
    {
        AVMutableAudioMixInputParameters *audioInputParams = [AVMutableAudioMixInputParameters audioMixInputParameters];
        [audioInputParams setVolume:volume atTime:kCMTimeZero];
        [audioInputParams setTrackID:[track.assetTrack trackID]];
        [allAudioParams addObject:audioInputParams];
    }
}
if ([allAudioParams count] > 0) {
    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    [audioMix setInputParameters:allAudioParams];
    [currentItem setAudioMix:audioMix];
}
The only problem is that my stream URL only displays two tracks (one for video and one for audio) when it should actually have three tracks (two audio tracks). I can't work out whether this is a problem with the stream URL or my code! Can anyone spot any mistakes in the code?