Set up separate app volume controls - Objective-C

How can I control my app's output volume?
I've got an app that uses https://github.com/mattgallagher/AudioStreamer to stream mp3 files from the internet. The AudioStreamer class does not have a way to change output volume and I don't want to change system volume.
Many apps do this:
iTunes
Spotify
MPlayerX
Most Audio Players
Edit: If you're here about AudioStreamer, I've since switched to Apple's AVPlayer, which I've found far simpler and superior. Easy volume adjustment too!

AudioStreamer, and I'm guessing most OS X media players, uses the AudioToolbox framework. AudioToolbox uses a programming interface called AudioQueue to play back media files. Here's how to adjust the volume with AudioQueue:
AudioQueueSetParameter(audioQueue, kAudioQueueParam_Volume, 0.5);
audioQueue is an AudioQueueRef
kAudioQueueParam_Volume tells AudioQueueSetParameter() to change the volume parameter
0.5 is the volume, on a scale from 0.0 to 1.0
More details on AudioQueue: https://developer.apple.com/library/mac/#documentation/MusicAudio/Reference/AudioQueueReference/Reference/reference.html
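AudioStreamer itself doesn't expose this, but a small setter could be added to the class. A rough sketch, assuming you have access to its internal AudioQueueRef ivar (called audioQueue here); this method is not part of the original class:

- (void)setVolume:(float)volume
{
    if (audioQueue)
    {
        // Clamp to the valid 0.0 - 1.0 range before applying.
        float clamped = MAX(0.0f, MIN(1.0f, volume));
        OSStatus err = AudioQueueSetParameter(audioQueue, kAudioQueueParam_Volume, clamped);
        if (err != noErr)
        {
            NSLog(@"AudioQueueSetParameter failed: %d", (int)err);
        }
    }
}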

You can use AVAudioPlayer; it has an instance method setVolume: to set the output volume:
AVAudioPlayer * audioPlayer = ...
float volume = aVolumeValue / 100.f; // aVolumeValue can be 0~100
[audioPlayer setVolume:volume];
[audioPlayer play];
By the way, you can store aVolumeValue in NSUserDefaults and control it with a UISlider, as in the sketch below.
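For example, a minimal sketch of wiring a UISlider (configured with a 0-100 range) to the player and persisting the value; the property names and the defaults key are assumptions:

// Hypothetical slider action; self.audioPlayer is an assumed AVAudioPlayer property.
- (IBAction)volumeSliderChanged:(UISlider *)sender
{
    float aVolumeValue = sender.value;               // 0-100
    self.audioPlayer.volume = aVolumeValue / 100.f;  // AVAudioPlayer expects 0.0-1.0
    [[NSUserDefaults standardUserDefaults] setFloat:aVolumeValue forKey:@"playerVolume"];
}

// On launch, restore the saved value (0 if never set):
float savedVolume = [[NSUserDefaults standardUserDefaults] floatForKey:@"playerVolume"];
self.audioPlayer.volume = savedVolume / 100.f;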

Related

Audio Visualizer with AVQueuePlayer

I'm trying to build an audio visualizer for playback from an AVQueuePlayer. I want something similar to this, but AVQueuePlayer does not have some of the required methods, such as [audioPlayer updateMeters], [audioPlayer numberOfChannels], and [audioPlayer averagePowerForChannel:]. Could anyone help me with a workaround, or link me to a guide on setting one up? I want to use AVQueuePlayer because of its queueing ability, but if I can't get something set up I would consider managing the queue myself and using AVAudioPlayer.
Thanks for your help.
AVQueuePlayer is a subclass of AVPlayer, not AVAudioPlayer, which is why it's "missing" those methods.
You can just use AVAudioPlayer for playback and an MPMediaPickerController for loading up a queue of songs. Take a look at the AddMusic sample code. It's a bit dated, but should get you going.
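If you do go the AVAudioPlayer route, the metering methods from the question are all available on it. A rough sketch; the audioPlayer property and the timer that calls refreshMeters are assumed:

// Enable metering once, after creating the player.
self.audioPlayer.meteringEnabled = YES;

// Poll the meters periodically, e.g. from an NSTimer or CADisplayLink callback.
- (void)refreshMeters
{
    [self.audioPlayer updateMeters];
    for (NSUInteger channel = 0; channel < self.audioPlayer.numberOfChannels; channel++)
    {
        // averagePowerForChannel: returns dB, roughly -160.0 (silence) to 0.0 (full scale).
        float power = [self.audioPlayer averagePowerForChannel:channel];
        // Feed power into your visualizer here.
    }
}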

Get Volume of AVPlayer in iOS

There are plenty of questions asking how to set the volume of an AVPlayer, but how do you get the current volume of the player in iOS?
For example, I am trying to fade a song out from its current level. I could save the volume elsewhere and refer to it, but would rather read the value directly from the AVPlayer.
AVPlayer contains one or more AVPlayerItem objects, and it is through these objects that you can get and set audio levels for media played by an AVPlayer. Head to the AVPlayerItem docs and look at the audioMix property, and also check out my answer to a slightly different question that should still provide some info.
Following up after your comment, this is (I think) how you would get the volume values from the - (BOOL)getVolumeRampForTime:(CMTime)time startVolume:(float *)startVolume endVolume:(float *)endVolume timeRange:(CMTimeRange *)timeRange method:
// Get your AVAudioMixInputParameters instance, here called audioMixInputParameters
// currentTime is the current playhead time of your media
float startVolume;
float endVolume;
CMTimeRange timeRange;
BOOL success = [audioMixInputParameters getVolumeRampForTime:currentTime
                                                 startVolume:&startVolume
                                                   endVolume:&endVolume
                                                   timeRange:&timeRange];
// startVolume and endVolume should now be set
NSLog(@"Start volume: %f | End volume: %f", startVolume, endVolume);
Apple's AVPlayer documentation for OS X lists a volume property, but the documentation for the same class on iOS doesn't show one. Would your project allow you to use AVAudioPlayer instead? That one does have a synthesized volume property on iOS that's much more easily set and retrieved.
You could use the volume property of the AVPlayer class. Here's the AVPlayer class reference link. Quoting from it:
volume
Indicates the current audio volume of the player.
@property(nonatomic) float volume
Discussion
0.0 means “silence all audio,” 1.0 means “play at the full volume of the current item.”
Availability
Available in OS X v10.7 and later.
Declared In
AVPlayer.h
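Since the question was about fading from the current level: if the volume property is available on your deployment target, here is a rough sketch that reads the level directly from the player and steps it down over about a second (self.player and self.fadeTimer are assumed properties, and the step size is arbitrary):

- (void)fadeOut
{
    self.fadeTimer = [NSTimer scheduledTimerWithTimeInterval:0.05
                                                      target:self
                                                    selector:@selector(fadeStep:)
                                                    userInfo:nil
                                                     repeats:YES];
}

- (void)fadeStep:(NSTimer *)timer
{
    float current = self.player.volume;   // read the current level from the player itself
    if (current > 0.05f)
    {
        self.player.volume = current - 0.05f;
    }
    else
    {
        self.player.volume = 0.0f;
        [self.player pause];
        [timer invalidate];
    }
}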
edit:
You could try getting the system volume instead. This link provides two ways.

How to create a simultaneously playing audio stack from different audio files in Cocoa?

I'm working on a Mac OS X application using Cocoa in Xcode. One feature involves a simultaneous audio playback:
I want to build some kind of audio stack: an audio file merged at runtime from a set of different source files. The set of source audio files differs on each run. Each audio file has exactly the same length.
After creating the audio stack (or merged file) I want to play [and store] it.
I'm new to audio frameworks in Cocoa. Is there a high-level API that provides an appropriate functionality? Do I have to look inside the CoreAudio, Audio Unit or QTKit framework? Do you have an implementation idea (or sample implementation)?
If you just want to play a bunch of audio files simultaneously, this is easy; unlike Windows, OS X acts as if it has an infinite-channel mixer built in, and all the higher-level functions just grab a new channel if necessary for each sound. For example:
NSURL *u1 = [[NSBundle mainBundle] URLForResource:@"1" withExtension:@"mp3"];
NSURL *u2 = [[NSBundle mainBundle] URLForResource:@"2" withExtension:@"mp3"];
NSSound *s1 = [[NSSound alloc] initWithContentsOfURL:u1 byReference:YES];
NSSound *s2 = [[NSSound alloc] initWithContentsOfURL:u2 byReference:YES];
[s1 play];
[s2 play];
This will start playing MyApp.app/Contents/Resources/1.mp3 and MyApp.app/Contents/Resources/2.mp3 at (very close to) the same time.
If you need a (real or virtual) file with a bunch of audio tracks in it, QTKit is probably the easiest way; create a movie, open all of your audio files, copy their tracks into the movie (with the same start time), and now you can play the movie (or do whatever else you want).
If you want to actually merge the audio into one stereo track (e.g., so you can save a normal audio file), instead of just playing the tracks together at runtime, you could use QTKit as above and create an export session (much like using the "Export…" feature in QuickTime Player), or use CoreAudio, or a variety of different cross-platform open source libraries.
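As an alternative to QTKit, AVFoundation can do the same merge with an AVMutableComposition: add each file's audio track at the same start time, then play or export the composition. A sketch under the assumption that the URLs point to valid audio files:

// Merge several audio files so they all start together, using AVMutableComposition.
AVMutableComposition *composition = [AVMutableComposition composition];
NSArray *sourceURLs = [NSArray arrayWithObjects:u1, u2, nil];   // any audio file URLs

for (NSURL *url in sourceURLs)
{
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
    NSArray *audioTracks = [asset tracksWithMediaType:AVMediaTypeAudio];
    if ([audioTracks count] == 0) continue;

    AVMutableCompositionTrack *compositionTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                                 preferredTrackID:kCMPersistentTrackID_Invalid];
    NSError *error = nil;
    // Insert every source track at kCMTimeZero so the files play simultaneously.
    [compositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
                              ofTrack:[audioTracks objectAtIndex:0]
                               atTime:kCMTimeZero
                                error:&error];
}

// Play the merged result...
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:composition];
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
[player play];

// ...or hand the composition to an AVAssetExportSession if you need to store it as a file.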

Receive remote control events without audio

Here is some background information, otherwise skip ahead to the question in bold. I am building an app and I would like it to have access to the remote control/lock screen events. The tricky part is that this app does not play audio itself; it controls the audio of another device nearby. The communication between devices is not a problem when the app is in the foreground. As I just found out, an app does not assume control of the remote controls until it has played audio with a playback audio session, and was the last to do so. This presents a problem because, as I said, the app controls ANOTHER device's audio and has no need to play its own.
My first inclination is to have the app play a silent clip every time it is opened in order to assume control of the remote controls. The fact that I have to do this makes me wonder if I am even going to be allowed to do it by Apple or if there is another way to achieve this without fooling the system with fake audio clips.
QUESTION(S): Will Apple approve an app that plays a silent audio clip in order to assume control of the remote/lock screen controls for the purpose of controlling another device's audio? Is there any way of assuming control of the remote controls without an audio session?
P.S. I would prefer to have this functionality on iOS 4.0 and up.
P.P.S I have seen this similar question and it has gotten me brainstorming but the answer provided is not specific to what I need to know.
NOTE: As of iOS 7.1, you should be using MPRemoteCommandCenter instead of the answer below.
The shared [MPRemoteCommandCenter sharedCommandCenter] exposes system-provided MPRemoteCommand objects as properties; you add targets or handlers to the commands you want to respond to.
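A minimal sketch of that setup; the handler bodies are placeholders:

// Requires the MediaPlayer framework.
MPRemoteCommandCenter *commandCenter = [MPRemoteCommandCenter sharedCommandCenter];

[commandCenter.playCommand addTargetWithHandler:^MPRemoteCommandHandlerStatus(MPRemoteCommandEvent *event) {
    // Tell the other device to start playing here.
    return MPRemoteCommandHandlerStatusSuccess;
}];

[commandCenter.pauseCommand addTargetWithHandler:^MPRemoteCommandHandlerStatus(MPRemoteCommandEvent *event) {
    // Tell the other device to pause here.
    return MPRemoteCommandHandlerStatusSuccess;
}];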
I'm keeping the rest of this around for historical reference, but the following is not guaranteed to work on recent iOS versions. In fact, it just might not.
You definitely do need an audio player, but not necessarily an explicit session, to take control of the remote control events. (AVAudioSession is implicit to any app that plays audio.) I spent a decent amount of time experimenting to confirm this.
I've seen a lot of confusion on the internet about where to set up the remoteControlReceivedWithEvent: method and about various approaches to the responder chain. I know this method works on iOS 6 and iOS 7; others have not. Don't waste your time handling remote control events in the app delegate (where they used to work) or in a view controller, which may go away during the lifecycle of your app.
I made a demo project to show how to do this.
Here's a quick rundown of what has to happen:
You need to create a subclass of UIApplication. When the documentation says UIResponder, it means UIApplication, since your application class is a subclass of UIResponder. In this subclass, you're going to implement the remoteControlReceivedWithEvent: and canBecomeFirstResponder methods. You want to return YES from canBecomeFirstResponder. In the remote control method, you'll probably want to notify your audio player that something's changed.
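A minimal sketch of such a subclass (the class name RCApplication matches the step below; the notification name is a made-up example):

// RCApplication.h
@interface RCApplication : UIApplication
@end

// RCApplication.m
@implementation RCApplication

- (BOOL)canBecomeFirstResponder
{
    return YES;
}

- (void)remoteControlReceivedWithEvent:(UIEvent *)event
{
    if (event.type == UIEventTypeRemoteControl)
    {
        // Forward the event to whatever owns your audio player.
        [[NSNotificationCenter defaultCenter] postNotificationName:@"RCRemoteControlEvent"
                                                            object:event];
    }
}

@end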
You need to tell iOS to use your custom class to run the app, instead of the default UIApplication. To do so, open main.m and change this:
return UIApplicationMain(argc, argv, nil, NSStringFromClass([RCAppDelegate class]));
to look like this:
return UIApplicationMain(argc, argv, NSStringFromClass([RCApplication class]), NSStringFromClass([RCAppDelegate class]));
In my case RCApplication is the name of my custom class. Use the name of your subclass instead. Don't forget to #import the appropriate header.
OPTIONAL: You should configure an audio session. It's not required, but if you don't, audio won't play if the phone is muted. I do this in the demo app's delegate, but do so where appropriate.
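For example, a minimal playback-category setup (error handling omitted):

// Configure a playback session so audio keeps playing with the ring/silent switch muted.
NSError *sessionError = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:&sessionError];
[[AVAudioSession sharedInstance] setActive:YES error:&sessionError];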
Play something. Until you do, the remote controls will ignore your app. I just took an AVPlayer and gave it the URL of a streaming site that I expect to be up. If you find that it fails, put your own URL in there and play with it to your heart's content.
This example has a little bit more code in there to log out remote events, but it's not all that complicated. I just define and pass around some string constants.
I bet that a silent looping MP3 file would help work towards your goal.
Moshe's solution worked great for me! However, one issue I noticed is that when you pause the audio, the media controls go away and you can't play it again without going back into the app. If you set the media info on the lock screen when you play the audio, this won't happen:
NSDictionary *mediaInfo = @{MPMediaItemPropertyTitle: @"My Title",
                            MPMediaItemPropertyAlbumTitle: @"My Album Name",
                            MPMediaItemPropertyPlaybackDuration: [NSNumber numberWithFloat:0.30f]};
[[MPNowPlayingInfoCenter defaultCenter] setNowPlayingInfo:mediaInfo];

Frame synchronization with AVPlayer

I'm having an issue syncing external content in a CALayer with an AVPlayer at high precision.
My first thought was to lay out an array of frames (equal to the number of frames in the video) within a CAKeyframeAnimation and sync with an AVSynchronizedLayer. However, upon stepping through the video frame-by-frame, it appears that AVPlayer and Core Animation redraw on different cycles, as there is a slight (but noticeable) delay between them before they sync up.
Short of processing and displaying through Core Video, is there a way to accurately sync with an AVPlayer on the frame level?
Update: February 5, 2012
So far the best way I've found to do this is to pre-render through AVAssetExportSession coupled with AVVideoCompositionCoreAnimationTool and a CAKeyframeAnimation.
I'm still very interested in learning of any real-time ways to do this, however.
What do you mean by 'high precision?'
Although the docs claim that an AVAssetReader is not designed for real-time usage, in practice I have had no problems reading video in real-time using it (cf https://stackoverflow.com/a/4216161/42961). The returned frames come with a 'Presentation timestamp' which you can fetch using CMSampleBufferGetPresentationTimeStamp.
You'll want one part of the project to be the 'master' timekeeper here. Assuming your CALayer animation is quick to compute and doesn't involve potentially blocking things like disk access, I'd use that as the master time source. When you need to draw content (e.g., in the draw selector on your UIView subclass), read currentTime from the CALayer animation, proceed if necessary through the AVAssetReader's video frames using copyNextSampleBuffer until CMSampleBufferGetPresentationTimeStamp returns a value >= currentTime, draw the frame, and then draw the CALayer animation content over the top.
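A rough sketch of that stepping loop; trackOutput is an assumed AVAssetReaderTrackOutput whose reader has already started, and the drawing helper is hypothetical:

// Advance the reader until we reach the frame whose presentation timestamp
// matches or passes the animation's current time, then draw that frame.
CMTime masterTime = CMTimeMakeWithSeconds(currentTime, 600);   // currentTime from the CALayer animation

CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [trackOutput copyNextSampleBuffer]) != NULL)
{
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (CMTimeCompare(pts, masterTime) >= 0)
    {
        // This frame is at or past the master clock: draw it, then stop advancing.
        [self drawVideoFrameFromSampleBuffer:sampleBuffer];   // hypothetical helper
        CFRelease(sampleBuffer);
        break;
    }
    CFRelease(sampleBuffer);
}

// After drawing the video frame, draw the CALayer animation content over the top.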
If your player is using an AVURLAsset, did you load it with the precise duration flag set? I.e. something like:
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
AVURLAsset *urlAsset = [AVURLAsset URLAssetWithURL:aUrl options:options];