Mute an HTTP Live Stream in an AVPlayer - objective-c

I've been trying to work out this problem for a good 48 hours now and haven't come up with anything. I have two AVPlayer objects playing different HTTP live streams. Obviously, I don't want them both playing audio at the same time, so I need a way to mute one of the videos.
Apple suggests this for muting an audio track playing in AVPlayer...
NSMutableArray *allAudioParams = [NSMutableArray array];
for (AVPlayerItemTrack *track in [_playerItem tracks]) {
    if ([track.assetTrack.mediaType isEqualToString:AVMediaTypeAudio]) {
        AVMutableAudioMixInputParameters *audioInputParams = [AVMutableAudioMixInputParameters audioMixInputParameters];
        [audioInputParams setVolume:0.0 atTime:CMTimeMakeWithSeconds(0, 1)];
        [audioInputParams setTrackID:[track.assetTrack trackID]];
        [allAudioParams addObject:audioInputParams];
        // Added to what Apple suggested
        [track setEnabled:NO];
    }
}
AVMutableAudioMix *audioZeroMix = [AVMutableAudioMix audioMix];
[audioZeroMix setInputParameters:allAudioParams];
[_playerItem setAudioMix:audioZeroMix];
When this didn't work (after many iterations), I found the enabled property of AVPlayerItemTrack and tried setting that to NO. Also nothing. It doesn't even register as doing anything: when I try NSLog(@"%x", track.enabled), it still shows up as 1.
I'm at a loss and I can't think of another piece of documentation I can read and re-read to get a good answer. If anyone out there can help, that would be fantastic.
Update: I got hold of Apple, and according to the AVFoundation team it is impossible to mute or disable a track of an HLS video. I personally feel this is a bug, so I submitted a bug report (you should do the same to tell Apple that this is a problem). You can also try submitting a feature enhancement request via their feedback page.

New iOS 7 answer: AVPlayer now has two new properties, volume and muted. Use those!
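A minimal sketch of the iOS 7 approach (assuming self.player is the AVPlayer you want to silence):

// Mute the player outright (iOS 7+)...
self.player.muted = YES;
// ...or drop its volume without touching the other player (0.0 is silent, 1.0 is full)
self.player.volume = 0.0f;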
And here is the original answer for life before iOS 7:
I've been dealing with the same thing. We created muted streams and streams with audio. To mute or unmute, you call [player replaceCurrentItemWithPlayerItem:muteStream].
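A rough sketch of that swap, with hypothetical audioURL and mutedURL pointing at the two parallel streams:

// One player item per stream variant: one with audio, one silent.
AVPlayerItem *audioItem = [AVPlayerItem playerItemWithURL:audioURL];
AVPlayerItem *mutedItem = [AVPlayerItem playerItemWithURL:mutedURL];
AVPlayer *player = [AVPlayer playerWithPlayerItem:audioItem];
// To "mute", swap in the silent variant; you may want to seek afterwards,
// since the playback position isn't carried over automatically.
[player replaceCurrentItemWithPlayerItem:mutedItem];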
I also submitted a bug report. It looks like AVPlayer has this functionality on Mac OS X 10.7, but it hasn't made it to iOS yet.
AVAudioMix is documented not to work on URL assets. Of course I tried it anyway, and like you I found it really doesn't work.

The best solution for this would be to embed two audio tracks in the stream itself: one with the normal audio, and the other a muted audio track. It makes more sense to do it this way rather than the way ComPuff suggested, since his way you're actually creating two separate URL streams, which is not required.
Here is the code that you could use to switch the audio tracks:
float volume = 0.0f;
AVPlayerItem *currentItem = self.player.currentItem;
NSArray *audioTracks = self.player.currentItem.tracks;
DLog(@"%@", currentItem.tracks);
NSMutableArray *allAudioParams = [NSMutableArray array];
for (AVPlayerItemTrack *track in audioTracks) {
    if ([track.assetTrack.mediaType isEqual:AVMediaTypeAudio]) {
        AVMutableAudioMixInputParameters *audioInputParams = [AVMutableAudioMixInputParameters audioMixInputParameters];
        [audioInputParams setVolume:volume atTime:kCMTimeZero];
        [audioInputParams setTrackID:[track.assetTrack trackID]];
        [allAudioParams addObject:audioInputParams];
    }
}
if ([allAudioParams count] > 0) {
    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    [audioMix setInputParameters:allAudioParams];
    [currentItem setAudioMix:audioMix];
}
The only problem is that my stream URL is only showing two tracks (one for video and one for audio) when it should actually have three (two audio tracks). I can't work out whether this is a problem with the stream URL or my code! Can anyone spot any mistakes in the code?

Related

iOS: Deprecation of AudioSessionInitialize and AudioSessionSetProperty

I'm very new to Objective-C, and am trying to update some code that's about 3 years old to work with iOS 7. There are two instances of AudioSessionSetProperty and AudioSessionInitialize appearing in the code:
1:
- (void)applicationDidFinishLaunching:(UIApplication *)application {
    AudioSessionInitialize(NULL, NULL, NULL, NULL);
    [[SCListener sharedListener] listen];
    timer = [NSTimer scheduledTimerWithTimeInterval:0.5 target:self selector:@selector(tick:) userInfo:nil repeats:YES];
    // Override point for customization after app launch
    [window addSubview:viewController.view];
    [window makeKeyAndVisible];
}
And 2:
- (id)init {
    if ([super init] == nil) {
        return nil;
    }
    AudioSessionInitialize(NULL, NULL, NULL, NULL);
    Float64 rate = kSAMPLERATE;
    UInt32 size = sizeof(rate);
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareSampleRate, size, &rate);
    return self;
}
For some reason this code works on iOS 7 in the simulator but not on a device running iOS 7, and I suspect that these deprecations are the cause. I've been reading through the docs and related questions on this website, and it appears that I need to use AVAudioSession instead. I've been trying to update the code for a long time now, and I'm unsure of how to properly switch over to AVAudioSession. Does anyone know how the two methods above need to look?
Side note: I've managed to hunt down an article that outlines the transition:
https://github.com/software-mariodiana/AudioBufferPlayer/wiki/Replacing-C-functions-deprecated-in-iOS-7
But I can't seem to apply this to the code above.
The code I'm trying to update is a small frequency detection app from git:
https://github.com/jkells/sc_listener
Alternatively, if someone could point me to a sample demo app that can detect frequencies on iOS devices, that would be awesome.
As you have observed, pretty much all of the old Core Audio AudioSession functions have been deprecated in favour of AVAudioSession.
The AVAudioSession is a singleton object which will get initialised when you first call it:
[AVAudioSession sharedInstance]
There is no separate initialize method. But you will want to activate the audio session:
BOOL activated = [[AVAudioSession sharedInstance] setActive:YES error:&error];
As regards setting the hardware sample rate using AVAudioSession, please refer to my answer here:
How can I obtain the native (hardware-supported) audio sampling rates in order to avoid internal sample rate conversion?
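As a rough sketch (not the questioner's exact code), the two snippets above could be collapsed into something like this; the PlayAndRecord category is an assumption based on SCListener recording from the microphone, and kSAMPLERATE is the constant from the original code:

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
// Replaces AudioSessionInitialize: the singleton is created on first access.
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
// Replaces kAudioSessionProperty_PreferredHardwareSampleRate (iOS 6+).
[session setPreferredSampleRate:kSAMPLERATE error:&error];
// Activate the session before recording or playing.
[session setActive:YES error:&error];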
For other comparisons between the Core Audio session functions and AVFoundation's AVAudioSession, here are some of my other answers on the same topic:
How Do I Route Audio to Speaker without using AudioSessionSetProperty?
use rear microphone of iphone 5
Play audio through upper (phone call) speaker
How to control hardware mic input gain/level on iPhone?
I wrote a short tutorial that discusses how to update to the new AVAudioSession objects. I posted it on GitHub: "Replacing C functions deprecated in iOS 7."

Recorded sound is not playing with CocosDenshion

I'm recording voice with AVAudioRecorder (following this tutorial: http://www.tumblr.com/tagged/avaudiorecorder?before=1300075871) and then trying to play the recorded sound with CocosDenshion:
CDSoundEngine *soundEngine = [[CDSoundEngine alloc] init];
NSArray *defs = [NSArray arrayWithObjects:[NSNumber numberWithInt:1], nil];
[soundEngine defineSourceGroups:defs];
// ......
[soundEngine loadBuffer:1 filePath:[recorder.url path]];
[soundEngine playSound:1 sourceGroupId:0 pitch:2.0f pan:0 gain:1.0f loop:NO];
This code works fine on the simulator, but on a device (iPad 2, iOS 5.1.1) the sound is not playing.
I tried playing the recorded sound with AVAudioPlayer and it plays fine, but I need to pitch-shift the sound, and I found that using CocosDenshion is the simplest way to do that.
What settings or anything else should I check or fix to get the recorded sound playing?
Problem solved. First, I changed the way the CDSoundEngine instance is created, using CDAudioManager instead:
CDSoundEngine *soundEngine = [CDAudioManager sharedManager].soundEngine;
Then, before playing the recorded sound, set the CDAudioManager mode to kAMM_PlayAndRecord:
[[CDAudioManager sharedManager] setMode:kAMM_PlayAndRecord];
And finally, redirect the audio session to the speaker:
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride);
After that, everything worked fine.

Cocoa Add Image At End Of Video

I'm recording a video from the iSight camera using QTCaptureSession.
I would like to add an image at the end of the video, so I've implemented the didFinishRecordingToOutputFileAtURL delegate method. Here's what I've done so far:
- (void)captureOutput:(QTCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL forConnections:(NSArray *)connections dueToError:(NSError *)error
{
    // Prepare final video
    QTMovie *originalMovie = [QTMovie movieWithURL:outputFileURL error:nil];
    [originalMovie setAttribute:[NSNumber numberWithBool:YES] forKey:QTMovieEditableAttribute];
    NSImage *splashScreen = [NSImage imageNamed:@"video-ending.jpg"];
    NSImage *tiffImage = [[NSImage alloc] initWithData:[splashScreen TIFFRepresentation]];
    id attr = [NSDictionary dictionaryWithObjectsAndKeys:@"tiff", QTAddImageCodecType,
               [NSNumber numberWithLong:codecHighQuality], QTAddImageCodecQuality,
               nil];
    [originalMovie addImage:tiffImage forDuration:QTMakeTime(2, 1) withAttributes:attr];
    [tiffImage release];
    [originalMovie updateMovieFile];
}
The problem with this code is that while QuickTime plays it nicely, other players don't. I'm sure I'm missing something basic here.
It would also be cool to add the image to the video before it gets saved (to avoid doing it twice). Here's how I stop recording right now:
- (void)stopRecording
{
    // It would be cool to add an image here
    [mCaptureMovieFileOutput recordToOutputFileURL:nil];
}
While I used Cocoa Touch, this might still apply. I have two tips based on my experience writing images to movies. First, while I'll bet that addImage:forDuration: takes care of a lot of things that AVAssetExportSession does not, I had to make sure that images were added more often than a couple of times a second, or they would not work well with all players. Second, if there is a network-streaming option, such as AVAssetExportSession's shouldOptimizeForNetworkUse, which moves metadata and headers toward the front of the movie, I found that it made the video compatible with more players as well.
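For reference, here is a minimal sketch of the AVFoundation-side export with that flag set (asset and outputURL are assumed to exist; this is the iOS/AVFoundation API, not QTKit):

AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
exportSession.outputURL = outputURL;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
// Moves the moov atom (headers/metadata) to the front of the file,
// which tends to make the result playable in more players.
exportSession.shouldOptimizeForNetworkUse = YES;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    // Inspect exportSession.status / exportSession.error here.
}];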

AVURLAsset for long sound on ipod

long time reader, first time asker...
I am making a music app which uses AVAssetReader to read MP3 data from the iTunes library. I need precise timing, so when I create an AVURLAsset, I pass AVURLAssetPreferPreciseDurationAndTimingKey to extract timing data. This has some overhead (and I have no problems when I don't use it, but I need it!).
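For reference, here is roughly how I create the asset (assetURL is assumed to come from the media library item):

NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                    forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:options];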
Everything works fine on iPhone 4 and iPad 1. I would like it to work on my iPod touch (2nd gen), but it doesn't: if the sound file is too long (> ~7 minutes), then the AVAssetReader cannot start reading and throws an error (AVFoundationErrorDomain error -11800).
It appears that I am hitting a wall in terms of the scantier resources of the iPod touch. Any ideas what is happening, or how to manage the overhead of creating the AVURLAsset so that it can handle long files?
(I tried running this with the performance tools, and I don't see a major spike in memory).
Thanks, Dan
Maybe you're starting to read too soon? As far as I understand, for MP3 it will need to go through the entire file in order to enable precise timing. So, try delaying the reading.
You can also try registering as an observer for some of the AVAsset properties. iOS 4.3 has a 'readable' property. I've never tried it, but my guess would be that it's initially set to NO and gets set to YES as soon as the AVAsset has finished loading.
EDIT:
Actually, I just looked into the docs. You're supposed to use the AVAsynchronousKeyValueLoading protocol for that, and Apple provides an example:
NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *anAsset = [[AVURLAsset alloc] initWithURL:url options:nil];
NSArray *keys = [NSArray arrayWithObject:@"duration"];
[anAsset loadValuesAsynchronouslyForKeys:keys completionHandler:^() {
    NSError *error = nil;
    AVKeyValueStatus durationStatus = [anAsset statusOfValueForKey:@"duration" error:&error];
    switch (durationStatus) {
        case AVKeyValueStatusLoaded:
            [self updateUserInterfaceForDuration];
            break;
        case AVKeyValueStatusFailed:
            [self reportError:error forAsset:anAsset];
            break;
        case AVKeyValueStatusCancelled:
            // Do whatever is appropriate for cancelation.
            break;
    }
}];
If 'duration' doesn't help, try 'readable' (but as I mentioned before, 'readable' requires 4.3). Maybe this will solve your issue.

How can I handle separate tracks of an AVURLAsset independently?

Here's my goal: I would like to load a .3gp movie file into an AVURLAsset. I would then like to take the video track and pump the output frames into an OpenGL ES texture. This will be the video playback. I would then like to continue leveraging AVFoundation to play back the audio. The framework is pretty vast, so I'm hoping for some veteran assistance on this one.
I actually have both parts working separately, but something always goes wrong when I try to do both at the same time. Here's my current attempt, in a nutshell (All error handling is omitted for brevity):
I load the .3gp file into the AVURLAsset and load the tracks:
NSURL* fileURL = [[NSBundle mainBundle] URLForResource:someName withExtension:someExtension];
AVURLAsset* asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{ /* More code */ }];
In the completion handler, I get a reference to the audio and video track:
// Tracks loaded, grab the audio and video tracks.
AVAssetTrack* videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack* audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
Next, I create separate AVMutableCompositions that contain just the audio track and just the video track. I'm not sure if this is completely necessary, but it seems like a good idea and it does also seem to work:
// Make a composition with the video track.
AVMutableComposition* videoComposition = [AVMutableComposition composition];
AVMutableCompositionTrack* videoCompositionTrack = [videoComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[videoCompositionTrack insertTimeRange:[videoTrack timeRange] ofTrack:videoTrack atTime:CMTimeMake(0, 1) error:nil];
// Make a composition with the audio track.
AVMutableComposition* audioComposition = [AVMutableComposition composition];
AVMutableCompositionTrack* audioCompositionTrack = [audioComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[audioCompositionTrack insertTimeRange:[audioTrack timeRange] ofTrack:audioTrack atTime:CMTimeMake(0, 1) error:nil];
Now I get into the specifics of how to handle each track. I'm fairly confident that I have the one and only way of handling the video track, which is to create an AVAssetReader for the video composition, and add an AVAssetReaderTrackOutput created with the video composition track. By keeping a reference to that track output, I can call its -copyNextSampleBuffer method to get the info I need to pump the video output into an OpenGL ES texture. This works well enough by itself:
// Create Asset Reader and Output for the video track.
NSDictionary* settings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
_assetReader = [[AVAssetReader assetReaderWithAsset:vComposition error:nil] retain];
_videoTrackOutput = [[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:vCompositionTrack outputSettings:settings] retain];
[_assetReader addOutput:_videoTrackOutput];
[_assetReader startReading];
What seems to spoil the whole thing is attempting to play back the audio in any way. I'm not really sure which approach to take for the remaining audio track. Just sticking to the realm of AVFoundation, I see two possible approaches. The first is to use an AVPlayer to play the audio composition:
// Create a player for the audio.
AVPlayerItem* audioPlayerItem = [AVPlayerItem playerItemWithAsset:aComposition];
AVPlayer* audioPlayer = [[AVPlayer playerWithPlayerItem:audioPlayerItem] retain];
[audioPlayer play];
This works, inasmuch as I can hear the desired audio. Unfortunately creating this player guarantees that the AVAssetReaderTrackOutput for the video composition fails with a cryptic error when calling -copyNextSampleBuffer:
AVAssetReaderStatusFailed
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed"
UserInfo=0x456e50 {NSLocalizedFailureReason=An unknown error occurred (-12785),
NSUnderlyingError=0x486570 "The operation couldn’t be completed. (OSStatus error -12785.)",
NSLocalizedDescription=The operation could not be completed}
I'm confused about how they might be interfering with each other, but regardless, that approach seems to be a dead end.
The other option I considered for the audio playback was the AVAudioPlayer class, but I could not get it to work with an AVAsset as a starting point. I attempted to use its -initWithData:error: method with an NSData built by aggregating the contents of CMSampleBufferRefs taken with an approach identical to the one I use on the video track, but it does not appear to be formatted correctly.
At this point, I feel like I'm flailing around blindly, and would love it so very much if someone could tell me if this approach is even feasible. If it's not I would, of course, appreciate a feasible one.
Creating AVMutableCompositions (basically new AVAssets) for each track seems roundabout to me; I'd simply use an AVAssetReader on the audio track. Also, your videoComposition doesn't seem to be used anywhere, so why create it?
In any case, to get either solution to work, set your audio session category to kAudioSessionCategory_MediaPlayback and enable kAudioSessionProperty_OverrideCategoryMixWithOthers.
I've never found any documentation that explains why this is necessary.
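A minimal sketch of that session setup, using the (since-deprecated) C audio session API the constants come from:

#include <AudioToolbox/AudioToolbox.h>

AudioSessionInitialize(NULL, NULL, NULL, NULL);

// Use the MediaPlayback category...
UInt32 category = kAudioSessionCategory_MediaPlayback;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);

// ...and allow mixing with other audio, which the reader/player combination needs.
UInt32 mixWithOthers = 1;
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(mixWithOthers), &mixWithOthers);

AudioSessionSetActive(true);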