I'm recording a video from the iSight camera using QTCaptureSession.
I would like to add an image at the end of the video, so I've implemented the didFinishRecordingToOutputFileAtURL: delegate method. Here's what I've done so far:
- (void)captureOutput:(QTCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL forConnections:(NSArray *)connections dueToError:(NSError *)error
{
    // Prepare final video
    QTMovie *originalMovie = [QTMovie movieWithURL:outputFileURL error:nil];
    [originalMovie setAttribute:[NSNumber numberWithBool:YES] forKey:QTMovieEditableAttribute];

    NSImage *splashScreen = [NSImage imageNamed:@"video-ending.jpg"];
    NSImage *tiffImage = [[NSImage alloc] initWithData:[splashScreen TIFFRepresentation]];

    NSDictionary *attr = [NSDictionary dictionaryWithObjectsAndKeys:
                              @"tiff", QTAddImageCodecType,
                              [NSNumber numberWithLong:codecHighQuality], QTAddImageCodecQuality,
                              nil];

    // Append the image as a two-second segment, then write the changes back to disk.
    [originalMovie addImage:tiffImage forDuration:QTMakeTime(2, 1) withAttributes:attr];
    [tiffImage release];
    [originalMovie updateMovieFile];
}
The problem with this code is that while QuickTime plays it fine, other players don't. I'm sure I'm missing something basic here.
It would also be cool to add the image to the video before it gets saved (to avoid writing the file twice). Here's how I stop recording right now:
- (void)stopRecording
{
    // It would be cool to add an image here
    // Passing nil as the output URL stops recording.
    [mCaptureMovieFileOutput recordToOutputFileURL:nil];
}
While I used Cocoa Touch, this might still apply. I have two tips based on my experience writing images to movies. First, although I'd bet that addImage:forDuration: takes care of a lot of things that AVAssetExportSession does not, I had to make sure images were added more frequently than a couple of times per second or they would not play well in all players. Second, if there is a network-streaming option, such as AVAssetExportSession's shouldOptimizeForNetworkUse, which moves as much metadata and header data as possible to the front of the movie, I found that it made the video compatible with more players as well.
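In AVFoundation terms, that second tip looks roughly like this. This is only a sketch, not your QTKit code; the asset, exportURL, and preset are illustrative:

AVAssetExportSession *exportSession =
    [[AVAssetExportSession alloc] initWithAsset:asset
                                     presetName:AVAssetExportPresetPassthrough];
exportSession.outputURL = exportURL;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
// Moves the movie headers/metadata to the front of the file so more players
// (especially streaming ones) can start playback without reading the whole file.
exportSession.shouldOptimizeForNetworkUse = YES;

[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (exportSession.status == AVAssetExportSessionStatusCompleted) {
        NSLog(@"Export finished");
    }
}];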
I am trying to save an image to the camera roll. This actually used to work wonderfully, but I had to work on other stuff, and now that I'm returning to the project to update it for iOS 6, poof, this feature no longer works at all on iOS 6.
I have tried two approaches, both are failing silently without NSError objects. First, UIImageWriteToSavedPhotosAlbum:
UIImageWriteToSavedPhotosAlbum(img, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);

// Callback
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
    // error == nil
}
... and the ALAssetsLibrary approach:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:[img CGImage]
                          orientation:(ALAssetOrientation)[img imageOrientation]
                      completionBlock:^(NSURL *assetURL, NSError *error)
{
    // assetURL == nil
    // error == nil
}];
Also, [ALAssetsLibrary authorizationStatus] == ALAuthorizationStatusAuthorized evaluates to true
On the Simulator, the app never shows up in the Settings > Privacy > Photos section; however, an actual iPad does show that the app has permission to access photos. (Also, just to add: the first approach above was what I previously used; it worked on real devices and simulators alike, no problem.)
I have also tried running this on the main thread to see if that changed anything, but there's no difference. I was running it in the background previously and it used to work fine (on both simulator and device).
Can anyone shed some light?
Figured it out... I was doing something stupid. UIImage cannot take raw pixel data; you first have to massage it into a form it can accept, with the proper metadata.
Part of the problem was that I was using Cocos2D to get a UIImage from a CCRenderTexture (getUIImageFromBuffer()), and when I switched to Cocos2D-x that function was no longer available. I was simply ignorant of the fact that UIImage objects cannot be constructed from raw pixel data; I figured it handled header information and formatting automatically.
This answer helped: iPhone - UIImage imageWithData returning nil
And this example was also helpful:
http://www.wmdeveloper.com/2010/09/create-bitmap-graphics-context-on.html?m=1
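To illustrate the idea from those links, here is a minimal sketch (not the exact code from them) that wraps raw RGBA pixels in a bitmap context so the UIImage has a properly formed backing image. The `pixels`, `width`, and `height` names are assumptions standing in for whatever the render texture gives you:

// Assumed inputs (hypothetical names):
//   void  *pixels  - raw RGBA bytes from the render texture
//   size_t width   - image width in pixels
//   size_t height  - image height in pixels
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixels, width, height,
                                             8,          // bits per component
                                             width * 4,  // bytes per row (RGBA)
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:cgImage];

// The save now works because the UIImage is backed by a real CGImage.
UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);

CGImageRelease(cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);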
I've been trying to work out this problem for a good 48 hours now and haven't come up with anything. I have 2 AVPlayer objects playing different http live streams. Obviously, I don't want them both playing audio at the same time so I need a way to mute one of the videos.
Apple suggests this for muting an audio track playing in AVPlayer...
NSMutableArray *allAudioParams = [NSMutableArray array];
for (AVPlayerItemTrack *track in [_playerItem tracks]) {
    if ([track.assetTrack.mediaType isEqualToString:AVMediaTypeAudio]) {
        AVMutableAudioMixInputParameters *audioInputParams = [AVMutableAudioMixInputParameters audioMixInputParameters];
        [audioInputParams setVolume:0.0 atTime:CMTimeMakeWithSeconds(0, 1)];
        [audioInputParams setTrackID:[track.assetTrack trackID]];
        [allAudioParams addObject:audioInputParams];

        // Added to what Apple suggested
        [track setEnabled:NO];
    }
}
AVMutableAudioMix *audioZeroMix = [AVMutableAudioMix audioMix];
[audioZeroMix setInputParameters:allAudioParams];
[_playerItem setAudioMix:audioZeroMix];
When this didn't work (after many iterations), I found the enabled property of AVPlayerItemTrack and tried setting that to NO. Also nothing. It doesn't even register as doing anything: when I try NSLog(@"%x", track.enabled), it still shows up as 1.
I'm at a loss and I can't think of another piece of documentation I can read and re-read to get a good answer. If anyone out there can help, that would be fantastic.
*Update: I got a hold of Apple, and according to the AVFoundation team it is impossible to mute or disable a track of an HLS video. I personally feel this is a bug, so I submitted a bug report (you should do the same to tell Apple this is a problem). You can also try submitting a feature enhancement request via their feedback page.
New iOS 7 answer: AVPlayer now has 2 new properties 'volume' and 'muted'. Use those!
And here is the original answer for life before iOS 7:
I've been dealing with the same thing. We created muted streams and streams with audio. To mute or unmute you call [player replaceCurrentItemWithPlayerItem:muteStream].
I also submitted a bug report. It looks like AVPlayer has this functionality on Mac OS X 10.7, but it hasn't made it to iOS yet.
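A minimal sketch combining both answers; `_player` is assumed to be your AVPlayer, and `mutedItem` is a hypothetical player item built from the muted stream variant:

if ([_player respondsToSelector:@selector(setMuted:)]) {
    // iOS 7 and later
    _player.muted = YES;          // or: _player.volume = 0.0f;
} else {
    // Pre-iOS 7: swap in a player item created from the muted stream
    [_player replaceCurrentItemWithPlayerItem:mutedItem];
}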
AVAudioMix is documented not to work on URL assets here
Of course I tried it anyway, and like you I found it really doesn't work.
The best solution for this would be to embed two audio tracks in the stream URL feed: one with the normal audio and one with muted audio.
It makes more sense to do it this way rather than the way ComPuff suggested, since that way you're actually creating two separate URL streams, which isn't required.
Here is the code that you could use to switch the audio tracks:
float volume = 0.0f;
AVPlayerItem *currentItem = self.player.currentItem;
NSArray *audioTracks = self.player.currentItem.tracks;
DLog(@"%@", currentItem.tracks);

NSMutableArray *allAudioParams = [NSMutableArray array];
for (AVPlayerItemTrack *track in audioTracks)
{
    if ([track.assetTrack.mediaType isEqual:AVMediaTypeAudio])
    {
        AVMutableAudioMixInputParameters *audioInputParams = [AVMutableAudioMixInputParameters audioMixInputParameters];
        [audioInputParams setVolume:volume atTime:kCMTimeZero];
        [audioInputParams setTrackID:[track.assetTrack trackID]];
        [allAudioParams addObject:audioInputParams];
    }
}

if ([allAudioParams count] > 0) {
    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    [audioMix setInputParameters:allAudioParams];
    [currentItem setAudioMix:audioMix];
}
The only problem is that my stream URL only shows two tracks (one video and one audio) when it should actually have three (two audio tracks). I can't work out whether this is a problem with the stream URL or with my code! Can anyone spot any mistakes in the code?
I've been developing a music player recently, and I'm writing my own pickers.
I'm trying to test my code to its limits, so I have around 1600 albums on my iPhone.
I'm using AQGridView for the albums view, and since MPMediaItemArtwork is a subclass of NSObject, you have to call a method on it to get an image out of it, and that method scales the image.
Scaling for each cell uses too much CPU, as you can guess, so my grid album view is laggy despite all my effort at manually tuning each cell.
So I thought I'd start scaling with GCD at app launch, save the results to a file, and read that file for each cell.
But, my code
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    MPMediaQuery *allAlbumsQuery = [MPMediaQuery albumsQuery];
    NSArray *albumsArray = allAlbumsQuery.collections;
    for (MPMediaItemCollection *collection in albumsArray) {
        @autoreleasepool {
            MPMediaItem *currentItem = [collection representativeItem];
            MPMediaItemArtwork *artwork = [currentItem valueForProperty:MPMediaItemPropertyArtwork];
            UIImage *artworkImage = [artwork imageWithSize:CGSizeMake(90, 90)];
            if (artworkImage) [toBeCached addObject:artworkImage];
            else [toBeCached addObject:blackImage];
            NSLog(@"%@", [currentItem valueForProperty:MPMediaItemPropertyAlbumTitle]);
            artworkImage = nil;
        }
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        [[NSUserDefaults standardUserDefaults] setObject:[NSKeyedArchiver archivedDataWithRootObject:albumsArray] forKey:@"covers"];
    });
    NSLog(@"finished saving, sir");
});
in AppDelegate's application:didFinishLaunchingWithOptions: method makes my app crash, without any console log, etc.
This seems to be a memory problem: so many images are kept in an NSArray in RAM until saving that iOS force-closes my app.
Do you have any suggestions on what to do?
Cheers
Take a look at the recently released SYCache, which combines NSCache with on-disk caching. It's probably a bad idea to get into a memory-warning state as soon as you launch the app, but that's still better than being force-closed.
As the commenter above suggested, mapped data is a technique (using mmap or its equivalent) for reading data from disk as if it were all in memory at once, which could help with UIImage loading later down the road. The inverse (with NSMutableData) is also true: a file can be written to as if it were directly in RAM. As a technique, it could be useful.
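A hedged sketch of that idea: write each scaled artwork to disk inside your loop instead of accumulating every UIImage in an array, then read it back lazily with mapped data when the cell needs it. The file naming and the `index` variable are illustrative, not part of your code:

// Inside the artwork loop: persist each thumbnail immediately.
NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) lastObject];
NSString *path = [cachesDir stringByAppendingPathComponent:
                     [NSString stringWithFormat:@"cover-%lu.png", (unsigned long)index]];
[UIImagePNGRepresentation(artworkImage) writeToFile:path atomically:YES];

// Later, when configuring a grid cell: NSDataReadingMappedIfSafe maps the
// file instead of copying it all into RAM up front.
NSData *data = [NSData dataWithContentsOfFile:path options:NSDataReadingMappedIfSafe error:NULL];
UIImage *cover = data ? [UIImage imageWithData:data] : nil;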
long time reader, first time asker...
I am making a music app which uses AVAssetReader to read mp3 data from the iTunes library. I need precise timing, so when I create an AVURLAsset, I use the AVURLAssetPreferPreciseDurationAndTimingKey option to extract timing data. This has some overhead (and I have no problems when I don't use it, but I need it!).
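For context, since the asker's setup code isn't shown, creating the asset with that option presumably looks something like this; `songURL` would typically come from MPMediaItemPropertyAssetURL and is an assumption here:

// Sketch: ask AVFoundation for precise duration/timing up front.
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                    forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:songURL options:options];

NSError *readerError = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&readerError];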
Everything works fine on iPhone (4) and iPad (1). I would like it to work on my iPod touch (2nd gen), but it doesn't: if the sound file is too long (> ~7 minutes), the AVAssetReader cannot start reading and throws an error (AVFoundationErrorDomain error -11800).
It appears that I am hitting a wall in terms of the scantier resources of the iPod touch. Any ideas what is happening, or how to manage the overhead of creating the AVURLAsset so that it can handle long files?
(I tried running this with the performance tools, and I don't see a major spike in memory).
Thanks, Dan
Maybe you're starting to read too soon? As far as I understand, for mp3 it will need to go through the entire file in order to enable precise timing. So, try delaying the reading.
You can also try registering as an observer for some of the AVAsset properties. iOS 4.3 has a 'readable' property. I've never tried it, but my guess would be that it's initially set to NO and gets set to YES as soon as the AVAsset has finished loading.
EDIT:
Actually, I just looked into the docs. You're supposed to use the AVAsynchronousKeyValueLoading protocol for that, and Apple provides an example:
NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *anAsset = [[AVURLAsset alloc] initWithURL:url options:nil];
NSArray *keys = [NSArray arrayWithObject:@"duration"];

[anAsset loadValuesAsynchronouslyForKeys:keys completionHandler:^() {
    NSError *error = nil;
    AVKeyValueStatus durationStatus = [anAsset statusOfValueForKey:@"duration" error:&error];
    switch (durationStatus) {
        case AVKeyValueStatusLoaded:
            [self updateUserInterfaceForDuration];
            break;
        case AVKeyValueStatusFailed:
            [self reportError:error forAsset:anAsset];
            break;
        case AVKeyValueStatusCancelled:
            // Do whatever is appropriate for cancellation.
            break;
    }
}];
If 'duration' doesn't help, try 'readable' (but as I mentioned before, 'readable' requires 4.3). Maybe this will solve your issue.
Here's my goal: I would like to load a .3gp movie file into an AVURLAsset. I would then like to take the video track and pump the output frames into an OpenGL ES texture. This will be the video playback. I would then like to continue leveraging AVFoundation to play back the audio. The framework is pretty vast, so I'm hoping for some veteran assistance on this one.
I actually have both parts working separately, but something always goes wrong when I try to do both at the same time. Here's my current attempt, in a nutshell (All error handling is omitted for brevity):
I load the .3gp file into the AVURLAsset and load the tracks:
NSURL* fileURL = [[NSBundle mainBundle] URLForResource:someName withExtension:someExtension];
AVURLAsset* asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{ /* More Code */ }];
In the completion handler, I get a reference to the audio and video track:
// Tracks loaded, grab the audio and video tracks.
AVAssetTrack* videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack* audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
Next, I create separate AVMutableCompositions that contain just the audio track and just the video track. I'm not sure if this is completely necessary, but it seems like a good idea and it does also seem to work:
// Make a composition with the video track.
AVMutableComposition* videoComposition = [AVMutableComposition composition];
AVMutableCompositionTrack* videoCompositionTrack = [videoComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[videoCompositionTrack insertTimeRange:[videoTrack timeRange] ofTrack:videoTrack atTime:CMTimeMake(0, 1) error:nil];
// Make a composition with the audio track.
AVMutableComposition* audioComposition = [AVMutableComposition composition];
AVMutableCompositionTrack* audioCompositionTrack = [audioComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[audioCompositionTrack insertTimeRange:[audioTrack timeRange] ofTrack:audioTrack atTime:CMTimeMake(0, 1) error:nil];
Now I get into the specifics of how to handle each track. I'm fairly confident that I have the one and only way of handling the video track, which is to create an AVAssetReader for the video composition and add an AVAssetReaderTrackOutput created with the video composition track. By keeping a reference to that track output, I can call its -copyNextSampleBuffer method to get the info I need to pump the video output into an OpenGL ES texture. This works well enough by itself:
// Create Asset Reader and Output for the video track.
NSDictionary* settings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
_assetReader = [[AVAssetReader assetReaderWithAsset:vComposition error:nil] retain];
_videoTrackOutput = [[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:vCompositionTrack outputSettings:settings] retain];
[_assetReader addOutput:_videoTrackOutput];
[_assetReader startReading];
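For reference, the per-frame pull described above looks roughly like this. It is a sketch assuming _assetReader has started reading successfully; the actual texture upload is elided:

CMSampleBufferRef sampleBuffer = [_videoTrackOutput copyNextSampleBuffer];
if (sampleBuffer) {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    // baseAddress now points at BGRA pixels (per the output settings above);
    // upload them into the OpenGL ES texture here, e.g. with glTexSubImage2D.

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CFRelease(sampleBuffer);
}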
What seems to spoil the whole thing is attempting to play back the audio in any way. I'm not really sure which approach to take for the remaining audio track. Just sticking to the realm of AVFoundation, I see two possible approaches. The first is to use an AVPlayer to play the audio composition:
// Create a player for the audio.
AVPlayerItem* audioPlayerItem = [AVPlayerItem playerItemWithAsset:aComposition];
AVPlayer* audioPlayer = [[AVPlayer playerWithPlayerItem:audioPlayerItem] retain];
[audioPlayer play];
This works, inasmuch as I can hear the desired audio. Unfortunately creating this player guarantees that the AVAssetReaderTrackOutput for the video composition fails with a cryptic error when calling -copyNextSampleBuffer:
AVAssetReaderStatusFailed

Error Domain=AVFoundationErrorDomain Code=-11800
"The operation could not be completed"
UserInfo=0x456e50 {
    NSLocalizedFailureReason=An unknown error occurred (-12785),
    NSUnderlyingError=0x486570 "The operation couldn’t be completed. (OSStatus error -12785.)",
    NSLocalizedDescription=The operation could not be completed
}
I'm confused about how they might be interfering with each other, but regardless, that approach seems to be a dead end.
The other option I considered for the audio playback was the AVAudioPlayer class, but I could not get it to work with an AVAsset as a starting point. I attempted to use its -initWithData:error: method with an NSData built by aggregating the contents of CMSampleBufferRefs taken with an approach identical to the one I use on the video track, but it does not appear to be formatted correctly.
At this point, I feel like I'm flailing around blindly, and would love it so very much if someone could tell me if this approach is even feasible. If it's not I would, of course, appreciate a feasible one.
Creating AVMutableCompositions (basically new AVAssets) for each track seems roundabout to me; I'd simply use an AVAssetReader on the audio track. Also, your videoComposition doesn't seem to be used anywhere, so why create it?
In any case, to get either solution to work, set your audio session category to kAudioSessionCategory_MediaPlayback and enable kAudioSessionProperty_OverrideCategoryMixWithOthers.
I've never found any documentation that explains why this is necessary.
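A minimal sketch of that setup using the C Audio Session API (since superseded by AVAudioSession), assuming it runs once before playback starts:

#import <AudioToolbox/AudioToolbox.h>

// Initialize the session once, then set the category and the mix-with-others override.
AudioSessionInitialize(NULL, NULL, NULL, NULL);

UInt32 category = kAudioSessionCategory_MediaPlayback;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                        sizeof(category), &category);

UInt32 mixWithOthers = 1;
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers,
                        sizeof(mixWithOthers), &mixWithOthers);

AudioSessionSetActive(true);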