How to set preferredTransform to exported video with AVAssetExportSession - objective-c

I am trying to export a video with the same rotation transform that the camera applies. For example, I film a video at 1920x1080 resolution (landscape) with a CGAffineTransform of (0, -1, 1, 0, 1080, 0). After some work I want to export that video with the same naturalSize and preferredTransform as the original, but the export session is rotating my video.
The code below transforms the original video to portrait mode with a portrait rotation, but that is not what I want to achieve.
Any idea? Thanks in advance.
- (void)applyDrawWithCompletion:(void(^)(NSURL *))completion {
//setup video asset layer
AVURLAsset* videoAsset = (AVURLAsset *)self.avPlayer.currentItem.asset;
AVMutableComposition* mixComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *clipVideoTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
ofTrack:clipVideoTrack
atTime:kCMTimeZero error:nil];
AVMutableCompositionTrack *compositionAudioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *clipAudioTrack = [videoAsset tracksWithMediaType:AVMediaTypeAudio][0];
[compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [mixComposition duration]) ofTrack:clipAudioTrack atTime:kCMTimeZero error:nil];
[compositionVideoTrack setPreferredTransform:clipVideoTrack.preferredTransform];
//new layer with draw image
CGSize videoSize = CGSizeMake(clipVideoTrack.naturalSize.height, clipVideoTrack.naturalSize.width);
CALayer *drawLayer = [CALayer layer];
drawLayer.contents = (id)self.tempDrawImage.image.CGImage;
drawLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
//proper sorting of video layers
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
videoLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:drawLayer];
AVMutableVideoComposition* videoComp = [AVMutableVideoComposition videoComposition];
videoComp.renderSize = videoSize;
videoComp.frameDuration = CMTimeMake(1, 30);
videoComp.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
//Adding layer along asset duration
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mixComposition duration]);
AVAssetTrack *videoTrack = [[mixComposition tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction* layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
//Apply transform to layer instructions
[layerInstruction setTransform:clipVideoTrack.preferredTransform atTime:kCMTimeZero];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComp.instructions = [NSArray arrayWithObject: instruction];
//Export edited video
AVAssetExportSession *assetExport = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetHighestQuality];
assetExport.videoComposition = videoComp;
NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"editedVideo.mov"];
if ([[NSFileManager defaultManager] fileExistsAtPath:exportPath]) {
[[NSFileManager defaultManager] removeItemAtPath:exportPath error:nil];
}
assetExport.outputFileType = AVFileTypeQuickTimeMovie;
assetExport.outputURL = [NSURL fileURLWithPath:exportPath];
assetExport.shouldOptimizeForNetworkUse = YES;
[assetExport exportAsynchronouslyWithCompletionHandler:^(void) {
if (assetExport.status == AVAssetExportSessionStatusCompleted) {
completion(assetExport.outputURL);
}else if(assetExport.status == AVAssetExportSessionStatusFailed || assetExport.status == AVAssetExportSessionStatusCancelled){
completion(nil);
}
}];
}

I processed the original camera video with the same method and added the following code:
CGAffineTransform t = clipVideoTrack.preferredTransform;
BOOL isPortrait = (t.a == 0 && t.b == 1.0 && t.c == -1.0 && t.d == 0) || (t.a == 0 && t.b == -1.0 && t.c == 1.0 && t.d == 0);
//new layer with draw image
CGSize videoSize = clipVideoTrack.naturalSize;
if (isPortrait) {
videoSize = CGSizeMake(clipVideoTrack.naturalSize.height, clipVideoTrack.naturalSize.width);
}
This changed the landscape video to portrait, with the same transform as my other edited and processed videos.
This is not what I was looking for, but it solved my problem. If someone knows how to achieve the original goal, please feel free to add an answer.
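Not part of the original post, but a minimal sketch of how the portrait check above can be generalized: applying the track's preferredTransform to its naturalSize yields the rendered (display) size for any 90/180/270 degree rotation, so the explicit matrix comparisons are not needed. Names are illustrative.

// Derive the rendered size of a track from its preferredTransform.
// CGSizeApplyAffineTransform only uses the a/b/c/d components, so the
// translation part of the camera transform is ignored, which is what we want here.
static CGSize displaySizeForTrack(AVAssetTrack *track)
{
    CGSize transformed = CGSizeApplyAffineTransform(track.naturalSize, track.preferredTransform);
    return CGSizeMake(fabs(transformed.width), fabs(transformed.height));
}

For a 1920x1080 track with transform (0, -1, 1, 0, 1080, 0) this returns 1080x1920, matching the isPortrait branch above.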

Related

AVFoundation - AVAssetExportSession - Operation Stopped on Second Export attempt

I am creating a Picture-In-Picture video. This function has worked flawlessly (as far as I know) for 1.5 years. Now it appears that on iOS 11 it only works the first time it is called; when it is called to do a second video (without force closing the app first) I get the error message below.
I found this question on Stack Overflow, but I am already using the asset track correctly as described there: AVAssetExportSession export fails non-deterministically with error: “Operation Stopped, NSLocalizedFailureReason=The video could not be composed.”
I have included the exact method I am using below. Any help would be greatly appreciated!
Error Message:
Error: Error Domain=AVFoundationErrorDomain Code=-11841 "Operation Stopped"
UserInfo={NSLocalizedFailureReason=The video could not be composed.,
NSLocalizedDescription=Operation Stopped,
NSUnderlyingError=0x1c04521e0
{Error Domain=NSOSStatusErrorDomain Code=-17390 "(null)"}}
Method Below:
- (void) composeVideo:(NSString*)videoPIP onVideo:(NSString*)videoBG
{
@try {
NSError *e = nil;
AVURLAsset *backAsset, *pipAsset;
// Load our 2 movies using AVURLAsset
pipAsset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:videoPIP] options:nil];
backAsset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:videoBG] options:nil];
if ([[NSFileManager defaultManager] fileExistsAtPath:videoPIP])
{
NSLog(#"PIP File Exists!");
}
else
{
NSLog(#"PIP File DOESN'T Exist!");
}
if ([[NSFileManager defaultManager] fileExistsAtPath:videoBG])
{
NSLog(#"BG File Exists!");
}
else
{
NSLog(#"BG File DOESN'T Exist!");
}
float scaleH = VIDEO_SIZE.height / [[[backAsset tracksWithMediaType:AVMediaTypeVideo ] objectAtIndex:0] naturalSize].width;
float scaleW = VIDEO_SIZE.width / [[[backAsset tracksWithMediaType:AVMediaTypeVideo ] objectAtIndex:0] naturalSize].height;
float scalePIP = (VIDEO_SIZE.width * 0.25) / [[[pipAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] naturalSize].width;
// Create AVMutableComposition Object - this object will hold our multiple AVMutableCompositionTracks.
AVMutableComposition* mixComposition = [[AVMutableComposition alloc] init];
// Create the first AVMutableCompositionTrack by adding a new track to our AVMutableComposition.
AVMutableCompositionTrack *firstTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
// Set the length of the firstTrack equal to the length of the firstAsset and add the firstAsset to our newly created track at kCMTimeZero so video plays from the start of the track.
[firstTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, pipAsset.duration) ofTrack:[[pipAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:kCMTimeZero error:&e];
if (e)
{
NSLog(#"Error0: %#",e);
e = nil;
}
// Repeat the same process for the 2nd track and also start at kCMTimeZero so both tracks will play simultaneously.
AVMutableCompositionTrack *secondTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[secondTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, backAsset.duration) ofTrack:[[backAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:kCMTimeZero error:&e];
if (e)
{
NSLog(#"Error1: %#",e);
e = nil;
}
// We also need the audio track!
AVMutableCompositionTrack *audioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, backAsset.duration) ofTrack:[[backAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:kCMTimeZero error:&e];
if (e)
{
NSLog(#"Error2: %#",e);
e = nil;
}
// Create an AVMutableVideoCompositionInstruction object - Contains the array of AVMutableVideoCompositionLayerInstruction objects.
AVMutableVideoCompositionInstruction * MainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set Time to the shorter Asset.
MainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, (pipAsset.duration.value > backAsset.duration.value) ? pipAsset.duration : backAsset.duration);
// Create an AVMutableVideoCompositionLayerInstruction object to make use of CGAffinetransform to move and scale our First Track so it is displayed at the bottom of the screen in smaller size.
AVMutableVideoCompositionLayerInstruction *FirstlayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:firstTrack];
//CGAffineTransform Scale1 = CGAffineTransformMakeScale(0.3f,0.3f);
CGAffineTransform Scale1 = CGAffineTransformMakeScale(scalePIP, scalePIP);
// Top Left
CGAffineTransform Move1 = CGAffineTransformMakeTranslation(3.0, 3.0);
[FirstlayerInstruction setTransform:CGAffineTransformConcat(Scale1,Move1) atTime:kCMTimeZero];
// Repeat for the second track.
AVMutableVideoCompositionLayerInstruction *SecondlayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:secondTrack];
CGAffineTransform Scale2 = CGAffineTransformMakeScale(scaleW, scaleH);
CGAffineTransform rotateBy90Degrees = CGAffineTransformMakeRotation( M_PI_2);
CGAffineTransform Move2 = CGAffineTransformMakeTranslation(0.0, ([[[backAsset tracksWithMediaType:AVMediaTypeVideo ] objectAtIndex:0] naturalSize].height) * -1);
[SecondlayerInstruction setTransform:CGAffineTransformConcat(Move2, CGAffineTransformConcat(rotateBy90Degrees, Scale2)) atTime:kCMTimeZero];
// Add the 2 created AVMutableVideoCompositionLayerInstruction objects to our AVMutableVideoCompositionInstruction.
MainInstruction.layerInstructions = [NSArray arrayWithObjects:FirstlayerInstruction, SecondlayerInstruction, nil];
// Create an AVMutableVideoComposition object.
AVMutableVideoComposition *MainCompositionInst = [AVMutableVideoComposition videoComposition];
MainCompositionInst.instructions = [NSArray arrayWithObject:MainInstruction];
MainCompositionInst.frameDuration = CMTimeMake(1, 30);
// Set the render size to the screen size.
// MainCompositionInst.renderSize = [[UIScreen mainScreen] bounds].size;
MainCompositionInst.renderSize = VIDEO_SIZE;
NSString *fileName = [NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), @"fullreaction.MP4"];
// Make sure the video doesn't exist.
if ([[NSFileManager defaultManager] fileExistsAtPath:fileName])
{
[[NSFileManager defaultManager] removeItemAtPath:fileName error:nil];
}
// Now we need to save the video.
NSURL *url = [NSURL fileURLWithPath:fileName];
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mixComposition
presetName:QUALITY];
exporter.videoComposition = MainCompositionInst;
exporter.outputURL=url;
exporter.outputFileType = AVFileTypeMPEG4;
[exporter exportAsynchronouslyWithCompletionHandler:
^(void )
{
NSLog(#"File Saved as %#!", fileName);
NSLog(#"Error: %#", exporter.error);
[self performSelectorOnMainThread:#selector(runProcessingComplete) withObject:nil waitUntilDone:false];
}];
}
@catch (NSException *ex) {
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error 3" message:[NSString stringWithFormat:@"%@", ex]
delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil];
[alert show];
}
}
The Cause:
It ends up the "MainInstruction" timeRange is incorrect.
CMTime objects cannot be compared using "value". Instead, you must use CMTIME_COMPARE_INLINE.
To fix, replace this line:
MainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, (pipAsset.duration.value > backAsset.duration.value) ? pipAsset.duration : backAsset.duration);
With this line:
MainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTIME_COMPARE_INLINE(pipAsset.duration, >, backAsset.duration) ? pipAsset.duration : backAsset.duration);
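For reference (not part of the original answer), the raw value field of a CMTime is only meaningful relative to its timescale, so two durations captured at different timescales cannot be compared by value alone. CMTIME_COMPARE_INLINE (or CMTimeCompare) converts both to a common timescale first. A small illustration with made-up numbers:

CMTime pip = CMTimeMake(1800, 600);    // 3.0 seconds expressed at timescale 600
CMTime bg  = CMTimeMake(88200, 44100); // 2.0 seconds expressed at timescale 44100
BOOL byValue = (pip.value > bg.value);             // NO  -- 1800 < 88200, which is misleading
BOOL byTime  = CMTIME_COMPARE_INLINE(pip, >, bg);  // YES -- 3.0 s really is longer than 2.0 s
int32_t order = CMTimeCompare(pip, bg);            // returns 1, i.e. pip > bg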

How to Add text in a video

I have a video that lasts 5 minutes. Now I want to add text to that video for a particular time range, say from 5 to 15 seconds.
Can anyone help me out? I have tried the code below for adding text to an image:
CATextLayer *subtitle1Text = [[CATextLayer alloc] init];
[subtitle1Text setFont:@"Helvetica-Bold"];
[subtitle1Text setFontSize:36];
[subtitle1Text setFrame:CGRectMake(0, 0, size.width, 100)];
[subtitle1Text setString:_subTitle1.text];
[subtitle1Text setAlignmentMode:kCAAlignmentCenter];
[subtitle1Text setForegroundColor:[[UIColor whiteColor] CGColor]];
// 2 - The usual overlay
CALayer *overlayLayer = [CALayer layer];
[overlayLayer addSublayer:subtitle1Text];
overlayLayer.frame = CGRectMake(0, 0, size.width, size.height);
[overlayLayer setMasksToBounds:YES];
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, size.width, size.height);
videoLayer.frame = CGRectMake(0, 0, size.width, size.height);
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:overlayLayer];
composition.animationTool = [AVVideoCompositionCoreAnimationTool
videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
Can anyone help me with how to do it for videos?
Try the method below to edit a video frame by frame using AVFoundation.framework:
-(void)startEditingVideoAtURL:(NSURL *)resourceURL
{
NSError * error = nil;
// Temp File path to write the edited movie
NSURL * movieURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), resourceURL.lastPathComponent]];
NSFileManager * fm = [NSFileManager defaultManager];
BOOL success = [fm removeItemAtPath:movieURL.path error:&error];
// Create AVAssetWriter to convert the images into movie
AVAssetWriter * videoWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeQuickTimeMovie error:&error];
NSParameterAssert(videoWriter);
AVAsset * avAsset = [[AVURLAsset alloc] initWithURL:resourceURL options:nil];
// Set your output video frame size here
NSDictionary * videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:480.0], AVVideoWidthKey,
[NSNumber numberWithInt:480.0], AVVideoHeightKey,
nil];
// Create AVAssetWriterInput with video Settings
AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
NSDictionary * sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
[NSNumber numberWithInt:480.0], kCVPixelBufferWidthKey,
[NSNumber numberWithInt:480.0], kCVPixelBufferHeightKey,
nil];
AVAssetWriterInputPixelBufferAdaptor * assetWriterPixelBufferAdaptor1 = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:videoWriterInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
NSParameterAssert(videoWriterInput);
NSParameterAssert([videoWriter canAddInput:videoWriterInput]);
videoWriterInput.expectsMediaDataInRealTime = YES;
[videoWriter addInput:videoWriterInput];
NSError *aerror = nil;
// Create AVAssetReader to read the video files frame by frame
AVAssetReader * reader = [[AVAssetReader alloc] initWithAsset:avAsset error:&aerror];
NSArray * vTracks = [avAsset tracksWithMediaType:AVMediaTypeVideo];
AVAssetReaderTrackOutput * asset_reader_output = nil;
if (vTracks.count)
{
AVAssetTrack * videoTrack = [vTracks objectAtIndex:0];
videoWriterInput.transform = videoTrack.preferredTransform;
NSDictionary *videoOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
asset_reader_output = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack outputSettings:videoOptions];
[reader addOutput:asset_reader_output];
}
else
{
[[NSNotificationCenter defaultCenter] postNotificationName:EDITOR_FAILED_NOTI object:resourceURL.path];
[reader cancelReading];
queueInProgress = NO;
[self removeCurrentVideo];
return;
}
//audio setup
AVAssetWriterInput * audioWriterInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeAudio
outputSettings:nil];
AVAssetReader * audioReader = [AVAssetReader assetReaderWithAsset:avAsset error:&error];
NSArray * aTrack = [avAsset tracksWithMediaType:AVMediaTypeAudio];
AVAssetReaderOutput * readerOutput = nil;
if (aTrack.count)
{
AVAssetTrack * audioTrack = [aTrack objectAtIndex:0];
readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:nil];
[audioReader addOutput:readerOutput];
}
NSParameterAssert(audioWriterInput);
NSParameterAssert([videoWriter canAddInput:audioWriterInput]);
audioWriterInput.expectsMediaDataInRealTime = NO;
[videoWriter addInput:audioWriterInput];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
[reader startReading]; // Here the video reader starts the reading
dispatch_queue_t _processingQueue = dispatch_queue_create("assetAudioWriterQueue", NULL);
[videoWriterInput requestMediaDataWhenReadyOnQueue:_processingQueue usingBlock:
^{
while ([videoWriterInput isReadyForMoreMediaData])
{
CMSampleBufferRef sampleBuffer;
if ([reader status] == AVAssetReaderStatusReading &&
(sampleBuffer = [asset_reader_output copyNextSampleBuffer]))
{
CMTime currentTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage * _ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
CVPixelBufferLockBaseAddress(imageBuffer,0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
void * baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGDataProviderRelease(dataProvider);
// Add your text drawing code here
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CGColorSpaceRelease(rgbColorSpace);
BOOL result = [assetWriterPixelBufferAdaptor1 appendPixelBuffer:imageBuffer withPresentationTime:currentTime];
CVPixelBufferRelease(imageBuffer);
CFRelease(sampleBuffer);
if (!result)
{
dispatch_async(dispatch_get_main_queue(), ^
{
[[NSNotificationCenter defaultCenter] postNotificationName:EDITOR_FAILED_NOTI object:resourceURL.path];
[reader cancelReading];
queueInProgress = NO;
[self removeCurrentVideo];
});
break;
}
}
else
{
[videoWriterInput markAsFinished]; // This will be called once video writing is done
switch ([reader status])
{
case AVAssetReaderStatusReading:
// the reader has more for other tracks, even if this one is done
break;
case AVAssetReaderStatusCompleted:
{
if (!readerOutput)
{
if ([videoWriter respondsToSelector:@selector(finishWritingWithCompletionHandler:)])
[videoWriter finishWritingWithCompletionHandler:^
{
dispatch_async(dispatch_get_main_queue(), ^
{
});
}];
else
{
if ([videoWriter finishWriting])
{
dispatch_async(dispatch_get_main_queue(), ^
{
});
}
}
break;
}
[audioReader startReading];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
dispatch_queue_t mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);
[audioWriterInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^
{
while (audioWriterInput.readyForMoreMediaData) {
CMSampleBufferRef nextBuffer;
if ([audioReader status] == AVAssetReaderStatusReading &&
(nextBuffer = [readerOutput copyNextSampleBuffer]))
{
if (nextBuffer)
{
[audioWriterInput appendSampleBuffer:nextBuffer];
}
CFRelease(nextBuffer);
}else
{
[audioWriterInput markAsFinished];
switch ([audioReader status])
{
case AVAssetReaderStatusCompleted:
if ([videoWriter respondsToSelector:@selector(finishWritingWithCompletionHandler:)])
[videoWriter finishWritingWithCompletionHandler:^
{
}];
else
{
if ([videoWriter finishWriting])
{
}
}
break;
}
}
}
}
];
break;
}
// your method for when the conversion is done
// should call finishWriting on the writer
//hook up audio track
case AVAssetReaderStatusFailed:
{
break;
}
}
break;
}
}
}
];
}
Thanks!
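An alternative, not from the original answer but closer to the question's 5-to-15-second requirement: keep the AVVideoCompositionCoreAnimationTool setup from the question and simply make the CATextLayer visible only during that interval with an opacity animation, instead of rewriting every frame. A minimal sketch, assuming subtitle1Text and the parent/video layers are configured as in the question:

subtitle1Text.opacity = 0.0f; // hidden outside the animated interval

CABasicAnimation *showText = [CABasicAnimation animationWithKeyPath:@"opacity"];
showText.fromValue = @1.0f;
showText.toValue = @1.0f;
showText.beginTime = 5.0;           // seconds into the composition timeline
showText.duration = 10.0;           // keeps the text visible from 5 s to 15 s
showText.removedOnCompletion = NO;  // required so the animation survives until the export renders it
[subtitle1Text addAnimation:showText forKey:@"showText"];

Outside the 5-15 s window the animation has no effect (the default kCAFillModeRemoved), so the layer falls back to opacity 0.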

CAKeyframeAnimation not animating CATextLayer when exporting video

I have an application in which I am attempting to put a timestamp on a video. To do this I am using AVFoundation and Core Animation to place a CATextLayer over the video layer. If I place text into the CATextLayer's string property, the string is properly displayed in the exported video. If I then add the animation to the CATextLayer, the text never changes. I figure I've overlooked something, but I can't find what it is.
Thank you in advance for any help.
Here is a code example.
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.frameDuration = CMTimeMake(1, 30);
AVMutableCompositionTrack *videoCompositionTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
AVMutableCompositionTrack *audioCompositionTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
NSDictionary *assetOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
AVAsset *asset = [[AVURLAsset alloc] initWithURL:myUrl options:assetOptions];
AVAssetTrack *audioAssetTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:audioAssetTrack atTime:kCMTimeZero error:nil];
AVAssetTrack *videoAssetTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:videoAssetTrack atTime:kCMTimeZero error:nil];
videoComposition.renderSize = videoCompositionTrack.naturalSize;
AVMutableVideoCompositionInstruction *videoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
videoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration);
AVMutableVideoCompositionLayerInstruction *videoCompositionLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
[videoCompositionLayerInstruction setOpacity:1.0f atTime:kCMTimeZero];
videoCompositionInstruction.layerInstructions = @[videoCompositionLayerInstruction];
videoComposition.instructions = @[videoCompositionInstruction];
CALayer *parentLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0.0f, 0.0f, videoComposition.renderSize.width, videoComposition.renderSize.height);
CALayer *videoLayer = [CALayer layer];
videoLayer.frame = CGRectMake(0.0f, 0.0f, videoComposition.renderSize.width, videoComposition.renderSize.height);
[parentLayer addSublayer:videoLayer];
CATextLayer *textLayer = [CATextLayer layer];
textLayer.font = (__bridge CFTypeRef)([UIFont fontWithName:@"Helvetica Neue" size:45.0f]);
textLayer.fontSize = 45.0f;
textLayer.foregroundColor = [UIColor colorWithRed:1.0f green:1.0f blue:1.0f alpha:1.0f].CGColor;
textLayer.shadowColor = [UIColor colorWithRed:0.0f green:0.0f blue:0.0f alpha:1.0f].CGColor;
textLayer.shadowOffset = CGSizeMake(0.0f, 0.0f);
textLayer.shadowOpacity = 1.0f;
textLayer.shadowRadius = 4.0f;
textLayer.alignmentMode = kCAAlignmentCenter;
textLayer.truncationMode = kCATruncationNone;
CAKeyframeAnimation *keyFrameAnimation = [CAKeyframeAnimation animationWithKeyPath:@"string"];
// Step 8: Set the animation values to the object.
keyFrameAnimation.calculationMode = kCAAnimationLinear;
keyFrameAnimation.values = @[@"12:00:00", @"12:00:01", @"12:00:02", @"12:00:03",
@"12:00:04", @"12:00:05", @"12:00:06", @"12:00:07",
@"12:00:08", @"12:00:09"];
keyFrameAnimation.keyTimes = @[[NSNumber numberWithFloat:0.0f], [NSNumber numberWithFloat:0.1f],
[NSNumber numberWithFloat:0.2f], [NSNumber numberWithFloat:0.3f],
[NSNumber numberWithFloat:0.4f], [NSNumber numberWithFloat:0.5f],
[NSNumber numberWithFloat:0.6f], [NSNumber numberWithFloat:0.7f],
[NSNumber numberWithFloat:0.8f], [NSNumber numberWithFloat:0.9f]];
keyFrameAnimation.beginTime = AVCoreAnimationBeginTimeAtZero;
keyFrameAnimation.duration = CMTimeGetSeconds(composition.duration);
keyFrameAnimation.removedOnCompletion = YES;
[textLayer addAnimation:keyFrameAnimation forKey:@"string"];
textLayer.frame = CGRectMake(0.0f, 20.0f, videoComposition.renderSize.width, 55.0f);
[parentLayer addSublayer:textLayer];
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:composition presetName:AVAssetExportPreset1920x1080];
exportSession.videoComposition = videoComposition;
exportSession.audioMix = audioMix;
exportSession.outputFileType = [exportSession.supportedFileTypes objectAtIndex:0];
exportSession.outputURL = mySaveUrl;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
self.delegate = nil;
switch (exportSession.status) {
case AVAssetExportSessionStatusCancelled:
NSLog(#"Export session cancelled.");
break;
case AVAssetExportSessionStatusCompleted:
NSLog(#"Export session completed.");
break;
case AVAssetExportSessionStatusFailed:
NSLog(#"Export session failed.");
break;
case AVAssetExportSessionStatusUnknown:
NSLog(#"Export session unknown status.");
break;
case AVAssetExportSessionStatusWaiting:
NSLog(#"Export session waiting.");
break;
default:
break;
}
NSError *error = exportSession.error;
if (error != nil) {
NSLog(#"An error ocurred while exporting. Error: %#: %#", error.localizedDescription, error.localizedFailureReason);
}
}];
I stayed in contact with Apple support, and they told me that it's a bug in their SDK. They are trying to fix the issue.
I also opened an incident in the Apple bug reporter. If you are an Apple developer, I recommend you open a new one to put more pressure on this issue.
https://bugreport.apple.com/logon
Best regards.
Finally I found a workaround for the CATextLayer animation issue. It's probably not the best solution, but at least it works fine.
To fix the issue, we render each NSString into a UIImage (or CALayer), put all of these images (as CGImages) into an NSMutableArray, load that array into a CAKeyframeAnimation, and animate the layer's contents property instead.
-(UIImage *)imageFromText:(NSString *)text
{
UIGraphicsBeginImageContextWithOptions(sizeImageFromText,NO,0.0);//Better resolution (antialiasing)
//UIGraphicsBeginImageContext(sizeImageFromText); // iOS is < 4.0
[text drawInRect:aRectangleImageFromText withAttributes:attributesImageFromText];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Adding images to NSMutableArray
[NSMutableArrayAux addObject:(__bridge id)[self imageFromText:@"Hello"].CGImage];
Set up a CAKeyframeAnimation that modifies the layer's contents (changing the image):
CAKeyframeAnimation *overlay = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
overlay.values = [[NSArray alloc] initWithArray:NSMutableArrayAux];
It works really well, but it takes 1 or 2 seconds to build an array of 5000 or 6000 images, and with 15000/16000 images the export crashes with an overflow error. This is another bug in the framework.
As you know, this is an issue in the framework, and this is a workaround at least until Apple fixes the CATextLayer animation issue; I reported this solution to Apple as well.
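Tying the pieces together, here is a minimal sketch (not verbatim from the answer) of the workaround: pre-render each timestamp string to a CGImage with the imageFromText: helper above and animate a plain CALayer's contents, in place of animating CATextLayer's string. The timestamp values and layer frame are illustrative.

NSArray *timestamps = @[@"12:00:00", @"12:00:01", @"12:00:02"]; // illustrative values
NSMutableArray *frames = [NSMutableArray arrayWithCapacity:timestamps.count];
for (NSString *stamp in timestamps) {
    [frames addObject:(__bridge id)[self imageFromText:stamp].CGImage];
}

CALayer *timestampLayer = [CALayer layer];
timestampLayer.frame = CGRectMake(0.0f, 20.0f, videoComposition.renderSize.width, 55.0f);

CAKeyframeAnimation *contentsAnimation = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
contentsAnimation.calculationMode = kCAAnimationDiscrete; // snap from one image to the next, no cross-fade
contentsAnimation.values = frames;
contentsAnimation.beginTime = AVCoreAnimationBeginTimeAtZero;
contentsAnimation.duration = CMTimeGetSeconds(composition.duration);
contentsAnimation.removedOnCompletion = NO;
[timestampLayer addAnimation:contentsAnimation forKey:@"contents"];
[parentLayer addSublayer:timestampLayer];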

What iOS framework is needed for this particular audio manipulation

Before I go sit down and read an entire book on Core Audio, I wanted to know if it is the best framework for me to study, or if AVFoundation can do what I need. I want to be able to download a small portion of an MP3 located on a remote server, let's say 20 seconds of the file, preferably without downloading the entire file first and then trimming it.
Then I want to layer 2 tracks of audio and bounce them down into one file.
Will I need to delve into Core Audio, or can AVFoundation do the trick? Advice is much appreciated.
The downloading part of the file is up to you, but if you want to mix 2 or more audio files into one, AVFoundation is probably the easiest route to take, using AVAssetExportSession to do the exporting and AVMutableAudioMix to do the mix. There is some example code for a simple editor floating around in the Apple docs, but I can't seem to find it; if I do, I will post the link.
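For the trimming part, a minimal sketch (not from the original answer, and assuming the MP3 has already been downloaded to a local file) is to set timeRange on an AVAssetExportSession so only the first 20 seconds are exported. localMP3Path and the output name are illustrative.

AVURLAsset *audioAsset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:localMP3Path] options:nil];
AVAssetExportSession *trim = [[AVAssetExportSession alloc] initWithAsset:audioAsset presetName:AVAssetExportPresetAppleM4A];
trim.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(20.0, 600)); // just the first 20 seconds
trim.outputFileType = AVFileTypeAppleM4A;
trim.outputURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"clip.m4a"]];
[trim exportAsynchronouslyWithCompletionHandler:^{
    if (trim.status == AVAssetExportSessionStatusCompleted) {
        NSLog(@"Trimmed clip written to %@", trim.outputURL);
    } else {
        NSLog(@"Trim failed: %@", trim.error);
    }
}];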
Here is a method that actually does the mix. Keep in mind that I'm adding video here as well; _audioTracks and _videoTracks are mutable arrays containing AVAssets.
-(void)createMix
{
CGSize videoSize = [[_videoTracks objectAtIndex:0] naturalSize];
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableVideoComposition *videoComposition = nil;
AVMutableAudioMix *audioMix = nil;
composition.naturalSize = videoSize;
AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
AVAsset *videoAsset=[_videoTracks objectAtIndex:0];
CMTimeRange timeRangeInAsset = CMTimeRangeMake(kCMTimeZero, [videoAsset duration]);
AVAssetTrack *clipVideoTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[compositionVideoTrack insertTimeRange:timeRangeInAsset ofTrack:clipVideoTrack atTime:kCMTimeZero error:nil];
AVAssetTrack *clipAudioTrack = [[videoAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
[compositionAudioTrack insertTimeRange:timeRangeInAsset ofTrack:clipAudioTrack atTime:kCMTimeZero error:nil];
NSMutableArray *trackMixArray = [NSMutableArray array];
if(_audioTracks && _audioTracks.count>0)
{
for(AVAsset *audio in _audioTracks)
{
// CMTimeRange timeRangeInAsset = CMTimeRangeMake(kCMTimeZero, [audio duration]);
// AVAssetTrack *clipAudioTrack = [[audio tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
//[compositionAudioTrack insertTimeRange:timeRangeInAsset ofTrack:clipAudioTrack atTime:kCMTimeZero error:nil];
NSInteger i;
NSArray *tracksToDuck = [audio tracksWithMediaType:AVMediaTypeAudio]; // before we add the commentary
// Clip commentary duration to composition duration.
CMTimeRange commentaryTimeRange = CMTimeRangeMake(kCMTimeZero, audio.duration);
if (CMTIME_COMPARE_INLINE(CMTimeRangeGetEnd(commentaryTimeRange), >, [composition duration]))
commentaryTimeRange.duration = CMTimeSubtract([composition duration], commentaryTimeRange.start);
// Add the commentary track.
AVMutableCompositionTrack *compositionCommentaryTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionCommentaryTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, commentaryTimeRange.duration) ofTrack:[[audio tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:commentaryTimeRange.start error:nil];
CMTime rampDuration = CMTimeMake(1, 2); // half-second ramps
for (i = 0; i < [tracksToDuck count]; i++) {
AVMutableAudioMixInputParameters *trackMix = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:[tracksToDuck objectAtIndex:i]];
[trackMix setVolumeRampFromStartVolume:1.0 toEndVolume:0.2 timeRange:CMTimeRangeMake(CMTimeSubtract(commentaryTimeRange.start, rampDuration), rampDuration)];
[trackMix setVolumeRampFromStartVolume:0.2 toEndVolume:1.0 timeRange:CMTimeRangeMake(CMTimeRangeGetEnd(commentaryTimeRange), rampDuration)];
[trackMixArray addObject:trackMix];
}
}
}
// audioMix.inputParameters = trackMixArray;
if (videoComposition) {
// Every videoComposition needs these properties to be set:
videoComposition.frameDuration = CMTimeMake(1, 30); // 30 fps
videoComposition.renderSize = videoSize;
}
AVAssetExportSession *session = [[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPreset1280x720];
session.videoComposition = videoComposition;
session.audioMix = audioMix;
NSUInteger count = 0;
NSString *filePath;
do {
filePath = NSTemporaryDirectory();
NSString *numberString = count > 0 ? [NSString stringWithFormat:@"-%i", count] : @"";
filePath = [filePath stringByAppendingPathComponent:[NSString stringWithFormat:@"Output-%@.mp4", numberString]];
count++;
} while([[NSFileManager defaultManager] fileExistsAtPath:filePath]);
session.outputURL = [NSURL fileURLWithPath:filePath];
session.outputFileType = AVFileTypeQuickTimeMovie;
[session exportAsynchronouslyWithCompletionHandler:^
{
dispatch_async(dispatch_get_main_queue(), ^{
NSLog(#"Exported");
if(session.error)
{
NSLog(#"had an error %#", session.error);
}
if(delegate && [delegate respondsToSelector:#selector(didFinishExportingMovie:)])
{
[delegate didFinishExportingMovie:filePath];
}
});
}];
}
Hope it helps.
Daniel

CoreAnimation, AVFoundation and ability to make Video export

I'm looking for the correct way to export my picture sequence into a QuickTime video.
I know that AVFoundation has the ability to merge or recombine videos, and also to add an audio track, building a single video asset.
Now ... my goal is a little bit different. I would like to create a video from scratch. I have a set of UIImages and I need to render all of them into a single video.
I read all the Apple documentation about AVFoundation and I found the AVVideoCompositionCoreAnimationTool class, which has the ability to take a Core Animation layer and re-encode it as a video. I also checked the AVEditDemo project provided by Apple, but something doesn't seem to be working in my project.
Here are my steps:
1) I create the Core Animation layer
CALayer *animationLayer = [CALayer layer];
[animationLayer setFrame:CGRectMake(0, 0, 1024, 768)];
CALayer *backgroundLayer = [CALayer layer];
[backgroundLayer setFrame:animationLayer.frame];
[backgroundLayer setBackgroundColor:[UIColor blackColor].CGColor];
CALayer *anImageLayer = [CALayer layer];
[anImageLayer setFrame:animationLayer.frame];
CAKeyframeAnimation *changeImageAnimation = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
[changeImageAnimation setDelegate:self];
changeImageAnimation.duration = [[albumSettings transitionTime] floatValue] * [uiImagesArray count];
changeImageAnimation.repeatCount = 1;
changeImageAnimation.values = [NSArray arrayWithArray:uiImagesArray];
changeImageAnimation.removedOnCompletion = YES;
[anImageLayer addAnimation:changeImageAnimation forKey:nil];
[animationLayer addSublayer:anImageLayer];
2) Then I instantiate the AVComposition
AVMutableComposition *composition = [AVMutableComposition composition];
composition.naturalSize = CGSizeMake(1024, 768);
CALayer *wrapLayer = [CALayer layer];
wrapLayer.frame = CGRectMake(0, 0, 1024, 768);
CALayer *videoLayer = [CALayer layer];
videoLayer.frame = CGRectMake(0, 0, 1024, 768);
[wrapLayer addSublayer:animationLayer];
[wrapLayer addSublayer:videoLayer];
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
AVMutableVideoCompositionInstruction *videoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
videoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMake([imagesFilePath count] * [[albumSettings transitionTime] intValue] * 25, 25));
AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstruction];
videoCompositionInstruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:wrapLayer];
videoComposition.frameDuration = CMTimeMake(1, 25); // 25 fps
videoComposition.renderSize = CGSizeMake(1024, 768);
videoComposition.instructions = [NSArray arrayWithObject:videoCompositionInstruction];
3) I export the video to the documents path
AVAssetExportSession *session = [[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPresetLowQuality];
session.videoComposition = videoComposition;
NSString *filePath = nil;
filePath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
filePath = [filePath stringByAppendingPathComponent:@"Output.mov"];
session.outputURL = [NSURL fileURLWithPath:filePath];
session.outputFileType = AVFileTypeQuickTimeMovie;
[session exportAsynchronouslyWithCompletionHandler:^
{
dispatch_async(dispatch_get_main_queue(), ^{
NSLog(#"Export Finished: %#", session.error);
if (session.error) {
[[NSFileManager defaultManager] removeItemAtPath:filePath error:NULL];
}
});
}];
At the end of the export I get this error:
Export Finished: Error Domain=AVFoundationErrorDomain Code=-11822 "Cannot Open" UserInfo=0x49a97c0 {NSLocalizedFailureReason=This media format is not supported., NSLocalizedDescription=Cannot Open}
I found it inside documentation: AVErrorInvalidSourceMedia = -11822,
AVErrorInvalidSourceMedia
The operation could not be completed because some source media could not be read.
I'm totally sure that the Core Animation layer I built is right, because I rendered it into a test layer and I could see the animation progress correctly.
Can anyone help me understand where my error is?
Maybe you need a fake movie that contains totally black frames to fill the video layer, and then add a CALayer to manipulate the images.
I found the AVVideoCompositionCoreAnimationTool class that have the ability to take a CoreAnimation and reencode it as a video
My understanding was that this instead was only able to take CoreAnimation and add it to an existing video. I just checked the docs, and the only methods available require a video layer too.
EDIT: yep. Digging into the docs and WWDC videos, I think you should be using AVAssetWriter instead, and appending images to the writer. Something like:
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:somePath] fileType:AVFileTypeQuickTimeMovie error:&error];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey, [NSNumber numberWithInt:320], AVVideoWidthKey, [NSNumber numberWithInt:480], AVVideoHeightKey, nil];
AVAssetWriterInput* writerInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings] retain];
[videoWriter addInput:writerInput];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:CMTimeMakeWithSeconds(0, 30)];
[writerInput appendSampleBuffer:sampleBuffer];
[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:CMTimeMakeWithSeconds(60, 30)];
[videoWriter finishWriting];
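Not part of the original answer: a hedged sketch of how the "append images" step might look using AVAssetWriterInputPixelBufferAdaptor, since appendSampleBuffer: above needs a CMSampleBufferRef that plain UIImages don't provide. pixelBufferFromCGImage: is an illustrative conversion helper you would write yourself, not an existing API; uiImagesArray is the image array from the question.

// Reuse writerInput and videoWriter from the snippet above; startWriting and
// startSessionAtSourceTime: have already been called there.
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                      sourcePixelBufferAttributes:nil];

CMTime frameDuration = CMTimeMake(1, 25); // 25 fps, matching the question's composition
for (NSUInteger i = 0; i < uiImagesArray.count; i++) {
    UIImage *image = uiImagesArray[i];
    CVPixelBufferRef buffer = [self pixelBufferFromCGImage:image.CGImage]; // your CGImage-to-pixel-buffer helper
    CMTime presentationTime = CMTimeMultiply(frameDuration, (int32_t)i);
    while (!writerInput.readyForMoreMediaData) {
        [NSThread sleepForTimeInterval:0.05]; // crude back-pressure, fine for a sketch
    }
    [adaptor appendPixelBuffer:buffer withPresentationTime:presentationTime];
    CVPixelBufferRelease(buffer);
}
[writerInput markAsFinished];
[videoWriter finishWriting];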