How to create a video from its frames on iPhone - Objective-C

I have done some R&D and succeeded in getting frames, as images, from a video file played in MPMoviePlayerController.
I get all the frames with this code and save the images in one array.
for(int i= 1; i <= moviePlayerController.duration; i++)
{
UIImage *img = [moviePlayerController thumbnailImageAtTime:i timeOption:MPMovieTimeOptionNearestKeyFrame];
[arrImages addObject:img];
}
Now the question: after changing some of the images, for example adding emoticons or applying filters such as movie reel or black and white, how can I create the video again and store it in the Documents directory with the same frame rate and without losing video quality?
After changing some images, I used the following code to save the video again.
- (void) writeImagesAsMovie:(NSString*)path
{
NSError *error = nil;
UIImage *first = [arrImages objectAtIndex:0];
CGSize frameSize = first.size;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:path] fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:640], AVVideoWidthKey,
[NSNumber numberWithInt:480], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [[AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
int frameCount = 0;
CVPixelBufferRef buffer = NULL;
for(UIImage *img in arrImages)
{
buffer = [self newPixelBufferFromCGImage:[img CGImage] andFrameSize:frameSize];
if (adaptor.assetWriterInput.readyForMoreMediaData)
{
CMTime frameTime = CMTimeMake(frameCount,(int32_t) kRecordingFPS);
[adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
if(buffer)
CVBufferRelease(buffer);
}
frameCount++;
}
[writerInput markAsFinished];
[videoWriter finishWriting];
}
- (CVPixelBufferRef) newPixelBufferFromCGImage: (CGImageRef) image andFrameSize:(CGSize)frameSize
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
frameSize.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
frameSize.height, 8, 4*frameSize.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
I am new to this topic, so please help me solve this problem.

You can refer to the following links; hopefully they help:
http://www.iphonedevsdk.com/forum/iphone-sdk-development-advanced-discussion/77999-make-video-nsarray-uiimages.html
Using FFMPEG library with iPhone SDK for video encoding
Iphone SDK,Create a Video from UIImage
iOS Video Editing - Is it possible to merge (side by side not one after other) two video files into one using iOS 4 AVFoundation classes?
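Beyond those links, a minimal sketch of calling your own writeImagesAsMovie: method and writing into the Documents directory could look like this (kRecordingFPS is assumed to be a constant you define yourself, e.g. the source video's frame rate):
// Sketch only: choose an output path in Documents and run the method from the question.
// #define kRecordingFPS 30   // assumed constant; match your source video's frame rate
NSString *documentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *outputPath = [documentsDir stringByAppendingPathComponent:@"editedVideo.mov"];
// AVAssetWriter fails to initialise if a file already exists at the target URL
[[NSFileManager defaultManager] removeItemAtPath:outputPath error:nil];
[self writeImagesAsMovie:outputPath];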

Related

iOS 8 Time Lapse Functionality

I am trying to add a time-lapse feature to my iOS application. Can anybody help me with a resource or explain how to implement this? I want to capture video from my application using time-lapse mode, which is a new iOS 8 feature, but I didn't find any developer resources. Any help would be great.
You could use AVFoundation to combine photos into a video. You can do this using AVAssetWriter:
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:videoOutputPath] fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:imageSize.width], AVVideoWidthKey,
[NSNumber numberWithInt:imageSize.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(videoWriterInput);
NSParameterAssert([videoWriter canAddInput:videoWriterInput]);
videoWriterInput.expectsMediaDataInRealTime = YES;
[videoWriter addInput:videoWriterInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
//convert uiimage to CGImage.
int frameCount = 0;
double numberOfSecondsPerFrame = 2;
double frameDuration = fps * numberOfSecondsPerFrame;
Then append the images (pixel buffers) one after another:
for(UIImage * img in imageArray)
{
progress.completedUnitCount++;
buffer = [self pixelBufferFromCGImage:[img CGImage] ImageSize:imageSize];
BOOL append_ok = NO;
int j = 0;
while (!append_ok && j < 30) {
if (adaptor.assetWriterInput.readyForMoreMediaData) {
//print out status:
NSLog(#"Processing video frame (%d,%lu)",frameCount,(unsigned long)[imageArray count]);
CMTime frameTime = CMTimeMake(frameCount*frameDuration,(int32_t) fps);
append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
if(!append_ok){
NSError *error = videoWriter.error;
if(error!=nil) {
NSLog(#"Unresolved error %#,%#.", error, [error userInfo]);
}
}
}
else {
printf("adaptor not ready %d, %d\n", frameCount, j);
[NSThread sleepForTimeInterval:0.1];
}
j++;
}
if (!append_ok) {
printf("error appending image %d times %d\n, with error.", frameCount, j);
}
frameCount++;
}
[videoWriterInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^{}];
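As a sanity check on the timing math (assuming fps is 30 and numberOfSecondsPerFrame is 2 as above): frameDuration works out to 60 ticks, so frame N lands at CMTimeMake(N * 60, 30), i.e. each image is shown for two seconds.
// Hypothetical values matching the snippet above: fps = 30, numberOfSecondsPerFrame = 2
int32_t fps = 30;
double frameDuration = fps * 2;                               // 60 ticks per image
CMTime thirdImageTime = CMTimeMake(2 * frameDuration, fps);   // image index 2
NSLog(@"third image starts at %f s", CMTimeGetSeconds(thirdImageTime)); // prints 4.000000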
Here is the buffer code
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image ImageSize:(CGSize)size {
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
size.width,
size.height,
kCVPixelFormatType_32ARGB,
(__bridge CFDictionaryRef) options,
&pxbuffer);
if (status != kCVReturnSuccess){
NSLog(#"Failed to create pixel buffer");
}
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
size.height, 8, 4*size.width, rgbColorSpace,
kCGImageAlphaPremultipliedFirst);
//kCGImageAlphaNoneSkipFirst);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
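One thing worth noting about the appending loop above: pixelBufferFromCGImage:ImageSize: returns a buffer created with CVPixelBufferCreate, which the caller owns, so as written the loop leaks one pixel buffer per frame. A small addition after the append call should balance it (my addition, not part of the original answer):
append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
if (buffer) {
    CVPixelBufferRelease(buffer);   // balances the CVPixelBufferCreate inside the helper
    buffer = NULL;
}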

OS X Objective-C app uses excessive memory

I am writing an OS X application that will create a video using a series of images. It was developed using code from here: Make movie file with picture Array and song file, using AVAsset, but not including the audio portion.
The code functions and creates an mpg file.
The problem is memory pressure. The app doesn't appear to free up any memory. Using Xcode Instruments I found the biggest culprits are:
CVPixelBufferCreate
[image TIFFRepresentation];
CGImageSourceCreateWithData
CGImageSourceCreateImageAtIndex
I tried adding code to release objects, but ARC should already be doing that.
Eventually OS X will hang and/or crash.
I'm not sure how to handle the memory issue. There are no mallocs in the code.
I'm open to suggestions. It appears that many others have used this same code.
This is the code that is based on the link above:
- (void)ProcessImagesToVideoFile:(NSError **)error_p size:(NSSize)size videoFilePath:(NSString *)videoFilePath jpegs:(NSMutableArray *)jpegs fileLocation:(NSString *)fileLocation
{
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:videoFilePath]
fileType:AVFileTypeMPEG4
error:&(*error_p)];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(videoWriterInput);
NSParameterAssert([videoWriter canAddInput:videoWriterInput]);
videoWriterInput.expectsMediaDataInRealTime = YES;
[videoWriter addInput:videoWriterInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
//Write all picture array in movie file.
int frameCount = 0;
for(int i = 0; i<[jpegs count]; i++)
{
NSString *filePath = [NSString stringWithFormat:@"%@%@", fileLocation, [jpegs objectAtIndex:i]];
NSImage *jpegImage = [[NSImage alloc ]initWithContentsOfFile:filePath];
CMTime frameTime = CMTimeMake(frameCount,(int32_t) 24);
BOOL append_ok = NO;
int j = 0;
while (!append_ok && j < 30)
{
if (adaptor.assetWriterInput.readyForMoreMediaData)
{
if ((frameCount % 25) == 0)
{
NSLog(#"appending %d to %# attemp %d\n", frameCount, videoFilePath, j);
}
buffer = [self pixelBufferFromCGImage:jpegImage andSize:size];
append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
if (append_ok == NO) // fails on 3GS, but works on iPhone 4
{
NSLog(#"failed to append buffer");
NSLog(#"The error is %#", [videoWriter error]);
}
//CVPixelBufferPoolRef bufferPool = adaptor.pixelBufferPool;
//NSParameterAssert(bufferPool != NULL);
if(buffer)
{
CVPixelBufferRelease(buffer);
//CVBufferRelease(buffer);
}
}
else
{
printf("adaptor not ready %d, %d\n", frameCount, j);
[NSThread sleepForTimeInterval:0.1];
}
j++;
}
if (!append_ok)
{
printf("error appending image %d times %d\n", frameCount, j);
}
frameCount++;
//CVBufferRelease(buffer);
jpegImage = nil;
buffer = nil;
}
//Finish writing picture:
[videoWriterInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^(){
NSLog (#"finished writing");
}];
}
- (CVPixelBufferRef) pixelBufferFromCGImage: (NSImage *) image andSize:(CGSize) size
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
size.width,
size.height,
kCVPixelFormatType_32ARGB,
(__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height,
8, 4*size.width, rgbColorSpace,
kCGImageAlphaPremultipliedFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGImageRef imageRef = [self nsImageToCGImageRef:image];
CGRect imageRect = CGRectMake(0, 0, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef));
CGContextDrawImage(context, imageRect, imageRef);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
imageRef = nil;
context = nil;
rgbColorSpace = nil;
return pxbuffer;
}
- (CGImageRef)nsImageToCGImageRef:(NSImage*)image;
{
NSData * imageData = [image TIFFRepresentation];// memory hog
CGImageRef imageRef;
if(!imageData) return nil;
CGImageSourceRef imageSource = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
imageRef = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
imageData = nil;
imageSource = nil;
return imageRef;
}
Your code is using ARC, but the libraries you are calling might not be. They might be relying on the older autorelease pool system to free up memory.
You should read up on how it works; this is fundamental stuff that every Obj-C developer needs to memorise. Basically, any object can be added to the current "pool" of objects, which will be released when the pool is drained.
By default, the pool on the main thread is emptied each time the app enters an idle state. This usually works fine, since the main thread should never be busy for more than a few hundredths of a second, and you can't really build up much memory in that amount of time.
When you do a lengthy and memory-intensive operation, you need to manually set up an autorelease pool, which is most commonly placed inside a for or while loop (although you can actually put one anywhere you want; that's just the most useful scenario):
for ( ... ) {
@autoreleasepool {
// do some stuff
}
}
Also, ARC only applies to Objective-C objects. It does not apply to objects created by C functions like CGColorSpaceCreateDeviceRGB() and CVPixelBufferCreate(). Make sure you are manually releasing all of those.
ARC works only for retainable object pointers. The ARC documentation defines them as:
A retainable object pointer (or "retainable pointer") is a value of a retainable object pointer type ("retainable type"). There are three kinds of retainable object pointer types:
block pointers (formed by applying the caret (^) declarator sigil to a function type)
Objective-C object pointers (id, Class, NSFoo*, etc.)
typedefs marked with __attribute__((NSObject))
Other pointer types, such as int* and CFStringRef, are not subject to ARC's semantics and restrictions.
You already explicitly call release here:
CGContextRelease(context);
You should do the same for the other Create-style objects, e.g.
CVPixelBufferRelease(pxbuffer);
for pxbuffer.
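Putting the two suggestions together, the frame-writing loop from the question could be wrapped like this (a sketch that reuses the question's variable names):
for (int i = 0; i < [jpegs count]; i++)
{
    @autoreleasepool
    {
        // Everything autoreleased while building this frame (NSImage, TIFF data, image source, ...)
        // is drained at the end of each iteration instead of piling up until the method returns.
        NSString *filePath = [NSString stringWithFormat:@"%@%@", fileLocation, [jpegs objectAtIndex:i]];
        NSImage *jpegImage = [[NSImage alloc] initWithContentsOfFile:filePath];
        CVPixelBufferRef buffer = [self pixelBufferFromCGImage:jpegImage andSize:size];
        // ... append buffer via the adaptor exactly as in the original code ...
        if (buffer) {
            CVPixelBufferRelease(buffer);   // Core Video objects are not managed by ARC
        }
    }
}
The nsImageToCGImageRef: helper has the same kind of problem: the CGImageSourceRef and the returned CGImageRef both come from Create functions, so they need a CFRelease(imageSource) inside the helper and a CGImageRelease on the returned image once you are done with it.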

How to add text to a video

I have a video that lasts 5 minutes. Now I want to add text to that video for a particular time range, say from 5 to 15 seconds.
Can anyone help me out? I have tried the code below for adding text to an image:
CATextLayer *subtitle1Text = [[CATextLayer alloc] init];
[subtitle1Text setFont:@"Helvetica-Bold"];
[subtitle1Text setFontSize:36];
[subtitle1Text setFrame:CGRectMake(0, 0, size.width, 100)];
[subtitle1Text setString:_subTitle1.text];
[subtitle1Text setAlignmentMode:kCAAlignmentCenter];
[subtitle1Text setForegroundColor:[[UIColor whiteColor] CGColor]];
// 2 - The usual overlay
CALayer *overlayLayer = [CALayer layer];
[overlayLayer addSublayer:subtitle1Text];
overlayLayer.frame = CGRectMake(0, 0, size.width, size.height);
[overlayLayer setMasksToBounds:YES];
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, size.width, size.height);
videoLayer.frame = CGRectMake(0, 0, size.width, size.height);
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:overlayLayer];
composition.animationTool = [AVVideoCompositionCoreAnimationTool
videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
Can anyone help me do this for videos?
Try the method below to edit a video frame by frame using AVFoundation.framework:
-(void)startEditingVideoAtURL:(NSURL *)resourceURL
{
NSError * error = nil;
// Temp File path to write the edited movie
NSURL * movieURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), resourceURL.lastPathComponent]];
NSFileManager * fm = [NSFileManager defaultManager];
BOOL success = [fm removeItemAtPath:movieURL.path error:&error];
// Create AVAssetWriter to convert the images into movie
AVAssetWriter * videoWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeQuickTimeMovie error:&error];
NSParameterAssert(videoWriter);
AVAsset * avAsset = [[AVURLAsset alloc] initWithURL:resourceURL options:nil];
// Set your output video frame size here
NSDictionary * videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:480.0], AVVideoWidthKey,
[NSNumber numberWithInt:480.0], AVVideoHeightKey,
nil];
// Create AVAssetWriterInput with video Settings
AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
NSDictionary * sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
[NSNumber numberWithInt:480.0], kCVPixelBufferWidthKey,
[NSNumber numberWithInt:480.0], kCVPixelBufferHeightKey,
nil];
AVAssetWriterInputPixelBufferAdaptor * assetWriterPixelBufferAdaptor1 = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:videoWriterInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
NSParameterAssert(videoWriterInput);
NSParameterAssert([videoWriter canAddInput:videoWriterInput]);
videoWriterInput.expectsMediaDataInRealTime = YES;
[videoWriter addInput:videoWriterInput];
NSError *aerror = nil;
// Create AVAssetReader to read the video files frame by frame
AVAssetReader * reader = [[AVAssetReader alloc] initWithAsset:avAsset error:&aerror];
NSArray * vTracks = [avAsset tracksWithMediaType:AVMediaTypeVideo];
AVAssetReaderTrackOutput * asset_reader_output = nil;
if (vTracks.count)
{
AVAssetTrack * videoTrack = [vTracks objectAtIndex:0];
videoWriterInput.transform = videoTrack.preferredTransform;
NSDictionary *videoOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
asset_reader_output = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack outputSettings:videoOptions];
[reader addOutput:asset_reader_output];
}
else
{
[[NSNotificationCenter defaultCenter] postNotificationName:EDITOR_FAILED_NOTI object:resourceURL.path];
[reader cancelReading];
queueInProgress = NO;
[self removeCurrentVideo];
return;
}
//audio setup
AVAssetWriterInput * audioWriterInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeAudio
outputSettings:nil];
AVAssetReader * audioReader = [AVAssetReader assetReaderWithAsset:avAsset error:&error];
NSArray * aTrack = [avAsset tracksWithMediaType:AVMediaTypeAudio];
AVAssetReaderOutput * readerOutput = nil;
if (aTrack.count)
{
AVAssetTrack * audioTrack = [aTrack objectAtIndex:0];
readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:nil];
[audioReader addOutput:readerOutput];
}
NSParameterAssert(audioWriterInput);
NSParameterAssert([videoWriter canAddInput:audioWriterInput]);
audioWriterInput.expectsMediaDataInRealTime = NO;
[videoWriter addInput:audioWriterInput];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
[reader startReading]; // Here the video reader starts the reading
dispatch_queue_t _processingQueue = dispatch_queue_create("assetAudioWriterQueue", NULL);
[videoWriterInput requestMediaDataWhenReadyOnQueue:_processingQueue usingBlock:
^{
while ([videoWriterInput isReadyForMoreMediaData])
{
CMSampleBufferRef sampleBuffer;
if ([reader status] == AVAssetReaderStatusReading &&
(sampleBuffer = [asset_reader_output copyNextSampleBuffer]))
{
CMTime currentTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage * _ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
CVPixelBufferLockBaseAddress(imageBuffer,0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
void * baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGDataProviderRelease(dataProvider);
// Add your text drawing code here
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CGColorSpaceRelease(rgbColorSpace);
BOOL result = [assetWriterPixelBufferAdaptor1 appendPixelBuffer:imageBuffer withPresentationTime:currentTime];
CVPixelBufferRelease(imageBuffer);
CFRelease(sampleBuffer);
if (!result)
{
dispatch_async(dispatch_get_main_queue(), ^
{
[[NSNotificationCenter defaultCenter] postNotificationName:EDITOR_FAILED_NOTI object:resourceURL.path];
[reader cancelReading];
queueInProgress = NO;
[self removeCurrentVideo];
});
break;
}
}
else
{
[videoWriterInput markAsFinished]; // This is reached once the video writing is done
switch ([reader status])
{
case AVAssetReaderStatusReading:
// the reader has more for other tracks, even if this one is done
break;
case AVAssetReaderStatusCompleted:
{
if (!readerOutput)
{
if ([videoWriter respondsToSelector:@selector(finishWritingWithCompletionHandler:)])
[videoWriter finishWritingWithCompletionHandler:^
{
dispatch_async(dispatch_get_main_queue(), ^
{
});
}];
else
{
if ([videoWriter finishWriting])
{
dispatch_async(dispatch_get_main_queue(), ^
{
});
}
}
break;
}
[audioReader startReading];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
dispatch_queue_t mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);
[audioWriterInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^
{
while (audioWriterInput.readyForMoreMediaData) {
CMSampleBufferRef nextBuffer;
if ([audioReader status] == AVAssetReaderStatusReading &&
(nextBuffer = [readerOutput copyNextSampleBuffer]))
{
if (nextBuffer)
{
[audioWriterInput appendSampleBuffer:nextBuffer];
}
CFRelease(nextBuffer);
}else
{
[audioWriterInput markAsFinished];
switch ([audioReader status])
{
case AVAssetReaderStatusCompleted:
if ([videoWriter respondsToSelector:@selector(finishWritingWithCompletionHandler:)])
[videoWriter finishWritingWithCompletionHandler:^
{
}];
else
{
if ([videoWriter finishWriting])
{
}
}
break;
}
}
}
}
];
break;
}
// your method for when the conversion is done
// should call finishWriting on the writer
//hook up audio track
case AVAssetReaderStatusFailed:
{
break;
}
}
break;
}
}
}
];
}
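For the "// Add your text drawing code here" placeholder above, one way to draw a string straight into the locked pixel buffer is roughly the following (a sketch with assumed font, position and text values; it reuses baseAddress, width, height, bytesPerRow, rgbColorSpace and currentTime from the loop, and relies on the buffer being kCVPixelFormatType_32BGRA as configured above):
CGContextRef textContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, rgbColorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
if (textContext)
{
    UIGraphicsPushContext(textContext);
    // Core Graphics' origin is bottom-left; flip so UIKit string drawing comes out upright.
    CGContextTranslateCTM(textContext, 0, height);
    CGContextScaleCTM(textContext, 1.0, -1.0);
    // Only draw during the wanted range, e.g. the 5-15 s window from the question.
    if (CMTimeGetSeconds(currentTime) >= 5.0 && CMTimeGetSeconds(currentTime) <= 15.0)
    {
        NSDictionary *attrs = @{ NSFontAttributeName : [UIFont boldSystemFontOfSize:36], NSForegroundColorAttributeName : [UIColor whiteColor] };
        [@"Your text here" drawInRect:CGRectMake(0, 20, width, 60) withAttributes:attrs];
    }
    UIGraphicsPopContext();
    CGContextRelease(textContext);
}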
Thanks!

CAKeyframeAnimation not animating CATextLayer when exporting video

I have an application in which I am attempting to put a timestamp on a video. To do this I am using AVFoundation and Core Animation to place a CATextLayer over the video layer. If I place text into the CATextLayer's string property, the string is properly displayed in the exported video. If I then add the animation to the CATextLayer, the text never changes. I figure I've overlooked something, but I can't find what it is.
Thank you in advance for any help.
Here is a code example.
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.frameDuration = CMTimeMake(1, 30);
AVMutableCompositionTrack *videoCompositionTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
AVMutableCompositionTrack *audioCompositionTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
NSDictionary *assetOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
AVAsset *asset = [[AVURLAsset alloc] initWithURL:myUrl options:assetOptions];
AVAssetTrack *audioAssetTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:audioAssetTrack atTime:kCMTimeZero error:nil];
AVAssetTrack *videoAssetTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:videoAssetTrack atTime:kCMTimeZero error:nil];
videoComposition.renderSize = videoCompositionTrack.naturalSize;
AVMutableVideoCompositionInstruction *videoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
videoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration);
AVMutableVideoCompositionLayerInstruction *videoCompositionLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
[videoCompositionLayerInstruction setOpacity:1.0f atTime:kCMTimeZero];
videoCompositionInstruction.layerInstructions = @[videoCompositionLayerInstruction];
videoComposition.instructions = @[videoCompositionInstruction];
CALayer *parentLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0.0f, 0.0f, videoComposition.renderSize.width, videoComposition.renderSize.height);
CALayer *videoLayer = [CALayer layer];
videoLayer.frame = CGRectMake(0.0f, 0.0f, videoComposition.renderSize.width, videoComposition.renderSize.height);
[parentLayer addSublayer:videoLayer];
CATextLayer *textLayer = [CATextLayer layer];
textLayer.font = (__bridge CFTypeRef)([UIFont fontWithName:@"Helvetica Neue" size:45.0f]);
textLayer.fontSize = 45.0f;
textLayer.foregroundColor = [UIColor colorWithRed:1.0f green:1.0f blue:1.0f alpha:1.0f].CGColor;
textLayer.shadowColor = [UIColor colorWithRed:0.0f green:0.0f blue:0.0f alpha:1.0f].CGColor;
textLayer.shadowOffset = CGSizeMake(0.0f, 0.0f);
textLayer.shadowOpacity = 1.0f;
textLayer.shadowRadius = 4.0f;
textLayer.alignmentMode = kCAAlignmentCenter;
textLayer.truncationMode = kCATruncationNone;
CAKeyframeAnimation *keyFrameAnimation = [CAKeyframeAnimation animationWithKeyPath:@"string"];
// Step 8: Set the animation values to the object.
keyFrameAnimation.calculationMode = kCAAnimationLinear;
keyFrameAnimation.values = @[@"12:00:00", @"12:00:01", @"12:00:02", @"12:00:03",
@"12:00:04", @"12:00:05", @"12:00:06", @"12:00:07",
@"12:00:08", @"12:00:09"];
keyFrameAnimation.keyTimes = @[[NSNumber numberWithFloat:0.0f], [NSNumber numberWithFloat:0.1f],
[NSNumber numberWithFloat:0.2f], [NSNumber numberWithFloat:0.3f],
[NSNumber numberWithFloat:0.4f], [NSNumber numberWithFloat:0.5f],
[NSNumber numberWithFloat:0.6f], [NSNumber numberWithFloat:0.7f],
[NSNumber numberWithFloat:0.8f], [NSNumber numberWithFloat:0.9f]];
keyFrameAnimation.beginTime = AVCoreAnimationBeginTimeAtZero;
keyFrameAnimation.duration = CMTimeGetSeconds(composition.duration);
keyFrameAnimation.removedOnCompletion = YES;
[textLayer addAnimation:keyFrameAnimation forKey:@"string"];
textLayer.frame = CGRectMake(0.0f, 20.0f, videoComposition.renderSize.width, 55.0f);
[parentLayer addSublayer:textLayer];
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:composition presetName:AVAssetExportPreset1920x1080];
exportSession.videoComposition = videoComposition;
exportSession.audioMix = audioMix;
exportSession.outputFileType = [exportSession.supportedFileTypes objectAtIndex:0];
exportSession.outputURL = mySaveUrl;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
self.delegate = nil;
switch (exportSession.status) {
case AVAssetExportSessionStatusCancelled:
NSLog(@"Export session cancelled.");
break;
case AVAssetExportSessionStatusCompleted:
NSLog(@"Export session completed.");
break;
case AVAssetExportSessionStatusFailed:
NSLog(@"Export session failed.");
break;
case AVAssetExportSessionStatusUnknown:
NSLog(@"Export session unknown status.");
break;
case AVAssetExportSessionStatusWaiting:
NSLog(@"Export session waiting.");
break;
default:
break;
}
NSError *error = exportSession.error;
if (error != nil) {
NSLog(@"An error occurred while exporting. Error: %@: %@", error.localizedDescription, error.localizedFailureReason);
}
}];
I have been in contact with Apple support, and they told me that it's a bug in their SDK. They are trying to fix the issue.
I also opened an incident in the Apple bug reporter. If you are an Apple developer, I recommend you open a new one to put more pressure on this issue.
https://bugreport.apple.com/logon
Best regards.
Finally, I found a solution with CATextLayer and animation. It's probably not the best solution, but at least it works.
To fix the issue, render each NSString into a UIImage (or CALayer), put all those images (CGImages) into an NSMutableArray, load that array into a CAKeyframeAnimation, and animate the layer's contents property.
-(UIImage *)imageFromText:(NSString *)text
{
UIGraphicsBeginImageContextWithOptions(sizeImageFromText,NO,0.0);//Better resolution (antialiasing)
//UIGraphicsBeginImageContext(sizeImageFromText); // iOS is < 4.0
[text drawInRect:aRectangleImageFromText withAttributes:attributesImageFromText];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
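The helper above references sizeImageFromText, aRectangleImageFromText and attributesImageFromText without defining them; a plausible setup (my assumption, not the author's actual values) might be:
// Hypothetical definitions for the undeclared variables used in imageFromText:
CGSize sizeImageFromText = CGSizeMake(400.0f, 60.0f);
CGRect aRectangleImageFromText = CGRectMake(0.0f, 0.0f, 400.0f, 60.0f);
NSDictionary *attributesImageFromText = @{ NSFontAttributeName : [UIFont fontWithName:@"Helvetica Neue" size:45.0f], NSForegroundColorAttributeName : [UIColor whiteColor] };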
Add the images to an NSMutableArray:
[NSMutableArrayAux addObject:(__bridge id)[self imageFromText:@"Hello"].CGImage];
Set up a CAKeyframeAnimation to modify the layer's contents (i.e. swap the image):
CAKeyframeAnimation *overlay = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
overlay.values = [[NSArray alloc] initWithArray:NSMutableArrayAux];
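To tie that animation to the export timeline, the remaining properties mirror the timing setup from the question (a sketch; imageOverlayLayer stands in for whatever CALayer you add to parentLayer in place of the CATextLayer):
overlay.beginTime = AVCoreAnimationBeginTimeAtZero;       // required inside AVVideoCompositionCoreAnimationTool
overlay.duration = CMTimeGetSeconds(composition.duration);
overlay.removedOnCompletion = NO;
[imageOverlayLayer addAnimation:overlay forKey:@"contents"];   // imageOverlayLayer is hypothetical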
It works really well, but it takes 1 or 2 seconds to build an array of 5,000 or 6,000 images, and with 15,000/16,000 images the export crashes with an overflow error. This is another bug in the framework.
As you know, this is an issue in the framework, and this is at least a working solution until Apple fixes the CATextLayer animation issue; I have passed this solution on to Apple as well.

AVAssetWriterInputPixelBufferAdaptor and CMTime

I'm writing some frames to video with AVAssetWriterInputPixelBufferAdaptor, and the behavior w.r.t. time isn't what I'd expect.
If I write just one frame:
[videoWriter startSessionAtSourceTime:kCMTimeZero];
[adaptor appendPixelBuffer:pxBuffer withPresentationTime:kCMTimeZero];
this gets me a video of length zero, which is what I expect.
But if I go on to add a second frame:
// 3000/600 = 5 sec, right?
CMTime nextFrame = CMTimeMake(3000, 600);
[adaptor appendPixelBuffer:pxBuffer withPresentationTime:nextFrame];
I get ten seconds of video, where I'm expecting five.
What's going on here? Does withPresentationTime somehow set both the start of the frame and the duration?
Note that I'm not calling endSessionAtSourceTime, just finishWriting.
Try looking at this example and reverse-engineering it to add one frame 5 seconds later...
Here is the sample code link: git@github.com:RudyAramayo/AVAssetWriterInputPixelBufferAdaptorSample.git
Here is the code you need:
- (void) testCompressionSession
{
CGSize size = CGSizeMake(480, 320);
NSString *betaCompressionDirectory = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
NSError *error = nil;
unlink([betaCompressionDirectory UTF8String]);
//----initialize compression engine
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:betaCompressionDirectory]
fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(videoWriter);
if(error)
NSLog(#"error = %#", [error localizedDescription]);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey, nil];
AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
if ([videoWriter canAddInput:writerInput])
NSLog(#"I can add this input");
else
NSLog(#"i can't add this input");
[videoWriter addInput:writerInput];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
//---
// insert demo debugging code to write the same image repeated as a movie
CGImageRef theImage = [[UIImage imageNamed:@"Lotus.png"] CGImage];
dispatch_queue_t dispatchQueue = dispatch_queue_create("mediaInputQueue", NULL);
int __block frame = 0;
[writerInput requestMediaDataWhenReadyOnQueue:dispatchQueue usingBlock:^{
while ([writerInput isReadyForMoreMediaData])
{
if(++frame >= 120)
{
[writerInput markAsFinished];
[videoWriter finishWriting];
[videoWriter release];
break;
}
CVPixelBufferRef buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage:theImage size:size];
if (buffer)
{
if(![adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(frame, 20)])
NSLog(#"FAIL");
else
NSLog(#"Success:%d", frame);
CFRelease(buffer);
}
}
}];
NSLog(#"outside for loop");
}
- (CVPixelBufferRef )pixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options, &pxbuffer);
// CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4*size.width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);
NSParameterAssert(context);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
Have you tried using this as your first call?
CMTime t = CMTimeMake(0, 600);
[videoWriter startSessionAtSourceTime:t];
[adaptor appendPixelBuffer:pxBuffer withPresentationTime:t];
According to the documentation of the -[AVAssetWriterInput appendSampleBuffer:] method:
For track types other than audio tracks, to determine the duration of all samples in the output file other than the very last sample that's appended, the difference between the sample buffer's output DTS and the following sample buffer's output DTS will be used. The duration of the last sample is determined as follows:
If a marker sample buffer with kCMSampleBufferAttachmentKey_EndsPreviousSampleDuration is appended following the last media-bearing sample, the difference between the output DTS of the marker sample buffer and the output DTS of the last media-bearing sample will be used.
If the marker sample buffer is not provided and if the output duration of the last media-bearing sample is valid, it will be used.
If the output duration of the last media-bearing sample is not valid, the duration of the second-to-last sample will be used.
So basically you are in situation #3:
The first sample's duration is 5 s, based on the PTS difference between the first and second samples.
The second sample's duration is 5 s too, because the duration of the second-to-last sample was used.
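Given that rule, if the goal is a five-second movie from just those two frames, one option (a sketch of mine, not something the answer above states) is to close the writer's session explicitly so the last sample's duration is not inherited from the previous one:
// Sketch: explicitly end the timeline at 5 s instead of letting the writer reuse
// the previous sample's duration for the last frame.
[videoWriter startSessionAtSourceTime:kCMTimeZero];
[adaptor appendPixelBuffer:pxBuffer withPresentationTime:kCMTimeZero];
[adaptor appendPixelBuffer:pxBuffer withPresentationTime:CMTimeMake(3000, 600)];
[videoWriter endSessionAtSourceTime:CMTimeMake(3000, 600)];   // movie should end at 5 s
[writerInput markAsFinished];
[videoWriter finishWriting];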