I have a server application that serves information about the videos stored on the server. One of the requests is URL:PORT/video/:id/:time, which I parse to get the video file, prepare the time, and call a method to generate the thumbnail. For the first 5 minutes it works really fast (an image is generated in under 200 ms), then image generation suddenly starts taking as long as 10 seconds...
Do you have any idea why? Code used:
-(NSImage *)thumbnailAt: (CMTime) time
withSize: (NSSize) size
error: (NSError **) error {
@autoreleasepool {
if ( self.assetChanged ) {
self.generate = [[AVAssetImageGenerator alloc] initWithAsset:_asset];
self.generate.appliesPreferredTrackTransform = TRUE;
self.assetChanged = NO;
}
self.generate.maximumSize = NSSizeToCGSize(size);
CGImageRef imageReference = [self.generate copyCGImageAtTime:time actualTime:NULL error:error];
if ( imageReference != nil ) {
NSImage* ret = [[NSImage alloc] initWithCGImage:imageReference size:size];
CGImageRelease(imageReference);
return ret;
}
return nil;
}
}
Any idea what I am doing wrong, or any suggestion on how to do it differently (e.g. using AVAssetReader)?
In the end I solved it with a different approach: I generated the images asynchronously.
-(NSImage *)createAsyncThumbnailAtTime:(CMTime)time withSize:(NSSize)size {
if ( self.updated ) {
[_generator cancelAllCGImageGeneration]; // Stop: the previous request did not finish in time.
_generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:_asset];
_generator.maximumSize = NSSizeToCGSize(size);
}
NSMutableArray * times = [NSMutableArray array];
[times addObject:[NSValue valueWithCMTime:time]];
__block NSImage * image;
__block BOOL finished = NO;
[_generator generateCGImagesAsynchronouslyForTimes:times completionHandler:^(CMTime requestedTime, CGImageRef imageRef, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
image = nil;
if ( result == AVAssetImageGeneratorCancelled ) {
image = nil;
NSLog(#"CANCELLED %#", error);
finished = YES;
}
else
if ( result == AVAssetImageGeneratorFailed ) {
image = nil;
NSLog(#"FAILDED %#", error);
finished = YES;
}
else /* result == AVAssetImageGeneratorSucceeded */
{
image = [[NSImage alloc] initWithCGImage:imageRef size:size];
finished = YES;
}
}];
while ( !finished ) {
[[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate distantFuture]];
}
return image;
}
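As an aside, spinning the run loop like this can block indefinitely, because the generator's completion handler is not guaranteed to run on the thread whose run loop you are spinning, and image/finished are written from another queue without synchronization. Below is a minimal sketch of the same idea using a semaphore instead; it reuses the _generator ivar from above and is only an illustration, not the exact implementation used here.
- (NSImage *)semaphoreThumbnailAtTime:(CMTime)time withSize:(NSSize)size
{
    dispatch_semaphore_t done = dispatch_semaphore_create(0);
    __block NSImage *image = nil;
    NSArray *times = @[[NSValue valueWithCMTime:time]];
    [_generator generateCGImagesAsynchronouslyForTimes:times completionHandler:^(CMTime requestedTime, CGImageRef imageRef, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
        if (result == AVAssetImageGeneratorSucceeded && imageRef != NULL) {
            image = [[NSImage alloc] initWithCGImage:imageRef size:size];
        }
        dispatch_semaphore_signal(done); // wake the waiting thread whether or not we got an image
    }];
    // Block the calling (non-main) thread until the generator calls back.
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    return image;
}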
My requirement in the app is to capture an image without showing a UIImagePickerController preview, so I have used the following code to capture an image without presenting one.
- (void)clickImage
{
AVCaptureDevice *rearCamera = [self checkIfRearCameraAvailable];
if (rearCamera != nil)
{
photoSession = [[AVCaptureSession alloc] init];
NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:rearCamera error:&error];
if (!error && [photoSession canAddInput:input])
{
[photoSession addInput:input];
AVCapturePhotoOutput *output = [[AVCapturePhotoOutput alloc] init];
if ([photoSession canAddOutput:output])
{
[photoSession addOutput:output];
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in output.connections)
{
for (AVCaptureInputPort *port in [connection inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo])
{
videoConnection = connection;
break;
}
}
if (videoConnection)
{
break;
}
}
if (videoConnection)
{
[photoSession startRunning];
[output capturePhotoWithSettings:[AVCapturePhotoSettings photoSettings] delegate:self];
}
}
}
}
}
- (AVCaptureDevice *)checkIfRearCameraAvailable
{
AVCaptureDevice *rearCamera;
AVCaptureDeviceDiscoverySession *captureDeviceDiscoverySession =
[AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
mediaType:AVMediaTypeVideo
position:AVCaptureDevicePositionBack];
NSArray *allCameras = [captureDeviceDiscoverySession devices];
for (int i = 0; i < allCameras.count; i++)
{
AVCaptureDevice *camera = [allCameras objectAtIndex:i];
if (camera.position == AVCaptureDevicePositionBack)
{
rearCamera = camera;
}
}
return rearCamera;
}
- (void)captureOutput:(AVCapturePhotoOutput *)output didFinishProcessingPhotoSampleBuffer:(CMSampleBufferRef)photoSampleBuffer previewPhotoSampleBuffer:(CMSampleBufferRef)previewPhotoSampleBuffer resolvedSettings:(AVCaptureResolvedPhotoSettings *)resolvedSettings bracketSettings:(AVCaptureBracketedStillImageSettings *)bracketSettings error:(NSError *)error
{
if (error)
{
NSLog(#"error : %#", error.localizedDescription);
}
if (photoSampleBuffer)
{
NSData *data = [AVCapturePhotoOutput JPEGPhotoDataRepresentationForJPEGSampleBuffer:photoSampleBuffer
previewPhotoSampleBuffer:previewPhotoSampleBuffer];
UIImage *image = [UIImage imageWithData:data];
_imgView.image = image;
}
}
I have used the above code to capture an image, but the output image looks like it was taken in night mode, or like a negative image.
Would you please review the code and point out what I am doing wrong?
I found the following sample code on Apple's developer site:
https://developer.apple.com/library/content/samplecode/AVCam/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010112
The sample code above helped me get a proper image. I have added this answer to help others as well.
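For anyone hitting the same dark or washed-out output: a common cause is triggering the capture immediately after startRunning, before the camera has finished adjusting exposure and white balance. A minimal sketch of one workaround under that assumption, reusing photoSession and output from the question's code (the 0.5-second delay is arbitrary; the AVCam sample instead waits until the session is fully up and running):
[photoSession startRunning];
// Give the camera a moment to settle before capturing; the dispatched block retains output.
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    [output capturePhotoWithSettings:[AVCapturePhotoSettings photoSettings] delegate:self];
});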
I am uploading an image and some strings to the server, and it is working fine. Now I want to implement a progress bar: if I am sending 5 images, I want to show the progress and the count of images uploaded successfully, like 2/5. Please can anyone help me do this?
The following method uploads the image, dictionary, and string:
-(void)uploadImage
{
NSString *userCategory = self.UserCategory;
NSDictionary *dict = [self.arrayWithImages objectAtIndex:self.currentIndex];
NSString *notes = [dict objectForKey:@"string"];
UIImage *sample = [dict objectForKey:@"image"];
NSData *sampleData = UIImageJPEGRepresentation(sample, 1.0);
NSMutableDictionary *FinalDict = [self.dictMetaData mutableCopy];
[FinalDict setObject:userCategory forKey:@"user_category"];
if (notes.length > 0) {
[FinalDict setObject:notes forKey:@"note"];
}
for (int i = 0; i<self.arrayWithImages.count; i++) {
[ServerUtility uploadImageWithAllDetails:FinalDict noteResource:sampleData andCompletion:^(NSError *error,id data)
{
if (!error) {
NSString *strResType = [data objectForKey:@"res_type"];
if ([strResType.lowercaseString isEqualToString:@"success"]) {
NSLog(@"Upload Successfully");
self.currentIndex++;
}
else if ([strResType.lowercaseString isEqualToString:@"error"])
{
NSString *strMsg = [data objectForKey:@"msg"];
[self.view makeToast:strMsg duration:1.0 position:CSToastPositionCenter];
}
}
else{
[self.view makeToast:error.localizedDescription duration:1.0 position:CSToastPositionCenter];
}
}];
}
}
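This is not from the original code, but one way to get the 2/5-style counter is to upload the images one at a time and update the UI from each completion handler, instead of starting all uploads in a for loop at once. A rough sketch under that assumption, reusing ServerUtility, self.arrayWithImages, self.dictMetaData, and self.currentIndex from above; progressLabel and progressView are hypothetical outlets:
- (void)uploadNextImage
{
    NSUInteger total = self.arrayWithImages.count;
    if (self.currentIndex >= total) {
        return; // all images uploaded
    }
    NSDictionary *dict = self.arrayWithImages[self.currentIndex];
    NSData *sampleData = UIImageJPEGRepresentation(dict[@"image"], 1.0);
    [ServerUtility uploadImageWithAllDetails:self.dictMetaData noteResource:sampleData andCompletion:^(NSError *error, id data) {
        dispatch_async(dispatch_get_main_queue(), ^{
            if (!error) {
                self.currentIndex++;
                self.progressLabel.text = [NSString stringWithFormat:@"%lu / %lu", (unsigned long)self.currentIndex, (unsigned long)total];
                self.progressView.progress = (float)self.currentIndex / (float)total;
                [self uploadNextImage]; // kick off the next upload only after this one finishes
            }
        });
    }];
}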
I am trying to perform screen recording using AVAssetWriter, which also accepts audio input. However, I am stuck on an error where the AVAssetWriter sometimes becomes AVAssetWriterStatusFailed after a few calls to appendSampleBuffer: (inside encodeAudioFrame:):
Failed: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo=0x32b570 {NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x70d710 "The operation couldn’t be completed. (OSStatus error -12737.)", NSLocalizedFailureReason=An unknown error occurred (-12737)}
Several observations:
Once it enters this state, subsequent recording attempts will also return AVAssetWriterStatusFailed, even if I use a different recorder object.
The error does not appear when I comment out the audio recording blocks.
But the error still appears when I comment out the video recording blocks, and without modifying any incoming CMSampleBufferRef.
Any assistance will be appreciated.
Below is the code I am using, with several parts omitted for brevity. I am currently using OSX 10.9 SDK, with ARC turned off.
- (BOOL) startRecording
{
if (!isRecording)
{
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
[self startCapture];
[self setUpWriter];
startedAt = [NSDate date];
isRecording = YES;
while (isRecording)
{
NSAutoreleasePool* pool = [NSAutoreleasePool new];
NSTimeInterval offset = [[NSDate date] timeIntervalSinceDate:startedAt];
CMTime time = CMTimeMakeWithSeconds(offset - pauseDelta, 1000);
[self encodeFrameAtTime:time];
[pool drain];
usleep(50000); // sleep() takes whole seconds, so sleep(0.05f) would not pause at all; usleep gives the intended 50 ms
}
[self endCapture];
[self completeRecordingSession];
});
}
return YES;
}
- (void) stopRecording {
isRecording = NO;
}
-(void) startCapture
{
AVCaptureDevice* microphone = x //Device selection code omitted
videoCaptureSession = [[AVCaptureSession alloc] init];
videoCaptureSession.sessionPreset = AVCaptureSessionPresetHigh;
//------------------------------------------
NSError* err = nil;
audioInput = [AVCaptureDeviceInput deviceInputWithDevice:microphone error:&err];
[videoCaptureSession addInput:audioInput];
//------------------------------------------
audioOutput = [[AVCaptureAudioDataOutput alloc] init];
queue = dispatch_queue_create("videoQueue", NULL);
[audioOutput setSampleBufferDelegate:self queue:queue];
[videoCaptureSession addOutput:audioOutput];
audioDelta = -1;
[videoCaptureSession startRunning];
}
-(void) endCapture
{
[videoCaptureSession stopRunning];
[videoCaptureSession removeInput:audioInput];
[videoCaptureSession removeOutput:audioOutput];
[audioOutput release];
audioOutput = nil;
audioInput = nil;
[videoCaptureSession release];
videoCaptureSession = nil;
dispatch_release(queue);
}
-(BOOL) setUpWriter
{
//delete the file.
{
NSFileManager* fileManager = [NSFileManager defaultManager];
if ([fileManager fileExistsAtPath:self.moviePath]) {
NSError* error;
if ([fileManager removeItemAtPath:self.moviePath error:&error] == NO) {
NSLog(#"Could not delete old recording file at path: %#", self.moviePath);
}
}
}
mCaptureRect = NSRectToCGRect([screen frame]);
int FWidth = mCaptureRect.size.width;
int FHeight = mCaptureRect.size.height;
int bitRate = FWidth * FHeight * 8;
videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:self.moviePath] fileType:AVFileTypeMPEG4 error:nil];
NSParameterAssert(videoWriter);
//Configure video
NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:bitRate], AVVideoAverageBitRateKey,
nil];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
codecSettings,AVVideoCompressionPropertiesKey,
[NSNumber numberWithInt:FWidth], AVVideoWidthKey,
[NSNumber numberWithInt:FHeight], AVVideoHeightKey,
nil];
videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
NSParameterAssert(videoWriterInput);
videoWriterInput.expectsMediaDataInRealTime = YES;
NSDictionary* bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey,
[NSNumber numberWithInt:FWidth], kCVPixelBufferWidthKey,
[NSNumber numberWithInt:FHeight], kCVPixelBufferHeightKey,
nil];
avAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput sourcePixelBufferAttributes:bufferAttributes];
//*
//Configure Audio
AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
NSDictionary* audioSettings = [ NSDictionary dictionaryWithObjectsAndKeys:
[ NSNumber numberWithInt: kAudioFormatMPEG4AAC], AVFormatIDKey,
[ NSNumber numberWithFloat: 44100.0 ], AVSampleRateKey,
[ NSNumber numberWithInt: 1 ], AVNumberOfChannelsKey,
[ NSData dataWithBytes: &acl length: sizeof( acl ) ], AVChannelLayoutKey,
[NSNumber numberWithInt:64000], AVEncoderBitRateKey,
nil ];
audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioSettings];
audioWriterInput.expectsMediaDataInRealTime = YES;
//add input
[videoWriter addInput:videoWriterInput];
[videoWriter addInput:audioWriterInput];
return YES;
}
- (void) cleanupWriter {
[videoWriter release];
videoWriter = nil;
avAdaptor = nil;
videoWriterInput = nil;
startedAt = nil;
audioWriterInput = nil;
}
- (void) encodeFrameAtTime:(CMTime)timestamp
{
if(!isRecording) return;
if(videoWriter == nil) return;
if(videoWriter.status == AVAssetWriterStatusFailed)
{
return;
}
if(videoWriter.status != AVAssetWriterStatusWriting)
{
if(videoWriter.status != AVAssetWriterStatusUnknown)
return;
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:timestamp];
startTime = CMTimeGetSeconds(timestamp);
}
timestamp = CMTimeMakeWithSeconds(startTime + CMTimeGetSeconds(timestamp), 1000);
[self writeVideoFrameAtTime:timestamp];
}
-(void) writeVideoFrameAtTime:(CMTime)time {
if (![videoWriterInput isReadyForMoreMediaData])
{
}
else
{
/*
CVPixelBufferRef manipulation omitted...
*/
{
BOOL success = [avAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
if(videoWriter.status == AVAssetWriterStatusFailed) NSLog(@"Failed: %@", videoWriter.error);
if (!success) NSLog(@"Warning: Unable to write buffer to video");
}
CVPixelBufferRelease(pixelBuffer);
CGImageRelease(cgImage);
}
}
-(void) encodeAudioFrame:(CMSampleBufferRef)buffer
{
if(!isRecording) return;
CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(buffer);
if(videoWriter.status != AVAssetWriterStatusWriting)
{
//Wait for video thread to start the writer
return;
}
if(![audioWriterInput isReadyForMoreMediaData])
return;
//*
NSTimeInterval offset = [[NSDate date] timeIntervalSinceDate:startedAt];
if(audioDelta == -1)
{
audioDelta = offset - CMTimeGetSeconds(timestamp);
}
//Adjusts CMSampleBufferRef's timestamp to match the video stream's zero-based timestamp
CMItemCount count;
CMTime newTimestamp = CMTimeMakeWithSeconds(CMTimeGetSeconds(timestamp) + audioDelta - pauseDelta, 1000);
CMSampleBufferGetSampleTimingInfoArray(buffer, 0, nil, &count);
CMSampleTimingInfo* pInfo = malloc(sizeof(CMSampleTimingInfo) * count);
CMSampleBufferGetSampleTimingInfoArray(buffer, count, pInfo, &count);
for(CMItemCount i = 0; i < count; i++)
{
pInfo[i].decodeTimeStamp = newTimestamp;
pInfo[i].presentationTimeStamp = newTimestamp;
}
CMSampleBufferRef newBuffer;
CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault, buffer, count, pInfo, &newBuffer);
free(pInfo);
timestamp = CMSampleBufferGetPresentationTimeStamp(newBuffer);
BOOL res = [audioWriterInput appendSampleBuffer:newBuffer];
if (!res) NSLog(@"Warning: unable to append audio sample buffer");
CFRelease(newBuffer); // release the retimed copy created by CMSampleBufferCreateCopyWithNewTiming
}
- (void) completeRecordingSession {
@autoreleasepool {
if(videoWriter.status != AVAssetWriterStatusWriting)
{
while (videoWriter.status == AVAssetWriterStatusUnknown)
{
[NSThread sleepForTimeInterval:0.5f];
}
int status = videoWriter.status;
while (status == AVAssetWriterStatusUnknown)
{
NSLog(#"Waiting...");
[NSThread sleepForTimeInterval:0.5f];
status = videoWriter.status;
}
}
@synchronized(self)
{
[videoWriter finishWriting];
[self cleanupWriter];
}
}
}
-(void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
if(!CMSampleBufferDataIsReady(sampleBuffer))
return;
@autoreleasepool {
if(captureOutput == audioOutput)
{
if(isRecording && !isPaused)
{
[self encodeAudioFrame:sampleBuffer];
}
}
}
}
I had exactly the same problem with my Swift code. It turned out that my machine had simply run out of memory, so double-check that you have enough free RAM.
I previously had this code to capture a single image from a Mac's iSight camera using QTKit:
- (NSError*)takePicture
{
BOOL success;
NSError* error;
captureSession = [QTCaptureSession new];
QTCaptureDevice* device = [QTCaptureDevice defaultInputDeviceWithMediaType: QTMediaTypeVideo];
success = [device open: &error];
if (!success) { return error; }
QTCaptureDeviceInput* captureDeviceInput = [[QTCaptureDeviceInput alloc] initWithDevice: device];
success = [captureSession addInput: captureDeviceInput error: &error];
if (!success) { return error; }
QTCaptureDecompressedVideoOutput* captureVideoOutput = [QTCaptureDecompressedVideoOutput new];
[captureVideoOutput setDelegate: self];
success = [captureSession addOutput: captureVideoOutput error: &error];
if (!success) { return error; }
[captureSession startRunning];
return nil;
}
- (void)captureOutput: (QTCaptureOutput*)captureOutput
didOutputVideoFrame: (CVImageBufferRef)imageBuffer
withSampleBuffer: (QTSampleBuffer*)sampleBuffer
fromConnection: (QTCaptureConnection*)connection
{
CVBufferRetain(imageBuffer);
if (imageBuffer) {
[captureSession removeOutput: captureOutput];
[captureSession stopRunning];
NSCIImageRep* imageRep = [NSCIImageRep imageRepWithCIImage: [CIImage imageWithCVImageBuffer: imageBuffer]];
_result = [[NSImage alloc] initWithSize: [imageRep size]];
[_result addRepresentation: imageRep];
CVBufferRelease(imageBuffer);
_done = YES;
}
}
However, I found today that QTKit has been deprecated and so we must now use AVFoundation.
Can anyone help me convert this code to its AVFoundation equivalent? It seems as though many methods have the same name, but at the same time, a lot is different and I'm at a complete loss here... Any help?
Alright, I found the solution!! Here it is:
- (void)takePicture
{
NSError* error;
AVCaptureDevice* device = [AVCaptureDevice defaultDeviceWithMediaType: AVMediaTypeVideo];
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice: device error: &error];
if (!input) {
_error = error;
_done = YES;
return;
}
AVCaptureStillImageOutput* output = [AVCaptureStillImageOutput new];
[output setOutputSettings: @{(id)kCVPixelBufferPixelFormatTypeKey: @(k32BGRAPixelFormat)}];
captureSession = [AVCaptureSession new];
captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
[captureSession addInput: input];
[captureSession addOutput: output];
[captureSession startRunning];
AVCaptureConnection* connection = [output connectionWithMediaType: AVMediaTypeVideo];
[output captureStillImageAsynchronouslyFromConnection: connection completionHandler: ^(CMSampleBufferRef sampleBuffer, NSError* error) {
if (error) {
_error = error;
_result = nil;
}
else {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if (imageBuffer) {
CVBufferRetain(imageBuffer);
NSCIImageRep* imageRep = [NSCIImageRep imageRepWithCIImage: [CIImage imageWithCVImageBuffer: imageBuffer]];
_result = [[NSImage alloc] initWithSize: [imageRep size]];
[_result addRepresentation: imageRep];
CVBufferRelease(imageBuffer);
}
}
_done = YES;
}];
}
I hope this helps anyone who runs into the same problem.
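Note that AVCaptureStillImageOutput has itself been deprecated on newer SDKs in favour of AVCapturePhotoOutput. The two fragments below are a rough sketch of the equivalent capture under that assumption, keeping the same _result/_done ivars as above; they are an illustration, not a drop-in replacement.
AVCapturePhotoOutput* photoOutput = [AVCapturePhotoOutput new];
[captureSession addOutput: photoOutput];
[captureSession startRunning];
[photoOutput capturePhotoWithSettings: [AVCapturePhotoSettings photoSettings] delegate: self];
// Delegate callback (AVCapturePhotoCaptureDelegate):
- (void)captureOutput: (AVCapturePhotoOutput*)output didFinishProcessingPhoto: (AVCapturePhoto*)photo error: (NSError*)error
{
    if (error) {
        _error = error;
    } else {
        _result = [[NSImage alloc] initWithData: [photo fileDataRepresentation]];
    }
    _done = YES;
}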
I'm writing an MP4 video file with an AVAssetWriter, using an AVAssetWriterInputPixelBufferAdaptor.
The source is a video from a UIImagePickerController, either freshly captured from the camera or taken from the asset library. Quality right now is UIImagePickerControllerQualityTypeMedium.
Sometimes the writer fails. Its status is AVAssetWriterStatusFailed, and the AVAssetWriter object's error property is:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed"
UserInfo=0xf5d8990 {NSLocalizedFailureReason=An unknown error occurred (-536870210),
NSUnderlyingError=0x4dd8e0 "The operation couldn’t be completed. (OSStatus error -536870210.)",
NSLocalizedDescription=The operation could not be completed
The error occurs in approximately 20% of runs. It seems to fail more frequently on iPhone 4/4S than on iPhone 5.
It also occurs more frequently if the source video quality is higher.
Using UIImagePickerControllerQualityTypeLow the error doesn't happen so often.
Using UIImagePickerControllerQualityTypeHigh, the error happens a little more frequently.
I have also noticed something else:
It seems to come in waves. When it fails, the following runs often fail too, even if I delete the app and reinstall it. That leaves me wondering whether my program leaks some memory, and whether that memory stays alive even after the app is killed (is that even possible?).
Here is the code I use to render my video:
- (void)writeVideo
{
offlineRenderingInProgress = YES;
/* --- Writer Setup --- */
[locationQueue cancelAllOperations];
[self stopWithoutRewinding];
NSError *writerError = nil;
BOOL success;
success = [[NSFileManager defaultManager] removeItemAtURL:self.outputURL error:nil];
// DLog(@"Url: %@, success: %i, error: %@", self.outputURL, success, fileError);
writer = [AVAssetWriter assetWriterWithURL:self.outputURL fileType:(NSString *)kUTTypeQuickTimeMovie error:&writerError];
//writer.shouldOptimizeForNetworkUse = NO;
if (writerError) {
DLog(#"Writer error: %#", writerError);
return;
}
float bitsPerPixel;
CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions((__bridge CMVideoFormatDescriptionRef)([readerVideoOutput.videoTracks[0] formatDescriptions][0]));
int numPixels = dimensions.width * dimensions.height;
int bitsPerSecond;
// Assume that lower-than-SD resolutions are intended for streaming, and use a lower bitrate
if ( numPixels < (640 * 480) )
bitsPerPixel = 4.05; // This bitrate matches the quality produced by AVCaptureSessionPresetMedium or Low.
else
bitsPerPixel = 11.4; // This bitrate matches the quality produced by AVCaptureSessionPresetHigh.
bitsPerSecond = numPixels * bitsPerPixel;
NSDictionary *videoCompressionSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithFloat:videoSize.width], AVVideoWidthKey,
[NSNumber numberWithInteger:videoSize.height], AVVideoHeightKey,
[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInteger:30], AVVideoMaxKeyFrameIntervalKey,
nil], AVVideoCompressionPropertiesKey,
nil];
writerVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoCompressionSettings];
writerVideoInput.transform = movie.preferredTransform;
writerVideoInput.expectsMediaDataInRealTime = YES;
[writer addInput:writerVideoInput];
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];
writerPixelAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerVideoInput
sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
BOOL couldStart = [writer startWriting];
if (!couldStart) {
DLog(#"Could not start AVAssetWriter!");
abort = YES;
[locationQueue cancelAllOperations];
return;
}
[self configureFilters];
CIContext *offlineRenderContext = [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer : @NO}];
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
if (!self.canEdit) {
[self createVideoReaderWithAsset:movie timeRange:CMTimeRangeFromTimeToTime(kCMTimeZero, kCMTimePositiveInfinity) forOfflineRender:YES];
} else {
[self createVideoReaderWithAsset:movie timeRange:CMTimeRangeWithNOVideoRangeInDuration(self.thumbnailEditView.range, movie.duration) forOfflineRender:YES];
}
CMTime startOffset = reader.timeRange.start;
DLog(#"startOffset: %llu", startOffset.value);
[self.thumbnailEditView removeFromSuperview];
// self.thumbnailEditView = nil;
[glLayer removeFromSuperlayer];
glLayer = nil;
[playerView removeFromSuperview];
playerView = nil;
glContext = nil;
[writerVideoInput requestMediaDataWhenReadyOnQueue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0) usingBlock:^{
@try {
BOOL didWriteSomething = NO;
DLog(#"Preparing to write...");
while ([writerVideoInput isReadyForMoreMediaData]) {
if (abort) {
NSLog(#"Abort == YES");
[locationQueue cancelAllOperations];
[writerVideoInput markAsFinished];
videoConvertCompletionBlock(NO, writer.error.localizedDescription);
}
if (writer.status == AVAssetWriterStatusFailed) {
DLog(#"Writer.status: AVAssetWriterStatusFailed, error: %#", writer.error);
[[NSUserDefaults standardUserDefaults] setObject:[NSNumber numberWithInt:1] forKey:#"QualityOverride"];
[[NSUserDefaults standardUserDefaults] synchronize];
abort = YES;
[locationQueue cancelAllOperations];
videoConvertCompletionBlock(NO, writer.error.localizedDescription);
return;
DLog(#"Source file exists: %i", [[NSFileManager defaultManager] fileExistsAtPath:movie.URL.relativePath]);
}
DLog(#"Writing started...");
CMSampleBufferRef buffer = nil;
if (reader.status != AVAssetReaderStatusUnknown) {
if (reader.status == AVAssetReaderStatusReading) {
buffer = [readerVideoOutput copyNextSampleBuffer];
if (didWriteSomething == NO) {
DLog(#"Copying sample buffers...");
}
}
if (!buffer) {
[writerVideoInput markAsFinished];
DLog(#"Finished...");
CGColorSpaceRelease(colorSpace);
[self offlineRenderingDidFinish];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
[writer finishWriting];
if (writer.error != nil) {
DLog(#"Error: %#", writer.error);
} else {
DLog(#"Succes!");
}
if (writer.status == AVAssetWriterStatusCompleted) {
videoConvertCompletionBlock(YES, nil);
}
else {
abort = YES;
videoConvertCompletionBlock(NO, writer.error.localizedDescription);
}
});
return;
}
didWriteSomething = YES;
}
else {
DLog(#"Still waiting...");
//Reader just needs a moment to get ready...
continue;
}
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(buffer);
if (pixelBuffer == NULL) {
DLog(#"Pixelbuffer == NULL");
continue;
}
//DLog(#"Sample call back! Pixelbuffer: %lu", CVPixelBufferGetHeight(pixelBuffer));
//NSDictionary *options = [NSDictionary dictionaryWithObject:(__bridge id)CGColorSpaceCreateDeviceRGB() forKey:kCIImageColorSpace];
CIImage *ciimage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:nil];
CIImage *outputImage = [self filteredImageWithImage:ciimage];
CVPixelBufferRef outPixelBuffer = NULL;
CVReturn status;
CFDictionaryRef empty; // empty value for attr value.
CFMutableDictionaryRef attrs;
empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
NULL,
NULL,
0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
1,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs,
kCVPixelBufferIOSurfacePropertiesKey,
empty);
CFDictionarySetValue(attrs,
kCVPixelBufferCGImageCompatibilityKey,
(__bridge const void *)([NSNumber numberWithBool:YES]));
CFDictionarySetValue(attrs,
kCVPixelBufferCGBitmapContextCompatibilityKey,
(__bridge const void *)([NSNumber numberWithBool:YES]));
status = CVPixelBufferCreate(kCFAllocatorDefault, ciimage.extent.size.width, ciimage.extent.size.height, kCVPixelFormatType_32BGRA, attrs, &outPixelBuffer);
//DLog(#"Output image size: %f, %f, pixelbuffer height: %lu", outputImage.extent.size.width, outputImage.extent.size.height, CVPixelBufferGetHeight(outPixelBuffer));
if (status != kCVReturnSuccess) {
DLog(#"Couldn't allocate output pixelBufferRef!");
continue;
}
[offlineRenderContext render:outputImage toCVPixelBuffer:outPixelBuffer bounds:outputImage.extent colorSpace:colorSpace];
CMTime currentSourceTime = CMSampleBufferGetPresentationTimeStamp(buffer);
CMTime currentTime = CMTimeSubtract(currentSourceTime, startOffset);
CMTime duration = reader.timeRange.duration;
if (CMTIME_IS_POSITIVE_INFINITY(duration)) {
duration = movie.duration;
}
CMTime durationConverted = CMTimeConvertScale(duration, currentTime.timescale, kCMTimeRoundingMethod_Default);
float durationFloat = (float)durationConverted.value;
float progress = ((float) currentTime.value) / durationFloat;
//DLog(#"duration : %f, progress: %f", durationFloat, progress);
[self updateOfflineRenderProgress:progress];
if (pixelBuffer != NULL && writerVideoInput.readyForMoreMediaData) {
[writerPixelAdaptor appendPixelBuffer:outPixelBuffer withPresentationTime:currentTime];
} else {
continue;
}
if (writer.status == AVAssetWriterStatusWriting) {
DLog(#"Writer.status: AVAssetWriterStatusWriting");
}
CFRelease(buffer);
CVPixelBufferRelease(outPixelBuffer);
}
}
#catch (NSException *exception) {
DLog(#"Catching exception: %#", exception);
}
}];
}
Ok, I think I solved it myself. The bad guy was this line:
[writerVideoInput requestMediaDataWhenReadyOnQueue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0) usingBlock:^{ ....
The global queue I was passing is a concurrent queue. This allows a new callback to be made before the previous one is finished. The asset writer is not designed to be written to from more than one thread at a time.
Creating and using a new serial queue seems to remedy the problem:
assetWriterQueue = dispatch_queue_create("AssetWriterQueue", DISPATCH_QUEUE_SERIAL);
[writerVideoInput requestMediaDataWhenReadyOnQueue:assetWriterQueue usingBlock:^{...
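For completeness, the pull model on that serial queue usually looks roughly like this; it is a sketch that reuses writerVideoInput, readerVideoOutput, and writer from the code above and ignores the Core Image filtering / pixel buffer adaptor step for brevity:
[writerVideoInput requestMediaDataWhenReadyOnQueue:assetWriterQueue usingBlock:^{
    while ([writerVideoInput isReadyForMoreMediaData]) {
        CMSampleBufferRef buffer = [readerVideoOutput copyNextSampleBuffer];
        if (!buffer) {
            [writerVideoInput markAsFinished]; // no more source data
            [writer finishWritingWithCompletionHandler:^{
                // Inspect writer.status / writer.error here.
            }];
            break;
        }
        [writerVideoInput appendSampleBuffer:buffer]; // all appends stay on one serial queue
        CFRelease(buffer);
    }
}];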