Capturing iSight image using AVFoundation on Mac - objective-c

I previously had this code to capture a single image from a Mac's iSight camera using QTKit:
- (NSError*)takePicture
{
BOOL success;
NSError* error;
captureSession = [QTCaptureSession new];
QTCaptureDevice* device = [QTCaptureDevice defaultInputDeviceWithMediaType: QTMediaTypeVideo];
success = [device open: &error];
if (!success) { return error; }
QTCaptureDeviceInput* captureDeviceInput = [[QTCaptureDeviceInput alloc] initWithDevice: device];
success = [captureSession addInput: captureDeviceInput error: &error];
if (!success) { return error; }
QTCaptureDecompressedVideoOutput* captureVideoOutput = [QTCaptureDecompressedVideoOutput new];
[captureVideoOutput setDelegate: self];
success = [captureSession addOutput: captureVideoOutput error: &error];
if (!success) { return error; }
[captureSession startRunning];
return nil;
}
- (void)captureOutput: (QTCaptureOutput*)captureOutput
didOutputVideoFrame: (CVImageBufferRef)imageBuffer
withSampleBuffer: (QTSampleBuffer*)sampleBuffer
fromConnection: (QTCaptureConnection*)connection
{
CVBufferRetain(imageBuffer);
if (imageBuffer) {
[captureSession removeOutput: captureOutput];
[captureSession stopRunning];
NSCIImageRep* imageRep = [NSCIImageRep imageRepWithCIImage: [CIImage imageWithCVImageBuffer: imageBuffer]];
_result = [[NSImage alloc] initWithSize: [imageRep size]];
[_result addRepresentation: imageRep];
CVBufferRelease(imageBuffer);
_done = YES;
}
}
However, I found out today that QTKit has been deprecated, so we now have to use AVFoundation.
Can anyone help me convert this code to its AVFoundation equivalent? Many of the methods seem to have the same name, but at the same time a lot is different, and I'm at a complete loss here... Any help?

Alright, I found the solution!! Here it is:
- (void)takePicture
{
NSError* error;
AVCaptureDevice* device = [AVCaptureDevice defaultDeviceWithMediaType: AVMediaTypeVideo];
AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice: device error: &error];
if (!input) {
_error = error;
_done = YES;
return;
}
AVCaptureStillImageOutput* output = [AVCaptureStillImageOutput new];
[output setOutputSettings: @{(id)kCVPixelBufferPixelFormatTypeKey: @(k32BGRAPixelFormat)}];
captureSession = [AVCaptureSession new];
captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
[captureSession addInput: input];
[captureSession addOutput: output];
[captureSession startRunning];
AVCaptureConnection* connection = [output connectionWithMediaType: AVMediaTypeVideo];
[output captureStillImageAsynchronouslyFromConnection: connection completionHandler: ^(CMSampleBufferRef sampleBuffer, NSError* error) {
if (error) {
_error = error;
_result = nil;
}
else {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if (imageBuffer) {
CVBufferRetain(imageBuffer);
NSCIImageRep* imageRep = [NSCIImageRep imageRepWithCIImage: [CIImage imageWithCVImageBuffer: imageBuffer]];
_result = [[NSImage alloc] initWithSize: [imageRep size]];
[_result addRepresentation: imageRep];
CVBufferRelease(imageBuffer);
}
}
_done = YES;
}];
}
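One caveat worth noting about the snippet above: the session is started but never stopped, so the camera stays live after the still has been grabbed. Below is a minimal, hypothetical sketch of the teardown you might add once the completion handler has set _done (assuming captureSession is the same instance variable used above; this is not part of the original answer):
// Hypothetical cleanup, e.g. called after _done becomes YES:
- (void)tearDownCaptureSession
{
    [captureSession stopRunning];
    // Mirror the cleanup the old QTKit code did: remove what we added.
    for (AVCaptureInput* input in [captureSession.inputs copy]) {
        [captureSession removeInput: input];
    }
    for (AVCaptureOutput* output in [captureSession.outputs copy]) {
        [captureSession removeOutput: output];
    }
    captureSession = nil;
}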
I hope this helps anyone who runs into the same problem.

Related

AVCapturePhotoOutput not returning proper image

My requirement in the app is to capture an image without showing the UIImagePickerController preview, so I have used the following code to capture an image without presenting the picker.
- (void)clickImage
{
AVCaptureDevice *rearCamera = [self checkIfRearCameraAvailable];
if (rearCamera != nil)
{
photoSession = [[AVCaptureSession alloc] init];
NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:rearCamera error:&error];
if (!error && [photoSession canAddInput:input])
{
[photoSession addInput:input];
AVCapturePhotoOutput *output = [[AVCapturePhotoOutput alloc] init];
if ([photoSession canAddOutput:output])
{
[photoSession addOutput:output];
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in output.connections)
{
for (AVCaptureInputPort *port in [connection inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo])
{
videoConnection = connection;
break;
}
}
if (videoConnection)
{
break;
}
}
if (videoConnection)
{
[photoSession startRunning];
[output capturePhotoWithSettings:[AVCapturePhotoSettings photoSettings] delegate:self];
}
}
}
}
}
- (AVCaptureDevice *)checkIfRearCameraAvailable
{
AVCaptureDevice *rearCamera;
AVCaptureDeviceDiscoverySession *captureDeviceDiscoverySession =
[AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
mediaType:AVMediaTypeVideo
position:AVCaptureDevicePositionBack];
NSArray *allCameras = [captureDeviceDiscoverySession devices];
for (int i = 0; i < allCameras.count; i++)
{
AVCaptureDevice *camera = [allCameras objectAtIndex:i];
if (camera.position == AVCaptureDevicePositionBack)
{
rearCamera = camera;
}
}
return rearCamera;
}
- (void)captureOutput:(AVCapturePhotoOutput *)output didFinishProcessingPhotoSampleBuffer:(CMSampleBufferRef)photoSampleBuffer previewPhotoSampleBuffer:(CMSampleBufferRef)previewPhotoSampleBuffer resolvedSettings:(AVCaptureResolvedPhotoSettings *)resolvedSettings bracketSettings:(AVCaptureBracketedStillImageSettings *)bracketSettings error:(NSError *)error
{
if (error)
{
NSLog(@"error : %@", error.localizedDescription);
}
if (photoSampleBuffer)
{
NSData *data = [AVCapturePhotoOutput JPEGPhotoDataRepresentationForJPEGSampleBuffer:photoSampleBuffer
previewPhotoSampleBuffer:previewPhotoSampleBuffer];
UIImage *image = [UIImage imageWithData:data];
_imgView.image = image;
}
}
I have used the above code to capture an image, but the output image it returns looks like it was taken in night mode, or like a negative image.
Would you please review the code and tell me what I am doing wrong?
I have found the following sample code on Apple's developer site:
https://developer.apple.com/library/content/samplecode/AVCam/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010112
That sample code helped me get a proper image. I have added this answer for others' help as well.
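As a side note, on iOS 11 and later the didFinishProcessingPhotoSampleBuffer: callback used above is deprecated in favor of captureOutput:didFinishProcessingPhoto:error:, which hands you an AVCapturePhoto and avoids the sample-buffer-to-JPEG conversion entirely. A minimal sketch, assuming the same _imgView outlet as in the question:
- (void)captureOutput:(AVCapturePhotoOutput *)output didFinishProcessingPhoto:(AVCapturePhoto *)photo error:(NSError *)error
{
    if (error) {
        NSLog(@"error : %@", error.localizedDescription);
        return;
    }
    // fileDataRepresentation returns the encoded photo (JPEG/HEIC), ready for UIImage or writing to disk.
    NSData *data = [photo fileDataRepresentation];
    if (data) {
        _imgView.image = [UIImage imageWithData:data];
    }
}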

UIViewReportBrokenSuperviewChain

My application crashes when scanning barcodes using AVFoundation.
The following is my code.
_session = [[AVCaptureSession alloc] init];
_device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
_input = [AVCaptureDeviceInput deviceInputWithDevice:_device error:&error];
if (_input) {
[_session addInput:_input];
} else {
NSLog(@"Error: %@", error);
}
_output = [[AVCaptureMetadataOutput alloc] init];
[_output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[_session addOutput:_output];
_output.metadataObjectTypes = [_output availableMetadataObjectTypes];
_prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_prevLayer.frame = _previewView.bounds;
_prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
//[self.view.layer addSublayer:_prevLayer];
[_previewView.layer addSublayer:_prevLayer];
//[self.view];
//[_session startRunning];
[_previewView bringSubviewToFront:_highlightView];
/* code Ends*/
It is showing a bad access on this line:
[_previewView.layer addSublayer:_prevLayer];
This line runs after the frame is set. Try adding the layer first and then setting the frame, as sketched below. I'm sure you've moved on, but this answer could benefit someone else.
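A quick sketch of the suggested ordering, using the same variables as the question:
_prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[_previewView.layer addSublayer:_prevLayer]; // add the layer first...
_prevLayer.frame = _previewView.bounds;      // ...then set its frame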

AVAssetWriter goes AVAssetWriterStatusFailed after appendSampleBuffer:

I am trying to perform screen recording using AVAssetWriter, which also accepts audio input. However, I have been stuck on an error where the AVAssetWriter sometimes becomes AVAssetWriterStatusFailed after a few calls to appendSampleBuffer: (inside encodeAudioFrame:):
Failed: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo=0x32b570 {NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x70d710 "The operation couldn’t be completed. (OSStatus error -12737.)", NSLocalizedFailureReason=An unknown error occurred (-12737)}
Several observations:
Once it enters this state, subsequent recording attempts will also return AVAssetWriterStatusFailed, even if I use a different recorder object.
The error does not appear when I comment out the audio recording blocks.
But the error still appears when I comment out the video recording blocks, and without modifying any incoming CMSampleBufferRef.
Any assistance will be appreciated.
Below is the code I am using, with several parts omitted for brevity. I am currently using the OS X 10.9 SDK, with ARC turned off.
- (BOOL) startRecording
{
if (!isRecording)
{
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
[self startCapture];
[self setUpWriter];
startedAt = [NSDate date];
isRecording = YES;
while (isRecording)
{
NSAutoreleasePool* pool = [NSAutoreleasePool new];
NSTimeInterval offset = [[NSDate date] timeIntervalSinceDate:startedAt];
CMTime frameTime = CMTimeMakeWithSeconds(offset - pauseDelta, 1000);
[self encodeFrameAtTime:frameTime];
[pool drain];
usleep(50000); // sleep() takes whole seconds (sleep(0.05f) truncates to 0), so use usleep for a 50 ms pause
}
[self endCapture];
[self completeRecordingSession];
});
}
return YES;
}
- (void) stopRecording {
isRecording = NO;
}
-(void) startCapture
{
AVCaptureDevice* microphone = x //Device selection code omitted
videoCaptureSession = [[AVCaptureSession alloc] init];
videoCaptureSession.sessionPreset = AVCaptureSessionPresetHigh;
//------------------------------------------
NSError* err = nil;
audioInput = [AVCaptureDeviceInput deviceInputWithDevice:microphone error:&err];
[videoCaptureSession addInput:audioInput];
//------------------------------------------
audioOutput = [[AVCaptureAudioDataOutput alloc] init];
queue = dispatch_queue_create("videoQueue", NULL);
[audioOutput setSampleBufferDelegate:self queue:queue];
[videoCaptureSession addOutput:audioOutput];
audioDelta = -1;
[videoCaptureSession startRunning];
}
-(void) endCapture
{
[videoCaptureSession stopRunning];
[videoCaptureSession removeInput:audioInput];
[videoCaptureSession removeOutput:audioOutput];
[audioOutput release];
audioOutput = nil;
audioInput = nil;
[videoCaptureSession release];
videoCaptureSession = nil;
dispatch_release(queue);
}
-(BOOL) setUpWriter
{
//delete the file.
{
NSFileManager* fileManager = [NSFileManager defaultManager];
if ([fileManager fileExistsAtPath:self.moviePath]) {
NSError* error;
if ([fileManager removeItemAtPath:self.moviePath error:&error] == NO) {
NSLog(@"Could not delete old recording file at path: %@", self.moviePath);
}
}
}
mCaptureRect = NSRectToCGRect([screen frame]);
int FWidth = mCaptureRect.size.width;
int FHeight = mCaptureRect.size.height;
int bitRate = FWidth * FHeight * 8;
videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:self.moviePath] fileType:AVFileTypeMPEG4 error:nil];
NSParameterAssert(videoWriter);
//Configure video
NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:bitRate], AVVideoAverageBitRateKey,
nil];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
codecSettings,AVVideoCompressionPropertiesKey,
[NSNumber numberWithInt:FWidth], AVVideoWidthKey,
[NSNumber numberWithInt:FHeight], AVVideoHeightKey,
nil];
videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
NSParameterAssert(videoWriterInput);
videoWriterInput.expectsMediaDataInRealTime = YES;
NSDictionary* bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey,
[NSNumber numberWithInt:FWidth], kCVPixelBufferWidthKey,
[NSNumber numberWithInt:FHeight], kCVPixelBufferHeightKey,
nil];
avAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput sourcePixelBufferAttributes:bufferAttributes];
//*
//Configure Audio
AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
NSDictionary* audioSettings = [ NSDictionary dictionaryWithObjectsAndKeys:
[ NSNumber numberWithInt: kAudioFormatMPEG4AAC], AVFormatIDKey,
[ NSNumber numberWithFloat: 44100.0 ], AVSampleRateKey,
[ NSNumber numberWithInt: 1 ], AVNumberOfChannelsKey,
[ NSData dataWithBytes: &acl length: sizeof( acl ) ], AVChannelLayoutKey,
[NSNumber numberWithInt:64000], AVEncoderBitRateKey,
nil ];
audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioSettings];
audioWriterInput.expectsMediaDataInRealTime = YES;
//add input
[videoWriter addInput:videoWriterInput];
[videoWriter addInput:audioWriterInput];
return YES;
}
- (void) cleanupWriter {
[videoWriter release];
videoWriter = nil;
avAdaptor = nil;
videoWriterInput = nil;
startedAt = nil;
audioWriterInput = nil;
}
- (void) encodeFrameAtTime:(CMTime)timestamp
{
if(!isRecording) return;
if(videoWriter == nil) return;
if(videoWriter.status == AVAssetWriterStatusFailed)
{
return;
}
if(videoWriter.status != AVAssetWriterStatusWriting)
{
if(videoWriter.status != AVAssetWriterStatusUnknown)
return;
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:timestamp];
startTime = CMTimeGetSeconds(timestamp);
}
timestamp = CMTimeMakeWithSeconds(startTime + CMTimeGetSeconds(timestamp), 1000);
[self writeVideoFrameAtTime:timestamp];
}
-(void) writeVideoFrameAtTime:(CMTime)time {
if (![videoWriterInput isReadyForMoreMediaData])
{
}
else
{
/*
CVPixelBufferRef manipulation omitted...
*/
{
BOOL success = [avAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
if(videoWriter.status == AVAssetWriterStatusFailed) NSLog(@"Failed: %@", videoWriter.error);
if (!success) NSLog(@"Warning: Unable to write buffer to video");
}
CVPixelBufferRelease(pixelBuffer);
CGImageRelease(cgImage);
}
}
-(void) encodeAudioFrame:(CMSampleBufferRef)buffer
{
if(!isRecording) return;
CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(buffer);
if(videoWriter.status != AVAssetWriterStatusWriting)
{
//Wait for video thread to start the writer
return;
}
if(![audioWriterInput isReadyForMoreMediaData])
return;
//*
NSTimeInterval offset = [[NSDate date] timeIntervalSinceDate:startedAt];
if(audioDelta == -1)
{
audioDelta = offset - CMTimeGetSeconds(timestamp);
}
//Adjusts CMSampleBufferRef's timestamp to match the video stream's zero-based timestamp
CMItemCount count;
CMTime newTimestamp = CMTimeMakeWithSeconds(CMTimeGetSeconds(timestamp) + audioDelta - pauseDelta, 1000);
CMSampleBufferGetSampleTimingInfoArray(buffer, 0, nil, &count);
CMSampleTimingInfo* pInfo = malloc(sizeof(CMSampleTimingInfo) * count);
CMSampleBufferGetSampleTimingInfoArray(buffer, count, pInfo, &count);
for(CMItemCount i = 0; i < count; i++)
{
pInfo[i].decodeTimeStamp = newTimestamp;
pInfo[i].presentationTimeStamp = newTimestamp;
}
CMSampleBufferRef newBuffer;
CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault, buffer, count, pInfo, &newBuffer);
free(pInfo);
timestamp = CMSampleBufferGetPresentationTimeStamp(newBuffer);
BOOL res = [audioWriterInput appendSampleBuffer:newBuffer];
}
- (void) completeRecordingSession {
@autoreleasepool {
if(videoWriter.status != AVAssetWriterStatusWriting)
{
while (videoWriter.status == AVAssetWriterStatusUnknown)
{
[NSThread sleepForTimeInterval:0.5f];
}
int status = videoWriter.status;
while (status == AVAssetWriterStatusUnknown)
{
NSLog(@"Waiting...");
[NSThread sleepForTimeInterval:0.5f];
status = videoWriter.status;
}
}
@synchronized(self)
{
[videoWriter finishWriting];
[self cleanupWriter];
}
}
}
-(void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
if(!CMSampleBufferDataIsReady(sampleBuffer))
return;
@autoreleasepool {
if(captureOutput == audioOutput)
{
if(isRecording && !isPaused)
{
[self encodeAudioFrame:sampleBuffer];
}
}
}
}
I had exactly the same problem with my Swift code. It turned out that my machine had simply run out of memory, so double-check that you have enough free RAM.
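Building on that memory point: in the posted encodeAudioFrame:, CMSampleBufferCreateCopyWithNewTiming follows Core Foundation's Create rule, so the copy it returns must be released or every audio sample leaks. Here is a hedged sketch of that cleanup, reusing the question's variable names (this addresses the memory growth, not necessarily the -12737 error itself):
CMSampleBufferRef newBuffer = NULL;
OSStatus status = CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault, buffer, count, pInfo, &newBuffer);
free(pInfo);
if (status == noErr && newBuffer) {
    BOOL res = [audioWriterInput appendSampleBuffer:newBuffer];
    if (!res) NSLog(@"Warning: unable to append audio sample buffer");
    CFRelease(newBuffer); // we created the copy, so we own it and must release it
}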

GCDAsyncSocket not receiving all transmitted data, missing last "Chunk"

I am trying to send some strings and image data from a Python script to an Objective-C application running on OS X.
I am collecting the transmitted data using GCDAsyncSocket and appending it to an NSMutableData until the server disconnects. I am then processing that NSData and splitting it into its original parts.
The transmitted data consists of the following:
ID string, filled out to 16 bytes.
Image number string, filled out to 16 bytes.
Raw image data.
Termination string, filled out to 16 bytes.
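Given that layout (a 32-byte header and a 16-byte terminator around the raw image data), here is a minimal sketch of how the assembled buffer can be sliced back apart once everything has arrived; the socketDidDisconnect: handler further down does essentially this:
NSUInteger total = [imageBuffer length];
NSData *idData       = [imageBuffer subdataWithRange:NSMakeRange(0, 16)];
NSData *imageNumData = [imageBuffer subdataWithRange:NSMakeRange(16, 16)];
NSData *jpegData     = [imageBuffer subdataWithRange:NSMakeRange(32, total - 48)]; // everything between the header and the terminator
NSData *terminator   = [imageBuffer subdataWithRange:NSMakeRange(total - 16, 16)];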
The problem is that I am not receiving the last chunk of data: I end up missing the end of the JPEG image, which results in a corrupt (though mostly displayed) image and a missing termination string.
Here is the code I am using with GCDAsyncSocket to receive and process the data:
Socket connection:
- (void)socket:(GCDAsyncSocket *)sock didAcceptNewSocket:(GCDAsyncSocket *)newSocket
{
// This method is executed on the socketQueue (not the main thread)
@synchronized(connectedSockets)
{
[connectedSockets addObject:newSocket];
}
NSString *host = [newSocket connectedHost];
UInt16 port = [newSocket connectedPort];
dispatch_async(dispatch_get_main_queue(), ^{
@autoreleasepool {
[self logInfo:FORMAT(@"Accepted client %@:%hu", host, port)];
}
});
[newSocket readDataToData:[GCDAsyncSocket CRLFData] withTimeout:-1 tag:0];
}
Socket Data Received
- (void)socket:(GCDAsyncSocket *)sock didReadData:(NSData *)data withTag:(long)tag
{
// This method is executed on the socketQueue (not the main thread)
dispatch_async(dispatch_get_main_queue(), ^{
@autoreleasepool {
NSLog(@"Thread Data Length is %lu", (unsigned long)[data length]);
if (!imageBuffer){
imageBuffer = [[NSMutableData alloc]init];
}
[imageBuffer appendData:[data subdataWithRange:NSMakeRange(0, [data length])]];
NSLog(@"Total Data Length is %lu", (unsigned long)[imageBuffer length]);
}
});
// Echo message back to client
[sock writeData:data withTimeout:-1 tag:ECHO_MSG];
[sock readDataToData:[GCDAsyncSocket CRLFData] withTimeout:-1 tag:0];
}
Socket Disconnected
- (void)socketDidDisconnect:(GCDAsyncSocket *)sock withError:(NSError *)err
{
if (sock != listenSocket)
{
dispatch_async(dispatch_get_main_queue(), ^{
@autoreleasepool {
[self logInfo:FORMAT(@"Client Disconnected")];
NSData *cameraNumberData;
NSData *imageNumberData;
NSData *imageData;
NSData *endCommandData;
//if ([data length] > 40){
cameraNumberData = [imageBuffer subdataWithRange:NSMakeRange(0, 16)];
imageNumberData = [imageBuffer subdataWithRange:NSMakeRange(16, 16)];
imageData = [imageBuffer subdataWithRange:NSMakeRange(32, [imageBuffer length]-34)];
endCommandData = [imageBuffer subdataWithRange:NSMakeRange([imageBuffer length]-16, 16)];
//}
NSString *cameraNumberString = [[NSString alloc] initWithData:cameraNumberData encoding:NSUTF8StringEncoding];
NSString *imageNumberString = [[NSString alloc] initWithData:imageNumberData encoding:NSUTF8StringEncoding];
NSString *endCommandString = [[NSString alloc] initWithData:endCommandData encoding:NSUTF8StringEncoding];
NSImage* image = [[NSImage alloc]initWithData:imageData];
if (cameraNumberString)
{
NSLog(@"Image received from Camera no %@", cameraNumberString);
[self logMessage:cameraNumberString];
}
else
{
[self logError:@"Error converting received data into UTF-8 String"];
}
if (imageNumberString)
{
NSLog(@"Image is number %@", imageNumberString);
[self logMessage:imageNumberString];
}
else
{
[self logError:@"Error converting received data into UTF-8 String"];
}
if (image)
{
NSLog(@"We have an image");
[self.imageView setImage:image];
}
else
{
[self logError:@"Error converting received data into image"];
}
if (endCommandString)
{
NSLog(@"Command String is %@", endCommandString);
[self logMessage:endCommandString];
}
else
{
[self logError:@"No command string"];
}
//self.imageBuffer = nil;
}
});
@synchronized(connectedSockets)
{
[connectedSockets removeObject:sock];
}
}
}
I have used Wireshark, and the data is being transmitted; it's just not getting through GCDAsyncSocket.
So I'm obviously missing something. Socket programming and encoding/decoding data like this is relatively new to me, so I am probably being an idiot.
Help greatly appreciated!
Thanks,
Gareth
OK, so I finally got this working. It involved modifying the transmitting code in Python to send a completion string at the end of the data and watching for that. The biggest takeaway was that I needed to call the readDataToData: method again each time the socket read some data; otherwise it would just sit there and wait, and the transmitting socket would just sit there as well.
I also had to re-call the second receive with a tag so I could store the received data in the correct NSMutableData object in an NSMutableArray; otherwise I had no way of knowing, after the first receive, which transmitting socket the data was coming from, since the ID was only at the beginning of the first message.
Here is the didReadData code:
- (void)socket:(GCDAsyncSocket *)sock didReadData:(NSData *)data withTag:(long)tag
{
dispatch_async(dispatch_get_main_queue(), ^{
@autoreleasepool {
NSInteger cameraNumberNumber = 0;
NSString *cameraNumberString = [[NSString alloc]init];
if (tag > 10){
cameraNumberNumber = tag-11;
DDLogVerbose(@"Second data loop, tag is %ld", tag);
} else {
NSData *cameraNumberData;
//if ([data length] > 40){
cameraNumberData = [data subdataWithRange:NSMakeRange(0, 16)];
NSString *cameraNumberString = [[NSString alloc] initWithData:cameraNumberData encoding:NSUTF8StringEncoding];
cameraNumberString = [cameraNumberString stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
cameraNumberNumber = [cameraNumberString intValue]-1;
}
if (cameraNumberNumber+1 <= self.images.count){
if ([self.images objectAtIndex:cameraNumberNumber] == [NSNull null]){
image* cameraImage = [[image alloc]init];
[self.images replaceObjectAtIndex: cameraNumberNumber withObject:cameraImage];
}
image* cameraImage = [self.images objectAtIndex:cameraNumberNumber];
[cameraImage.imageData appendData:[data subdataWithRange:NSMakeRange(0, [data length])]];
cameraImage.cameraNumber = cameraNumberString;
if (!imageBuffer){
imageBuffer = [[NSMutableData alloc]init];
}
[imageBuffer appendData:[data subdataWithRange:NSMakeRange(0, [data length])]];
DDLogVerbose(@"Total Data Length is %lu", (unsigned long)[imageBuffer length]);
} else {
DDLogInfo(@"Wrong camera quantity!");
NSAlert *testAlert = [NSAlert alertWithMessageText:@"Wrong camera quantity!"
defaultButton:@"Ok"
alternateButton:nil
otherButton:nil
informativeTextWithFormat:@"We have received more images than cameras, please set No.Cameras correctly!"];
[testAlert beginSheetModalForWindow:[self window]
modalDelegate:self
didEndSelector:@selector(stop)
contextInfo:nil];
}
[sock readDataToData:[@"end" dataUsingEncoding:NSUTF8StringEncoding] withTimeout:-1 tag:cameraNumberNumber + 11];
}
});
}
And here is the socketDidDisconnect code. A lot of things in here won't make sense out of context, but it shows how I handled the received data.
- (void)socketDidDisconnect:(GCDAsyncSocket *)sock withError:(NSError *)err
{
if (sock != listenSocket)
{
dispatch_async(dispatch_get_main_queue(), ^{
@autoreleasepool {
totalCamerasFetched = [NSNumber numberWithInt:1+[totalCamerasFetched intValue]];
if ([totalCamerasFetched integerValue] >= [numberOfCameras integerValue]){
for (image* cameraImage in self.images){
NSData *cameraNumberData;
NSData *imageNumberData;
NSData *imageData;
NSData *endCommandData;
NSInteger cameraNumberNumber = 0;
cameraNumberData = [cameraImage.imageData subdataWithRange:NSMakeRange(0, 16)];
imageNumberData = [cameraImage.imageData subdataWithRange:NSMakeRange(16, 16)];
imageData = [cameraImage.imageData subdataWithRange:NSMakeRange(32, [cameraImage.imageData length]-32)];
endCommandData = [cameraImage.imageData subdataWithRange:NSMakeRange([cameraImage.imageData length]-16, 16)];
NSString *cameraNumberString = [[NSString alloc] initWithData:cameraNumberData encoding:NSUTF8StringEncoding];
cameraNumberString = [cameraNumberString stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
NSString *imageNumberString = [[NSString alloc] initWithData:imageNumberData encoding:NSUTF8StringEncoding];
imageNumberString = [imageNumberString stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
NSString *endCommandString = [[NSString alloc] initWithData:endCommandData encoding:NSUTF8StringEncoding];
NSImage* image = [[NSImage alloc]initWithData:imageData];
cameraNumberNumber = [cameraNumberString intValue]-1;
if (cameraNumberString)
{
DDLogInfo(@"Image received from Camera no %@", cameraNumberString);
}
else
{
DDLogError(@"No Camera number in data");
}
if (imageNumberString)
{
DDLogInfo(@"Image is number %@", imageNumberString);
}
else
{
DDLogError(@"No Image number in data");
}
if (image)
{
DDLogVerbose(@"We have an image");
NSString* dataPath = [[NSString alloc]initWithFormat:@"%@/image%@/",self.exportLocation, imageNumberString];
if (![[NSFileManager defaultManager] fileExistsAtPath:dataPath]){
NSError* error;
[[NSFileManager defaultManager] createDirectoryAtPath:dataPath withIntermediateDirectories:NO attributes:nil error:&error];
if (error)
{
DDLogError(@"[%@] ERROR: attempting to write directory for images", [self class]);
NSAssert( FALSE, @"Failed to create directory maybe out of disk space?");
}
}
NSString* dataPathVideo = [[NSString alloc]initWithFormat:@"%@/video%@/",self.exportLocation, imageNumberString];
if (![[NSFileManager defaultManager] fileExistsAtPath:dataPathVideo]){
NSError* error;
[[NSFileManager defaultManager] createDirectoryAtPath:dataPathVideo withIntermediateDirectories:NO attributes:nil error:&error];
if (error)
{
DDLogError(@"[%@] ERROR: attempting to write directory for images", [self class]);
NSAssert( FALSE, @"Failed to create directory maybe out of disk space?");
}
}
NSString * exportLocationFull = [[NSString alloc]initWithFormat:@"%@/image%@/camera_%@.jpg",self.exportLocation, imageNumberString, cameraNumberString];
DDLogInfo(@"Full export URL = %@", exportLocationFull);
[imageData writeToFile:exportLocationFull atomically:YES];
self.currentSet = [NSNumber numberWithInt:[imageNumberString intValue]];
NSImage* imageToStore = [[NSImage alloc]initWithData:imageData];
[self.imagesToMakeVideo replaceObjectAtIndex: cameraNumberNumber withObject:imageToStore];
} else {
DDLogError(@"No image located in data");
}
if (endCommandString)
{
DDLogVerbose(@"Command String is %@", endCommandString);
//[self logMessage:endCommandString];
}
else
{
//[self logError:#"No command string"];
}
self.imageBuffer = nil;
}
self.totalCamerasFetched = [NSNumber numberWithInt:0];
[self loadandDisplayLatestImages];
[self createVideowithImages:imagesToMakeVideo toLocation:[[NSString alloc]initWithFormat:@"%@/video%@/image_sequence_%@.mov",self.exportLocation, self.currentSet, self.currentSet]];
processing = false;
}//end of for loop
}
});
@synchronized(connectedSockets)
{
[connectedSockets removeObject:sock];
}
}
}
Also, here is how I modified the Python code to add the extra "end" tag.
def send_media_to(self, ip, port, media_name, media_number, media_dir):
    camera_number = self.camera.current_mode['option'].number
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((ip, port))
    try:
        sock.send(bytes(str(camera_number).ljust(16), 'utf-8'))
        sock.send(bytes(str(media_number).ljust(16), 'utf-8'))
        with open(media_dir + media_name, 'rb') as media:
            sock.sendall(media.read())
    finally:
        sock.send(bytes(str("end").ljust(16), 'utf-8'))
        sock.close()
Hopefully this helps someone else stuck in the same situation!

AVAssetWriter sometimes fails with status AVAssetWriterStatusFailed. Seems random

I'm writing an MP4 video file with an AVAssetWriter using an AVAssetWriterInputPixelBufferAdaptor.
The source is a video from a UIImagePickerController, either freshly captured with the camera or taken from the asset library. Quality right now is UIImagePickerControllerQualityTypeMedium.
Sometimes the writer fails. Its status is AVAssetWriterStatusFailed and the AVAssetWriter object's error property is:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed"
UserInfo=0xf5d8990 {NSLocalizedFailureReason=An unknown error occurred (-536870210),
NSUnderlyingError=0x4dd8e0 "The operation couldn’t be completed. (OSStatus error -536870210.)",
NSLocalizedDescription=The operation could not be completed
The error occurs in approximately 20% of the runs. It seems to fail more frequently on iPhone 4 / 4S than on iPhone 5.
It also occurs more frequently if the source video quality is higher.
Using UIImagePickerControllerQualityTypeLow the error doesn't happen so often.
Using UIImagePickerControllerQualityTypeHigh, the error happens a little more frequently.
I have also noticed something else:
It seems to come in waves. When it fails, the following runs will often fail too, even if I delete the app and reinstall it. That leaves me wondering whether my program leaks some memory, and whether that memory could stay alive even after the app is killed (is that even possible?).
Here is the code I use to render my video:
- (void)writeVideo
{
offlineRenderingInProgress = YES;
/* --- Writer Setup --- */
[locationQueue cancelAllOperations];
[self stopWithoutRewinding];
NSError *writerError = nil;
BOOL succes;
succes = [[NSFileManager defaultManager] removeItemAtURL:self.outputURL error:nil];
// DLog(@"Url: %@, succes: %i, error: %@", self.outputURL, succes, fileError);
writer = [AVAssetWriter assetWriterWithURL:self.outputURL fileType:(NSString *)kUTTypeQuickTimeMovie error:&writerError];
//writer.shouldOptimizeForNetworkUse = NO;
if (writerError) {
DLog(@"Writer error: %@", writerError);
return;
}
float bitsPerPixel;
CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions((__bridge CMVideoFormatDescriptionRef)([readerVideoOutput.videoTracks[0] formatDescriptions][0]));
int numPixels = dimensions.width * dimensions.height;
int bitsPerSecond;
// Assume that lower-than-SD resolutions are intended for streaming, and use a lower bitrate
if ( numPixels < (640 * 480) )
bitsPerPixel = 4.05; // This bitrate matches the quality produced by AVCaptureSessionPresetMedium or Low.
else
bitsPerPixel = 11.4; // This bitrate matches the quality produced by AVCaptureSessionPresetHigh.
bitsPerSecond = numPixels * bitsPerPixel;
NSDictionary *videoCompressionSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithFloat:videoSize.width], AVVideoWidthKey,
[NSNumber numberWithInteger:videoSize.height], AVVideoHeightKey,
[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInteger:30], AVVideoMaxKeyFrameIntervalKey,
nil], AVVideoCompressionPropertiesKey,
nil];
writerVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoCompressionSettings];
writerVideoInput.transform = movie.preferredTransform;
writerVideoInput.expectsMediaDataInRealTime = YES;
[writer addInput:writerVideoInput];
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];
writerPixelAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerVideoInput
sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
BOOL couldStart = [writer startWriting];
if (!couldStart) {
DLog(#"Could not start AVAssetWriter!");
abort = YES;
[locationQueue cancelAllOperations];
return;
}
[self configureFilters];
CIContext *offlineRenderContext = [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer : @NO}];
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
if (!self.canEdit) {
[self createVideoReaderWithAsset:movie timeRange:CMTimeRangeFromTimeToTime(kCMTimeZero, kCMTimePositiveInfinity) forOfflineRender:YES];
} else {
[self createVideoReaderWithAsset:movie timeRange:CMTimeRangeWithNOVideoRangeInDuration(self.thumbnailEditView.range, movie.duration) forOfflineRender:YES];
}
CMTime startOffset = reader.timeRange.start;
DLog(@"startOffset: %llu", startOffset.value);
[self.thumbnailEditView removeFromSuperview];
// self.thumbnailEditView = nil;
[glLayer removeFromSuperlayer];
glLayer = nil;
[playerView removeFromSuperview];
playerView = nil;
glContext = nil;
[writerVideoInput requestMediaDataWhenReadyOnQueue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0) usingBlock:^{
@try {
BOOL didWriteSomething = NO;
DLog(@"Preparing to write...");
while ([writerVideoInput isReadyForMoreMediaData]) {
if (abort) {
NSLog(@"Abort == YES");
[locationQueue cancelAllOperations];
[writerVideoInput markAsFinished];
videoConvertCompletionBlock(NO, writer.error.localizedDescription);
}
if (writer.status == AVAssetWriterStatusFailed) {
DLog(@"Writer.status: AVAssetWriterStatusFailed, error: %@", writer.error);
[[NSUserDefaults standardUserDefaults] setObject:[NSNumber numberWithInt:1] forKey:@"QualityOverride"];
[[NSUserDefaults standardUserDefaults] synchronize];
abort = YES;
[locationQueue cancelAllOperations];
videoConvertCompletionBlock(NO, writer.error.localizedDescription);
return;
DLog(@"Source file exists: %i", [[NSFileManager defaultManager] fileExistsAtPath:movie.URL.relativePath]);
}
DLog(@"Writing started...");
CMSampleBufferRef buffer = nil;
if (reader.status != AVAssetReaderStatusUnknown) {
if (reader.status == AVAssetReaderStatusReading) {
buffer = [readerVideoOutput copyNextSampleBuffer];
if (didWriteSomething == NO) {
DLog(@"Copying sample buffers...");
}
}
if (!buffer) {
[writerVideoInput markAsFinished];
DLog(@"Finished...");
CGColorSpaceRelease(colorSpace);
[self offlineRenderingDidFinish];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
[writer finishWriting];
if (writer.error != nil) {
DLog(@"Error: %@", writer.error);
} else {
DLog(@"Success!");
}
if (writer.status == AVAssetWriterStatusCompleted) {
videoConvertCompletionBlock(YES, nil);
}
else {
abort = YES;
videoConvertCompletionBlock(NO, writer.error.localizedDescription);
}
});
return;
}
didWriteSomething = YES;
}
else {
DLog(@"Still waiting...");
//Reader just needs a moment to get ready...
continue;
}
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(buffer);
if (pixelBuffer == NULL) {
DLog(@"Pixelbuffer == NULL");
continue;
}
//DLog(@"Sample call back! Pixelbuffer: %lu", CVPixelBufferGetHeight(pixelBuffer));
//NSDictionary *options = [NSDictionary dictionaryWithObject:(__bridge id)CGColorSpaceCreateDeviceRGB() forKey:kCIImageColorSpace];
CIImage *ciimage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:nil];
CIImage *outputImage = [self filteredImageWithImage:ciimage];
CVPixelBufferRef outPixelBuffer = NULL;
CVReturn status;
CFDictionaryRef empty; // empty value for attr value.
CFMutableDictionaryRef attrs;
empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
NULL,
NULL,
0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
1,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs,
kCVPixelBufferIOSurfacePropertiesKey,
empty);
CFDictionarySetValue(attrs,
kCVPixelBufferCGImageCompatibilityKey,
(__bridge const void *)([NSNumber numberWithBool:YES]));
CFDictionarySetValue(attrs,
kCVPixelBufferCGBitmapContextCompatibilityKey,
(__bridge const void *)([NSNumber numberWithBool:YES]));
status = CVPixelBufferCreate(kCFAllocatorDefault, ciimage.extent.size.width, ciimage.extent.size.height, kCVPixelFormatType_32BGRA, attrs, &outPixelBuffer);
//DLog(@"Output image size: %f, %f, pixelbuffer height: %lu", outputImage.extent.size.width, outputImage.extent.size.height, CVPixelBufferGetHeight(outPixelBuffer));
if (status != kCVReturnSuccess) {
DLog(@"Couldn't allocate output pixelBufferRef!");
continue;
}
[offlineRenderContext render:outputImage toCVPixelBuffer:outPixelBuffer bounds:outputImage.extent colorSpace:colorSpace];
CMTime currentSourceTime = CMSampleBufferGetPresentationTimeStamp(buffer);
CMTime currentTime = CMTimeSubtract(currentSourceTime, startOffset);
CMTime duration = reader.timeRange.duration;
if (CMTIME_IS_POSITIVE_INFINITY(duration)) {
duration = movie.duration;
}
CMTime durationConverted = CMTimeConvertScale(duration, currentTime.timescale, kCMTimeRoundingMethod_Default);
float durationFloat = (float)durationConverted.value;
float progress = ((float) currentTime.value) / durationFloat;
//DLog(@"duration : %f, progress: %f", durationFloat, progress);
[self updateOfflineRenderProgress:progress];
if (pixelBuffer != NULL && writerVideoInput.readyForMoreMediaData) {
[writerPixelAdaptor appendPixelBuffer:outPixelBuffer withPresentationTime:currentTime];
} else {
continue;
}
if (writer.status == AVAssetWriterStatusWriting) {
DLog(@"Writer.status: AVAssetWriterStatusWriting");
}
CFRelease(buffer);
CVPixelBufferRelease(outPixelBuffer);
}
}
@catch (NSException *exception) {
DLog(@"Catching exception: %@", exception);
}
}];
}
OK, I think I solved it myself. The culprit was this line:
[writerVideoInput requestMediaDataWhenReadyOnQueue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0) usingBlock:^{ ....
The global queue I was passing is a concurrent queue. This allows a new callback to be made before the previous one is finished. The asset writer is not designed to be written to from more than one thread at a time.
Creating and using a new serial queue seems to remedy the problem:
assetWriterQueue = dispatch_queue_create("AssetWriterQueue", DISPATCH_QUEUE_SERIAL);
[writerVideoInput requestMediaDataWhenReadyOnQueue:assetWriterQueue usingBlock:^{...
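For completeness, here is a rough sketch of how the serial queue fits together with the request block, under the same assumptions as above (assetWriterQueue held in an instance variable and created once before writing starts):
// Created once, e.g. alongside the writer setup:
assetWriterQueue = dispatch_queue_create("AssetWriterQueue", DISPATCH_QUEUE_SERIAL);

[writerVideoInput requestMediaDataWhenReadyOnQueue:assetWriterQueue usingBlock:^{
    // All appends now happen one at a time on this serial queue.
    while ([writerVideoInput isReadyForMoreMediaData]) {
        CMSampleBufferRef buffer = [readerVideoOutput copyNextSampleBuffer];
        if (!buffer) {
            [writerVideoInput markAsFinished]; // reader is exhausted
            break;
        }
        // ...filter/render and append the frame exactly as in the original block...
        CFRelease(buffer);
    }
}];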