I've got a strange error with AVCaptureDevice that only happens on the iPhone 6. I've tried it on an iPhone 6S and an iPad and it works fine. The error is: Domain=AVFoundationErrorDomain Code=-11814
Here is the error print output:
Unable to obtain video device input, error: Error Domain=AVFoundationErrorDomain Code=-11814 "Cannot Record" UserInfo={NSLocalizedDescription=Cannot Record, NSLocalizedRecoverySuggestion=Try recording again.}
Here is the code snippet. The loop over videoDevices does find the front camera without hitting the early return; it's obtaining the device input that fails, which usually works but not on the iPhone 6.
//get the front camera
NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in videoDevices)
{
    if (device.position == AVCaptureDevicePositionFront) {
        _videoDevice = device;
        break;
    }
}
// obtain device input
NSError *error = nil;
AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:_videoDevice error:&error];
if (!videoDeviceInput)
{
    NSLog(@"Unable to obtain video device input, error: %@", error); // <-- the printed message comes from here
    return;
}
How can I solve this? It only happens on the iPhone 6, and it's been updated to the latest iOS version, 12.4.6.
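In case it helps narrow things down, one thing worth ruling out first is the camera authorization status, since a denied or restricted permission is one way input creation can fail on a single device. This is a minimal diagnostic sketch, assuming iOS 7 or later; checkCameraAuthorization is just an illustrative name:

#import <AVFoundation/AVFoundation.h>

- (void)checkCameraAuthorization
{
    AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
    switch (status) {
        case AVAuthorizationStatusAuthorized:
            NSLog(@"Camera access authorized");
            break;
        case AVAuthorizationStatusNotDetermined:
            // Prompt the user; the completion handler runs on an arbitrary queue.
            [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
                                     completionHandler:^(BOOL granted) {
                NSLog(@"Camera access %@", granted ? @"granted" : @"denied");
            }];
            break;
        case AVAuthorizationStatusDenied:
        case AVAuthorizationStatusRestricted:
            // In this state deviceInputWithDevice:error: will fail.
            NSLog(@"Camera access denied or restricted");
            break;
    }
}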
Related
I have this code, which assumes that an AV device is connected...
AVCaptureDeviceInput *device_input = [[AVCaptureDeviceInput alloc]
    initWithDevice:[AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo][0] error:nil];
How can I modify that code so that I get a message like this...
if (No AV devices were detected)
    NSLog(@"No AV devices were detected");
else
    NSLog(@"The following devices were detected...");
Thanks,
Len.
If you need to check for an audio device, you can use the code below:
-(void)checkForDevice
{
    NSError *error = nil;
    AVCaptureDevice *audioDevice = [[AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio] firstObject];
    AVCaptureDeviceInput *audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
    if (error)
    {
        NSLog(@"%@", error); // problem with the device
    }
    else
    {
        // device is available; use audioDeviceInput with your session
    }
}
In a similar way, you can check for video and other AV devices.
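For example, a minimal sketch of the corresponding video check; checkForVideoDevice is just an illustrative name:

- (void)checkForVideoDevice
{
    AVCaptureDevice *videoDevice = [[AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo] firstObject];
    if (!videoDevice)
    {
        NSLog(@"No AV devices were detected");
        return;
    }
    NSError *error = nil;
    AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    if (error)
    {
        NSLog(@"%@", error); // problem with the device
    }
    else
    {
        // device is available; videoDeviceInput can be added to a session
        NSLog(@"The following device was detected: %@", videoDevice.localizedName);
    }
}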
Does anyone know what could be causing this error, which is returned in the query block below:
<CKError 0x7f8d5ba27a10: "Internal Error" (1/4000); "Couldn't send a valid signature">
Here is my code snippet:
CKContainer *container = [CKContainer defaultContainer];
CKDatabase *publicDatabase = [container publicCloudDatabase];
//CKDatabase *publicDatabase = [[CKContainer containerWithIdentifier:container] publicCloudDatabase];
NSPredicate *predicate = [NSPredicate predicateWithFormat:@"doctorName = %@", @"Dr. Harry White"];
CKQuery *query = [[CKQuery alloc] initWithRecordType:@"WellnessTest" predicate:predicate];
[publicDatabase performQuery:query inZoneWithID:nil completionHandler:^(NSArray *results, NSError *error)
{
    if (error)
    {
        // Error handling for failed fetch from public database
        NSLog(@"ERROR: %@", error.description);
    }
    else
    {
        // Display the fetched records
        NSLog(@"Results: %@", results.description);
    }
}];
On the iPhone/iPad Simulator, go to Settings > iCloud and log in with your Apple ID.
If you're testing your app against CloudKit's Production environment, you may still see this error even after signing into iCloud on the Simulator. In that case, running your app on a device should fix the error.
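As an additional sanity check, CloudKit can report the account status directly before you run any queries; this is a minimal sketch using accountStatusWithCompletionHandler:, which distinguishes a missing iCloud login from other failures:

CKContainer *container = [CKContainer defaultContainer];
[container accountStatusWithCompletionHandler:^(CKAccountStatus accountStatus, NSError *error)
{
    if (accountStatus == CKAccountStatusAvailable)
    {
        NSLog(@"iCloud account available; queries should be able to sign requests");
    }
    else
    {
        // NoAccount, Restricted, or CouldNotDetermine: sign in via Settings > iCloud
        NSLog(@"iCloud account unavailable (status %ld), error: %@", (long)accountStatus, error);
    }
}];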
I'm developing an app for iOS. I need to capture video from the camera, record that video to a file, and also get the uncompressed frames, which is why I need to use both an AVCaptureMovieFileOutput and an AVCaptureVideoDataOutput.
I read this in Apple's documentation: "You can configure multiple inputs and outputs, coordinated by a single session." So I think it must be doable, but I'm having problems with it.
I'm adding both to the session like this:
self.fileOutput.maxRecordedDuration = CMTimeMake(5000, 1);
self.fileOutput.minFreeDiskSpaceLimit = 3000;
if ([self.captureSession canAddOutput:self.fileOutput]) {
    [self.captureSession addOutput:self.fileOutput];
    NSLog(@"Added File Video Output");
} else {
    NSLog(@"Couldn't add file video output");
}
if ([self.captureSession canAddOutput:videoOutput]) {
    [self.captureSession addOutput:videoOutput];
    NSLog(@"Added Data Video Output");
} else {
    NSLog(@"Couldn't add data video output");
}
I'm getting both 'positive' confirmation messages. After that I'm calling:
NSString *assetPath = [self createAssetFilePath:@"mov"];
NSURL *outputURL = [[NSURL alloc] initFileURLWithPath:assetPath];
[self.fileOutput startRecordingToOutputFileURL:outputURL recordingDelegate:self];
[self.captureSession startRunning];
And then I have my delegate function:
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error {
    NSLog(@"Output File URL: %@", outputFileURL);
    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr) {
        NSLog(@"Error: %@", error);
        // Even with an error, the recording may still have finished successfully;
        // AVErrorRecordingSuccessfullyFinishedKey in the userInfo says whether it did.
        id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        NSLog(@"Error: %@", value);
        if (value) {
            recordedSuccessfully = [value boolValue];
        }
    }
}
And I'm getting no error, but the AVCaptureVideoDataOutput was working before I added the AVCaptureMovieFileOutput, and now it's not...
So... is it possible to do both? Any ideas?
Thanks!
The answer to this question: Simultaneous AVCaptureVideoDataOutput and AVCaptureMovieFileOutput indicates that you cannot have an AVCaptureVideoDataOutput and an AVCaptureMovieFileOutput attached to your session simultaneously. Unfortunately, I can't verify this in the Apple documentation. My experience is that I no longer receive messages to my AVCaptureVideoDataOutput's sampleBufferDelegate after I add an AVCaptureMovieFileOutput to the session's outputs, which seemingly backs up that assertion.
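A common workaround, sketched below on the assumption that you keep only the AVCaptureVideoDataOutput, is to write the sample buffers you already receive to a file yourself with AVAssetWriter, so a single output provides both the recording and the uncompressed frames. self.assetWriter and self.writerInput are hypothetical properties, and the codec settings are placeholders:

#import <AVFoundation/AVFoundation.h>

// Hypothetical setup, called once before the session starts running.
- (void)setUpWriterForURL:(NSURL *)outputURL
{
    NSError *error = nil;
    self.assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
                                                fileType:AVFileTypeQuickTimeMovie
                                                   error:&error];
    NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                AVVideoWidthKey  : @1280,   // placeholder dimensions
                                AVVideoHeightKey : @720 };
    self.writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                          outputSettings:settings];
    self.writerInput.expectsMediaDataInRealTime = YES;
    [self.assetWriter addInput:self.writerInput];
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Start the writer on the first frame, anchored at that frame's timestamp.
    if (self.assetWriter.status == AVAssetWriterStatusUnknown) {
        [self.assetWriter startWriting];
        [self.assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
    }
    if (self.writerInput.isReadyForMoreMediaData) {
        [self.writerInput appendSampleBuffer:sampleBuffer];
    }
    // The uncompressed frame is still available here for processing.
    // When finished: [self.writerInput markAsFinished];
    // then [self.assetWriter finishWritingWithCompletionHandler:^{ /* done */ }];
}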
So, I have a QTCaptureSession set up thusly:
//Setup Camera
NSError *error = nil;
cameraSession = [[QTCaptureSession alloc] init];
QTCaptureDevice *camera = [QTCaptureDevice deviceWithUniqueID: cameraID];
BOOL success = [camera open: &error];
if (!success || error)
{
    NSLog(@"Could not open device %@.", cameraID);
    NSLog(@"Error: %@", [error localizedDescription]);
    return nil;
}
//Setup Input Session
QTCaptureDeviceInput *cameraInput = [[QTCaptureDeviceInput alloc] initWithDevice: camera];
success = [cameraSession addInput: cameraInput error: &error];
if (!success || error)
{
    NSLog(@"Could not initialize input session.");
    NSLog(@"Error: %@", [error localizedDescription]);
    return nil;
}
//Setup Output
QTCaptureDecompressedVideoOutput *cameraOutput = [[QTCaptureDecompressedVideoOutput alloc] init];
[cameraOutput setDelegate: self];
success = [cameraSession addOutput: cameraOutput error: &error];
if (!success || error)
{
    NSLog(@"Could not initialize output session.");
    NSLog(@"Error: %@", [error localizedDescription]);
    return nil;
}
And the QTCaptureDecompressedVideoOutput delegate's captureOutput:didOutputVideoFrame:withSampleBuffer:fromConnection: thusly:
- (void)captureOutput:(QTCaptureOutput *)captureOutput didOutputVideoFrame:(CVImageBufferRef)videoFrame withSampleBuffer:(QTSampleBuffer *)sampleBuffer fromConnection:(QTCaptureConnection *)connection
{
    NSLog(@"starting convert\n");
}
I then start the capture processing using:
[cameraSession startRunning];
All of the variables initialize fine, and the session starts fine, but captureOutput:didOutputVideoFrame:withSampleBuffer:fromConnection: never gets called.
Context
This is a command-line app, compiled with GCC. It's linked against the following frameworks:
Foundation
Cocoa
QTKit
QuartzCore
Relevant Miscellany
Frames are likely not being dropped, because captureOutput:didDropVideoFrameWithSampleBuffer:fromConnection: is also not getting called.
So, with some help from Mike Ash, I managed to figure out that my program was terminating immediately and not waiting for the delegate callback (which, according to Apple's QTKit docs, might occur on a separate thread).
My solution was to add a BOOL property named captureIsFinished to my object, then add this to the main() function:
//Wait Until Capture is Finished
while (![snap captureIsFinished])
{
    [[NSRunLoop currentRunLoop] runUntilDate: [NSDate dateWithTimeIntervalSinceNow: 1]];
}
This effectively keeps the app's run loop alive in one-second increments, checking after each pass whether the capture has finished.
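For illustration, here is how the whole main() might look with that loop in place; Snapper and startCapture are hypothetical stand-ins for the program's actual capture class and setup method, while snap and captureIsFinished are the names from above:

#import <Foundation/Foundation.h>

int main(int argc, const char *argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    Snapper *snap = [[Snapper alloc] init]; // hypothetical capture object
    [snap startCapture];                    // hypothetical; sets up and starts the QTCaptureSession

    // Keep the run loop alive so QTKit can deliver the delegate callback,
    // which may arrive via a separate thread.
    while (![snap captureIsFinished])
    {
        [[NSRunLoop currentRunLoop] runUntilDate: [NSDate dateWithTimeIntervalSinceNow: 1]];
    }

    [pool drain];
    return 0;
}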
I have the following code:
BOOL success;
NSError *e = nil;
QTCaptureSession *session = [[QTCaptureSession alloc] init];
QTCaptureDevice *device = [QTCaptureDevice defaultInputDeviceWithMediaType: QTMediaTypeVideo];
success = [device open: &e];
if ( !success )
{
    NSLog(@"error opening input device: %@", e);
    return;
}
QTCaptureDeviceInput *input = [QTCaptureDeviceInput deviceInputWithDevice: device];
success = [session addInput: input error: &e];
if ( !success )
{
    NSLog(@"error adding input device to session: %@", e);
    return;
}
QTCaptureDecompressedVideoOutput *output = [[QTCaptureDecompressedVideoOutput alloc] init];
[output setDelegate: self];
success = [session addOutput: output error: &e];
if ( !success )
{
    NSLog(@"error adding output device to session: %@", e);
    return;
}
[session startRunning];
This is located in a bundle loaded at runtime and is part of a method that is the selector of an NSThread (i.e., it runs on a background thread, not the main thread).
My problem is that the call to addInput:error: never returns. What am I missing here?
The problem was that the main thread of the application was not running the run loop normally provided by NSApplicationMain() in Cocoa applications. Instead, I was handling events in my own while loop.
The solution was calling:
CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, YES);
See this thread on Apple's QuickTime mailing list for a more detailed explanation of the problem.
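For illustration, a minimal sketch of where that call fits in a hand-rolled event loop; keepRunning and handleEvents are hypothetical stand-ins for the app's own loop condition and event handling:

while (keepRunning)
{
    [self handleEvents]; // hypothetical: the app's own event handling

    // Give the main thread's run loop a chance to service the callbacks
    // QTKit schedules on it; without this, addInput:error: can block forever.
    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, true);
}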