I am developing an iPhone app. In it, there is a requirement to pause and resume the camera, so I used AVFoundation for that instead of UIImagePickerController.
My code is:
- (void) startup :(BOOL)isFrontCamera
{
    if (_session == nil)
    {
        NSLog(@"Starting up server");
        self.isCapturing = NO;
        self.isPaused = NO;
        _currentFile = 0;
        _discont = NO;

        // create capture device with video input
        _session = [[AVCaptureSession alloc] init];
        AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if(isFrontCamera)
        {
            NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
            AVCaptureDevice *captureDevice = nil;
            for (AVCaptureDevice *device in videoDevices)
            {
                if (device.position == AVCaptureDevicePositionFront)
                {
                    captureDevice = device;
                    break;
                }
            }
            cameraDevice = captureDevice;
        }
        cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:nil];
        [_session addInput:input];

        // audio input from default mic
        AVCaptureDevice* mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
        AVCaptureDeviceInput* micinput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:nil];
        [_session addInput:micinput];

        // create an output for YUV output with self as delegate
        _captureQueue = dispatch_queue_create("uk.co.gdcl.cameraengine.capture", DISPATCH_QUEUE_SERIAL);
        AVCaptureVideoDataOutput* videoout = [[AVCaptureVideoDataOutput alloc] init];
        [videoout setSampleBufferDelegate:self queue:_captureQueue];
        NSDictionary* setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                        [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey,
                                        nil];
        videoout.videoSettings = setcapSettings;
        [_session addOutput:videoout];
        _videoConnection = [videoout connectionWithMediaType:AVMediaTypeVideo];
        [_videoConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];
        NSDictionary* actual = videoout.videoSettings;
        _cy = [[actual objectForKey:@"Width"] integerValue];
        _cx = [[actual objectForKey:@"Height"] integerValue];

        AVCaptureAudioDataOutput* audioout = [[AVCaptureAudioDataOutput alloc] init];
        [audioout setSampleBufferDelegate:self queue:_captureQueue];
        [_session addOutput:audioout];
        _audioConnection = [audioout connectionWithMediaType:AVMediaTypeAudio];

        [_session startRunning];
        _preview = [AVCaptureVideoPreviewLayer layerWithSession:_session];
        _preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
    }
}
Here I am facing a problem when I switch to the front camera. When I call the above method after changing to the front camera, the preview layer gets stuck and no preview is shown. My doubt is: "Can we change the capture device in the middle of a capture session?" Please guide me on where I am going wrong, or suggest a solution for how to switch between the front and back cameras while recording.
Thanks in advance.
Yes, you can. There are just a few things you need to take care of:
You need to be using AVCaptureVideoDataOutput and its delegate for recording.
Make sure you remove the previous deviceInput before adding the new deviceInput.
You must remove and recreate the AVCaptureVideoDataOutput as well.
I am using these two functions for it right now, and it works while the session is running.
- (void)configureVideoWithDevice:(AVCaptureDevice *)camera {
    [_session beginConfiguration];

    [_session removeInput:_videoInputDevice];
    _videoInputDevice = nil;
    _videoInputDevice = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
    if ([_session canAddInput:_videoInputDevice]) {
        [_session addInput:_videoInputDevice];
    }

    [_session removeOutput:_videoDataOutput];
    _videoDataOutput = nil;
    _videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    [_videoDataOutput setSampleBufferDelegate:self queue:_outputQueueVideo];
    NSDictionary* setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey, nil];
    _videoDataOutput.videoSettings = setcapSettings;
    [_session addOutput:_videoDataOutput];
    _videoConnection = [_videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
    if([_videoConnection isVideoOrientationSupported]) {
        [_videoConnection setVideoOrientation:AVCaptureVideoOrientationLandscapeRight];
    }

    [_session commitConfiguration];
}

- (void)configureAudioWithDevice:(AVCaptureDevice *)microphone {
    [_session beginConfiguration];

    _audioInputDevice = [AVCaptureDeviceInput deviceInputWithDevice:microphone error:nil];
    if ([_session canAddInput:_audioInputDevice]) {
        [_session addInput:_audioInputDevice];
    }

    [_session removeOutput:_audioDataOutput];
    _audioDataOutput = nil;
    _audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
    [_audioDataOutput setSampleBufferDelegate:self queue:_outputQueueAudio];
    [_session addOutput:_audioDataOutput];
    _audioConnection = [_audioDataOutput connectionWithMediaType:AVMediaTypeAudio];

    [_session commitConfiguration];
}
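For completeness, a call site for switching to the front camera while recording might look roughly like this. This is just a sketch - the switchToFrontCamera method and the device lookup are mine, not part of the code above:

- (void)switchToFrontCamera {
    // Find the front-facing video device, falling back to the default camera if none exists.
    AVCaptureDevice *front = nil;
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (device.position == AVCaptureDevicePositionFront) {
            front = device;
            break;
        }
    }
    if (front == nil) {
        front = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    }
    // Reconfigure the running session around the new device.
    [self configureVideoWithDevice:front];
}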
You can't change the captureDevice mid-session, and you can only have one capture session running at a time. You could end the current session and create a new one. There will be a slight lag (maybe a second or two, depending on your CPU load).
I wish Apple would allow multiple sessions, or at least multiple devices per session... but they do not... yet.
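If you take that route, a rough sketch would be to stop the old session and build a new one around the other device. The restartSessionWithDevice: name here is mine; the audio input, outputs, and preview layer still need to be re-added exactly as in the original setup code:

- (void)restartSessionWithDevice:(AVCaptureDevice *)camera {
    // Tear down the old session.
    [_session stopRunning];
    _session = nil;

    // Build a fresh session around the new device.
    _session = [[AVCaptureSession alloc] init];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input && [_session canAddInput:input]) {
        [_session addInput:input];
    } else {
        NSLog(@"Could not add camera input: %@", error);
    }

    // ... re-add the audio input, data outputs, and preview layer here ...

    [_session startRunning];
}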
Have you considered having multiple sessions and then processing the video files afterwards to join them into one?
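If you go that route, AVMutableComposition can append the recorded clips back to back and AVAssetExportSession can write them out as a single movie. A minimal sketch (the method name and error handling are mine):

- (AVAssetExportSession *)exporterJoiningClipAtURL:(NSURL *)firstURL withClipAtURL:(NSURL *)secondURL {
    AVMutableComposition *composition = [AVMutableComposition composition];
    NSError *error = nil;
    for (NSURL *url in @[firstURL, secondURL]) {
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
        // Append each clip (video and audio tracks) at the current end of the composition.
        [composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
                             ofAsset:asset
                              atTime:composition.duration
                               error:&error];
        if (error) {
            NSLog(@"Could not append %@: %@", url, error);
            return nil;
        }
    }
    AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:composition
                                                                      presetName:AVAssetExportPresetHighestQuality];
    exporter.outputFileType = AVFileTypeQuickTimeMovie;
    // The caller sets exporter.outputURL and calls exportAsynchronouslyWithCompletionHandler:.
    return exporter;
}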
I have an iOS app with a simple UIView placed in the view controller. I am trying to show the feed of the front-facing camera in the UIView. I am not trying to take a picture or record a video; I simply want to show the live feed in a UIView.
I have tried to implement AVCaptureVideoPreviewLayer; however, the feed I get is blank. Nothing seems to happen. Here is my code:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input;
@try {
    input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
} @catch (NSException *exception) {
    NSLog(@"Error; %@", error);
} @finally {
    if (error == nil) {
        if ([session canAddInput:input]) {
            [session addInput:input];
            AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
            [stillImageOutput setOutputSettings:@{AVVideoCodecKey : AVVideoCodecJPEG}];
            if ([session canAddOutput:stillImageOutput]) {
                [session setSessionPreset:AVCaptureSessionPresetHigh];
                [session addOutput:stillImageOutput];
                AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
                [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
                [previewLayer.connection setVideoOrientation:AVCaptureVideoOrientationPortrait];
                [backgroundStreamView.layer addSublayer:previewLayer];
                [session startRunning];
                NSLog(@"session running");
            } else {
                NSLog(@"cannot add output");
            }
        } else {
            NSLog(@"cannot add input");
        }
    } else {
        NSLog(@"general error: %@", error);
    }
}
The session runs perfectly fine; however, no video feed is shown. What am I doing wrong?
Managed to fix it myself; it turned out to be a fairly simple issue - I didn't specify the frame of the AVCaptureVideoPreviewLayer, and as a result it was not appearing (presumably because it defaults to a frame size of zero).
To fix this I set the frame to match the frame of my custom UIView:
[previewLayer setFrame:backgroundStreamView.bounds];
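In context, the relevant part of the setup would then look roughly like this (reusing previewLayer and backgroundStreamView from the question):

AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
[previewLayer setFrame:backgroundStreamView.bounds]; // a non-zero frame, so the preview is actually visible
[backgroundStreamView.layer addSublayer:previewLayer];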
Deprecation code fix
AVCaptureStillImageOutput is also deprecated, so to fix that I replaced it with the AVCapturePhotoOutput class. Thus the code changed from:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[stillImageOutput setOutputSettings:@{AVVideoCodecKey : AVVideoCodecJPEG}];
to the following:
AVCapturePhotoOutput *stillImageOutput = [[AVCapturePhotoOutput alloc] init];
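Note that AVCapturePhotoOutput is not a drop-in replacement when it comes to actually taking a picture: the JPEG format is requested per capture and the result is delivered through AVCapturePhotoCaptureDelegate. A minimal sketch of how a capture might then be triggered, assuming iOS 11+ and that self adopts AVCapturePhotoCaptureDelegate:

// Trigger a capture (the output must already be attached to a running session).
AVCapturePhotoSettings *settings =
    [AVCapturePhotoSettings photoSettingsWithFormat:@{AVVideoCodecKey : AVVideoCodecTypeJPEG}];
[stillImageOutput capturePhotoWithSettings:settings delegate:self];

// Delegate callback with the captured photo:
- (void)captureOutput:(AVCapturePhotoOutput *)output
didFinishProcessingPhoto:(AVCapturePhoto *)photo
                error:(NSError *)error {
    NSData *jpegData = [photo fileDataRepresentation];
    // Use the JPEG data, e.g. [UIImage imageWithData:jpegData].
}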
My application crashes when scanning barcodes using AVFoundation.
Following is my code:
_session = [[AVCaptureSession alloc] init];
_device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;

_input = [AVCaptureDeviceInput deviceInputWithDevice:_device error:&error];
if (_input) {
    [_session addInput:_input];
} else {
    NSLog(@"Error: %@", error);
}

_output = [[AVCaptureMetadataOutput alloc] init];
[_output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[_session addOutput:_output];
_output.metadataObjectTypes = [_output availableMetadataObjectTypes];

_prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_prevLayer.frame = _previewView.bounds;
_prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
//[self.view.layer addSublayer:_prevLayer];
[_previewView.layer addSublayer:_prevLayer];
//[self.view];
//[_session startRunning];
[_previewView bringSubviewToFront:_highlightView];
/* code ends */
It is showing a bad access error.
[_previewView.layer addSublayer:_prevLayer];
This line comes after the frame is set. Try adding the layer first and then setting the frame. I'm sure you've moved on, but this answer could benefit someone else.
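In other words, something along these lines (a sketch using the same ivar names as the question):

_prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[_previewView.layer addSublayer:_prevLayer]; // add the layer first...
_prevLayer.frame = _previewView.bounds;      // ...then size it to the container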
I am using an AVCaptureSession, AVCaptureDevice, AVCaptureDeviceInput, AVCaptureMetadataOutput, and an AVCaptureVideoPreviewLayer in order to scan barcodes. I am having a bit of trouble reading inverted barcodes (white bars on a black background).
I've tried inverting the actual barcode data's bytes by getting the bytes from the metadataObjects in the captureOutput method and using byte = ~byte. That didn't work out too well, as the metadataObject wouldn't properly convert to an NSData object.
I hoped this would invert the barcode data, but it was a mess.
I just don't know what to do - whether I have to pass the image through a filter and then decode the data, or something else.
Can anyone point me in the right direction?
_session = [[AVCaptureSession alloc] init];
_device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;

_input = [AVCaptureDeviceInput deviceInputWithDevice:_device error:&error];
if (_input) {
    [_session addInput:_input];
} else {
    NSLog(@"Error: %@", error);
}

_output = [[AVCaptureMetadataOutput alloc] init];
[_output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[_session addOutput:_output];
_output.metadataObjectTypes = [_output availableMetadataObjectTypes];

_prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_prevLayer.frame = self.view.bounds;
_prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:_prevLayer];

[_session startRunning];
This is how I'm initializing the AVCaptureSession, AVCaptureDevice, AVCaptureDeviceInput, AVCaptureMetadataOutput, and the AVCaptureVideoPreviewLayer.
I'm now trying to integrate the GPUImage framework into my project, but I keep getting a "GPUImage.h" file not found error when I try to import the main header file.
I'm working on an application that uses the zxing library to read QR codes. I have a problem with ZxingWidgetController: when the view is shown while the application is in the background/not active (e.g. the screen is locked), the image from the camera is not shown on screen - only the background is visible, and the scanner does not appear to work.
When I call the initCapture method again, the video from the camera appears after a short delay, but in that case, every time the application loses activity I need to reinitialize the scanner - this behavior is not comfortable at all.
This bug can be reproduced in almost all applications that use zXing, so I suppose it is a zXing bug.
The zXing initCapture method code is:
- (void)initCapture {
#if HAS_AVFF
    AVCaptureDeviceInput *captureInput =
        [AVCaptureDeviceInput deviceInputWithDevice:
            [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]
                                              error:nil];
    if(!captureInput)
    {
        NSLog(@"ERROR - CaptureInputNotInitialized");
    }

    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;
    if(!captureOutput)
    {
        NSLog(@"ERROR - CaptureOutputNotInitialized");
    }
    [captureOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];

    self.captureSession = [[[AVCaptureSession alloc] init] autorelease];
    self.captureSession.sessionPreset = AVCaptureSessionPresetMedium; // 480x360 on a 4

    if([self.captureSession canAddInput:captureInput])
    {
        [self.captureSession addInput:captureInput];
    }
    else
    {
        NSLog(@"ERROR - cannot add input");
    }
    if([self.captureSession canAddOutput:captureOutput])
    {
        [self.captureSession addOutput:captureOutput];
    }
    else
    {
        NSLog(@"ERROR - cannot add output");
    }
    [captureOutput release];

    if (!self.prevLayer)
    {
        [self.prevLayer release];
    }
    self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    // NSLog(@"prev %p %@", self.prevLayer, self.prevLayer);
    self.prevLayer.frame = self.view.bounds;
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer: self.prevLayer];

    [self.captureSession startRunning];
#endif
}
Maybe you guys know what is wrong?
I don't understand your question. If the application is in the background/not active, of course it can't work. You should make your question clearer.
Each time I start an AVCaptureSession I receive a memory warning, leading to crashes over time.
I'm starting the session asynchronously, and the Instruments tool says the application consumes about 2 MB of memory.
Do you have any idea how to overcome that issue? Is 2 MB of allocated memory too much?
Thanks!
[iOS 4.3, ARC]
@autoreleasepool {
    // Init capture session
    session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetPhoto;

    // Resize container view
    CGRect cameraContainerFrame = cameraContainerView.frame;
    cameraContainerFrame.size = CGSizeMake(320, 426);
    cameraContainerView.frame = cameraContainerFrame;
    CALayer *viewLayer = [cameraContainerView layer];
    [viewLayer setMasksToBounds:YES];

    // Create preview layer
    captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    CGRect bounds = [cameraContainerView bounds];
    [captureVideoPreviewLayer setFrame:bounds];
    if ([captureVideoPreviewLayer isOrientationSupported]) {
        [captureVideoPreviewLayer setOrientation:AVCaptureVideoOrientationPortrait];
    }
    [captureVideoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    [viewLayer addSublayer:captureVideoPreviewLayer];

    // Get input device
    captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if ([captureDevice lockForConfiguration:nil]){
        captureDevice.focusMode = AVCaptureFocusModeContinuousAutoFocus;
        captureDevice.whiteBalanceMode = AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance;
        [captureDevice unlockForConfiguration];
    }

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
    if (!input) {
        // Handle the error appropriately.
        DLog(@"ERROR: trying to open camera: %@", error);
    }

    // Add input to session
    [session addInput:input];

    // Output
    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [NSDictionary dictionaryWithObject:AVVideoCodecJPEG forKey:AVVideoCodecKey];
    [stillImageOutput setOutputSettings:outputSettings];
    [session addOutput:stillImageOutput];

    // Save state
    cameraSessionInitialized = YES;
    [session startRunning];
}
session.sessionPreset = AVCaptureSessionPresetMedium;
If you don't care about quality, this does get rid of the memory warnings. I'm still trying to figure out how to get it working with the AVCaptureSessionPresetPhoto preset.