AVCaptureSession preset photo and AVCaptureVideoPreviewLayer size - objective-c

I initialize an AVCaptureSession and set its preset like this:
AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
if ([newCaptureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
    newCaptureSession.sessionPreset = AVCaptureSessionPresetPhoto;
} else {
    // Error management
}
Then I set up an AVCaptureVideoPreviewLayer:
self.preview = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height/*426*/)];
CALayer *previewLayer = preview.layer;
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
captureVideoPreviewLayer.frame = previewLayer.frame;
[previewLayer addSublayer:captureVideoPreviewLayer];
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspect;
My question is:
How can I get the exact CGSize needed to display the whole captureVideoPreviewLayer on screen? More precisely, I need the height, since AVLayerVideoGravityResizeAspect makes the AVCaptureVideoPreviewLayer fit the preview's size.
I'm trying to get the AVCaptureVideoPreviewLayer size that fits correctly.
Thank you very much for your help.

After some research: with AVCaptureSessionPresetPhoto, the AVCaptureVideoPreviewLayer respects the 3:4 ratio of the iPhone camera, so it's easy to get the right height with simple arithmetic.
For instance, if the width is 320, the matching height is:
320*4/3 = 426.6
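In code, a minimal sketch of that calculation (assuming the preview width is taken from the view's bounds and the 4:3 sensor ratio holds for the photo preset):
// Sketch: derive the preview height from the width, assuming the photo
// preset keeps the iPhone camera's 4:3 (portrait 3:4) aspect ratio.
CGFloat previewWidth = self.view.bounds.size.width;   // e.g. 320
CGFloat previewHeight = previewWidth * 4.0f / 3.0f;   // e.g. 426.7
captureVideoPreviewLayer.frame = CGRectMake(0.0f, 0.0f, previewWidth, previewHeight);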

Weischel's code didn't work for me. The idea worked, but the code didn't. Here's the code that did work:
// Get your AVCaptureSession somehow. I'm getting mine out of self.videoCamera, which is a GPUImageVideoCamera
// Get the appropriate AVCaptureVideoDataOutput out of the capture session. I only have one session, so it's easy.
AVCaptureVideoDataOutput *output = [[[self.videoCamera captureSession] outputs] lastObject];
NSDictionary* outputSettings = [output videoSettings];
// AVVideoWidthKey and AVVideoHeightKey did not work. I had to use these literal keys.
long width = [[outputSettings objectForKey:@"Width"] longValue];
long height = [[outputSettings objectForKey:@"Height"] longValue];
// Video camera output dimensions are always for landscape mode. Transpose if your camera is in portrait mode.
if (UIInterfaceOrientationIsPortrait([self.videoCamera outputImageOrientation])) {
    long buf = width;
    width = height;
    height = buf;
}
CGSize outputSize = CGSizeMake(width, height);
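If you then want the preview height the original question asks about, a hedged follow-up sketch (assuming you use the full view width and only need the matching height; captureVideoPreviewLayer is the layer from the question, not from this answer):
// Sketch: match the preview layer's height to the capture output's aspect ratio.
CGFloat previewWidth = self.view.bounds.size.width;
CGFloat previewHeight = previewWidth * outputSize.height / outputSize.width;
captureVideoPreviewLayer.frame = CGRectMake(0.0f, 0.0f, previewWidth, previewHeight);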

If I understand you correctly, you're trying to get the width & height of the current video session.
You can obtain them from the outputSettings dictionary of your AVCaptureOutput (use AVVideoWidthKey & AVVideoHeightKey).
e.g.
NSDictionary *outputSettings = [movieFileOutput outputSettingsForConnection:videoConnection];
CGSize videoSize = CGSizeMake([[outputSettings objectForKey:AVVideoWidthKey] doubleValue], [[outputSettings objectForKey:AVVideoHeightKey] doubleValue]);
Update:
Another idea would be to grab the frame size from the image buffer of the preview session.
Implement the AVCaptureVideoDataOutputSampleBufferDelegate method captureOutput:didOutputSampleBuffer:fromConnection:
(don't forget to set the delegate of your AVCaptureOutput)
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (imageBuffer != NULL)
    {
        CGSize imageSize = CVImageBufferGetDisplaySize(imageBuffer);
        NSLog(@"%@", NSStringFromCGSize(imageSize));
    }
}

Thanks to gsempe for your answer. I've been stuck on the same problem for hours :)
I solved it with this code, which centers the layer on the screen in landscape mode:
CGRect layerRect = [[[self view] layer] bounds];
[PreviewLayer setBounds:CGRectMake(0, 0, 426.6, 320)];
[PreviewLayer setPosition:CGPointMake(CGRectGetMidY(layerRect), CGRectGetMidX(layerRect))];
Note that I had to swap the CGRectGetMidY() and CGRectGetMidX() functions to get the layer centered on my screen.
Thanks,
Julian

Related

Image resized when using NSUrl [duplicate]

I've noticed that sometimes the NSImage size is not the real size (with some pictures), while the CIImage size is always real. I was testing with this image.
This is the source code I wrote for testing:
NSImage *_imageNSImage = [[NSImage alloc] initWithContentsOfFile:@"<path to image>"];
NSSize _dimensions = [_imageNSImage size];
[_imageNSImage release];
NSLog(@"Width from NSImage: %f", _dimensions.width);
NSLog(@"Height from NSImage: %f", _dimensions.height);
NSURL *_myURL = [NSURL fileURLWithPath:@"<path to image>"];
CIImage *_imageCIImage = [CIImage imageWithContentsOfURL:_myURL];
NSRect _rectFromCIImage = [_imageCIImage extent];
NSLog(@"Width from CIImage: %f", _rectFromCIImage.size.width);
NSLog(@"Height from CIImage: %f", _rectFromCIImage.size.height);
And the output shows two different sizes for the same file.
How can that be? Am I doing something wrong?
The NSImage size method returns size information that is screen-resolution dependent. To get the size represented in the actual image file, you need to use an NSImageRep. You can get an NSImageRep from an NSImage using the representations method. Alternatively, you can create NSBitmapImageRep instances directly like this:
NSArray *imageReps = [NSBitmapImageRep imageRepsWithContentsOfFile:@"<path to image>"];
NSInteger width = 0;
NSInteger height = 0;
for (NSImageRep *imageRep in imageReps) {
    if ([imageRep pixelsWide] > width) width = [imageRep pixelsWide];
    if ([imageRep pixelsHigh] > height) height = [imageRep pixelsHigh];
}
NSLog(@"Width from NSBitmapImageRep: %f", (CGFloat)width);
NSLog(@"Height from NSBitmapImageRep: %f", (CGFloat)height);
The loop takes into account that some image formats may contain more than a single image (such as TIFFs for example).
You can create an NSImage at this size by using the following:
NSImage * imageNSImage = [[NSImage alloc] initWithSize:NSMakeSize((CGFloat)width, (CGFloat)height)];
[imageNSImage addRepresentations:imageReps];
The NSImage size method returns the size in points. To get the size in pixels, you need to inspect the NSImage.representations property, which contains an array of NSImageRep objects with pixelsWide/pixelsHigh properties, and then simply update the NSImage object's size:
@implementation ViewController {
    __weak IBOutlet NSImageView *imageView;
}
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do view setup here.
    NSImage *image = [[NSImage alloc] initWithContentsOfFile:@"/Users/username/test.jpg"];
    if (image.representations && image.representations.count > 0) {
        long lastSquare = 0, curSquare;
        NSImageRep *imageRep;
        for (imageRep in image.representations) {
            curSquare = imageRep.pixelsWide * imageRep.pixelsHigh;
            if (curSquare > lastSquare) {
                image.size = NSMakeSize(imageRep.pixelsWide, imageRep.pixelsHigh);
                lastSquare = curSquare;
            }
        }
        imageView.image = image;
        NSLog(@"%.0fx%.0f", image.size.width, image.size.height);
    }
}
@end
Thanks to Zenopolis for the original ObjC code, here's a nice concise Swift version:
func sizeForImageAtURL(url: NSURL) -> CGSize? {
guard let imageReps = NSBitmapImageRep.imageRepsWithContentsOfURL(url) else { return nil }
return imageReps.reduce(CGSize.zero, combine: { (size: CGSize, rep: NSImageRep) -> CGSize in
return CGSize(width: max(size.width, CGFloat(rep.pixelsWide)), height: max(size.height, CGFloat(rep.pixelsHigh)))
})
}
If your file contains only one image, you can just use this:
let rep = image.representations[0]
let imageSize = NSSize(width: rep.pixelsWide, height: rep.pixelsHigh)
image is your NSImage, imageSize is the image size in pixels.
Copied and updated from here: https://stackoverflow.com/a/13228091/3608824
NSImage's size property returns size information that depends on the screen resolution and scaling configuration.
You can get the real size of the image with the following extension:
extension NSImage {
    var sizeReal: NSSize {
        guard representations.count > 0 else { return NSSize(width: 0, height: 0) }
        let rep = self.representations[0]
        return NSSize(width: rep.pixelsWide, height: rep.pixelsHigh)
    }
}

Method to resize CALayer frame on window resize?

I draw a series of images to various CALayer sublayers, then add those sublayers to a superlayer:
- (void)renderImagesFromArray:(NSArray *)array {
    CALayer *superLayer = [CALayer layer];
    for (id object in array) {
        CALayer *subLayer = [CALayer layer];
        // Disregard...
        NSURL *path = [NSURL fileURLWithPathComponents:@[NSHomeDirectory(), @"Desktop", object]];
        NSImage *image = [[NSImage alloc] initWithContentsOfURL:path];
        [self positionImage:image layer:subLayer];
        subLayer.contents = image;
        subLayer.hidden = YES;
        [superLayer addSublayer:subLayer];
    }
    [self.view setLayer:superLayer];
    [self.view setWantsLayer:YES];
    // Show top layer
    CALayer *top = superLayer.sublayers[0];
    top.hidden = NO;
}
I then call [self positionImage:layer:] to stretch the CALayer to its maximum bounds (essentially using the algorithm for the CSS cover property) and position it in the center of the window:
- (void)positionImage:(NSImage *)image layer:(CALayer *)layer {
    float imageWidth = image.size.width;
    float imageHeight = image.size.height;
    float frameWidth = self.view.frame.size.width;
    float frameHeight = self.view.frame.size.height;
    float aspectRatioFrame = frameWidth/frameHeight;
    float aspectRatioImage = imageWidth/imageHeight;
    float computedImageWidth;
    float computedImageHeight;
    float verticalSpace;
    float horizontalSpace;
    if (aspectRatioImage <= aspectRatioFrame) {
        computedImageWidth = frameHeight * aspectRatioImage;
        computedImageHeight = frameHeight;
        verticalSpace = 0;
        horizontalSpace = (frameWidth - computedImageWidth)/2;
    } else {
        computedImageWidth = frameWidth;
        computedImageHeight = frameWidth / aspectRatioImage;
        horizontalSpace = 0;
        verticalSpace = (frameHeight - computedImageHeight)/2;
    }
    [CATransaction flush];
    [CATransaction begin];
    [CATransaction setDisableActions:YES];
    layer.frame = CGRectMake(horizontalSpace, verticalSpace, computedImageWidth, computedImageHeight);
    [CATransaction commit];
}
This all works fine, except when the window gets resized. I solved this (in a very ugly way) by subclassing NSView, then implementing the only method that was actually called when the window was resized, viewWillDraw:
- (void)viewWillDraw {
    [super viewWillDraw];
    [self redraw];
}

- (void)redraw {
    AppDelegate *appDelegate = (AppDelegate *)[[NSApplication sharedApplication] delegate];
    CALayer *superLayer = self.layer;
    NSArray *sublayers = superLayer.sublayers;
    NSImage *image;
    CALayer *current;
    for (CALayer *view in sublayers) {
        if (!view.isHidden) {
            current = view;
            image = view.contents;
        }
    }
    [appDelegate positionImage:image layer:current];
}
So... what's the right way to do this? viewWillDraw gets called too many times, which means I have to do unnecessary and redundant calculations, and I can't use viewWillStartLiveResize because I need to constantly keep the image in its correct position. What am I overlooking?
Peter Hosey was right; my original method was clunky, and I shouldn't have been overriding setNeedsDisplayInRect:. I first made sure that I was using Auto Layout in my app, then implemented the following:
subLayer.layoutManager = [CAConstraintLayoutManager layoutManager];
subLayer.autoresizingMask = kCALayerHeightSizable | kCALayerWidthSizable;
subLayer.contentsGravity = kCAGravityResizeAspect;
Basically, I set the sublayer's autoResizingMask to stretch both horizontally and vertically, and then set contentsGravity to preserve the aspect ratio.
That last setting I found by chance, but it's worth noting that you can only use a few contentsGravity constants if, like in my case, you're setting an NSImage as the layer's contents:
That method creates an image that is suited for use as the contents of a layer and that supports all of the layer's gravity modes. By contrast, the NSImage class supports only the kCAGravityResize, kCAGravityResizeAspect, and kCAGravityResizeAspectFill modes.
Always fun when a complicated solution can be simplified to 3 lines of code.

AVFoundation crop captured still image according to the preview aspect ratio

My question is mostly similar to this one:
Cropping image captured by AVCaptureSession
I have an application which uses AVFoundation for capturing still images. My AVCaptureVideoPreviewLayer has AVLayerVideoGravityResizeAspectFill video gravity, so the preview picture shown to the user is cropped at the top and the bottom.
When the user presses the "Capture" button, the image that is actually captured differs from the preview picture shown to the user. My question is how to crop the captured image accordingly?
Thanks in advance.
I used the UIImage+Resize category provided here with some new methods I wrote to do the job. I reformatted some of the code to look better; it's not tested, but it should work. :)
- (UIImage *)cropAndResizeAspectFillWithSize:(CGSize)targetSize
                        interpolationQuality:(CGInterpolationQuality)quality {
    UIImage *outputImage = nil;
    UIImage *imageForProcessing = self;
    // crop center square (AspectFill)
    if (self.size.width != self.size.height) {
        CGFloat shorterLength = 0;
        CGPoint origin = CGPointZero;
        if (self.size.width > self.size.height) {
            origin.x = (self.size.width - self.size.height)/2;
            shorterLength = self.size.height;
        }
        else {
            origin.y = (self.size.height - self.size.width)/2;
            shorterLength = self.size.width;
        }
        imageForProcessing = [imageForProcessing normalizedImage];
        imageForProcessing = [imageForProcessing croppedImage:CGRectMake(origin.x, origin.y, shorterLength, shorterLength)];
    }
    outputImage = [imageForProcessing resizedImage:targetSize interpolationQuality:quality];
    return outputImage;
}
// fix image orientation, which may wrongly rotate the output.
- (UIImage *)normalizedImage {
    if (self.imageOrientation == UIImageOrientationUp) return self;
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    [self drawInRect:(CGRect){0, 0, self.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalizedImage;
}
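A usage sketch (capturedImage stands for the UIImage you got back from your still image output, and the target size is just one plausible choice; since the category crops a centered square first, it suits a square-ish AspectFill preview best):
// Sketch: crop and resize the captured photo to roughly match the AspectFill preview.
// capturedImage is assumed to exist; replace it with your actual captured UIImage.
CGSize targetSize = CGSizeMake(640.0f, 640.0f); // whatever size your UI actually displays
UIImage *previewMatchedImage = [capturedImage cropAndResizeAspectFillWithSize:targetSize
                                                          interpolationQuality:kCGInterpolationHigh];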

UIScrollView setZoomScale not working?

I am struggling with my UIScrollView, trying to get it to zoom in on the underlying UIImageView. In my view controller I set:
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView {
return myImageView;
}
In the viewDidLoad method I try to set the zoomScale as follows (note: the UIImageView and its image are set in Interface Builder):
- (void)viewDidLoad {
[super viewDidLoad];
myScrollView.contentSize = CGSizeMake(myImageView.frame.size.width, myImageView.frame.size.height);
myScrollView.contentOffset = CGPointMake(941.0, 990.0);
myScrollView.minimumZoomScale = 0.1;
myScrollView.maximumZoomScale = 10.0;
myScrollView.zoomScale = 0.7;
myScrollView.clipsToBounds = YES;
myScrollView.delegate = self;
NSLog(@"zoomScale: %.1f, minZoomScale: %.3f", myScrollView.zoomScale, myScrollView.minimumZoomScale);
}
I tried a few variations of this, but the NSLog always shows a zoomScale of 1.0.
Any ideas where I screwed this one up?
I finally got this to work. What caused the problem was the delegate assignment being at the end. I moved it up and... here we go.
The new code looks like this:
- (void)viewDidLoad {
[super viewDidLoad];
myScrollView.delegate = self;
myScrollView.contentSize = CGSizeMake(myImageView.frame.size.width, myImageView.frame.size.height);
myScrollView.contentOffset = CGPointMake(941.0, 990.0);
myScrollView.minimumZoomScale = 0.1;
myScrollView.maximumZoomScale = 10.0;
myScrollView.zoomScale = 0.7;
myScrollView.clipsToBounds = YES;
}
Here is another example I made. This one uses an image that is included in the resource folder. Compared to yours, this one adds the UIImageView to the view as a subview and then sets up zooming on the whole view.
-(void)viewDidLoad{
[super viewDidLoad];
UIImage *image = [UIImage imageNamed:@"random.jpg"];
imageView = [[UIImageView alloc] initWithImage:image];
[self.view addSubview:imageView];
[(UIScrollView *) self.view setContentSize:[image size]];
[(UIScrollView *) self.view setMaximumZoomScale:2.0];
[(UIScrollView *) self.view setMinimumZoomScale:0.5];
}
I know this is quite late as answers go, but the problem is that your code sets zoomScale before it sets the delegate. You are right that the other settings don't require the delegate, but zoomScale does, because it has to be able to call back when the zoom is complete. At least that's how I think it works.
My code must be completely crazy, because the scale I use is the opposite of what tutorials and others are doing. For me, minScale = 1, which indicates that the image is fully zoomed out and fits the UIImageView that contains it.
Here's my code:
[self.imageView setImage:image];
// Makes the content size the same size as the imageView size.
// Since the image size and the scroll view size should be the same, the scroll view shouldn't scroll, only bounce.
self.scrollView.contentSize = self.imageView.frame.size;
// despite what tutorials say, the scale actually goes from one (image sized to fit screen) to max (image at actual resolution)
CGRect scrollViewFrame = self.scrollView.frame;
CGFloat minScale = 1;
// max is calculated by finding the max ratio factor of the image size to the scroll view size (which will change based on the device)
CGFloat scaleWidth = image.size.width / scrollViewFrame.size.width;
CGFloat scaleHeight = image.size.height / scrollViewFrame.size.height;
self.scrollView.maximumZoomScale = MAX(scaleWidth, scaleHeight);
self.scrollView.minimumZoomScale = minScale;
// ensure we are zoomed out fully
self.scrollView.zoomScale = minScale;
This works as I expect. When I load the image into the UIImageView, it is fully zoomed out. I can then zoom in and then I can pan the image.

Show bounding box of UIImageView in UIView

I have written a class extending UIImageView in order to allow me to dynamically generate bricks on screen. The brick is a 20x10 PNG.
Here is my code:
- (id) initBrick:(NSInteger *)str x:(float)ptX y:(float)ptY {
int brickIndex = arc4random() % 10 + 1;
NSString *filename = [NSString stringWithFormat:@"brick%d.png", brickIndex];
UIImage *brickImage = [UIImage imageNamed:filename];
CGRect imageRect = CGRectMake(0.0f, 0.0f, 20.0f, 10.0f);
[self initWithFrame:imageRect];
[self setImage:brickImage];
self.center = CGPointMake(ptX, ptY);
self.opaque = YES;
self.isDead = NO;
return self;
}
Then, I have a simple collision detection function in the same class:
- (BOOL)checkHit:(CGRect)frame {
    if (CGRectIntersectsRect(self.frame, frame)) {
        isDead = YES;
        return YES;
    } else {
        return NO;
    }
}
But the collision detection does not work well.
The bounding box seems a bit lower than my image.
How can I show the bounding box so that I can check the collision?
If the code is unclear, I can supply more information.
You could set the background color to be sure the problem is not caused by the image, but if the image is a simple opaque rectangle, it should be fine. I'd set a breakpoint in the checkHit method, see what self.frame gives, and think for a while; it can't be too hard.
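To actually see the bounding box on screen, one debugging option (a sketch, not part of the original answer) is to outline the brick view's backing layer, for example in its init method:
// Debug sketch: outline the brick's frame so its bounding box is visible.
// (Requires QuartzCore; add #import <QuartzCore/QuartzCore.h> if needed.)
self.layer.borderWidth = 1.0f;
self.layer.borderColor = [[UIColor redColor] CGColor];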
And as for the checkHit method, you should either rename it to checkAndSetHit, or (better) do not set the dead flag there:
- (BOOL) checkHit: (CGRect) frame
{
return CGRectIntersectsRect(self.frame, frame);
}
The code would read even a tiny little bit better if you renamed it to hitsFrame or intersectsFrame, but that’s nitpicking.