Image resized when using NSUrl [duplicate] - objective-c

I've noticed that sometimes the NSImage size is not the real size (with some pictures), while the CIImage size is always the real size. I was testing with this image.
This is the source code I wrote for testing:
NSImage *_imageNSImage = [[NSImage alloc] initWithContentsOfFile:@"<path to image>"];
NSSize _dimensions = [_imageNSImage size];
[_imageNSImage release];
NSLog(@"Width from NSImage: %f", _dimensions.width);
NSLog(@"Height from NSImage: %f", _dimensions.height);
NSURL *_myURL = [NSURL fileURLWithPath:@"<path to image>"];
CIImage *_imageCIImage = [CIImage imageWithContentsOfURL:_myURL];
NSRect _rectFromCIImage = [_imageCIImage extent];
NSLog(@"Width from CIImage: %f", _rectFromCIImage.size.width);
NSLog(@"Height from CIImage: %f", _rectFromCIImage.size.height);
And the output shows two different sizes. How can that be? Am I doing something wrong?

The NSImage size method returns size information that is screen-resolution dependent. To get the size represented in the actual image file you need to use an NSImageRep. You can get an NSImageRep from an NSImage using the representations method. Alternatively, you can create NSBitmapImageRep (an NSImageRep subclass) instances directly, like this:
NSArray *imageReps = [NSBitmapImageRep imageRepsWithContentsOfFile:@"<path to image>"];
NSInteger width = 0;
NSInteger height = 0;
for (NSImageRep *imageRep in imageReps) {
    if ([imageRep pixelsWide] > width) width = [imageRep pixelsWide];
    if ([imageRep pixelsHigh] > height) height = [imageRep pixelsHigh];
}
NSLog(@"Width from NSBitmapImageRep: %f", (CGFloat)width);
NSLog(@"Height from NSBitmapImageRep: %f", (CGFloat)height);
The loop takes into account that some image formats (such as TIFF) may contain more than a single image.
You can create an NSImage at this size by using the following:
NSImage * imageNSImage = [[NSImage alloc] initWithSize:NSMakeSize((CGFloat)width, (CGFloat)height)];
[imageNSImage addRepresentations:imageReps];

The NSImage size method returns the size in points. To get the size in pixels you need to inspect the NSImage.representations property, which contains an array of NSImageRep objects with pixelsWide/pixelsHigh properties, and then simply change the size of the NSImage object:
@implementation ViewController {
    __weak IBOutlet NSImageView *imageView;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do view setup here.
    NSImage *image = [[NSImage alloc] initWithContentsOfFile:@"/Users/username/test.jpg"];
    if (image.representations && image.representations.count > 0) {
        long lastSquare = 0, curSquare;
        NSImageRep *imageRep;
        for (imageRep in image.representations) {
            curSquare = imageRep.pixelsWide * imageRep.pixelsHigh;
            if (curSquare > lastSquare) {
                image.size = NSMakeSize(imageRep.pixelsWide, imageRep.pixelsHigh);
                lastSquare = curSquare;
            }
        }
        imageView.image = image;
        NSLog(@"%.0fx%.0f", image.size.width, image.size.height);
    }
}

@end

Thanks to Zenopolis for the original ObjC code, here's a nice concise Swift version:
func sizeForImageAtURL(url: NSURL) -> CGSize? {
    guard let imageReps = NSBitmapImageRep.imageRepsWithContentsOfURL(url) else { return nil }
    return imageReps.reduce(CGSize.zero, combine: { (size: CGSize, rep: NSImageRep) -> CGSize in
        return CGSize(width: max(size.width, CGFloat(rep.pixelsWide)), height: max(size.height, CGFloat(rep.pixelsHigh)))
    })
}
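For example, usage might look like this (the path here is hypothetical):
if let size = sizeForImageAtURL(NSURL(fileURLWithPath: "/tmp/test.jpg")) {
    print("pixel size: \(size)")  // CGSize in pixels, or nil if the file can't be read as an image
}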

If your file contains only one image, you can just use this:
let rep = image.representations[0]
let imageSize = NSSize(width: rep.pixelsWide, height: rep.pixelsHigh)
image is your NSImage, imageSize is the image size in pixels.
Copied and updated here: https://stackoverflow.com/a/13228091/3608824

NSImage's size property returns size information that depends on the screen resolution and scaling configuration.
You can get the real size of the image with the following extension:
extension NSImage {
    var sizeReal: NSSize {
        guard representations.count > 0 else { return NSSize(width: 0, height: 0) }
        let rep = self.representations[0]
        return NSSize(width: rep.pixelsWide, height: rep.pixelsHigh)
    }
}
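Usage might look like this (the path is hypothetical):
if let image = NSImage(contentsOfFile: "/tmp/test.jpg") {
    print(image.sizeReal)  // pixel dimensions taken from the first representation
}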

Related

Objective C Memory Crash, UIImageView

My app assigns and displays an image inside of a UIImageView. This happens with 24 image views, all in viewDidLoad. The images are assigned randomly from a list of fifty images. The view controller is presented modally from the main screen. The first time, it takes a while to load. If I'm lucky, the view loads a second time. The third time, it almost always crashes. I've tried resizing the images to around 200 pixels. I've tried assigning the images with:
image1 = [UIImage imageNamed:@"image1.png"];
[self.imageView setImage:image1];
and also with:
NSString *imagePath = [[NSBundle mainBundle] pathForResource:@"image1" ofType:@"png"];
image1 = [[UIImage alloc] initWithContentsOfFile:imagePath];
This second one seemed to only make things worse.
I also tried running the app with Instruments, which didn't recognize any memory leaks.
I really don't know where else to turn. This app represents an enormous investment of time and I would really like to see this problem resolved...
Thank you so much
The most efficient way to load a smaller version of an image from disk is this: instead of using imageNamed:, use the Image I/O framework to request a thumbnail that is the actual size you'll be displaying, by calling CGImageSourceCreateThumbnailAtIndex. Here's the example from my book:
NSURL *url =
    [[NSBundle mainBundle] URLForResource:@"colson"
                            withExtension:@"jpg"];
CGImageSourceRef src =
    CGImageSourceCreateWithURL((__bridge CFURLRef)url, nil);
CGFloat scale = [UIScreen mainScreen].scale;
CGFloat w = self.iv.bounds.size.width * scale;
NSDictionary *d =
    @{(id)kCGImageSourceShouldAllowFloat: (id)kCFBooleanTrue,
      (id)kCGImageSourceCreateThumbnailWithTransform: (id)kCFBooleanTrue,
      (id)kCGImageSourceCreateThumbnailFromImageAlways: (id)kCFBooleanTrue,
      (id)kCGImageSourceThumbnailMaxPixelSize: @((int)w)};
CGImageRef imref =
    CGImageSourceCreateThumbnailAtIndex(src, 0, (__bridge CFDictionaryRef)d);
UIImage *im =
    [UIImage imageWithCGImage:imref scale:scale
                  orientation:UIImageOrientationUp];
self.iv.image = im;
CFRelease(imref); CFRelease(src);
It is a huge waste of memory to ask a UIImageView to display an image larger than the UIImageView itself, as the bitmap for the full-size image must be maintained in memory. The Image I/O framework generates the smaller version without even ever unpacking the entire original image into memory as a bitmap.
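For comparison, here is a minimal Swift sketch of the same Image I/O downsampling technique. The resource name "photo.jpg" and the imageView parameter are hypothetical stand-ins:
import UIKit
import ImageIO

// Downsample an image with Image I/O so the full-size bitmap is never decoded into memory.
func loadThumbnail(into imageView: UIImageView) {
    guard let url = Bundle.main.url(forResource: "photo", withExtension: "jpg"),
          let src = CGImageSourceCreateWithURL(url as CFURL, nil) else { return }
    let scale = UIScreen.main.scale
    let maxPixelSize = imageView.bounds.size.width * scale
    let options: [CFString: Any] = [
        kCGImageSourceShouldAllowFloat: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ]
    guard let thumb = CGImageSourceCreateThumbnailAtIndex(src, 0, options as CFDictionary) else { return }
    // No CFRelease needed: Core Foundation objects are memory-managed in Swift.
    imageView.image = UIImage(cgImage: thumb, scale: scale, orientation: .up)
}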
I had this problem once before with images from a regular website that were way larger than the view I was using. If I remember correctly, the images are uncompressed to their full resolution and then fit into the image view, hogging up your memory. I had to scale them down to the image view size first before showing them. Add CoreGraphics.framework and use this class to make an image object to use with your image view. I found it online and tweaked it a little while looking for the same answer, but I don't remember where, so thanks to that person who posted the original, whoever they are.
ImageScale.h
#import <UIKit/UIKit.h>

@interface ImageScale : NSObject
+ (UIImage *)imageWithImage:(UIImage *)sourceImage scaledToSize:(CGSize)newSize;
@end
ImageScale.m
#import "ImageScale.h"

@implementation ImageScale

+ (UIImage *)imageWithImage:(UIImage *)sourceImage scaledToSize:(CGSize)newSize
{
    CGFloat targetWidth = newSize.width;
    CGFloat targetHeight = newSize.height;
    CGImageRef imageRef = [sourceImage CGImage];
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
    // Compare the alpha info (not the whole bitmap info) against kCGImageAlphaNone,
    // and swap in kCGImageAlphaNoneSkipLast, which bitmap contexts support.
    if (CGImageGetAlphaInfo(imageRef) == kCGImageAlphaNone) {
        bitmapInfo = (bitmapInfo & ~kCGBitmapAlphaInfoMask) | kCGImageAlphaNoneSkipLast;
    }
    CGContextRef bitmap;
    // Pass 0 for bytesPerRow so Core Graphics computes it for the new dimensions.
    if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
        bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    } else {
        bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    }
    if (sourceImage.imageOrientation == UIImageOrientationLeft) {
        CGContextRotateCTM(bitmap, M_PI_2); // + 90 degrees
        CGContextTranslateCTM(bitmap, 0, -targetHeight);
    } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
        CGContextRotateCTM(bitmap, -M_PI_2); // - 90 degrees
        CGContextTranslateCTM(bitmap, -targetWidth, 0);
    } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
        // NOTHING
    } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM(bitmap, targetWidth, targetHeight);
        CGContextRotateCTM(bitmap, -M_PI); // - 180 degrees
    }
    CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap);
    CGImageRelease(ref);
    return newImage;
}

@end

AVCaptureSession preset photo and AVCaptureVideoPreviewLayer size

I initialize an AVCaptureSession and I preset it like this :
AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
if ([newCaptureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
    newCaptureSession.sessionPreset = AVCaptureSessionPresetPhoto;
} else {
    // Error management
}
Then I set up an AVCaptureVideoPreviewLayer:
self.preview = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height/*426*/)];
CALayer *previewLayer = preview.layer;
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
captureVideoPreviewLayer.frame = previewLayer.frame;
[previewLayer addSublayer:captureVideoPreviewLayer];
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspect;
My question is: how can I get the exact CGSize needed to display the whole captureVideoPreviewLayer on screen? More precisely, I need the height, since AVLayerVideoGravityResizeAspect makes the AVCaptureVideoPreviewLayer fit the preview's size.
I'm trying to get the AVCaptureVideoPreviewLayer size that fits correctly.
Thank you very much for your help.
After some research: with AVCaptureSessionPresetPhoto, the AVCaptureVideoPreviewLayer respects the 3:4 ratio of the iPhone camera, so it's easy to get the right height with simple arithmetic.
For instance, if the width is 320, the adequate height is:
320 * 4 / 3 = 426.6
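In code, that's simply (a trivial sketch, assuming a portrait 3:4 preview):
import CoreGraphics

let previewWidth: CGFloat = 320
let previewHeight = previewWidth * 4 / 3  // 426.67 for a 320-point-wide preview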
Weischel's code didn't work for me. The idea worked, but the code didn't. Here's the code that did work:
// Get your AVCaptureSession somehow. I'm getting mine out of self.videoCamera, which is a GPUImageVideoCamera.
// Get the appropriate AVCaptureVideoDataOutput out of the capture session. I only have one session, so it's easy.
AVCaptureVideoDataOutput *output = [[[self.videoCamera captureSession] outputs] lastObject];
NSDictionary *outputSettings = [output videoSettings];
// AVVideoWidthKey and AVVideoHeightKey did not work. I had to use these literal keys.
long width = [[outputSettings objectForKey:@"Width"] longValue];
long height = [[outputSettings objectForKey:@"Height"] longValue];
// Video camera output dimensions are always for landscape mode. Transpose if your camera is in portrait mode.
if (UIInterfaceOrientationIsPortrait([self.videoCamera outputImageOrientation])) {
    long buf = width;
    width = height;
    height = buf;
}
CGSize outputSize = CGSizeMake(width, height);
If I understand you correctly, you're trying to get the width and height of the current video session.
You can obtain them from the outputSettings dictionary of your AVCaptureOutput (use AVVideoWidthKey and AVVideoHeightKey).
For example:
NSDictionary *outputSettings = [movieFileOutput outputSettingsForConnection:videoConnection];
CGSize videoSize = CGSizeMake([[outputSettings objectForKey:AVVideoWidthKey] doubleValue], [[outputSettings objectForKey:AVVideoHeightKey] doubleValue]);
Update:
Another idea would be to grab the frame size from the image buffer of the preview session.
Implement the AVCaptureVideoDataOutputSampleBufferDelegate method captureOutput:didOutputSampleBuffer:fromConnection:
(don't forget to set the delegate of your AVCaptureOutput)
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (imageBuffer != NULL)
    {
        CGSize imageSize = CVImageBufferGetDisplaySize(imageBuffer);
        NSLog(@"%@", NSStringFromCGSize(imageSize));
    }
}
Thanks to gsempe for your answer. I've been on the same problem for hours :)
I solved it with this code, which centers the layer on the screen in landscape mode:
CGRect layerRect = [[[self view] layer] bounds];
[previewLayer setBounds:CGRectMake(0, 0, 426.6, 320)];
[previewLayer setPosition:CGPointMake(CGRectGetMidY(layerRect), CGRectGetMidX(layerRect))];
Note that I had to invert the CGRectGetMidY() and CGRectGetMidX() calls to get a centered layer on my screen.
Thanks,
Julian

How to uniformly scale rich text in an NSTextView?

Context:
I have a normal Document-based Cocoa Mac OS X application which uses an NSTextView for rich text input. The user may edit the font family, point size and colors of the text in the NSTextView.
Base SDK: 10.7
Deployment Target: 10.6
Question:
I would like to implement zooming of the entire UI programmatically (including the NSTextView) while the user is editing text. Scaling the frame of the NSTextView is no problem. But I don't know how to scale the editable text inside the view, which may contain multiple different point sizes in different sub-sections of the entire run of text.
How can I apply a uniform scale factor to the rich text displayed in an NSTextView?
This should play nicely with "rich text", such that the user's font family, color and especially point size (which may be different at different points of the run of text) are preserved, but scaled uniformly/relatively.
Is this possible given my Base SDK and Deployment targets? Is it possible with a newer Base SDK or Deployment target?
If the intent is to scale the view (and not actually change the attributes in the string), I would suggest using the scaleUnitSquareToSize: method along with the ScalingScrollView (available with the TextEdit sample code) for the proper scroll bar behavior.
The core piece from the ScalingScrollView is:
- (void)setScaleFactor:(CGFloat)newScaleFactor adjustPopup:(BOOL)flag
{
    CGFloat oldScaleFactor = scaleFactor;
    if (scaleFactor != newScaleFactor)
    {
        NSSize curDocFrameSize, newDocBoundsSize;
        NSView *clipView = [[self documentView] superview];
        scaleFactor = newScaleFactor;
        // Get the frame. The frame must stay the same.
        curDocFrameSize = [clipView frame].size;
        // The new bounds will be the frame divided by the scale factor.
        newDocBoundsSize.width = curDocFrameSize.width / scaleFactor;
        newDocBoundsSize.height = curDocFrameSize.height / scaleFactor;
    }
    scaleFactor = newScaleFactor;
    [scale_delegate scaleChanged:oldScaleFactor newScale:newScaleFactor];
}
The scale_delegate is your delegate that can adjust your NSTextView object:
- (void)scaleChanged:(CGFloat)oldScale newScale:(CGFloat)newScale
{
    NSInteger percent = lroundf(newScale * 100);
    CGFloat scaler = newScale / oldScale;
    [textView scaleUnitSquareToSize:NSMakeSize(scaler, scaler)];
    NSLayoutManager *lm = [textView layoutManager];
    NSTextContainer *tc = [textView textContainer];
    [lm ensureLayoutForTextContainer:tc];
}
The scaleUnitSquareToSize: method scales relative to the view's current state, so you keep track of your own scale factor and convert an absolute scale request (say, 200%) into a relative one, as in the sketch below.
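For example, a minimal Swift sketch of that absolute-to-relative conversion (currentScale is assumed to be state you track yourself):
import AppKit

var currentScale: CGFloat = 1.0

func apply(absoluteScale: CGFloat, to textView: NSTextView) {
    // scaleUnitSquare(to:) is cumulative, so divide out the scale already applied.
    let relative = absoluteScale / currentScale
    textView.scaleUnitSquare(to: NSSize(width: relative, height: relative))
    currentScale = absoluteScale
    // Re-run layout so the scaled text wraps correctly.
    if let lm = textView.layoutManager, let tc = textView.textContainer {
        lm.ensureLayout(for: tc)
    }
}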
Works for both iOS and Mac OS
@implementation NSAttributedString (Scale)

- (NSAttributedString *)attributedStringWithScale:(double)scale
{
    if (scale == 1.0)
    {
        return self;
    }
    NSMutableAttributedString *copy = [self mutableCopy];
    [copy beginEditing];
    NSRange fullRange = NSMakeRange(0, copy.length);
    [self enumerateAttribute:NSFontAttributeName inRange:fullRange options:0 usingBlock:^(UIFont *oldFont, NSRange range, BOOL *stop) {
        double currentFontSize = oldFont.pointSize;
        double newFontSize = currentFontSize * scale;
        // don't trust -[UIFont fontWithSize:]
        UIFont *scaledFont = [UIFont fontWithName:oldFont.fontName size:newFontSize];
        [copy removeAttribute:NSFontAttributeName range:range];
        [copy addAttribute:NSFontAttributeName value:scaledFont range:range];
    }];
    [self enumerateAttribute:NSParagraphStyleAttributeName inRange:fullRange options:0 usingBlock:^(NSParagraphStyle *oldParagraphStyle, NSRange range, BOOL *stop) {
        NSMutableParagraphStyle *newParagraphStyle = [oldParagraphStyle mutableCopy];
        newParagraphStyle.lineSpacing *= scale;
        newParagraphStyle.paragraphSpacing *= scale;
        newParagraphStyle.firstLineHeadIndent *= scale;
        newParagraphStyle.headIndent *= scale;
        newParagraphStyle.tailIndent *= scale;
        newParagraphStyle.minimumLineHeight *= scale;
        newParagraphStyle.maximumLineHeight *= scale;
        newParagraphStyle.paragraphSpacingBefore *= scale;
        [copy removeAttribute:NSParagraphStyleAttributeName range:range];
        [copy addAttribute:NSParagraphStyleAttributeName value:newParagraphStyle range:range];
    }];
    [copy endEditing];
    return copy;
}

@end
OP here.
I found one solution that kinda works and is not terribly difficult to implement. I'm not sure this is the best/ideal solution however. I'm still interested in finding other solutions. But here's one way:
Manually scale the font point size and line height multiple properties of the NSAttributedString source text before display, and then un-scale the displayed text before storing as source.
The problem with this solution is that while scaled, the system Font Panel will show the actual scaled display point size of selected text (rather than the "real" source point size) while editing. That's not desirable.
Here's my implementation of that:
- (void)scaleAttributedString:(NSMutableAttributedString *)str by:(CGFloat)scale {
    if (1.0 == scale) return;
    NSRange r = NSMakeRange(0, [str length]);
    [str enumerateAttribute:NSFontAttributeName inRange:r options:0 usingBlock:^(NSFont *oldFont, NSRange range, BOOL *stop) {
        NSFont *newFont = [NSFont fontWithName:[oldFont familyName] size:[oldFont pointSize] * scale];
        NSParagraphStyle *oldParaStyle = [str attribute:NSParagraphStyleAttributeName atIndex:range.location effectiveRange:NULL];
        NSMutableParagraphStyle *newParaStyle = [[oldParaStyle mutableCopy] autorelease];
        CGFloat oldLineHeight = [oldParaStyle lineHeightMultiple];
        CGFloat newLineHeight = scale * oldLineHeight;
        [newParaStyle setLineHeightMultiple:newLineHeight];
        id newAttrs = @{
            NSParagraphStyleAttributeName: newParaStyle,
            NSFontAttributeName: newFont,
        };
        [str addAttributes:newAttrs range:range];
    }];
}
This requires scaling the source text before display:
// scale text
CGFloat scale = getCurrentScaleFactor();
[self scaleAttributedString:str by:scale];
And then reverse-scaling the displayed text before storing as source:
// un-scale text
CGFloat scale = 1.0 / getCurrentScaleFactor();
[self scaleAttributedString:str by:scale];
I want to thank Mark Munz for his answer, as it saved me from wandering in a dark forest, full of NSScrollView magnification madness and NSLayoutManagers.
For anyone still looking, this is my approach. This code lives inside an NSDocument. All text is inset into a fixed-width, centered container, and the zooming here keeps word wrapping etc. intact. It creates a nice "page view" sort of appearance without resorting to complicated layout management.
You need to have a CGFloat _documentWidth, a TEXT_INSET_TOP constant, and an NSTextView textView set in your class for this example to work.
- (void)initZoom {
    // Call this when the view has loaded and is ready.
    // I am storing a separate _scaleFactor and _magnification for my own purposes, mainly to have the initial scale be higher than 1.0.
    _scaleFactor = 1.0;
    _magnification = 1.1;
    [self setScaleFactor:_magnification adjustPopup:false];
    [self updateLayout];
    // NOTE: You might need to call updateLayout after the content is set and we know the window size etc.
}

- (void)zoom:(bool)zoomIn {
    if (!_scaleFactor) _scaleFactor = _magnification;
    // Arbitrary maximum levels of zoom
    if (zoomIn) {
        if (_magnification < 1.6) _magnification += 0.1;
    } else {
        if (_magnification > 0.8) _magnification -= 0.1;
    }
    [self setScaleFactor:_magnification adjustPopup:false];
    [self updateLayout];
}

- (void)setScaleFactor:(CGFloat)newScaleFactor adjustPopup:(BOOL)flag
{
    CGFloat oldScaleFactor = _scaleFactor;
    if (_scaleFactor != newScaleFactor)
    {
        NSSize curDocFrameSize, newDocBoundsSize;
        NSView *clipView = [[self textView] superview];
        _scaleFactor = newScaleFactor;
        // Get the frame. The frame must stay the same.
        curDocFrameSize = [clipView frame].size;
        // The new bounds will be the frame divided by the scale factor.
        //newDocBoundsSize.width = curDocFrameSize.width / _scaleFactor;
        newDocBoundsSize.width = curDocFrameSize.width;
        newDocBoundsSize.height = curDocFrameSize.height / _scaleFactor;
        NSRect newFrame = NSMakeRect(0, 0, newDocBoundsSize.width, newDocBoundsSize.height);
        clipView.frame = newFrame;
    }
    _scaleFactor = newScaleFactor;
    [self scaleChanged:oldScaleFactor newScale:newScaleFactor];
}

- (void)scaleChanged:(CGFloat)oldScale newScale:(CGFloat)newScale
{
    CGFloat scaler = newScale / oldScale;
    [self.textView scaleUnitSquareToSize:NSMakeSize(scaler, scaler)];
    NSLayoutManager *lm = [self.textView layoutManager];
    NSTextContainer *tc = [self.textView textContainer];
    [lm ensureLayoutForTextContainer:tc];
}

- (void)updateLayout {
    CGFloat width = (self.textView.frame.size.width / 2 - _documentWidth * _magnification / 2) / _magnification;
    self.textView.textContainerInset = NSMakeSize(width, TEXT_INSET_TOP);
    self.textView.textContainer.size = NSMakeSize(_documentWidth, self.textView.textContainer.size.height);
}

Get the correct image width and height of an NSImage

I use the code below to get the width and height of an NSImage:
NSImage *image = [[[NSImage alloc] initWithContentsOfFile:[NSString stringWithFormat:s]] autorelease];
imageWidth = [image size].width;
imageHeight = [image size].height;
NSLog(@"%f:%f", imageWidth, imageHeight);
But sometimes imageWidth and imageHeight do not return the correct values. For example, when I read an image whose EXIF info displays:
PixelXDimension = 2272;
PixelYDimension = 1704;
imageWidth and imageHeight output:
521:390
The dimensions of your image in pixels are stored in the NSImageRep of your image. If your file contains only one image, it will be like this:
NSImageRep *rep = [[image representations] objectAtIndex:0];
NSSize imageSize = NSMakeSize(rep.pixelsWide, rep.pixelsHigh);
where image is your NSImage and imageSize is your image size in pixels.
The NSImage size method returns size information that is screen-resolution dependent. To get the size represented in the actual image file you need to use an NSImageRep.
Refer to the nsimage-size-not-real-size-with-some-pictures question linked above for more help.
The direct API also gives the correct results:
CGImageRef cgImage = [oldImage CGImageForProposedRect:nil context:context hints:nil];
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
Apple uses a point system based on DPI to map points to physical device pixels. It doesn't matter what the EXIF says; what matters is how many logical screen points your canvas has to display the image.
iOS and OS X perform this mapping for you. The only size you should be concerned about is the size returned from UIImage.size.
You can't (read: shouldn't have to, shouldn't care) do the mapping to device pixels yourself; that's why Apple does it.
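That said, if you ever do need pixel counts on iOS, UIImage itself exposes the mapping (a minimal sketch; image is any UIImage):
let pixelWidth = image.size.width * image.scale    // points × scale = pixels
let pixelHeight = image.size.height * image.scale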
SWIFT 4
You have to make an NSBitmapImageRep representation of the NSImage to get the correct pixel height and width.
First, add this extension to get a CGImage from the NSImage:
extension NSImage {
    @objc var CGImage: CGImage? {
        get {
            guard let imageData = self.tiffRepresentation else { return nil }
            guard let sourceData = CGImageSourceCreateWithData(imageData as CFData, nil) else { return nil }
            return CGImageSourceCreateImageAtIndex(sourceData, 0, nil)
        }
    }
}
Then when you want to get the height and width:
let rep = NSBitmapImageRep(cgImage: (NSImage(named: "Your Image Name")?.CGImage)!)
let imageHeight = rep.size.height
let imageWidth = rep.size.width
I made an extension like this:
extension NSImage {
    var pixelSize: NSSize? {
        if let rep = self.representations.first {
            let size = NSSize(width: rep.pixelsWide, height: rep.pixelsHigh)
            return size
        }
        return nil
    }
}

AVFoundation crop captured still image according to the preview aspect ratio

My question is mostly similar to this one:
Cropping image captured by AVCaptureSession
I have an application which uses AVFoundation for capturing still images. My AVCaptureVideoPreviewLayer has AVLayerVideoGravityResizeAspectFill video gravity, so the preview picture shown to the user is cropped at the top and bottom.
When the user presses the "Capture" button, the captured image differs from the preview picture shown to the user. My question is: how do I crop the captured image accordingly?
Thanks in advance.
I used the UIImage+Resize category provided here, with some new methods I wrote to do the job. I reformatted some code to look better; it's not tested, but it should work. :))
- (UIImage *)cropAndResizeAspectFillWithSize:(CGSize)targetSize
                        interpolationQuality:(CGInterpolationQuality)quality {
    UIImage *outputImage = nil;
    UIImage *imageForProcessing = self;
    // crop center square (AspectFill)
    if (self.size.width != self.size.height) {
        CGFloat shorterLength = 0;
        CGPoint origin = CGPointZero;
        if (self.size.width > self.size.height) {
            origin.x = (self.size.width - self.size.height) / 2;
            shorterLength = self.size.height;
        }
        else {
            origin.y = (self.size.height - self.size.width) / 2;
            shorterLength = self.size.width;
        }
        imageForProcessing = [imageForProcessing normalizedImage];
        imageForProcessing = [imageForProcessing croppedImage:CGRectMake(origin.x, origin.y, shorterLength, shorterLength)];
    }
    outputImage = [imageForProcessing resizedImage:targetSize interpolationQuality:quality];
    return outputImage;
}
// fix image orientation, which may wrongly rotate the output.
- (UIImage *)normalizedImage {
    if (self.imageOrientation == UIImageOrientationUp) return self;
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    [self drawInRect:(CGRect){0, 0, self.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalizedImage;
}