Get the correct image width and height of an NSImage - objective-c

I use the code below to get the width and height of an NSImage:
NSImage *image = [[[NSImage alloc] initWithContentsOfFile:[NSString stringWithFormat:s]] autorelease];
imageWidth=[image size].width;
imageHeight=[image size].height;
NSLog(@"%f:%f",imageWidth,imageHeight);
But sometimes imageWidth and imageHeight do not return the correct values. For example, when I read an image, the EXIF info displays:
PixelXDimension = 2272;
PixelYDimension = 1704;
But imageWidth and imageHeight output:
521:390

The dimensions of your image in pixels are stored in the NSImageRep of your image. If your file contains only one image, it will be like this:
NSImageRep *rep = [[image representations] objectAtIndex:0];
NSSize imageSize = NSMakeSize(rep.pixelsWide, rep.pixelsHigh);
where image is your NSImage and imageSize is your image size in pixels.

NSImage size method returns size information that is screen resolution dependent. To get the size represented in the actual file image you need to use an NSImageRep.
See the question nsimage-size-not-real-size-with-some-pictures for more details.
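A minimal sketch of that approach in Swift (assuming the file path is valid and the file contains at least one representation):
import AppKit

// Sketch: read pixel dimensions from the image's first representation.
if let image = NSImage(contentsOfFile: "/path/to/image.jpg"),
   let rep = image.representations.first {
    print("pixels:", rep.pixelsWide, "x", rep.pixelsHigh)
}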

The direct API also gives the correct results (the context argument may be nil):
CGImageRef cgImage = [oldImage CGImageForProposedRect:nil context:context hints:nil];
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
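For reference, a Swift sketch of the same call (assuming oldImage is an NSImage; passing nil for the rect, context, and hints is allowed):
// Sketch: ask the NSImage for a CGImage and read its pixel dimensions.
if let cgImage = oldImage.cgImage(forProposedRect: nil, context: nil, hints: nil) {
    print("pixels:", cgImage.width, "x", cgImage.height)
}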

Apple uses a point system based on DPI to map points to physical device pixels. It doesn't matter what the EXIF says; what matters is how many logical screen points your canvas has to display the image.
iOS and OS X perform this mapping for you. The only size you should be concerned about is the size returned from UIImage.size.
You can't (read: shouldn't have to, shouldn't care) do the mapping to device pixels yourself; that's why Apple does it.
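For example, on iOS the pixel dimensions of a UIImage are just its point size multiplied by its scale; a sketch (the asset name is hypothetical):
import UIKit

// Sketch: convert a UIImage's point size to pixels using its scale factor.
let image = UIImage(named: "example")!                  // hypothetical asset
let pixelSize = CGSize(width: image.size.width * image.scale,
                       height: image.size.height * image.scale)
print("points:", image.size, "pixels:", pixelSize)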

SWIFT 4
You have to make an NSBitmapImageRep representation of the NSImage to get the correct pixel height and width.
First, add this extension to get a CGImage from the NSImage:
extension NSImage {
@objc var CGImage: CGImage? {
get {
guard let imageData = self.tiffRepresentation else { return nil }
guard let sourceData = CGImageSourceCreateWithData(imageData as CFData, nil) else { return nil }
return CGImageSourceCreateImageAtIndex(sourceData, 0, nil)
}
}
}
Then when you want to get the height and width:
let rep = NSBitmapImageRep(cgImage: (NSImage(named: "Your Image Name")?.CGImage)!)
let imageHeight = rep.size.height
let imageWidth = rep.size.width
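Note: if you want the pixel counts explicitly, NSImageRep also exposes pixelsWide and pixelsHigh, which avoids any dependence on DPI metadata (same rep as above):
let imageWidthInPixels = rep.pixelsWide
let imageHeightInPixels = rep.pixelsHigh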

I made an extension like this:
extension NSImage{
var pixelSize: NSSize?{
if let rep = self.representations.first{
let size = NSSize(width: rep.pixelsWide, height: rep.pixelsHigh)
return size
}
return nil
}
}
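Usage might look like this (assuming an image named "example" exists in your bundle):
if let size = NSImage(named: "example")?.pixelSize {
    print("pixel size:", size)
}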

Related

Translate detected rectangle from portrait CIImage to landscape CIImage

I am modifying code found here. In the code we are capturing video from the phone camera using AVCaptureSession and using CIDetector to detect a rectangle in the image feed. The feed has an image which is 640x842 (iPhone 5 in portrait). We then do an overlay on the image so the user can see the detected rectangle (actually it's a trapezoid most of the time).
When the user presses a button on the UI, we capture an image from the video and re-run the rectangle detection on this larger image (3264x2448) which as you can see is landscape. We then do a perspective transform on the detected rectangle and crop on the image.
This is working pretty well but the issue I have is say 1 out of 5 captures the detected rectangle on the larger image is different to the one detected (and presented to the user) from the smaller image. Even though I only capture when I detect the phone is (relatively) still, the final image then does not represent the rectangle the user expected.
To resolve this my idea is to use the coordinates of the originally captured rectangle and translate them to a rectangle on the captured still image. This is where I'm struggling.
I tried this with the detected rectangle:
CGFloat radians = -90 * (M_PI/180);
CGAffineTransform rotation = CGAffineTransformMakeRotation(radians);
CGRect rect = CGRectMake(detectedRect.bounds.origin.x, detectedRect.bounds.origin.y, detectedRect.bounds.size.width, detectedRect.bounds.size.height);
CGRect rotatedRect = CGRectApplyAffineTransform(rect, rotation);
So given a detected rect:
TopLeft: 88.213425, 632.31329
TopRight: 545.59302, 632.15546
BottomRight: 575.57819, 369.22321
BottomLeft: 49.973862, 369.40466
I now have this rotated rect:
origin = (x = 369.223206, y = -575.578186)
size = (width = 263.090088, height = 525.604309)
How do I translate the rotated rectangle coordinates in the smaller portrait image to coordinates to the 3264x2448 image?
Edit
Duh.. reading my own approach realised that creating a rectangle out of a trapezoid will not solve my problem!
Supporting code to detect the rectangle etc...
// In this method we detect a rect from the video feed and overlay
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
// image is 640x852 on iphone5
NSArray *rects = [[CIDetector detectorOfType:CIDetectorTypeRectangle context:nil options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}] featuresInImage:image];
CIRectangleFeature *detectedRect = rects[0];
// draw overlay on image code....
}
This is a summarized version of how the still image is obtained:
// code block to handle output from AVCaptureStillImageOutput
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
CIImage *enhancedImage = [[CIImage alloc] initWithData:imageData options:@{kCIImageColorSpace:[NSNull null]}];
imageData = nil;
CIRectangleFeature *rectangleFeature = [self getDetectedRect:[[self highAccuracyRectangleDetector] featuresInImage:enhancedImage]];
if (rectangleFeature) {
enhancedImage = [self correctPerspectiveForImage:enhancedImage withTopLeft:rectangleFeature.topLeft andTopRight:rectangleFeature.topRight andBottomRight:rectangleFeature.bottomRight andBottomLeft:rectangleFeature.bottomLeft];
}
}
Thank you.
I had the same issue while doing this kind of thing. I resolved it with the Swift code below; take a look and see if it helps you.
if let videoConnection = stillImageOutput.connection(withMediaType: AVMediaTypeVideo) {
stillImageOutput.captureStillImageAsynchronously(from: videoConnection) {
(imageDataSampleBuffer, error) -> Void in
let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
var img = UIImage(data: imageData!)!
let outputRect = self.previewLayer?.metadataOutputRectOfInterest(for: (self.previewLayer?.bounds)!)
let takenCGImage = img.cgImage
let width = (takenCGImage?.width)!
let height = (takenCGImage?.height)!
let cropRect = CGRect(x: (outputRect?.origin.x)! * CGFloat(width), y: (outputRect?.origin.y)! * CGFloat(height), width: (outputRect?.size.width)! * CGFloat(width), height: (outputRect?.size.height)! * CGFloat(height))
let cropCGImage = takenCGImage!.cropping(to: cropRect)
img = UIImage(cgImage: cropCGImage!, scale: 1, orientation: img.imageOrientation)
let cropViewController = TOCropViewController(image: self.cropToBounds(image: img))
cropViewController.delegate = self
self.navigationController?.pushViewController(cropViewController, animated: true)
}
}

Image resized when using NSUrl [duplicate]

I see that sometimes the NSImage size is not the real size (with some pictures), while the CIImage size is always real. I was testing with this image.
This is source code which I wrote for testing:
NSImage *_imageNSImage = [[NSImage alloc]initWithContentsOfFile:@"<path to image>"];
NSSize _dimensions = [_imageNSImage size];
[_imageNSImage release];
NSLog(#"Width from CIImage: %f",_dimensions.width);
NSLog(#"Height from CIImage: %f",_dimensions.height);
NSURL *_myURL = [NSURL fileURLWithPath:#"<path to image>"];
CIImage *_imageCIImage = [CIImage imageWithContentsOfURL:_myURL];
NSRect _rectFromCIImage = [_imageCIImage extent];
NSLog(#"Width from CIImage: %f",_rectFromCIImage.size.width);
NSLog(#"Height from CIImage: %f",_rectFromCIImage.size.height);
And output is:
So how can that be? Maybe I'm doing something wrong?
NSImage's size method returns size information that is screen-resolution dependent. To get the size represented in the actual file image you need to use an NSImageRep. You can get an NSImageRep from an NSImage using the representations method. Alternatively, you can create NSBitmapImageRep instances directly like this:
NSArray * imageReps = [NSBitmapImageRep imageRepsWithContentsOfFile:@"<path to image>"];
NSInteger width = 0;
NSInteger height = 0;
for (NSImageRep * imageRep in imageReps) {
if ([imageRep pixelsWide] > width) width = [imageRep pixelsWide];
if ([imageRep pixelsHigh] > height) height = [imageRep pixelsHigh];
}
NSLog(#"Width from NSBitmapImageRep: %f",(CGFloat)width);
NSLog(#"Height from NSBitmapImageRep: %f",(CGFloat)height);
The loop takes into account that some image formats may contain more than a single image (such as TIFFs for example).
You can create an NSImage at this size by using the following:
NSImage * imageNSImage = [[NSImage alloc] initWithSize:NSMakeSize((CGFloat)width, (CGFloat)height)];
[imageNSImage addRepresentations:imageReps];
NSImage's size method returns the size in points. To get the size represented in pixels you need to inspect the NSImage.representations property, which contains an array of NSImageRep objects with pixelsWide/pixelsHigh properties, and then simply change the size of the NSImage object:
@implementation ViewController {
__weak IBOutlet NSImageView *imageView;
}
- (void)viewDidLoad {
[super viewDidLoad];
// Do view setup here.
NSImage *image = [[NSImage alloc] initWithContentsOfFile:@"/Users/username/test.jpg"];
if (image.representations && image.representations.count > 0) {
long lastSquare = 0, curSquare;
NSImageRep *imageRep;
for (imageRep in image.representations) {
curSquare = imageRep.pixelsWide * imageRep.pixelsHigh;
if (curSquare > lastSquare) {
image.size = NSMakeSize(imageRep.pixelsWide, imageRep.pixelsHigh);
lastSquare = curSquare;
}
}
imageView.image = image;
NSLog(#"%.0fx%.0f", image.size.width, image.size.height);
}
}
@end
Thanks to Zenopolis for the original ObjC code, here's a nice concise Swift version:
func sizeForImageAtURL(url: NSURL) -> CGSize? {
guard let imageReps = NSBitmapImageRep.imageRepsWithContentsOfURL(url) else { return nil }
return imageReps.reduce(CGSize.zero, combine: { (size: CGSize, rep: NSImageRep) -> CGSize in
return CGSize(width: max(size.width, CGFloat(rep.pixelsWide)), height: max(size.height, CGFloat(rep.pixelsHigh)))
})
}
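For newer Swift versions, the same reduce might be written like this (a sketch using the current NSImageRep API names):
import AppKit

// Sketch: largest pixel dimensions across all representations in the file.
func sizeForImage(at url: URL) -> CGSize? {
    guard let imageReps = NSImageRep.imageReps(withContentsOf: url) else { return nil }
    return imageReps.reduce(CGSize.zero) { size, rep in
        CGSize(width: max(size.width, CGFloat(rep.pixelsWide)),
               height: max(size.height, CGFloat(rep.pixelsHigh)))
    }
}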
If your file contains only one image, you can just use this:
let rep = image.representations[0]
let imageSize = NSSize(width: rep.pixelsWide, height: rep.pixelsHigh)
image is your NSImage, imageSize is the image size in pixels.
Copied and updated here: https://stackoverflow.com/a/13228091/3608824
NSImage's size property returns size information dependent on the screen resolution and scaling configuration.
You can get the real size of the image with the following extension:
extension NSImage {
var sizeReal: NSSize {
guard representations.count > 0 else { return NSSize(width: 0, height: 0) }
let rep = self.representations[0]
return NSSize(width: rep.pixelsWide, height: rep.pixelsHigh)
}
}

How do I resize a UIImage without smoothing? [duplicate]

I want to scale up a UIImage in such a way that the user can see the pixels in the UIImage very sharply. When I put it into a UIImageView and scale up the transform matrix, the UIImage appears antialiased and smoothed.
Is there a way to render in a bigger bitmap context by simply repeating every row and every column to get bigger pixels? How could I do that?
#import <QuartzCore/CALayer.h>
view.layer.magnificationFilter = kCAFilterNearest;
When drawing directly into bitmap context, we can use:
CGContextSetInterpolationQuality(myBitmapContext, kCGInterpolationNone);
I found this on CGContextDrawImage very slow on iPhone 4
Swift 5
let image = UIImage(named: "Foo")!
let scaledImageSize = image.size.applying(CGAffineTransform(scaleX: 2, y: 2))
UIGraphicsBeginImageContext(scaledImageSize)
let scaledContext = UIGraphicsGetCurrentContext()!
scaledContext.interpolationQuality = .none
image.draw(in: CGRect(origin: .zero, size: scaledImageSize))
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
I was also trying this (on a sublayer) and I couldn't get it working; it was still blurry. This is what I had to do:
const CGFloat PIXEL_SCALE = 2;
layer.magnificationFilter = kCAFilterNearest; //Nearest neighbor texture filtering
layer.transform = CATransform3DMakeScale(PIXEL_SCALE, PIXEL_SCALE, 1); //Scale layer up
//Rasterize w/ sufficient resolution to show sharp pixels
layer.shouldRasterize = YES;
layer.rasterizationScale = PIXEL_SCALE;
For UIImage created from CIImage you may use:
imageView.image = UIImage(CIImage: ciImage.imageByApplyingTransform(CGAffineTransformMakeScale(kScale, kScale)))

Subtract one image from another iOS

Does anybody know how to subtract one UIImage from another UIImage,
for example as in this screen:
Thanks for the response!
I believe you can accomplish this by using the kCGBlendModeDestinationOut blend mode. Create a new context, draw your background image, then draw the foreground image with this blend mode.
UIGraphicsBeginImageContextWithOptions(sourceImage.size, NO, sourceImage.scale);
[sourceImage drawAtPoint:CGPointZero];
[maskImage drawAtPoint:CGPointZero blendMode:kCGBlendModeDestinationOut alpha:1.0f];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
What does it mean to subtract an image? The sample image given shows more of a !red operation. Let us say that subtracting image a from image b means setting every pixel in b that intersects a pixel in a to transparent. To perform the subtraction, what we are actually doing is masking image b to the inverse of image a. So, a good approach would be to create an image mask from the alpha channel of image a, then apply it to b. To create the mask you would do something like this:
// get access to the image bytes
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
// create a buffer to hold the mask values
size_t width = CGImageGetWidth(image.CGImage);
size_t height = CGImageGetHeight(image.CGImage);
uint8_t *maskData = malloc(width * height);
// iterate over the pixel data, reading the alpha value
uint8_t *alpha = (uint8_t *)CFDataGetBytePtr(pixelData) + 3;
uint8_t *mask = maskData;
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
*mask = *alpha;
mask++;
alpha += 4; // skip to the next pixel
}
}
// create the mask image from the buffer
CGDataProviderRef maskProvider = CGDataProviderCreateWithData(NULL, maskData, width * height, NULL);
CGImageRef maskImage = CGImageMaskCreate(width, height, 8, 8, width, maskProvider, NULL, false);
// cleanup
CFRelease(pixelData);
CFRelease(maskProvider);
free(maskData);
Whew. Then, to mask image b, all you have to do is:
CGImageRef subtractedImage = CGImageCreateWithMask(b.CGImage, maskImage);
Hey presto.
To get those results, use the second image as a mask when you draw the first image. For this kind of drawing, you'll need to use Core Graphics, a.k.a. Quartz 2D. The Quartz 2D Programming Guide has a section called Bitmap Images and Image Masks that should tell you everything you need to know.
You're asking about UIImage objects, but to use Core Graphics you'll need CGImages instead. That's no problem -- UIImage provides a CGImage property that lets you get the data you need easily.
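A minimal Swift sketch of that idea (assuming background and mask are UIImages, and that the mask's CGImage is a grayscale image or image mask, as CGImage.masking(_:) requires):
import UIKit

// Sketch: mask one image with another using Core Graphics.
func masked(background: UIImage, with maskImage: UIImage) -> UIImage? {
    guard let backgroundCG = background.cgImage,
          let maskCG = maskImage.cgImage,
          let maskedCG = backgroundCG.masking(maskCG) else { return nil }
    return UIImage(cgImage: maskedCG)
}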
An updated answer for iOS 10+ and Swift 4+:
func subtract(source: UIImage, mask: UIImage) -> UIImage? {
return UIGraphicsImageRenderer(size: source.size).image { _ in
source.draw(at: .zero)
mask.draw(at: .zero, blendMode: .destinationOut, alpha: 1)
}
}

Cocoa Touch - Adding texture with overlay view

I have a set of tiles as UIViews that have a programmable background color, and each one can be a different color. I want to add texture, like a side-lit bevel, to each one. Can this be done with an overlay view or by some other method?
I'm looking for suggestions that don't require a custom image file for each case.
This may help someone, although this was pieced together from other topics on SO.
To create a beveled tile image with an arbitrary color for normal and for retina display, I made a beveled image in Photoshop and set the saturation to zero, making a grayscale image called tileBevel.png.
I also created one for the retina display (tileBevel@2x.png).
Here is the code:
+ (UIImage*) createTileWithColor:(UIColor*)tileColor {
int pixelsHigh = 44;
int pixelsWide = 46;
UIImage *bottomImage;
if([UIScreen respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2.0) {
pixelsHigh *= 2;
pixelsWide *= 2;
bottomImage = [UIImage imageNamed:#"tileBevel#2x.png"];
}
else {
bottomImage = [UIImage imageNamed:#"tileBevel.png"];
}
CGImageRef theCGImage = NULL;
CGContextRef tileBitmapContext = NULL;
CGRect rectangle = CGRectMake(0,0,pixelsWide,pixelsHigh);
UIGraphicsBeginImageContext(rectangle.size);
[bottomImage drawInRect:rectangle];
tileBitmapContext = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(tileBitmapContext, kCGBlendModeOverlay);
CGContextSetFillColorWithColor(tileBitmapContext, tileColor.CGColor);
CGContextFillRect(tileBitmapContext, rectangle);
theCGImage=CGBitmapContextCreateImage(tileBitmapContext);
UIGraphicsEndImageContext();
return [UIImage imageWithCGImage:theCGImage];
}
This checks to see if the retina display is used, sizes the rectangle to draw in, picks the appropriate grayscale base image, sets the blending mode to overlay, then draws a rectangle on top of the bottom image. All of this is done inside a graphics context bracketed by the BeginImageContext and EndImageContext calls. These set the current context needed by the UIImage drawInRect: method. The Core Graphics functions need the context as a parameter, which is obtained by a call to get the current context.
And the result looks like this:
If you want to preserve the alpha channel of the source image, just add this to Jim's code before the fill rect:
// Apply mask
CGContextTranslateCTM(tileBitmapContext, 0, rectangle.size.height);
CGContextScaleCTM(tileBitmapContext, 1.0f, -1.0f);
CGContextClipToMask(tileBitmapContext, rectangle, bottomImage.CGImage);
Swift 3 solution, essentially based on Jim's answer with Scriptease's addition, and some minor changes:
class func image(bottomImage: UIImage, topImage: UIImage, tileColor: UIColor) -> UIImage? {
let pixelsHigh: CGFloat = bottomImage.size.height
let pixelsWide: CGFloat = bottomImage.size.width
let rectangle = CGRect.init(x: 0, y: 0, width: pixelsWide, height: pixelsHigh)
UIGraphicsBeginImageContext(rectangle.size);
bottomImage.draw(in: rectangle)
if let tileBitmapContext = UIGraphicsGetCurrentContext() {
tileBitmapContext.setBlendMode(.overlay)
tileBitmapContext.setFillColor(tileColor.cgColor)
tileBitmapContext.scaleBy(x: 1.0, y: -1.0)
tileBitmapContext.clip(to: rectangle, mask: bottomImage.cgImage!)
tileBitmapContext.fill(rectangle)
let theCGImage = tileBitmapContext.makeImage()
UIGraphicsEndImageContext();
if let theImage = theCGImage {
return UIImage.init(cgImage: theImage)
}
}
return nil
}
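Usage might look like this (hypothetical names; note that, as written, the function blends tileColor over bottomImage and does not use its topImage parameter):
let bevel = UIImage(named: "tileBevel")!            // hypothetical asset name
let tile = TileFactory.image(bottomImage: bevel, topImage: bevel, tileColor: .red)   // TileFactory is the hypothetical class holding this method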