Is there an easy way to do this that works in 10.5?
In 10.6 I can use [nsImage CGImageForProposedRect:NULL context:NULL hints:NULL].
If I'm not using 1-bit black-and-white images (like Group 4 TIFF), I can use bitmaps, but CGBitmapContexts don't seem to like that setup... Is there a general way of doing this?
I need to do this because I have an IKImageView that seems to only want to accept CGImages, but all I've got are NSImages. Currently, I'm using a private setImage:(NSImage*) method that I'd REALLY REALLY rather not be using...
Found the following solution on this page:
NSImage *someImage;
// create the image somehow: load from file, draw into it...
CGImageSourceRef source;
source = CGImageSourceCreateWithData((CFDataRef)[someImage TIFFRepresentation], NULL);
CGImageRef maskRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CFRelease(source); // the image source is a Core Foundation object, so release it when done
All the methods seem to be 10.4+ so you should be fine.
[This is the long way around. You should only do this if you're still supporting an old version of OS X. If you can require 10.6 or later, just use the CGImage method instead. A code sketch of these steps appears after the list.]
Create a CGBitmapContext.
Create an NSGraphicsContext for the bitmap context.
Set the graphics context as the current context.
Draw the image.
Create the CGImage from the contents of the bitmap context.
Release the bitmap context.
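In code, the steps above look roughly like this (a minimal sketch; the RGBA bitmap parameters are one reasonable choice, and error handling is omitted):
NSImage *image; // your image
NSSize size = [image size];
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef bitmapContext = CGBitmapContextCreate(NULL, size.width, size.height, 8, 0,
                                                   colorSpace,
                                                   kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:bitmapContext flipped:NO]];
[image drawInRect:NSMakeRect(0, 0, size.width, size.height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
CGContextRelease(bitmapContext);
// use cgImage, then CGImageRelease(cgImage) when done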
As of Snow Leopard, you can just ask the image to create a CGImage for you, by sending the NSImage a CGImageForProposedRect:context:hints: message.
Here's a more detailed answer for using -CGImageForProposedRect:context:hints:
NSImage *image = [NSImage imageNamed:@"image.jpg"];
NSRect imageRect = NSMakeRect(0, 0, image.size.width, image.size.height);
CGImageRef cgImage = [image CGImageForProposedRect:&imageRect context:NULL hints:nil];
Just did this using CGImageForProposedRect:context:hints:...
NSImage *image; // an image
NSGraphicsContext *context = [NSGraphicsContext currentContext];
CGRect imageCGRect = CGRectMake(0, 0, image.size.width, image.size.height);
NSRect imageRect = NSRectFromCGRect(imageCGRect);
CGImageRef imageRef = [image CGImageForProposedRect:&imageRect context:context hints:nil];
As of 2021, a CALayer's contents can be fed an NSImage directly, but you may still get an "[Utility] unsupported surface format: LA08" warning.
After a couple of days of research and testing, I found out that this warning is triggered if you create a layer-backed NSView (i.e. one with a backing CALayer) and just use an NSImage to feed its layer's contents. This alone is not much of a problem. But if you try to render it via [self.layer renderInContext:ctx], each render will trigger the warning again.
To make usage simple, I created an NSImage category to fix this:
@interface NSImage (LA08FIXExtension)
- (id)CGImageRefID;
@end

@implementation NSImage (LA08FIXExtension)
- (id)CGImageRefID {
    NSSize size = [self size];
    NSRect rect = NSMakeRect(0, 0, size.width, size.height);
    CGImageRef ref = [self CGImageForProposedRect:&rect context:[NSGraphicsContext currentContext] hints:NULL];
    return (__bridge id)ref;
}
@end
Note how it does not return a CGImageRef directly but a bridged cast to id! That does the trick.
Now you can use it like this:
CALayer *someLayer = [CALayer layer];
someLayer.frame = ...
someLayer.contents = [NSImage imageNamed:@"SomeImage"].CGImageRefID;
PS: I posted this here because if you have this problem too, Google will lead you to this thread of answers anyway, and all the answers above inspired this fix.
Related
I am currently having problems with CGImageRef.
Whenever I create a CGImageRef and look at it in the debugger view in Xcode, it is nil.
Here's the code:
- (void)mouseMoved:(NSEvent *)theEvent {
    if (self.shoulddrag) {
        NSPoint event_location = [theEvent locationInWindow]; // direct from the docs
        NSPoint local_point = [self convertPoint:event_location fromView:nil]; // direct from the docs
        CGImageRef theImage = (__bridge CGImageRef)(self.image);
        CGImageRef theClippedImage = CGImageCreateWithImageInRect(theImage, CGRectMake(local_point.x, local_point.y, 1, 1));
        NSImage *image = [[NSImage alloc] initWithCGImage:theClippedImage size:NSZeroSize];
        self.pixelView.image = image;
        CGImageRelease(theClippedImage);
    }
}
Everything else seems to be working, though. I can't understand it. Any help would be appreciated.
Note: self.pixelView is an NSImageView instance that has not been overridden in any way.
Very likely local_point is not inside the image. You've converted the point from window coordinates to view coordinates, but those may not be equivalent to image coordinates. Test whether the lower-left corner of your image yields a local_point of (0,0).
It's not clear how your view is laid out, but I suspect that what you want to do is subtract the origin of whatever region (possibly a subview) the user is interacting with relative to self.
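As a quick sanity check (a sketch; the image-frame assumption here is hypothetical and depends on your layout):
NSPoint local_point = [self convertPoint:event_location fromView:nil];
// assumes the image is drawn at the view's origin at 1:1 scale
NSRect imageFrame = NSMakeRect(0, 0, self.image.size.width, self.image.size.height);
if (!NSPointInRect(local_point, imageFrame)) {
    return; // point is outside the image; CGImageCreateWithImageInRect would yield NULL
}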
Alright, I figured it out.
What I was using to create the CGImageRef was:
CGImageRef theImage = (__bridge CGImageRef)(self.image);
Apparently what I should have used is:
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[self.image TIFFRepresentation], NULL);
CGImageRef theImage = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CFRelease(source);
(Note that CGImageSourceCreateWithData gives you a CGImageSourceRef, not a CGImageRef; you still have to pull an image out of the source with CGImageSourceCreateImageAtIndex.)
I guess my problem was that for some reason I thought NSImage and CGImageRef were toll-free bridged.
Apparently, I was wrong.
I have a CGImageRef (let's call it the original image) and a transparent PNG (the watermark). I'm trying to write a method to place the watermark on top of the original and return a CGImageRef.
On iOS I would have used UIKit to draw them both onto a context, but that doesn't seem possible on OS X (it doesn't support UIKit).
What's the simplest way to stack two images? Thanks.
For a quick 'n dirty solution you can use the NSImage drawing APIs:
NSImage *background = [NSImage imageNamed:@"background"];
NSImage *overlay = [NSImage imageNamed:@"overlay"];
NSImage *newImage = [[NSImage alloc] initWithSize:[background size]];
[newImage lockFocus];
CGRect newImageRect = CGRectZero;
newImageRect.size = [newImage size];
[background drawInRect:newImageRect];
[overlay drawInRect:newImageRect];
[newImage unlockFocus];
CGImageRef newImageRef = [newImage CGImageForProposedRect:NULL context:nil hints:nil];
If you don't like that, most of the CGContext APIs you'd expect are available cross-platform for drawing with a little more control. Similarly, you could look into NSGraphicsContext.
This is pretty easy when you render to a CGContext.
If you want an image as a result, you can create and render to a CGBitmapContext, then request the image after render.
General flow, with contextual details omitted (the bitmap parameters below are one reasonable choice for an 8-bit RGBA context; adjust them for your pixel format):
CGImageRef CreateCompositeOfImages(CGImageRef pBackground,
                                   const CGRect pBackgroundRect,
                                   CGImageRef pForeground,
                                   const CGRect pForegroundRect)
{
    // create and configure the bitmap context
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef gtx = CGBitmapContextCreate(NULL,
                                             (size_t)pBackgroundRect.size.width,
                                             (size_t)pBackgroundRect.size.height,
                                             8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    // configure the context to render the background image (blend mode, etc.), then draw it
    CGContextDrawImage(gtx, pBackgroundRect, pBackground);
    // configure the context to render the foreground image, then draw it on top
    CGContextDrawImage(gtx, pForegroundRect, pForeground);
    // create result
    CGImageRef result = CGBitmapContextCreateImage(gtx);
    // cleanup
    CGContextRelease(gtx);
    return result;
}
You would need to create a CGImage from your PNG.
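For example, with ImageIO (a minimal sketch; the file URL is a placeholder for wherever your watermark PNG lives):
#import <ImageIO/ImageIO.h>

NSURL *watermarkURL = [NSURL fileURLWithPath:@"/path/to/watermark.png"]; // hypothetical path
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)watermarkURL, NULL);
CGImageRef watermark = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CFRelease(source);
// composite, then CGImageRelease(watermark) when you're done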
Additional APIs you may be interested in using:
CGContextSetBlendMode
CGContextSetAllowsAntialiasing
CGContextSetInterpolationQuality
I know a lot of people will generally advise you to use higher level abstractions (i.e. AppKit and UIKit), but CoreGraphics is a great library for rendering in both of those contexts. If you are interested in graphics implementations which are easy to use in both OS X and iOS, CoreGraphics is a good choice to base your work upon if you are comfortable working with those abstractions.
In case anyone, like me, needs a Swift version, here is a functional Swift 5 one:
let background = NSImage(named: "background")! // force-unwrapped for brevity; handle missing resources properly in real code
let overlay = NSImage(named: "overlay")!
let newImage = NSImage(size: background.size)
newImage.lockFocus()
var newImageRect: CGRect = .zero
newImageRect.size = newImage.size
background.draw(in: newImageRect)
overlay.draw(in: newImageRect)
newImage.unlockFocus()
I wish I had the time to do the same with the CGContext example.
How do I take a region screenshot in Mac OS X using Cocoa and CGDisplayCreateImageForRect? I found out how to take a full-size screenshot (How to take screenshot in Mac OS X using Cocoa or C++), but how do I take a region screenshot?
Sorry for ugly formatting. The second function does exactly what you want.
// this is a cute function for creating CGImageRef from NSImage.
// I found it somewhere on SO but I do not remember the link, I am sorry..
CGImageRef CGImageCreateWithNSImage(NSImage *image) {
    NSSize imageSize = [image size];
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, imageSize.width, imageSize.height, 8, 0,
                                                       [[NSColorSpace genericRGBColorSpace] CGColorSpace],
                                                       kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst);
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:bitmapContext flipped:NO]];
    [image drawInRect:NSMakeRect(0, 0, imageSize.width, imageSize.height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];
    CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    return cgImage;
}
NSImage *NSImageFromScreenWithRect(CGRect rect) {
    // copy a full-screen screenshot to the clipboard; works on OS X only..
    system("screencapture -c -x");
    // get NSImage from clipboard..
    NSImage *imageFromClipboard = [[NSImage alloc] initWithPasteboard:[NSPasteboard generalPasteboard]];
    // get CGImageRef from NSImage for further cutting..
    CGImageRef screenShotImage = CGImageCreateWithNSImage(imageFromClipboard);
    // cut the desired subimage out of the full-screen screenshot..
    CGImageRef screenShotCenter = CGImageCreateWithImageInRect(screenShotImage, rect);
    // create NSImage from CGImageRef..
    NSImage *resultImage = [[NSImage alloc] initWithCGImage:screenShotCenter size:rect.size];
    // release CGImageRefs because ARC has no effect on them..
    CGImageRelease(screenShotCenter);
    CGImageRelease(screenShotImage);
    return resultImage;
}
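For what it's worth, CGDisplayCreateImageForRect from the question title does this directly, without shelling out to screencapture (a sketch, assuming the main display and a rect in that display's coordinate space; the function name is mine):
NSImage *NSImageFromScreenRectDirect(CGRect rect) {
    CGImageRef regionImage = CGDisplayCreateImageForRect(CGMainDisplayID(), rect);
    NSImage *resultImage = [[NSImage alloc] initWithCGImage:regionImage size:rect.size];
    CGImageRelease(regionImage);
    return resultImage;
}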
Is there any image-cropping API for Objective-C that crops images dynamically in an Xcode project? Please provide some tricks or techniques for cropping camera images on iPhone.
You can use the simple code below to crop an image. You have to pass in the image and a CGRect for the cropping area. Here, I crop the image so that I get the center part of the original, and the returned image is square.
// Returns largest possible centered cropped image.
- (UIImage *)centerCropImage:(UIImage *)image
{
    // Use the smaller side length as the crop square length
    CGFloat squareLength = MIN(image.size.width, image.size.height);
    // Center the crop area
    CGRect clippedRect = CGRectMake((image.size.width - squareLength) / 2,
                                    (image.size.height - squareLength) / 2,
                                    squareLength, squareLength);
    // Crop logic
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], clippedRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
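Usage is then a one-liner (cameraImage here is a stand-in for your source image):
UIImage *squareImage = [self centerCropImage:cameraImage];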
EDIT - Swift Version
let imageView = UIImageView(image: image)
imageView.contentMode = .scaleAspectFill
imageView.clipsToBounds = true
All these solutions seem quite complicated, and many of them actually degrade the quality of the image.
You can do it much more simply using UIImageView's out-of-the-box properties.
Objective-C
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
[self.imageView setClipsToBounds:YES];
[self.imageView setImage:img];
This will crop your image based on the dimensions you've set for your UIImageView (I've called mine imageView here).
It's that simple and works much better than the other solutions.
You can use the Core Graphics framework to crop images dynamically.
Here is an example code snippet for dynamic image cropping. I hope this will be helpful for you.
// NXMaskDrawContext is the poster's own type; from its usage here it is
// evidently a mutable collection (e.g. an NSMutableArray) of NSValue-wrapped CGPoints.
- (void)drawMaskLineSegmentTo:(CGPoint)ptTo withMaskWidth:(CGFloat)maskWidth inContext:(NXMaskDrawContext)context {
    if (context == nil)
        return;
    if (context.count <= 0) {
        [context addObject:[NSValue valueWithCGPoint:ptTo]];
        return;
    }
    CGPoint ptFrom = [context.lastObject CGPointValue];

    // stroke the new segment into the working mask image
    UIGraphicsBeginImageContext(self.maskImage.size);
    [self.maskImage drawInRect:CGRectMake(0, 0, self.maskImage.size.width, self.maskImage.size.height)];
    CGContextRef graphicsContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(graphicsContext, kCGBlendModeCopy);
    CGContextSetRGBStrokeColor(graphicsContext, 1, 1, 1, 1);
    CGContextSetLineWidth(graphicsContext, maskWidth);
    CGContextSetLineCap(graphicsContext, kCGLineCapRound);
    CGContextMoveToPoint(graphicsContext, ptFrom.x, ptFrom.y);
    CGContextAddLineToPoint(graphicsContext, ptTo.x, ptTo.y);
    CGContextStrokePath(graphicsContext);
    self.maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // stroke the same segment into the displayable (colored) mask image
    UIGraphicsBeginImageContext(self.displayableMaskImage.size);
    [self.displayableMaskImage drawInRect:CGRectMake(0, 0, self.displayableMaskImage.size.width, self.displayableMaskImage.size.height)];
    graphicsContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(graphicsContext, kCGBlendModeCopy);
    CGContextSetStrokeColorWithColor(graphicsContext, self.displayableMaskColor.CGColor);
    CGContextSetLineWidth(graphicsContext, maskWidth);
    CGContextSetLineCap(graphicsContext, kCGLineCapRound);
    CGContextMoveToPoint(graphicsContext, ptFrom.x, ptFrom.y);
    CGContextAddLineToPoint(graphicsContext, ptTo.x, ptTo.y);
    CGContextStrokePath(graphicsContext);
    self.displayableMaskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [context addObject:[NSValue valueWithCGPoint:ptTo]];
}
Xcode 5, iOS 7, 4-inch screen example: here is an open-source example, SimpleImageCropEditor (project zip and source code example). You can load the image crop editor as a modal view controller and reuse it. Look at the code, and please leave constructive comments on whether this example code answers the question "Image Cropping API for iOS".
The example source (Objective-C) demonstrates use of UIImagePickerController, @protocol, UIActionSheet, UIScrollView, UINavigationController, MFMailComposeViewController, and UIGestureRecognizer.
I use the same big images in a tableView and a detailView.
I need the imageView to fill 40x40 when an image is shown in the tableView, but to stretch across half the screen in the detailView. I played with several properties but got no positive result:
[cell.imageView setBounds:CGRectMake(0, 0, 50, 50)];
[cell.imageView setClipsToBounds:NO];
[cell.imageView setFrame:CGRectMake(0, 0, 50, 50)];
[cell.imageView setContentMode:UIViewContentModeScaleAspectFill];
I am using SDK 3.0 with the built-in "Cell Objects in Predefined Styles".
I put Ben's code as a category in my NS-Extensions file so that I can tell any image to make a thumbnail of itself, as in:
UIImage *bigImage = [UIImage imageNamed:@"yourImage.png"];
UIImage *thumb = [bigImage makeThumbnailOfSize:CGSizeMake(50, 50)];
Here is the .h file:
@interface UIImage (PhoenixMaster)
- (UIImage *)makeThumbnailOfSize:(CGSize)size;
@end

and then in the NS-Extensions.m file:
@implementation UIImage (PhoenixMaster)
- (UIImage *)makeThumbnailOfSize:(CGSize)size
{
    UIGraphicsBeginImageContextWithOptions(size, NO, UIScreen.mainScreen.scale);
    // draw scaled image into thumbnail context
    [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *newThumbnail = UIGraphicsGetImageFromCurrentImageContext();
    // pop the context
    UIGraphicsEndImageContext();
    if (newThumbnail == nil)
        NSLog(@"could not scale image");
    return newThumbnail;
}
@end
I cache a thumbnail version, since scaling large images down on the fly uses too much memory.
Here's my thumbnail code:
- (UIImage *)thumbnailOfSize:(CGSize)size {
    if (self.previewThumbnail)
        return self.previewThumbnail; // return cached thumbnail
    UIGraphicsBeginImageContext(size);
    // draw scaled image into thumbnail context
    [self.preview drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *newThumbnail = UIGraphicsGetImageFromCurrentImageContext();
    // pop the context
    UIGraphicsEndImageContext();
    if (newThumbnail == nil)
        NSLog(@"could not scale image");
    self.previewThumbnail = newThumbnail;
    return self.previewThumbnail;
}
Just make sure you properly clear the cached thumbnail if you change your original image (self.preview in my case).
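One way to do that is to invalidate the cache in the setter (a sketch, assuming ARC and a synthesized _preview ivar):
- (void)setPreview:(UIImage *)preview {
    _preview = preview;
    self.previewThumbnail = nil; // next thumbnailOfSize: call regenerates the thumbnail
}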
I have mine wrapped in a UIView and use this code:
imageView.contentMode = UIViewContentModeScaleAspectFit;
imageView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
[self addSubview:imageView];
imageView.frame = self.bounds;
(self is the wrapper UIView, with the dimensions I want - I use AsyncImageView).
I thought Ben Lachman's suggestion of generating thumbnails in advance rather than on the fly was smart, so I adapted his code so that it could handle a whole array and be more portable (no hard-coded property names).
- (NSArray *)arrayOfThumbnailsOfSize:(CGSize)size fromArray:(NSArray *)original {
    NSMutableArray *temp = [NSMutableArray arrayWithCapacity:[original count]];
    for (UIImage *image in original) {
        UIGraphicsBeginImageContext(size);
        [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
        UIImage *thumb = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        [temp addObject:thumb];
    }
    return [NSArray arrayWithArray:temp];
}
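Called like this (the 40x40 size and the images array are placeholders):
NSArray *thumbnails = [self arrayOfThumbnailsOfSize:CGSizeMake(40, 40) fromArray:images];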
You might be able to use this:
yourTableViewController.rowImage = [UIImage imageNamed:@"yourImage.png"];
and/or
cell.image = yourTableViewController.rowImage;
And if your images are already 40x40, then you shouldn't have to worry about setting bounds and such... but I'm also new to this, so I wouldn't know; I haven't played around with table view row/cell images much.
Hope this helps.
I was able to make this work using Interface Builder and a table view cell. You can set the "Mode" property for an image view to "Aspect Fit". I'm not sure how to do this programmatically.
Try setting UIImageView.autoresizesSubviews and/or UIImageView.contentStretch.