I have a CGImageRef (let's call it the original image) and a transparent PNG (the watermark). I'm trying to write a method that draws the watermark on top of the original and returns a CGImageRef.
On iOS I would have used UIKit to draw them both into a context, but that doesn't seem possible on OS X (which doesn't support UIKit).
What's the simplest way to stack two images? Thanks.
For a quick 'n dirty solution you can use the NSImage drawing APIs:
NSImage *background = [NSImage imageNamed:@"background"];
NSImage *overlay = [NSImage imageNamed:@"overlay"];
NSImage *newImage = [[NSImage alloc] initWithSize:[background size]];
[newImage lockFocus];
CGRect newImageRect = CGRectZero;
newImageRect.size = [newImage size];
[background drawInRect:newImageRect];
[overlay drawInRect:newImageRect];
[newImage unlockFocus];
CGImageRef newImageRef = [newImage CGImageForProposedRect:NULL context:nil hints:nil];
If you don't like that, most of the CGContext APIs you'd expect are available cross-platform, if you want a little more control over the drawing. Similarly, you could look into NSGraphicsContext.
This is pretty easy when you render to a CGContext.
If you want an image as a result, you can create and render into a CGBitmapContext, then request the image after rendering.
General flow, with common details and contextual info omitted:
CGImageRef CreateCompositeOfImages(CGImageRef pBackground,
const CGRect pBackgroundRect,
CGImageRef pForeground,
const CGRect pForegroundRect)
{
// configure context parameters
CGContextRef gtx = CGBitmapContextCreate( %%% );
// configure context
// configure context to render background image
// draw background image
CGContextDrawImage(gtx, pBackgroundRect, pBackground);
// configure context to render foreground image
// draw foreground image
CGContextDrawImage(gtx, pForegroundRect, pForeground);
// create result
CGImageRef result = CGBitmapContextCreateImage(gtx);
// cleanup
return result;
}
You would need to create a CGImage from your PNG.
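For concreteness, here is one way those omitted pieces might look. The bitmap parameters, the color space, and the PNG-loading helper are my assumptions, not part of the original answer:
#import <CoreGraphics/CoreGraphics.h>
#import <ImageIO/ImageIO.h>
// Hypothetical helper: create a CGImage from a PNG file on disk.
static CGImageRef CreateCGImageFromPNG(CFURLRef url)
{
    CGImageSourceRef source = CGImageSourceCreateWithURL(url, NULL);
    if (source == NULL) return NULL;
    CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    CFRelease(source);
    return image;
}
// One plausible set of parameters for the %%% placeholder above (8-bit RGBA).
static CGContextRef CreateRGBABitmapContext(size_t width, size_t height)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef gtx = CGBitmapContextCreate(NULL,      // let CG allocate the buffer
                                             width, height,
                                             8,         // bits per component
                                             0,         // bytes per row (computed automatically)
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    return gtx;
}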
Additional APIs you may be interested in using:
CGContextSetBlendMode
CGContextSetAllowsAntialiasing
CGContextSetInterpolationQuality
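For instance, before drawing the foreground image in the function above, you might configure the context like this (the specific values are illustrative, not prescribed by the original):
CGContextSetBlendMode(gtx, kCGBlendModeNormal);
CGContextSetAllowsAntialiasing(gtx, true);
CGContextSetInterpolationQuality(gtx, kCGInterpolationHigh);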
I know many people will advise you to use higher-level abstractions (i.e. AppKit and UIKit), but Core Graphics is a great library for rendering in both of those contexts. If you want graphics code that is easy to share between OS X and iOS, Core Graphics is a good foundation, provided you are comfortable working at that level of abstraction.
If anyone, like me, needs a Swift version, here is a working Swift 5 equivalent:
let background = NSImage(named: "background")! // NSImage(named:) returns an optional; assume the assets exist
let overlay = NSImage(named: "overlay")!
let newImage = NSImage(size: background.size)
newImage.lockFocus()
var newImageRect: CGRect = .zero
newImageRect.size = newImage.size
background.draw(in: newImageRect)
overlay.draw(in: newImageRect)
newImage.unlockFocus()
// Mirroring the Objective-C answer, to get a CGImage back out:
var proposedRect = CGRect(origin: .zero, size: newImage.size)
let newImageRef = newImage.cgImage(forProposedRect: &proposedRect, context: nil, hints: nil)
I wish I had the time to do the same with the CGContext example.
Related
I am currently having problems with CGImageRef.
Whenever I create a CGImageRef and look at it in Xcode's debugger view, it is nil.
Here's the code:
-(void)mouseMoved:(NSEvent *)theEvent{
if (self.shoulddrag) {
NSPoint event_location = [theEvent locationInWindow];//direct from the docs
NSPoint local_point = [self convertPoint:event_location fromView:nil];//direct from the docs
CGImageRef theImage = (__bridge CGImageRef)(self.image);
CGImageRef theClippedImage = CGImageCreateWithImageInRect(theImage, CGRectMake(local_point.x,local_point.y,1,1));
NSImage * image = [[NSImage alloc] initWithCGImage:theClippedImage size:NSZeroSize];
self.pixelView.image = image;
CGImageRelease(theClippedImage);
}
}
Everything else seems to be working, though. I can't understand it. Any help would be appreciated.
Note: self.pixelView is an NSImageView instance that has not been overridden in any way.
Very likely local_point is not inside of the image. You've converted the point from the window to the view coordinates, but that may not be equivalent to the image coordinates. Test this to see if the lower-left corner of your image results in local_point being (0,0).
It's not clear how your view is laid out, but I suspect that what you want to do is subtract the origin of whatever region (possibly a subview) the user is interacting with relative to self.
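A sketch of that adjustment (imageFrame here is a stand-in for wherever the image actually sits within your view; it is not from the original code):
NSPoint local_point = [self convertPoint:event_location fromView:nil];
local_point.x -= imageFrame.origin.x; // shift into the image's own coordinate space
local_point.y -= imageFrame.origin.y;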
Alright, I figured it out.
What I was using to create the CGImageRef was:
CGImageRef theImage = (__bridge CGImageRef)(self.image);
Apparently what I should have used is:
CGImageSourceRef theImage = CGImageSourceCreateWithData((CFDataRef)[self.image TIFFRepresentation], NULL);
I guess my problem was that for some reason I thought NSImage and CGImageRef were toll-free bridged.
Apparently, I was wrong.
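Note that CGImageSourceCreateWithData returns an image source, not an image; to get an actual CGImageRef out of it you still need a second step, the same pattern that appears in a later answer below:
CGImageRef actualImage = CGImageSourceCreateImageAtIndex(theImage, 0, NULL); // theImage is the CGImageSourceRef created above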
I seem to have a curious bug involving renderInContext.
I am working on a "scrapbook" app which allows people to freely arrange text, colored squares and photos within a scrapbookView. Note that users can add both photos from a built-in collection of photos (loaded via imageNamed:) and photos from their asset library.
My code creates both PDFs and thumbnails as needed from the scrapbook view and all its subviews by first creating a PDFContext (for PDFs) or an ImageContext (for thumbnail images), then calling renderInContext on scrapBookView.layer.
In general, the code works, with one exception which I will describe. When I am creating a PDF, both built-in images and asset library images are correctly included in the rendered PDF. However, when I am creating a thumbnail, the built-in images appear, but the asset library images don't correctly appear. I know the imageView is there because elsewhere I set its background color to light gray but the image itself is absent.
I suspect that there may be an asynchronous element to loading the images into image views, and the image doesn't get there fast enough when it's an asset library image being rendered into an ImageContext.
Is this possible? If not, why would renderInContext work fine in a PDFContext but not so well in an ImageContext?
And more to the point, how can I get my asset library images included in my thumbnails?
Both the pdf representation and the thumbnail are created via methods in scrapbookView. The code is as follows:
To create pdfs:
-(NSData*) pdfRepresentation
{
//== create a pdf context
NSMutableData* result = [[NSMutableData alloc] init];
UIGraphicsBeginPDFContextToData(result, self.bounds, nil);
UIGraphicsBeginPDFPage();
//== draw this view into it
[self.layer renderInContext: UIGraphicsGetCurrentContext()]; // works for asset library images
//== add a rectangular frame also
CGContextRef pdf = UIGraphicsGetCurrentContext();
CGContextAddRect(pdf, self.bounds);
[[UIColor lightGrayColor] setStroke];
CGContextStrokePath(pdf);
//== clean up and return the results
UIGraphicsEndPDFContext();
return [result copy];
}
To create the thumbnail:
-(UIImage*) thumbnailRepresentation
{
//== create a large bitmapped image context
CGFloat minSpan = MIN(self.bounds.size.height, self.bounds.size.width);
CGSize imageSize = CGSizeMake(minSpan, minSpan);
UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0.0);
//== draw the current view into the bitmap context
self.backgroundColor = [UIColor whiteColor];
[self.layer renderInContext: UIGraphicsGetCurrentContext()]; // doesn't work for asset library images
self.backgroundColor = [UIColor clearColor];
//== get the preliminary results
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//== create a smaller image from this image
CGFloat thumbnailWidth = 40.0;
CGFloat scaleFactor = image.scale * minSpan / thumbnailWidth; // scale is pixels-per-point, so it must be > 1 to shrink the reported size
UIImage* result = [UIImage imageWithCGImage:[image CGImage] scale:scaleFactor orientation:UIImageOrientationUp];
return result;
}
// btw - seems wasteful to create such a large intermediate image
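One less wasteful sketch (using the same minSpan and thumbnailWidth as above) would be to scale the CTM and render the layer straight into a thumbnail-sized context:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(thumbnailWidth, thumbnailWidth), YES, 0.0);
CGContextRef smallContext = UIGraphicsGetCurrentContext();
CGContextScaleCTM(smallContext, thumbnailWidth / minSpan, thumbnailWidth / minSpan); // shrink all drawing to thumbnail scale
[self.layer renderInContext:smallContext];
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();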
Any help would be appreciated.
Is there any image-cropping API for Objective-C that crops images dynamically in an Xcode project? Please share any tricks or techniques for cropping camera images on the iPhone.
You can use the simple code below to crop an image. You have to pass the image and a CGRect for the cropping area. Here, I crop the image so that I get the center part of the original image, and the returned image is square.
// Returns largest possible centered cropped image.
- (UIImage *)centerCropImage:(UIImage *)image
{
// Use smallest side length as crop square length
CGFloat squareLength = MIN(image.size.width, image.size.height);
// Center the crop area
CGRect clippedRect = CGRectMake((image.size.width - squareLength) / 2, (image.size.height - squareLength) / 2, squareLength, squareLength);
// Crop logic
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], clippedRect);
UIImage * croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return croppedImage;
}
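One caveat worth noting: CGImageCreateWithImageInRect works in the CGImage's pixel coordinates, while image.size is in points, so for Retina images (scale greater than 1) you may need to scale the crop rect first. A sketch of that adjustment:
CGRect pixelRect = CGRectMake(clippedRect.origin.x * image.scale,
                              clippedRect.origin.y * image.scale,
                              clippedRect.size.width * image.scale,
                              clippedRect.size.height * image.scale);
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], pixelRect);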
EDIT - Swift Version
let imageView = UIImageView(image: image)
imageView.contentMode = .scaleAspectFill
imageView.clipsToBounds = true
All these solutions seem quite complicated, and many of them actually degrade the quality of the image.
You can do something much simpler using UIImageView's out-of-the-box properties.
Objective-C
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
[self.imageView setClipsToBounds:YES];
[self.imageView setImage:img];
This will crop your image based on the dimensions you've set for your UIImageView (I've called mine imageView here).
It's that simple and works much better than the other solutions. Note that this crops what is displayed on screen; the underlying UIImage data is unchanged.
You can use the Core Graphics framework to crop images dynamically.
Here is example code for one part of a dynamic image crop. I hope it is helpful for you.
- (void)drawMaskLineSegmentTo:(CGPoint)ptTo withMaskWidth:(CGFloat)maskWidth inContext:(NXMaskDrawContext)context{
if (context == nil)
return;
if (context.count <= 0){
[context addObject:[NSValue valueWithCGPoint:ptTo]];
return;
}
CGPoint ptFrom = [context.lastObject CGPointValue];
// First pass: stroke the new segment into the mask image itself
UIGraphicsBeginImageContext(self.maskImage.size);
[self.maskImage drawInRect:CGRectMake(0, 0, self.maskImage.size.width, self.maskImage.size.height)];
CGContextRef graphicsContext = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(graphicsContext, kCGBlendModeCopy);
CGContextSetRGBStrokeColor(graphicsContext, 1, 1, 1, 1);
CGContextSetLineWidth(graphicsContext, maskWidth);
CGContextSetLineCap(graphicsContext, kCGLineCapRound);
CGContextMoveToPoint(graphicsContext, ptFrom.x, ptFrom.y);
CGContextAddLineToPoint(graphicsContext, ptTo.x, ptTo.y);
CGContextStrokePath(graphicsContext);
self.maskImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Second pass: stroke the same segment into the displayable (colored) mask
UIGraphicsBeginImageContext(self.displayableMaskImage.size);
[self.displayableMaskImage drawInRect:CGRectMake(0, 0, self.displayableMaskImage.size.width, self.displayableMaskImage.size.height)];
graphicsContext = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(graphicsContext, kCGBlendModeCopy);
CGContextSetStrokeColorWithColor(graphicsContext, self.displayableMaskColor.CGColor);
CGContextSetLineWidth(graphicsContext, maskWidth);
CGContextSetLineCap(graphicsContext, kCGLineCapRound);
CGContextMoveToPoint(graphicsContext, ptFrom.x, ptFrom.y);
CGContextAddLineToPoint(graphicsContext, ptTo.x, ptTo.y);
CGContextStrokePath(graphicsContext);
self.displayableMaskImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[context addObject:[NSValue valueWithCGPoint:ptTo]];
}
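NXMaskDrawContext is the author's own type, not an SDK one; judging from how it is used (count, lastObject, addObject: with NSValue-wrapped CGPoints), a plausible declaration would be:
typedef NSMutableArray *NXMaskDrawContext; // hypothetical: an ordered list of NSValue-wrapped CGPoints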
Xcode 5, iOS 7, and 4-inch screen example: here is an open-source example, SimpleImageCropEditor (project zip and source code example). You can load the image crop editor as a modal view controller and reuse it. Look at the code, and please leave constructive comments on whether this example code answers the question "Image Cropping API for iOS".
The example Objective-C source code demonstrates the use of UIImagePickerController, @protocol, UIActionSheet, UIScrollView, UINavigationController, MFMailComposeViewController, and UIGestureRecognizer.
Is there an easy way to do this that works in 10.5?
In 10.6 I can use [nsImage CGImageForProposedRect:NULL context:NULL hints:NULL].
If I'm not using 1-bit black-and-white images (like Group 4 TIFF), I can use bitmaps, but CGBitmapContexts don't seem to like that setup... Is there a general way of doing this?
I need to do this because I have an IKImageView that seems to only want to accept CGImages, but all I've got are NSImages. Currently, I'm using a private setImage:(NSImage*) method that I'd really, really rather not be using...
Found the following solution on this page:
NSImage* someImage;
// create the image somehow, load from file, draw into it...
CGImageSourceRef source;
source = CGImageSourceCreateWithData((CFDataRef)[someImage TIFFRepresentation], NULL);
CGImageRef maskRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CFRelease(source); // release the image source once the image has been created
All the methods seem to be 10.4+ so you should be fine.
[This is the long way around. You should only do this if you're still supporting an old version of OS X. If you can require 10.6 or later, just use the CGImage method instead.]
1. Create a CGBitmapContext.
2. Create an NSGraphicsContext for the bitmap context.
3. Set the graphics context as the current context.
4. Draw the image.
5. Create the CGImage from the contents of the bitmap context.
6. Release the bitmap context.
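A sketch of those steps with 10.5-era APIs (the image variable and the bitmap parameters are assumptions, not from the original answer):
NSSize size = [image size];
size_t width = (size_t)size.width;
size_t height = (size_t)size.height;
size_t bytesPerRow = width * 4;
void *buffer = calloc(height, bytesPerRow); // on 10.5 we must supply the backing store ourselves
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                                   colorSpace, kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
NSGraphicsContext *gc = [NSGraphicsContext graphicsContextWithGraphicsPort:bitmapContext flipped:NO];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:gc];
[image drawInRect:NSMakeRect(0, 0, size.width, size.height)
         fromRect:NSZeroRect
        operation:NSCompositeSourceOver
         fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
CGContextRelease(bitmapContext);
// free(buffer) only after you are done with cgImage; the copy above may be copy-on-write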
As of Snow Leopard, you can just ask the image to create a CGImage for you, by sending the NSImage a CGImageForProposedRect:context:hints: message.
Here's a more detailed answer for using -[NSImage CGImageForProposedRect:context:hints:]:
NSImage *image = [NSImage imageNamed:@"image.jpg"];
NSRect imageRect = NSMakeRect(0, 0, image.size.width, image.size.height);
CGImageRef cgImage = [image CGImageForProposedRect:&imageRect context:NULL hints:nil];
Just did this using CGImageForProposedRect:context:hints:...
NSImage *image; // an image
NSGraphicsContext *context = [NSGraphicsContext currentContext];
CGRect imageCGRect = CGRectMake(0, 0, image.size.width, image.size.height);
NSRect imageRect = NSRectFromCGRect(imageCGRect);
CGImageRef imageRef = [image CGImageForProposedRect:&imageRect context:context hints:nil];
As of 2021, a CALayer's contents can be fed directly with an NSImage, but you may still get an [Utility] unsupported surface format: LA08 warning.
After a couple of days of research and testing, I found out that this warning is triggered if you create an NSView with a backing layer (a CALayer) and just use an NSImage to feed its contents. That alone is not much of a problem. But if you try to render it via [self.layer renderInContext:ctx], each rendering will trigger the warning again.
To make it simple to use, I created an NSImage category to fix this:
@interface NSImage (LA08FIXExtension)
-(id)CGImageRefID;
@end
@implementation NSImage (LA08FIXExtension)
-(id)CGImageRefID {
NSSize size = [self size];
NSRect rect = NSMakeRect(0, 0, size.width, size.height);
CGImageRef ref = [self CGImageForProposedRect:&rect context:[NSGraphicsContext currentContext] hints:NULL];
return (__bridge id)ref;
}
@end
Note how it does not return a CGImageRef directly, but a bridged cast to id! That is what does the trick.
Now you can use it like:
CALayer *somelayer = [CALayer layer];
somelayer.frame = ...
somelayer.contents = [NSImage imageNamed:@"SomeImage"].CGImageRefID;
PS: I posted this here because if you hit this problem, Google will lead you to this thread of answers anyway, and all the answers above inspired me to this fix.
I'm trying to overlay a custom semi-transparent image over a base image. The overlay image is stretchable and created like this:
[[UIImage imageNamed:@"overlay.png"] stretchableImageWithLeftCapWidth:5.0 topCapHeight:5.0]
Then I pass that off to a method that overlays it onto the background image for a button:
- (void)overlayImage:(UIImage *)overlay forState:(UIControlState)state {
UIImage *baseImage = [self backgroundImageForState:state];
CGRect frame = CGRectZero;
frame.size = baseImage.size;
// create a new image context
UIGraphicsBeginImageContext(baseImage.size);
// get context
CGContextRef context = UIGraphicsGetCurrentContext();
// clear context
CGContextClearRect(context, frame);
// draw images
[baseImage drawInRect:frame];
[overlay drawInRect:frame];// blendMode:kCGBlendModeNormal alpha:1.0];
// get UIImage
UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
// clean up context
UIGraphicsEndImageContext();
[self setBackgroundImage:overlaidImage forState:state];
}
The resulting overlaidImage looks mostly correct: it is the correct size, the alpha is blended correctly, etc. However, it has vertical artifacts/noise.
(example at http://acaciatreesoftware.com/img/UIImage-artifacts.png)
I tried clearing the context first and then turning off PNG compression, which reduces the artifacts somewhat (and eliminates them completely on non-stretched images, I think).
Does anyone know a method for drawing stretchable UIImages without this sort of artifacting happening?
So the answer is: don't do this. Instead, you can paint your overlay procedurally, like so:
- (void)overlayWithColor:(UIColor *)overlayColor forState:(UIControlState)state {
UIImage *baseImage = [self backgroundImageForState:state];
CGRect frame = CGRectZero;
frame.size = baseImage.size;
// create a new image context
UIGraphicsBeginImageContext(baseImage.size);
// get context
CGContextRef context = UIGraphicsGetCurrentContext();
// draw background image
[baseImage drawInRect:frame];
// overlay color
CGContextSetFillColorWithColor(context, [overlayColor CGColor]);
CGContextSetBlendMode(context, kCGBlendModeSourceAtop);
CGContextFillRect(context, frame);
// get UIImage
UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
// clean up context
UIGraphicsEndImageContext();
[self setBackgroundImage:overlaidImage forState:state];
}
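Assuming this method lives in a UIButton subclass or category (the original doesn't say), usage might look like this:
[button overlayWithColor:[[UIColor blackColor] colorWithAlphaComponent:0.25] forState:UIControlStateNormal];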
Are you being too miserly with your original image, forcing it to stretch rather than shrink? I've had the best results with images that match the target aspect ratio and are reduced in size. That might not solve your problem, though.