Resize cropped image - objective-c

I have a similar problem to this post:
How to remove the transparent area of an UIImageView after masking?
but the solution doesn't help me. In my case I am cropping the image into multiple parts, and every part keeps the bounds the parent image had. This is the cropping function:
- (UIImage*) maskImage:(CALayer *) sourceLayer toAreaInsidePath:(UIBezierPath*) maskPath;
{
return [self compositeImage:sourceLayer onPath:maskPath usingBlendMode:kCGBlendModeSourceIn];
}
- (UIImage*) compositeImage:(CALayer *) layer onPath:(UIBezierPath*) path usingBlendMode:(CGBlendMode) blend;
{
UIGraphicsBeginImageContext([layer frame].size);
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *sourceImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIGraphicsBeginImageContext([layer frame].size);
[path fill];
[sourceImage drawInRect:[layer frame] blendMode:blend alpha:1.0];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return maskedImage;
}
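For the transparent border itself, one approach worth trying is to crop the masked result to the mask path's bounding box, so each part stops carrying the full parent bounds. A minimal sketch, assuming the path is in the same coordinate space as the image and a scale of 1.0:
- (UIImage *)cropImage:(UIImage *)image toPathBounds:(UIBezierPath *)maskPath
{
    // maskPath.bounds is in points; CGImageCreateWithImageInRect works in
    // pixels, so this sketch assumes image.scale == 1.0.
    CGRect bounds = CGRectIntegral(maskPath.bounds);
    CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, bounds);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);
    return cropped;
}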
and this is how I use it:
- (PuzzlePart *)calculate:(NSDictionary *) item
{
UIBezierPath *path = [self calculatePath:[item objectForKey:@"figure"]];
UIImageView *iView = [[UIImageView alloc] initWithImage:image];
//iView.contentMode = UIViewContentModeCenter;
UIImage *imagePart = [self maskImage:iView.layer toAreaInsidePath:path];
iView.image = imagePart;
return [[PuzzlePart alloc] initWithParams:imagePart lhckID:lifehackID];
}
- (UIBezierPath *)calculatePath:(NSArray *) points {
UIBezierPath *path = [UIBezierPath bezierPath];
NSDictionary *point = [points objectAtIndex:0];
[path moveToPoint:CGPointMake([point[@"x"] intValue], [point[@"y"] intValue])];
for (int i = 1; i < [points count]; i++) {
point = [points objectAtIndex:i];
[path addLineToPoint:CGPointMake([point[@"x"] intValue], [point[@"y"] intValue])];
}
[path closePath];
return path;
}
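For reference, calculatePath expects the points array to be a list of dictionaries with "x" and "y" keys, e.g. (hypothetical data):
NSArray *points = @[ @{ @"x": @0,   @"y": @0  },
                     @{ @"x": @120, @"y": @0  },
                     @{ @"x": @60,  @"y": @90 } ];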
and this is how I place the parts:
- (void)placeItems
{
for (PuzzlePart *part in puzzleParts) {
UIImage *imagePart = [part imagePart];
CGRect newRect = [self cropRectForImage:imagePart];
UIImage *image = [self imageWithImage:imagePart convertToSize:newRect.size];
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
//imageView.contentMode = UIViewContentModeCenter;
imageView.center = CGPointMake([self getRandomNumberBetween:0 to:self.view.frame.size.width],
[self getRandomNumberBetween:0 to:self.view.frame.size.height]);
[self.view addSubview:imageView];
while ([self viewIntersectsWithAnotherView:self.view chosenView:imageView]) {
viewPositionMovingStep++;
[self placeItem:self.view:imageView];
}
}
[self startCountdown];
}
[self cropRectForImage:imagePart] is the same method from the post linked above,
and this is the result I get. Any ideas? Thank you!

Related

iOS 9.3 UIGraphicsImageRenderer showing nil any other option for this?

I want to apply a feather effect to the image when it is cropped, but when I render the image using UIGraphicsImageRenderer the renderer is nil (iOS 9.3). Is there any other option for rendering the image on iOS 9.3?
This is my code:
UIImage* Img = _myimage;
UIImageView * imageview = [[UIImageView alloc]initWithImage:Img];
[imageview setFrame:CGRectMake(0, 0, Img.size.width, Img.size.height)];
UIBezierPath *aPath;
CAShapeLayer *shapeLayer;
aPath = [UIBezierPath bezierPath];
aPath.contractionFactor = 0.8;
NSValue *mvalue = [_pointsarray objectAtIndex:0];
CGPoint mp2 = [mvalue CGPointValue];
[aPath moveToPoint:mp2];
[aPath addBezierThroughPoints:_pointsarray];
[aPath closePath];
aPath.lineWidth = 2;
shapeLayer = [CAShapeLayer new];
shapeLayer.path=aPath.CGPath;
[shapeLayer setFillColor:[UIColor redColor].CGColor];
[shapeLayer fillColor];
//Here the Renderer is nil in iOS 9.3
UIGraphicsImageRenderer * renderer = [[UIGraphicsImageRenderer alloc] initWithSize:Img.size];
UIImage *shapeImage = [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull context)
{
[shapeLayer renderInContext: context.CGContext];
}];
CIImage * shapeCimage = [[CIImage alloc] initWithImage:shapeImage];
CIFilter * gaussianBlurFilter = [CIFilter filterWithName: @"CIGaussianBlur"];
[gaussianBlurFilter setValue:shapeCimage forKey: @"inputImage"];
[gaussianBlurFilter setValue:@8 forKey:@"inputRadius"];
CIImage * blurredCIImage = [gaussianBlurFilter valueForKey:kCIOutputImageKey];
UIImage * blurredImage = [UIImage imageWithCIImage:blurredCIImage];
UIImageView *maskedImageView = [[UIImageView alloc]initWithImage:blurredImage];
maskedImageView.contentMode = UIViewContentModeScaleAspectFit;
maskedImageView.frame = imageview.frame;
imageview.layer.mask=maskedImageView.layer;
[self.view addSubview:imageview];
UIGraphicsImageRenderer is not available on iOS 9; it was introduced in iOS 10. If you need to target iOS 9, you can easily write your own replacement like this:
// The block type used below is assumed to be declared as:
typedef void (^LTImageDrawingContextBlock)(CGContextRef context);
+(instancetype)LT_imageByDrawingOnCanvasOfSize:(CGSize)size usingBlock:(LTImageDrawingContextBlock)block
{
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
block( context );
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return theImage;
}
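Usage then mirrors the UIGraphicsImageRenderer call; a sketch, assuming the method lives in a category on UIImage:
UIImage *shapeImage = [UIImage LT_imageByDrawingOnCanvasOfSize:Img.size
                                                    usingBlock:^(CGContextRef context) {
    // same drawing as the renderer block above
    [shapeLayer renderInContext:context];
}];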

CGRect intersection from child to self

Question
How do you get the intersection of two sprites when one is a child of self and the other is a child of another sprite, given that their positions are relative to completely different coordinate spaces?
For example, this runs before each frame to determine whether they intersect:
-(void)checkInFOVWithPlayer:(Player *)player andEnemy:(Player *)enemy {
SKNode *node = [player childNodeWithName:player.playersFOVName];
if (CGRectIntersectsRect(node.frame, enemy.frame)) {
// [self playerAimAtEnemy:enemy withPlayer:player];
NSLog(#"inframe");
} else {
NSLog(#" ");
}
}
However, node is a child of player and enemy is a child of self. So how can you check if they intersect?
Here's where they're initialised
float radianAngle = ((fovAngle) / 180.0 * M_PI);
float fovOpposite = atanf(radianAngle) * fovDistance;
SKShapeNode *fov = [SKShapeNode node];
UIBezierPath *fovPath = [[UIBezierPath alloc] init];
[fovPath moveToPoint:CGPointMake(0, 0)];
[fovPath addLineToPoint:CGPointMake(fovOpposite *-1, fovDistance)];
[fovPath addLineToPoint:CGPointMake(fovOpposite, fovDistance)];
[fovPath addLineToPoint:CGPointMake(0, 0)];
fov.path = fovPath.CGPath;
fov.lineWidth = 1.0;
fov.strokeColor = [UIColor clearColor];
fov.antialiased = NO;
fov.fillColor = [UIColor greenColor];
fov.alpha = 0.2;
fov.name = @"playerFOV";
[_playerImage addChild:fov];
and here is the enemy:
NSString *bundle = [[NSBundle mainBundle] pathForResource:@"enemyImage" ofType:@"png"];
UIImage *image = [[UIImage alloc] initWithContentsOfFile:bundle];
SKTexture *texture = [SKTexture textureWithImage:image];
_enemy = [Player spriteNodeWithTexture:texture];
_enemy.position = CGPointMake(100, 100);
_enemy.size = CGSizeMake(20, 20);
_enemy.playerAimAngle = [self returnRandomNumberBetween:0 to:360];
_enemy.anchorPoint = CGPointMake(0.5, 0.5);
_enemy.playerHealth = 100;
_enemy.playerIsDead = false;
[self addChild:_enemy];
-(void)checkInFOVWithPlayer:(Player *)player andEnemy:(Player *)enemy {
SKNode *fovNode = [player childNodeWithName:player.playersFOVName];
SKNode *node = [self childNodeWithName:@"enemy"];
CGPoint newPosition = [self convertPoint:node.position toNode:fovNode.parent];
if (CGRectContainsPoint(fovNode.frame, newPosition)) {
[self playerAimAtEnemy:enemy withPlayer:player];
}
}
This ended up being my solution to the problem. Now, however, I must find a way to change the frame of the node so that it is not rectangular.
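One option for that (a sketch, not from the original post) is to hit-test the converted point against the FOV's triangular path instead of its rectangular frame, using CGPathContainsPoint:
SKShapeNode *fovShape = (SKShapeNode *)fovNode;
// Convert the enemy's position into the FOV node's own coordinate space,
// because the shape node's path is defined in that space.
CGPoint localPoint = [fovNode convertPoint:node.position fromNode:self];
if (fovShape.path && CGPathContainsPoint(fovShape.path, NULL, localPoint, false)) {
    [self playerAimAtEnemy:enemy withPlayer:player];
}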

Render a single UIImage and rotation has to be applied

I have rendered a few UIImage objects using CGContextDrawImage. But when I apply rotation to the image, it is not applied and the view disappears.
Here is the code:
-(void )renderImage: (ItemView *)array
{
NSArray *selectedImages = self.slideView.selectedView.subviews;
CGSize combinedSize = CGSizeMake(0, 0);
for (int i = 0; i < [selectedImages count]; ++i) {
CGSize sourceSize = [selectedImages[i] size];
NSLog(#"sdfsd %f %f ", sourceSize.width, sourceSize.height);
combinedSize.width = MAX(combinedSize.width, sourceSize.width);
combinedSize.height += sourceSize.height;
}
UIGraphicsBeginImageContextWithOptions(self.view.frame.size, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
{
CGContextTranslateCTM(context, 0, 768);
CGContextScaleCTM(context, 1,-1);
for (int i = 0; i < [selectedImages count]; ++i) {
UIImageView *imageview = selectedImages[i];
UIImage *sourceImage = imageview.image;
CGContextSaveGState(context);
float radians1 = atan2(imageview.transform.a, imageview.transform.b);
CGFloat angle = [(NSNumber *)[imageview valueForKeyPath:@"layer.transform.rotation.z"] floatValue];
printf("\n radians %f",radians1);
printf("\n angle %f",angle);
CGContextRotateCTM(context, radians(angle));
CGContextDrawImage(context, imageview.frame, imageview.image.CGImage);
CGContextRestoreGState(context);
//CGContextDrawImage(context, imageview.frame, imageview.image);
}
}
UIImage *combinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(combinedImage, nil, nil, nil);
}
Thanks,
Any help would be appreciated
Let's say you have an array of source images:
NSArray *sourceImages = ...;
// each element of sourceImages is a UIImage
Now you want to take the first count images from the array and concatenate them vertically into a new image. Start by figuring out the size of the combined image:
CGSize combinedSize = CGSizeMake(0, 0);
for (int i = 0; i < count; ++i) {
CGSize sourceSize = [sourceImages[i] size];
combinedSize.width = MAX(combinedSize.width, sourceSize.width);
combinedSize.height += sourceSize.height;
}
Next, create a graphics context of that size:
UIGraphicsBeginImageContextWithOptions(combinedSize, NO, 0); {
Now draw each component image into the graphics context at the appropriate position:
CGFloat y = 0;
for (int i = 0; i < count; ++i) {
UIImage *sourceImage = sourceImages[i];
[sourceImage drawAtPoint:CGPointMake(0, y)];
y += sourceImage.size.height;
}
Finally, create the combined image from the context and dispose of the context:
}
UIImage *combinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Read Drawing and Printing Guide for iOS - Drawing and Creating Images for more information.
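Put together, the steps above make one self-contained helper (a straightforward consolidation, not new logic):
// Vertically concatenate the first `count` images of sourceImages.
- (UIImage *)combinedImageFrom:(NSArray *)sourceImages count:(NSUInteger)count
{
    CGSize combinedSize = CGSizeMake(0, 0);
    for (NSUInteger i = 0; i < count; ++i) {
        CGSize sourceSize = [sourceImages[i] size];
        combinedSize.width = MAX(combinedSize.width, sourceSize.width);
        combinedSize.height += sourceSize.height;
    }
    UIGraphicsBeginImageContextWithOptions(combinedSize, NO, 0);
    CGFloat y = 0;
    for (NSUInteger i = 0; i < count; ++i) {
        UIImage *sourceImage = sourceImages[i];
        [sourceImage drawAtPoint:CGPointMake(0, y)];
        y += sourceImage.size.height;
    }
    UIImage *combinedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return combinedImage;
}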
I cracked the answer. Thanks to @rob mayoff, who helped me out.
Here is the code:
//code to render group of images excluding the image which has been selected in a particular view
-(void )renderImage: (ItemView *)selectedItem
{
NSArray *selectedImages = self.slideView.selectedView.subviews;
int selectedItemIndex = [selectedImages indexOfObject:selectedItem];
if(selectedItem.image==nil){
[selectedItem loadImageFromFile];
}
// first set
UIGraphicsBeginImageContextWithOptions(self.slideView.selectedView.frame.size, NO, 0);
for (int i = 0; i < selectedItemIndex; ++i) {
if([selectedImages[i] isKindOfClass:[ItemView class]]){
ItemView *imageview = (ItemView *)selectedImages[i];
CGAffineTransform transform = imageview.transform;
imageview.transform = CGAffineTransformIdentity;
UIImage *image = [[UIImage alloc] initWithContentsOfFile:imageview.filename];
UIImageView *renderImageView = [[UIImageView alloc] initWithImage:image]; //[imageview copy];
renderImageView.frame = imageview.frame;
renderImageView.bounds = imageview.bounds;
imageview.transform = transform;
renderImageView.transform = transform;
renderImageView.center = CGPointMake(imageview.frame.size.width/2,imageview.frame.size.height/2);
UIView *view = [[UIView alloc] initWithFrame:renderImageView.frame];
view.backgroundColor = [UIColor clearColor];
[view addSubview:renderImageView];
[renderImageView release];
[image release];
imageview.image = nil;
UIGraphicsBeginImageContext(imageview.frame.size);
CGContextRef context1 = UIGraphicsGetCurrentContext();
[view.layer renderInContext:context1];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[view release];
[result drawAtPoint:imageview.frame.origin];
}
}
UIImage *combinedImage1 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//UIImageWriteToSavedPhotosAlbum(combinedImage1, nil, nil, nil);
if (groupBgImage1) {
[groupBgImage1 removeFromSuperview];
[groupBgImage1 release];
groupBgImage1 = nil;
}
groupBgImage1 = [[UIImageView alloc] initWithImage:combinedImage1];
[self.slideView.selectedView insertSubview:groupBgImage1 belowSubview:selectedItem ];
// second set
UIGraphicsBeginImageContextWithOptions(self.slideView.selectedView.frame.size, NO, 0);
for (int i = selectedItemIndex+1; i < [selectedImages count]; ++i) {
if([selectedImages[i] isKindOfClass:[ItemView class]]){
ItemView *imageview = (ItemView *)selectedImages[i];
CGAffineTransform transform = imageview.transform;
imageview.transform = CGAffineTransformIdentity;
UIImage *image = [[UIImage alloc] initWithContentsOfFile:imageview.filename];
UIImageView *renderImageView = [[UIImageView alloc] initWithImage:image]; //[imageview copy];
renderImageView.frame = imageview.frame;
renderImageView.bounds = imageview.bounds;
imageview.transform = transform;
renderImageView.transform = transform;
renderImageView.center = CGPointMake(imageview.frame.size.width/2,imageview.frame.size.height/2);
UIView *view = [[UIView alloc] initWithFrame:renderImageView.frame];
view.backgroundColor = [UIColor clearColor];
[view addSubview:renderImageView];
[renderImageView release];
[image release];
imageview.image = nil;
UIGraphicsBeginImageContext(imageview.frame.size);
CGContextRef context1 = UIGraphicsGetCurrentContext();
[view.layer renderInContext:context1];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[view release];
[result drawAtPoint:imageview.frame.origin];
}
}
UIImage *combinedImage2 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(combinedImage2, nil, nil, nil);
if (groupBgImage2) {
[groupBgImage2 removeFromSuperview];
[groupBgImage2 release];
groupBgImage2 = nil;
}
groupBgImage2 = [[UIImageView alloc] initWithImage:combinedImage2];
[self.slideView.selectedView insertSubview:groupBgImage2 aboveSubview:selectedItem ];
}

How can I save the image from IKImageView?

Okay... I want to save the visible rectangle of the IKImageView image only.
My problem is that, somehow, an image in portrait mode will not be saved with the right orientation. I draw the image with this code:
[sourceImg drawInRect:targetRect fromRect:sourceRect operation:NSCompositeSourceOver fraction:1.0f];
On the other side if I draw the image with this code:
[sourceImg drawAtPoint:NSZeroPoint fromRect:sourceRect operation:NSCompositeSourceOver fraction:1.0f];
It will be in the right orientation, but edits like zooming are lost. I mean it will save the right cut out, but unfortunately not in the right size.
Here is the code how I load the image:
- (IBAction)imageSelectionButtonAction:(id)sender {
NSLog(#"%s", __FUNCTION__);
NSOpenPanel *panel = [NSOpenPanel openPanel];
id model = [self getModel];
if (imageView) {
[panel setAllowedFileTypes:[NSImage imageFileTypes]];
[panel beginSheetModalForWindow:[[NSApplication sharedApplication] mainWindow] completionHandler:^(NSInteger returnCode) {
if (returnCode == 1) {
NSURL *imageUrl = [panel URL];
CGImageRef image = NULL;
CGImageSourceRef isr = CGImageSourceCreateWithURL( (__bridge CFURLRef)imageUrl, NULL);
if (isr) {
NSDictionary *options = [NSDictionary dictionaryWithObject: (id)kCFBooleanTrue forKey: (id) kCGImageSourceShouldCache];
image = CGImageSourceCreateImageAtIndex(isr, 0, (__bridge CFDictionaryRef)options);
if (image) {
_imageProperties = (__bridge_transfer NSDictionary*)CGImageSourceCopyPropertiesAtIndex(isr, 0, (__bridge CFDictionaryRef)_imageProperties);
_imageUTType = (__bridge NSString*)CGImageSourceGetType(isr);
}
CFRelease(isr);
}
if (image) {
[imageView setImage:image imageProperties:_imageProperties];
CGImageRelease(image);
}
[[model saveTempMutabDict] setValue:[imageUrl absoluteString] forKey:@"tempImage"];
}
}];
return;
}
}
And here is the code how I save it:
- (void)saveImage:(NSString *)path {
// get the current image from the image view
CGImageRef sourceImgRef = [imageView image];
NSRect targetRect = [imageView visibleRect];
NSImage *sourceImg = [[NSImage alloc] initWithCGImage:sourceImgRef size:NSZeroSize];
NSMutableDictionary *thisTiffDict = [_imageProperties valueForKey:@"{TIFF}"];
NSInteger theOrientation = [[thisTiffDict valueForKey:@"Orientation"] integerValue];
NSImage *targetImg = nil;
if (theOrientation == 6) {
targetImg = [[NSImage alloc] initWithSize:NSMakeSize([imageView frame].size.height, [imageView frame].size.width)];
} else {
targetImg = [[NSImage alloc] initWithSize:NSMakeSize([imageView frame].size.width, [imageView frame].size.height)];
}
NSRect sourceRect = [imageView convertViewRectToImageRect:targetRect];
[targetImg lockFocus];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
[sourceImg drawAtPoint:NSZeroPoint fromRect:sourceRect operation:NSCompositeSourceOver fraction:1.0f];
[targetImg unlockFocus];
_saveOptions = [[IKSaveOptions alloc] initWithImageProperties:_imageProperties imageUTType: _imageUTType];
NSString * newUTType = [_saveOptions imageUTType];
CGImageRef targetImgRef = [targetImg CGImageForProposedRect:NULL context:[NSGraphicsContext currentContext] hints:nil];
if (targetImgRef) {
NSURL *url = [NSURL fileURLWithPath: path];
CGImageDestinationRef dest = CGImageDestinationCreateWithURL((__bridge CFURLRef)url, (__bridge CFStringRef)newUTType, 1, NULL);
if (dest) {
CGImageDestinationAddImage(dest, targetImgRef, (__bridge CFDictionaryRef)[_saveOptions imageProperties]);
CGImageDestinationFinalize(dest);
CFRelease(dest);
}
}
}
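One thing worth trying, as a sketch only: rotate the drawing context before drawing when the TIFF orientation is 6 (image stored rotated 90°), so the pixels are written upright. The exact sign of the rotation may need flipping depending on the source:
[targetImg lockFocus];
if (theOrientation == 6) {
    // Rotate the context 90° so orientation-6 images come out upright.
    NSAffineTransform *transform = [NSAffineTransform transform];
    [transform translateXBy:0 yBy:[targetImg size].height];
    [transform rotateByDegrees:-90];
    [transform concat];
}
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
[sourceImg drawAtPoint:NSZeroPoint fromRect:sourceRect operation:NSCompositeSourceOver fraction:1.0f];
[targetImg unlockFocus];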
I have no idea what I'm doing wrong.
Thank you,
Jens

Display NSImage on a CALayer

I've been trying to display an NSImage on a CALayer. Then I realised I apparently need to convert it to a CGImage first and display that...
I have this code, which doesn't seem to be working:
CALayer *layer = [CALayer layer];
NSImage *finderIcon = [[NSWorkspace sharedWorkspace] iconForFileType:NSFileTypeForHFSTypeCode(kFinderIcon)];
[finderIcon setSize:(NSSize){ 128.0f, 128.0f }];
CGImageSourceRef source;
source = CGImageSourceCreateWithData((CFDataRef)finderIcon, NULL);
CGImageRef finalIcon = CGImageSourceCreateImageAtIndex(source, 0, NULL);
layer.bounds = CGRectMake(128.0f, 128.0f, 4, 4);
layer.position = CGPointMake(128.0f, 128.0f);
layer.contents = finalIcon;
// Insert the layer into the root layer
[mainLayer addSublayer:layer];
Why? How can I get this to work?
From the comments: Actually, if you're on 10.6, you can also just set the CALayer's contents to an NSImage rather than a CGImageRef...
If you're on OS X 10.6 or later, take a look at NSImage's CGImageForProposedRect:context:hints: method.
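For example, on 10.6+ the conversion is a one-liner:
// Convert the NSImage to a CGImage and hand it to the layer (10.6+).
CGImageRef finalIcon = [finderIcon CGImageForProposedRect:NULL context:nil hints:nil];
layer.contents = (id)finalIcon;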
If you're not, I've got this in a category on NSImage:
-(CGImageRef)CGImage
{
CGContextRef bitmapCtx = CGBitmapContextCreate(NULL/*data - pass NULL to let CG allocate the memory*/,
[self size].width,
[self size].height,
8 /*bitsPerComponent*/,
0 /*bytesPerRow - CG will calculate it for you if it's allocating the data. This might get padded out a bit for better alignment*/,
[[NSColorSpace genericRGBColorSpace] CGColorSpace],
kCGBitmapByteOrder32Host|kCGImageAlphaPremultipliedFirst);
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:bitmapCtx flipped:NO]];
[self drawInRect:NSMakeRect(0,0, [self size].width, [self size].height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapCtx);
CGContextRelease(bitmapCtx);
return (CGImageRef)[(id)cgImage autorelease];
}
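With that category in place, setting the layer contents becomes:
layer.contents = (id)[finderIcon CGImage]; // uses the -CGImage category above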
I think I wrote this myself. But it's entirely possible that I ripped it off from somewhere else like Stack Overflow. It's an older personal project and I don't really remember.
Here's some code which may help you - I sure hope the formatting of this does not get all messed up like it appears is going to happen - all I can offer is that this works for me.
// -------------------------------------------------------------------------------------
- (void)awakeFromNib
{
// setup our main window 'contentWindow' to use layers
[[contentWindow contentView] setWantsLayer:YES]; // NSWindow*
// create a root layer to contain all of our layers
CALayer *root = [[contentWindow contentView] layer];
// use constraint layout to allow sublayers to center themselves
root.layoutManager = [CAConstraintLayoutManager layoutManager];
// create a new layer which will contain ALL our sublayers
// -------------------------------------------------------
mContainer = [CALayer layer];
mContainer.bounds = root.bounds;
mContainer.frame = root.frame;
mContainer.position = CGPointMake(root.bounds.size.width * 0.5,
root.bounds.size.height * 0.5);
// insert layer on the bottom of the stack so it is behind the controls
[root insertSublayer:mContainer atIndex:0];
// make it resize when its superlayer does
root.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;
// make it resize when its superlayer does
mContainer.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;
}
// -------------------------------------------------------------------------------------
- (void) loadMyImage:(NSString*) path
n:(NSInteger) num
x:(NSInteger) xpos
y:(NSInteger) ypos
h:(NSInteger) hgt
w:(NSInteger) wid
b:(NSString*) blendstr
{
#ifdef __DEBUG_LOGGING__
NSLog(#"loadMyImage - ENTER [%#] num[%d] x[%d] y[%d] h[%d] w[%d] b[%#]",
path, num, xpos, ypos, hgt, wid, blendstr);
#endif
NSInteger xoffset = ((wid / 2) + xpos); // use CORNER versus CENTER for location
NSInteger yoffset = ((hgt / 2) + ypos);
CIFilter* filter = nil;
CGRect cgrect = CGRectMake((CGFloat) xoffset, (CGFloat) yoffset,
(CGFloat) wid, (CGFloat) hgt);
if(nil != blendstr) // would be equivalent to @"CIMultiplyBlendMode" or similar
{
filter = [CIFilter filterWithName:blendstr];
}
// read image file via supplied path
NSImage* theimage = [[NSImage alloc] initWithContentsOfFile:path];
if(nil != theimage)
{
[self setMyImageLayer:[CALayer layer]]; // create layer
myImageLayer.frame = cgrect; // locate & size image
myImageLayer.compositingFilter = filter; // nil is OK if no filter
[myImageLayer setContents:(id) theimage]; // deposit image into layer
// add new layer into our main layer [see awakeFromNib above]
[mContainer insertSublayer:myImageLayer atIndex:0];
[theimage release];
}
else
{
NSLog(#"ERROR loadMyImage - no such image [%#]", path);
}
}
+ (CGImageRef) getCachedImage:(NSString *) imageName
{
NSGraphicsContext *context = [NSGraphicsContext currentContext]; // -graphicsPort returns a CGContextRef, not an NSGraphicsContext
NSImage *img = [NSImage imageNamed:imageName];
NSRect rect = NSMakeRect(0, 0, [img size].width, [img size].height);
return [img CGImageForProposedRect:&rect context:context hints:NULL];
}
+ (CGImageRef) getImage:(NSString *) imageName withExtension:(NSString *) extension
{
NSGraphicsContext *context = [NSGraphicsContext currentContext]; // see note above about -graphicsPort
NSString* imagePath = [[NSBundle mainBundle] pathForResource:imageName ofType:extension];
NSImage* img = [[NSImage alloc] initWithContentsOfFile:imagePath];
NSRect rect = NSMakeRect(0, 0, [img size].width, [img size].height);
CGImageRef imgRef = [img CGImageForProposedRect:&rect context:context hints:NULL];
[img release];
return imgRef;
}
then you can set it:
yourLayer.contents = (id)[self getCachedImage:@"myImage.png"];
or
yourLayer.contents = (id)[self getImage:@"myImage" withExtension:@"png"];