Blurred Screenshot of a view being drawn by UIBezierPath - objective-c

I'm drawing my graph view using UIBezierPath methods and Core Text. I use the addQuadCurveToPoint:controlPoint: method to draw curves on the graph. I also use CATiledLayer to render the graph with a large data set on the x axis. I draw the whole graph into an image context, and in my view's drawRect: method I draw that image into the whole view. The following is my code.
- (void)drawImage
{
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    // Draw curves
    [self drawDiagonal];
    // Store the rendered image in the ivar used by drawRect: (manual retain; pre-ARC)
    screenshot = [UIGraphicsGetImageFromCurrentImageContext() retain];
    UIGraphicsEndImageContext();
}

- (void)drawRect:(CGRect)rect
{
    NSLog(@"Draw in rect with bounds: %@", NSStringFromCGRect(rect));
    [screenshot drawInRect:self.bounds];
}
However, in the screenshot the curves drawn between two points are not smooth. I've also set "Renders with edge antialiasing" to YES in my Info.plist. Please see the screenshot.

We'd have to see how you construct the UIBezierPath, but in my experience the key to smooth curves is whether the slope of the line between a segment's control point and its end point equals the slope between the next segment's start point and its first control point. I find it easier to draw generally smooth curves using addCurveToPoint:controlPoint1:controlPoint2: rather than addQuadCurveToPoint:controlPoint:, because cubic curves let me adjust the starting and ending control points independently to satisfy this criterion.
To illustrate this point, the way I usually draw UIBezierPath curves is to keep an array of points on the curve, the angle the curve should take at each point, and the "weight" of the addCurveToPoint: control points (i.e. how far out the control points should be). I then use those parameters to dictate the second control point of one segment and the first control point of the next segment. So, for example:
@interface BezierPoint : NSObject
@property CGPoint point;
@property CGFloat angle;
@property CGFloat weight;
@end

@implementation BezierPoint

- (id)initWithPoint:(CGPoint)point angle:(CGFloat)angle weight:(CGFloat)weight
{
    self = [super init];
    if (self)
    {
        self.point = point;
        self.angle = angle;
        self.weight = weight;
    }
    return self;
}

@end
And then, an example of how I use that:
- (void)loadBezierPointsArray
{
    // Clearly, you'd do whatever is appropriate for your chart.
    // This is just an unclosed loop, but it illustrates the idea.
    CGPoint startPoint = CGPointMake(self.view.frame.size.width / 2.0, 50);

    _bezierPoints = [NSMutableArray arrayWithObjects:
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x, startPoint.y)
                                                  angle:M_PI_2 * 0.05
                                                 weight:100.0 / 1.7],
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x + 100.0, startPoint.y + 70.0)
                                                  angle:M_PI_2
                                                 weight:70.0 / 1.7],
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x, startPoint.y + 140.0)
                                                  angle:M_PI
                                                 weight:100.0 / 1.7],
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x - 100.0, startPoint.y + 70.0)
                                                  angle:M_PI_2 * 3.0
                                                 weight:70.0 / 1.7],
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x + 10.0, startPoint.y + 10)
                                                  angle:0.0
                                                 weight:100.0 / 1.7],
                     nil];
}
// The control point "after" a given point, projected out along that point's angle.
- (CGPoint)calculateForwardControlPoint:(NSUInteger)index
{
    BezierPoint *bezierPoint = _bezierPoints[index];
    return CGPointMake(bezierPoint.point.x + cosf(bezierPoint.angle) * bezierPoint.weight,
                       bezierPoint.point.y + sinf(bezierPoint.angle) * bezierPoint.weight);
}

// The control point "before" a given point, projected back along the same angle;
// using the same angle on both sides of a point is what keeps the join smooth.
- (CGPoint)calculateReverseControlPoint:(NSUInteger)index
{
    BezierPoint *bezierPoint = _bezierPoints[index];
    return CGPointMake(bezierPoint.point.x - cosf(bezierPoint.angle) * bezierPoint.weight,
                       bezierPoint.point.y - sinf(bezierPoint.angle) * bezierPoint.weight);
}
- (UIBezierPath *)bezierPath
{
    UIBezierPath *path = [UIBezierPath bezierPath];
    BezierPoint *bezierPoint = _bezierPoints[0];
    [path moveToPoint:bezierPoint.point];

    for (NSInteger i = 1; i < [_bezierPoints count]; i++)
    {
        bezierPoint = _bezierPoints[i];
        [path addCurveToPoint:bezierPoint.point
                controlPoint1:[self calculateForwardControlPoint:i - 1]
                controlPoint2:[self calculateReverseControlPoint:i]];
    }

    return path;
}
When I render this into a UIImage (using the code below), I don't see any softening of the image, though admittedly the two images are not identical. (I'm comparing the image rendered by capture against a screen snapshot taken manually by pressing the power and home buttons on my physical device at the same time.)
If you're seeing some softening, I would suggest renderInContext: (as shown below). I also wonder whether you are writing the image out as JPEG, which is lossy; try PNG if you were using JPEG.
- (void)drawBezier
{
    UIBezierPath *path = [self bezierPath];

    CAShapeLayer *oval = [[CAShapeLayer alloc] init];
    oval.path = path.CGPath;
    oval.strokeColor = [UIColor redColor].CGColor;
    oval.fillColor = [UIColor clearColor].CGColor;
    oval.lineWidth = 5.0;
    oval.strokeStart = 0.0;
    oval.strokeEnd = 1.0;
    [self.view.layer addSublayer:oval];
}
- (void)capture
{
    UIGraphicsBeginImageContextWithOptions(self.view.frame.size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.view.layer renderInContext:context];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // save the image
    NSData *data = UIImagePNGRepresentation(screenshot);
    NSString *documentsPath = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0];
    NSString *imagePath = [documentsPath stringByAppendingPathComponent:@"image.png"];
    [data writeToFile:imagePath atomically:YES];

    // send it to myself so I can look at the file
    NSURL *url = [NSURL fileURLWithPath:imagePath];
    UIActivityViewController *controller = [[UIActivityViewController alloc] initWithActivityItems:@[url]
                                                                             applicationActivities:nil];
    [self presentViewController:controller animated:YES completion:nil];
}

Related

Trying to do a simple scale of a picture in Objective-C

I'm new to the language, and I'm just trying to get a simple .jpg to show with the correct aspect ratio. Here's what I have now, and it's just filling the whole frame. Is it because I'm drawing it into the rect?

UIImage *logoJames = [UIImage imageNamed:@"images.jpg"];
[logoJames drawInRect:rect];
Some comments are suggesting classes. I'm working through the Big Nerd Ranch book right now, and this is one of the challenge questions.
There is no reason to downvote someone trying to learn. That's bad for the community.
Here is the rest of my code.
//
// BNRHypnosisView.m
// Hypnosister
//
// Created by James on 8/26/14.
// Copyright (c) 2014 Big Nerd Ranch. All rights reserved.
//
#import "BNRHypnosisView.h"
@implementation BNRHypnosisView
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
//setting all background colors of all bnrhypnosisviews to clear
self.backgroundColor = [UIColor clearColor];
}
return self;
}
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect
{
// Drawing code
CGRect bounds = self.bounds;
CGPoint center;
center.x = bounds.origin.x + bounds.size.width / 2.0;
center.y = bounds.origin.y + bounds.size.height / 2.0;
UIImage *logoJAMES = [UIImage imageNamed:@"images.jpg"];
[logoJAMES drawInRect:rect];
// The circle will be the largest that will fit in the view
//float radius = MIN(bounds.size.width, bounds.size.height) / 2.0;
//the largest circle will circumscribe the view
float maxRadius = hypot(bounds.size.width, bounds.size.height) / 2.0;
UIBezierPath *path = [[UIBezierPath alloc] init];
//define the path of the circle
//[path addArcWithCenter:center radius:radius startAngle:0.0 endAngle:M_PI * 2 clockwise:YES];
for (float currentRadius = maxRadius; currentRadius > 0; currentRadius -=20) {
[path moveToPoint:CGPointMake(center.x + currentRadius, center.y)];
[path addArcWithCenter:center radius:currentRadius startAngle:0.0 endAngle:M_PI * 2.0 clockwise:YES];
}
//change width of line
path.lineWidth = 10;
//change line color
[[UIColor lightGrayColor] setStroke];
//Draw the line from above
[path stroke];
}
@end
If you're new to the language, you should consider using one of the Cocoa image view classes:
iOS: UIImageView
Mac: NSImageView
If all you want is for a UIImage to maintain its aspect ratio, the easiest approach is to create a UIImageView with the image and set its content mode to UIViewContentModeScaleAspectFit.

UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"images.jpg"]];
imageView.contentMode = UIViewContentModeScaleAspectFit;
Then you can set the frame of the image view however you want and add it to your view however you want.
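For example, a short sketch of that last step (the frame values here are arbitrary placeholders):

UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"images.jpg"]];
imageView.contentMode = UIViewContentModeScaleAspectFit;
// Arbitrary frame; aspect-fit letterboxes the image inside whatever frame you choose.
imageView.frame = CGRectMake(20.0, 20.0, 280.0, 280.0);
[self.view addSubview:imageView];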

Trying to display images using UIScrollView

I want to create a simple app that shows one centered image on the first screen, then changes images on a swipe gesture (right, left).
I'm very new to this, so here is what I thought I was looking for: http://idevzilla.com/2010/09/16/uiscrollview-a-really-simple-tutorial/
This is the code I have in my controller implementation :
- (void)viewDidLoad
{
[super viewDidLoad];
NSMutableDictionary *myDictionary = [[NSMutableDictionary alloc] init];
[myDictionary setObject:@"img1.jpg" forKey:@"http://www.testweb.com"];
[myDictionary setObject:@"img2.jpg" forKey:@"http://www.test2.com"];
UIScrollView *scroll = [[UIScrollView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
scroll.pagingEnabled = YES;
NSInteger numberOfViews = [myDictionary count];
for (NSString* key in myDictionary) {
UIImage * image = [UIImage imageNamed:[NSString stringWithFormat:[myDictionary objectForKey:key]]];
CGRect rect = CGRectMake(10.0f, 90.0f, image.size.width, image.size.height);
UIImageView * imageView = [[UIImageView alloc] initWithFrame:rect];
[imageView setImage:image];
CGPoint superCenter = CGPointMake([self.view bounds].size.width / 2.0, [self.view bounds].size.height / 2.0);
[imageView setCenter:superCenter];
self.view.backgroundColor = [UIColor whiteColor];
[scroll addSubview:imageView];
}
scroll.contentSize = CGSizeMake(self.view.frame.size.width * numberOfViews, self.view.frame.size.height);
[self.view addSubview:scroll];
}
My first issue is that I get img2 on my initial screen instead of img1. My second issue is that when I swipe right I get a white screen with no image on it. Any suggestions on what I missed, what I can try/read, etc.?
EDIT:
I'm looking to do this the "lightest" possible way, with no fancy gallery APIs etc. Just display a couple of really small images (i.e. 200x200 px) centered on the screen that I can swipe back and forth between (shouldn't be too hard). Well, everything is hard when learning a new language.
There is a project on GitHub, MWPhotoBrowser, that lets you put in a set of images which are viewed one at a time; you can then scroll from one image to the next with a swipe gesture. It is also easy to extend, and it should give you a good understanding of how this is done. There is also Apple's PhotoScroller example, which gives you straightforward code for doing this same thing and also for tiling your images. Since you are new to iOS, you may want to look at both of these examples first, or possibly just use one of them as your photo viewer.
Your problem is likely to be the fact that you are setting CGRect rect = CGRectMake(10.0f, 90.0f, image.size.width, image.size.height); for both of your image views. This means both UIImageView objects are placed in exactly the same position (both at x = 10 in the scroll view). As the second image view is added last, it covers the first, and there is white space to the right of it.
In order to fix this you could try something like...
- (void)viewDidLoad
{
//your setup code
float xPosition = 10.f;
for (NSString* key in myDictionary) {
UIImage * image = [UIImage imageNamed:[NSString stringWithFormat:[myDictionary objectForKey:key]]];
CGRect rect = CGRectMake(xPosition, 90.0f, image.size.width, image.size.height);
xPosition += image.size.width;
//rest of your code...
}
//rest of your code
}
The above means the second view is positioned on the x-axis at 10 plus the width of the first image. Note that I haven't tested my answer.
First off, the images are placed on top of each other:

CGPoint superCenter = CGPointMake([self.view bounds].size.width / 2.0, [self.view bounds].size.height / 2.0);
[imageView setCenter:superCenter];

Right here you are setting both images to be placed at the center of the screen. The second thing is that you're using an NSDictionary and looping through its keys. An NSDictionary is not ordered the way an array is; you would have to sort the keys to use it in a specific order. So Barjavel had it right but skipped over the fact that you're setting the images to the center. Try this:
- (void)viewDidLoad
{
    [super viewDidLoad];

    NSArray *myArray = [[NSArray alloc] initWithObjects:@"img1.jpg", @"img2.jpg", nil];

    UIScrollView *scroll = [[UIScrollView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
    scroll.pagingEnabled = YES;

    NSInteger numberOfViews = [myArray count];
    int xPosition = 0;
    for (NSString *imageName in myArray) {
        UIImage *image = [UIImage imageNamed:imageName];
        CGRect rect = CGRectMake(xPosition, 90.0f, image.size.width, image.size.height);
        UIImageView *imageView = [[UIImageView alloc] initWithFrame:rect];
        [imageView setImage:image];
        self.view.backgroundColor = [UIColor whiteColor];
        [scroll addSubview:imageView];
        xPosition += image.size.width;
    }
    scroll.contentSize = CGSizeMake(self.view.frame.size.width * numberOfViews, self.view.frame.size.height);
    [self.view addSubview:scroll];
}
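If you also want each image centered within its own page, per the edit in the question, here is a sketch of the frame math. Here i is the page index, which is an assumption (you would track it yourself in the loop), and the page size is assumed to equal the scroll view's frame:

// Hypothetical centering math: one page per image, page size == scroll view size.
CGFloat pageWidth = self.view.frame.size.width;
CGFloat pageHeight = self.view.frame.size.height;
CGRect rect = CGRectMake(i * pageWidth + (pageWidth - image.size.width) / 2.0,
                         (pageHeight - image.size.height) / 2.0,
                         image.size.width, image.size.height);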

drawRect drawing 'transparent' text?

I am looking to draw a UILabel (preferably through subclassing) as a transparent label, but with a solid background. I drew up a quick example (sorry, it's ugly, but it gets the point across :)).
Basically I have a UILabel and I would like the background to be a set colour and the text to be transparent. I do not want to colour the text with the view's background; it should be 100% transparent, since I have a texture in the background that I want to line up inside and outside of the label.
I've spent the night browsing SO and searching on Google, but I have found no helpful sources. I don't have much experience with Core Graphics drawing, so I would appreciate any links, help, tutorials or sample code (maybe Apple has some I need to have a look at?).
Thanks a bunch!
I've rewritten it as a UILabel subclass using barely any code and posted it on GitHub.
The gist of it is that you override drawRect: but call [super drawRect:rect] to let the UILabel render as normal. Using a white text color lets you easily use the label itself as a mask.
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
// let the superclass draw the label normally
[super drawRect:rect];
CGContextConcatCTM(context, CGAffineTransformMake(1, 0, 0, -1, 0, CGRectGetHeight(rect)));
// create a mask from the normally rendered text
CGImageRef image = CGBitmapContextCreateImage(context);
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(image),
                                    CGImageGetHeight(image),
                                    CGImageGetBitsPerComponent(image),
                                    CGImageGetBitsPerPixel(image),
                                    CGImageGetBytesPerRow(image),
                                    CGImageGetDataProvider(image),
                                    CGImageGetDecode(image),
                                    CGImageGetShouldInterpolate(image));
CFRelease(image); image = NULL;
// wipe the slate clean
CGContextClearRect(context, rect);
CGContextSaveGState(context);
CGContextClipToMask(context, rect, mask);
CFRelease(mask); mask = NULL;
[self RS_drawBackgroundInRect:rect]; // background fill, from the linked GitHub project
CGContextRestoreGState(context);
}
Solved using CALayer masks. Creating a standard mask (wallpapered text, for example) is simple. To create the knocked-out text, I had to invert the alpha channel of my mask, which involved rendering a label to a CGImageRef and then doing some pixel-pushing.
Sample application is available here: https://github.com/robinsenior/RSMaskedLabel
Relevant code is here to avoid future link-rot:
#import "RSMaskedLabel.h"
#import <QuartzCore/QuartzCore.h>
@interface UIImage (RSAdditions)
+ (UIImage *)imageWithView:(UIView *)view;
- (UIImage *)invertAlpha;
@end

@interface RSMaskedLabel ()
{
    CGImageRef invertedAlphaImage;
}
@property (nonatomic, retain) UILabel *knockoutLabel;
@property (nonatomic, retain) CALayer *textLayer;
- (void)RS_commonInit;
@end

@implementation RSMaskedLabel

@synthesize knockoutLabel, textLayer;
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self)
{
[self RS_commonInit];
}
return self;
}
- (id)initWithCoder:(NSCoder *)aDecoder
{
self = [super initWithCoder:aDecoder];
if (self)
{
[self RS_commonInit];
}
return self;
}
+ (Class)layerClass
{
return [CAGradientLayer class];
}
- (void) RS_commonInit
{
[self setBackgroundColor:[UIColor clearColor]];
// create the UILabel for the text
knockoutLabel = [[UILabel alloc] initWithFrame:[self frame]];
[knockoutLabel setText:@"booyah"];
[knockoutLabel setTextAlignment:UITextAlignmentCenter];
[knockoutLabel setFont:[UIFont boldSystemFontOfSize:72.0]];
[knockoutLabel setNumberOfLines:1];
[knockoutLabel setBackgroundColor:[UIColor clearColor]];
[knockoutLabel setTextColor:[UIColor whiteColor]];
// create our filled area (in this case a gradient)
NSArray *colors = [[NSArray arrayWithObjects:
(id)[[UIColor colorWithRed:0.349 green:0.365 blue:0.376 alpha:1.000] CGColor],
(id)[[UIColor colorWithRed:0.455 green:0.490 blue:0.518 alpha:1.000] CGColor],
(id)[[UIColor colorWithRed:0.412 green:0.427 blue:0.439 alpha:1.000] CGColor],
(id)[[UIColor colorWithRed:0.208 green:0.224 blue:0.235 alpha:1.000] CGColor],
nil] retain];
NSArray *gradientLocations = [NSArray arrayWithObjects:
[NSNumber numberWithFloat:0.0],
[NSNumber numberWithFloat:0.54],
[NSNumber numberWithFloat:0.55],
[NSNumber numberWithFloat:1], nil];
// render our label to a UIImage
// (if you remove the call to invertAlpha it will mask the text instead)
// retain the CGImage: the autoreleased UIImage owns it, and dealloc releases it
invertedAlphaImage = CGImageRetain([[[UIImage imageWithView:knockoutLabel] invertAlpha] CGImage]);
// create a new CALayer to use as the mask
textLayer = [CALayer layer];
// stick the image in the layer
[textLayer setContents:(id)invertedAlphaImage];
// create a nice gradient layer to use as our fill
CAGradientLayer *gradientLayer = (CAGradientLayer *)[self layer];
[gradientLayer setBackgroundColor:[[UIColor clearColor] CGColor]];
[gradientLayer setColors: colors];
[gradientLayer setLocations:gradientLocations];
[gradientLayer setStartPoint:CGPointMake(0.0, 0.0)];
[gradientLayer setEndPoint:CGPointMake(0.0, 1.0)];
[gradientLayer setCornerRadius:10];
// mask the text layer onto our gradient
[gradientLayer setMask:textLayer];
}
- (void)layoutSubviews
{
// resize the text layer
[textLayer setFrame:[self bounds]];
}
- (void)dealloc
{
CGImageRelease(invertedAlphaImage);
[knockoutLabel release];
[textLayer release];
[super dealloc];
}
@end

@implementation UIImage (RSAdditions)
/*
create a UIImage from a UIView
*/
+ (UIImage *) imageWithView:(UIView *)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
/*
get the image to invert its alpha channel
*/
- (UIImage *)invertAlpha
{
// scale is needed for retina devices
CGFloat scale = [self scale];
CGSize size = self.size;
int width = size.width * scale;
int height = size.height * scale;
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *memoryPool = (unsigned char *)calloc(width*height*4, 1);
CGContextRef context = CGBitmapContextCreate(memoryPool, width, height, 8, width * 4, colourSpace, kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colourSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);
for(int y = 0; y < height; y++)
{
unsigned char *linePointer = &memoryPool[y * width * 4];
for(int x = 0; x < width; x++)
{
linePointer[3] = 255-linePointer[3];
linePointer += 4;
}
}
// get a CG image from the context, wrap that into a UIImage
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *returnImage = [UIImage imageWithCGImage:cgImage scale:scale orientation:UIImageOrientationUp];
// clean up
CGImageRelease(cgImage);
CGContextRelease(context);
free(memoryPool);
// and return
return returnImage;
}
@end
Here's a technique that's similar to Matt Gallagher's, which will generate an inverted text mask with an image:

1. Allocate a (mutable) data buffer.
2. Create a bitmap context with an 8-bit alpha channel.
3. Configure settings for text drawing.
4. Fill the whole buffer in copy mode (the default colour is assumed to have an alpha value of 1).
5. Write the text in clear mode (alpha value of 0).
6. Create an image from the bitmap context.
7. Use the bitmap as a mask to make a new image from the source image.
8. Create a new UIImage and clean up.

Every time the textString, sourceImage or size values change, re-generate the final image.
CGSize size = /* assume this exists */;
UIImage *sourceImage = /* assume this exists */;
NSString *textString = /* assume this exists */;

const char *text = [textString cStringUsingEncoding:NSMacOSRomanStringEncoding];
NSUInteger len = [textString lengthOfBytesUsingEncoding:NSMacOSRomanStringEncoding];

NSMutableData *data = [NSMutableData dataWithLength:size.width * size.height * 1];
CGContextRef context = CGBitmapContextCreate([data mutableBytes], size.width, size.height, 8, size.width, NULL, kCGImageAlphaOnly);
CGContextSelectFont(context, "Gill Sans Bold", 64.0f, kCGEncodingMacRoman);
CGContextSetTextDrawingMode(context, kCGTextFill);
CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextFillRect(context, CGRectMake(0, 0, size.width, size.height));
CGContextSetBlendMode(context, kCGBlendModeClear);
CGContextShowTextAtPoint(context, 16.0f, 16.0f, text, len);

CGImageRef textImage = CGBitmapContextCreateImage(context);
CGImageRef newImage = CGImageCreateWithMask(sourceImage.CGImage, textImage);
UIImage *finalImage = [UIImage imageWithCGImage:newImage];

CGContextRelease(context);
CFRelease(newImage);
CFRelease(textImage);
Another way to do this involves putting textImage into a new layer and setting that layer as your view's layer mask. (Remove the lines that create newImage and finalImage.) Assuming this happens inside your view's code somewhere:
CALayer *maskLayer = [[CALayer alloc] init];
CGPoint position = CGPointZero;
// layout the new layer
position = overlay.layer.position;
position.y *= 0.5f;
maskLayer.bounds = overlay.layer.bounds;
maskLayer.position = position;
maskLayer.contents = (__bridge id)textImage;
self.layer.mask = maskLayer;
There are more alternatives, and some might be better (subclass UILabel and draw the text directly in clear mode after the superclass has done its drawing?).
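That last idea, as a minimal untested sketch of a UILabel subclass's drawRect: (the fill colour is an arbitrary placeholder):

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Paint the solid background first.
    [[UIColor darkGrayColor] setFill];
    UIRectFill(rect);
    // Switch to clear blend mode so the glyphs erase the fill rather than
    // painting over it, then let UILabel draw its text as usual.
    CGContextSetBlendMode(context, kCGBlendModeClear);
    [super drawRect:rect];
}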
Swift 5 solution (Xcode: 12.5):
class MaskedLabel: UILabel {
var maskColor : UIColor?
override init(frame: CGRect) {
super.init(frame: frame)
customInit()
}
required init?(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
customInit()
}
func customInit() {
maskColor = self.backgroundColor
self.textColor = UIColor.white
backgroundColor = UIColor.clear
self.isOpaque = false
}
override func draw(_ rect: CGRect) {
let context = UIGraphicsGetCurrentContext()!
super.draw(rect)
context.concatenate(CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: rect.height))
let image: CGImage = context.makeImage()!
let mask: CGImage = CGImage(maskWidth: image.width, height: image.height, bitsPerComponent: image.bitsPerComponent, bitsPerPixel: image.bitsPerPixel, bytesPerRow: image.bytesPerRow, provider: image.dataProvider!, decode: image.decode, shouldInterpolate: image.shouldInterpolate)!
context.clear(rect)
context.saveGState()
context.clip(to: rect, mask: mask)
if (self.layer.cornerRadius != 0.0) {
context.addPath(CGPath(roundedRect: rect, cornerWidth: self.layer.cornerRadius, cornerHeight: self.layer.cornerRadius, transform: nil))
context.clip()
}
drawBackgroundInRect(rect: rect)
context.restoreGState()
}
func drawBackgroundInRect(rect: CGRect) {
let context = UIGraphicsGetCurrentContext()
if let _ = maskColor {
maskColor!.set()
}
context!.fill(rect)
}
}

Drag & Drop creation of drag image

I'm implementing drag & drop for a customView; this customView is a subclass of NSView and includes some elements.
When I start a drag operation on it, the dragImage is just a rectangular gray box the same size as the customView.
This is the code I wrote:
- (void)mouseDragged:(NSEvent *)theEvent
{
    NSPoint downWinLocation = [mouseDownEvent locationInWindow];
    NSPoint dragWinLocation = [theEvent locationInWindow];
    float distance = hypotf(downWinLocation.x - dragWinLocation.x,
                            downWinLocation.y - dragWinLocation.y);
    if (distance < 3) {
        return;
    }

    NSImage *viewImage = [self getSnapshotOfView];
    NSSize viewImageSize = [viewImage size];

    // Get location of mouseDown event
    NSPoint p = [self convertPoint:downWinLocation fromView:nil];

    // Drag from the center of the image
    p.x = p.x - viewImageSize.width / 2;
    p.y = p.y - viewImageSize.height / 2;

    // Write to the pasteboard
    NSPasteboard *pb = [NSPasteboard pasteboardWithName:NSDragPboard];
    [pb declareTypes:[NSArray arrayWithObject:NSFilenamesPboardType]
               owner:nil];

    // Assume fileList is the list of files to drag
    NSArray *fileList = [NSArray arrayWithObjects:@"/tmp/ciao.txt", @"/tmp/ciao2.txt", nil];
    [pb setPropertyList:fileList forType:NSFilenamesPboardType];

    [self dragImage:viewImage at:p offset:NSMakeSize(0, 0) event:mouseDownEvent pasteboard:pb source:self slideBack:YES];
}
And this is the function I use to create the snapshot:
- (NSImage *)getSnapshotOfView
{
    NSRect rect = [self bounds];
    NSImage *image = [[[NSImage alloc] initWithSize:rect.size] autorelease];

    NSRect imageBounds;
    imageBounds.origin = NSZeroPoint;
    imageBounds.size = rect.size;

    [self lockFocus];
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:imageBounds];
    [self unlockFocus];

    [image addRepresentation:rep];
    [rep release];
    return image;
}
This is an image of a drag operation on my customView (the one with the icon and the label "drag me").
Why is my dragImage just a gray box?
From the screenshot of IB in your comment, it looks like your view is layer backed. Layer backed views draw to their own graphics area that is separate from the normal window backing store.
This code:
[self lockFocus];
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:imageBounds];
[self unlockFocus];
effectively reads pixels from the window backing store. Since your view is layer backed, its content is not picked up.
Try this without a layer backed view.
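If you need the view to stay layer backed, here is a sketch of an alternative snapshot using NSView's caching-display API, which asks the view to draw itself rather than reading the window backing store:

- (NSImage *)getSnapshotOfView
{
    NSRect bounds = [self bounds];
    // Ask the view (including layer-backed content) to render into a bitmap.
    NSBitmapImageRep *rep = [self bitmapImageRepForCachingDisplayInRect:bounds];
    [self cacheDisplayInRect:bounds toBitmapImageRep:rep];

    NSImage *image = [[[NSImage alloc] initWithSize:bounds.size] autorelease];
    [image addRepresentation:rep];
    return image;
}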

Display NSImage on a CALayer

I've been trying to display an NSImage on a CALayer. I realised I apparently need to convert it to a CGImage first, then display it...
I have this code, which doesn't seem to be working:
CALayer *layer = [CALayer layer];
NSImage *finderIcon = [[NSWorkspace sharedWorkspace] iconForFileType:NSFileTypeForHFSTypeCode(kFinderIcon)];
[finderIcon setSize:(NSSize){ 128.0f, 128.0f }];
CGImageSourceRef source;
source = CGImageSourceCreateWithData((CFDataRef)finderIcon, NULL);
CGImageRef finalIcon = CGImageSourceCreateImageAtIndex(source, 0, NULL);
layer.bounds = CGRectMake(128.0f, 128.0f, 4, 4);
layer.position = CGPointMake(128.0f, 128.0f);
layer.contents = finalIcon;
// Insert the layer into the root layer
[mainLayer addSublayer:layer];
Why? How can I get this to work?
From the comments: Actually, if you're on 10.6, you can also just set the CALayer's contents to an NSImage rather than a CGImageRef...
If you're on OS X 10.6 or later, take a look at NSImage's CGImageForProposedRect:context:hints: method.
If you're not, I've got this in a category on NSImage:
-(CGImageRef)CGImage
{
CGContextRef bitmapCtx = CGBitmapContextCreate(NULL/*data - pass NULL to let CG allocate the memory*/,
[self size].width,
[self size].height,
8 /*bitsPerComponent*/,
0 /*bytesPerRow - CG will calculate it for you if it's allocating the data. This might get padded out a bit for better alignment*/,
[[NSColorSpace genericRGBColorSpace] CGColorSpace],
kCGBitmapByteOrder32Host|kCGImageAlphaPremultipliedFirst);
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:bitmapCtx flipped:NO]];
[self drawInRect:NSMakeRect(0,0, [self size].width, [self size].height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapCtx);
CGContextRelease(bitmapCtx);
return (CGImageRef)[(id)cgImage autorelease];
}
I think I wrote this myself, but it's entirely possible that I ripped it off from somewhere else like Stack Overflow. It's from an older personal project and I don't really remember.
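With that category in place, the question's snippet reduces to something like this sketch:

NSImage *finderIcon = [[NSWorkspace sharedWorkspace] iconForFileType:NSFileTypeForHFSTypeCode(kFinderIcon)];
[finderIcon setSize:(NSSize){ 128.0f, 128.0f }];
layer.contents = (id)[finderIcon CGImage]; // the category method above
// On 10.6+, as noted above, you can skip the conversion entirely:
// layer.contents = finderIcon;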
Here's some code which may help you. I sure hope the formatting of this does not get all messed up like it appears it's going to; all I can offer is that this works for me.
// -------------------------------------------------------------------------------------
- (void)awakeFromNib
{
// setup our main window 'contentWindow' to use layers
[[contentWindow contentView] setWantsLayer:YES]; // NSWindow*
// create a root layer to contain all of our layers
CALayer *root = [[contentWindow contentView] layer];
// use constraint layout to allow sublayers to center themselves
root.layoutManager = [CAConstraintLayoutManager layoutManager];
// create a new layer which will contain ALL our sublayers
// -------------------------------------------------------
mContainer = [CALayer layer];
mContainer.bounds = root.bounds;
mContainer.frame = root.frame;
mContainer.position = CGPointMake(root.bounds.size.width * 0.5,
root.bounds.size.height * 0.5);
// insert layer on the bottom of the stack so it is behind the controls
[root insertSublayer:mContainer atIndex:0];
// make it resize when its superlayer does
root.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;
// make it resize when its superlayer does
mContainer.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;
}
// -------------------------------------------------------------------------------------
- (void) loadMyImage:(NSString*) path
n:(NSInteger) num
x:(NSInteger) xpos
y:(NSInteger) ypos
h:(NSInteger) hgt
w:(NSInteger) wid
b:(NSString*) blendstr
{
#ifdef __DEBUG_LOGGING__
    NSLog(@"loadMyImage - ENTER [%@] num[%ld] x[%ld] y[%ld] h[%ld] w[%ld] b[%@]",
          path, (long)num, (long)xpos, (long)ypos, (long)hgt, (long)wid, blendstr);
#endif
NSInteger xoffset = ((wid / 2) + xpos); // use CORNER versus CENTER for location
NSInteger yoffset = ((hgt / 2) + ypos);
CIFilter* filter = nil;
CGRect cgrect = CGRectMake((CGFloat) xoffset, (CGFloat) yoffset,
(CGFloat) wid, (CGFloat) hgt);
if (nil != blendstr) // would be equivalent to @"CIMultiplyBlendMode" or similar
{
filter = [CIFilter filterWithName:blendstr];
}
// read image file via supplied path
NSImage* theimage = [[NSImage alloc] initWithContentsOfFile:path];
if(nil != theimage)
{
[self setMyImageLayer:[CALayer layer]]; // create layer
myImageLayer.frame = cgrect; // locate & size image
myImageLayer.compositingFilter = filter; // nil is OK if no filter
[myImageLayer setContents:(id) theimage]; // deposit image into layer
// add new layer into our main layer [see awakeFromNib above]
[mContainer insertSublayer:myImageLayer atIndex:0];
[theimage release];
}
else
{
NSLog(@"ERROR loadMyImage - no such image [%@]", path);
}
}
+ (CGImageRef)getCachedImage:(NSString *)imageName
{
    NSGraphicsContext *context = [NSGraphicsContext currentContext];
    NSImage *img = [NSImage imageNamed:imageName];
    NSRect rect = NSMakeRect(0, 0, [img size].width, [img size].height);
    return [img CGImageForProposedRect:&rect context:context hints:NULL];
}

+ (CGImageRef)getImage:(NSString *)imageName withExtension:(NSString *)extension
{
    NSGraphicsContext *context = [NSGraphicsContext currentContext];
    NSString *imagePath = [[NSBundle mainBundle] pathForResource:imageName ofType:extension];
    NSImage *img = [[NSImage alloc] initWithContentsOfFile:imagePath];
    NSRect rect = NSMakeRect(0, 0, [img size].width, [img size].height);
    CGImageRef imgRef = [img CGImageForProposedRect:&rect context:context hints:NULL];
    [img release];
    return imgRef;
}
then you can set it:
yourLayer.contents = (id)[self getCachedImage:@"myImage.png"];
or
yourLayer.contents = (id)[self getImage:@"myImage" withExtension:@"png"];