Drag & Drop creation of drag image - objective-c

I'm implementing drag & drop for a custom view (customView); this customView is a subclass of NSView and contains some elements.
When I start a drag operation on it, the drag image is just a rectangular gray box the same size as the customView.
This is the code I wrote:
-(void)mouseDragged:(NSEvent *)theEvent
{
    NSPoint downWinLocation = [mouseDownEvent locationInWindow];
    NSPoint dragWinLocation = [theEvent locationInWindow];
    float distance = hypotf(downWinLocation.x - dragWinLocation.x, downWinLocation.y - dragWinLocation.y);
    if (distance < 3) {
        return;
    }

    NSImage *viewImage = [self getSnapshotOfView];
    NSSize viewImageSize = [viewImage size];

    // Get location of mouseDown event
    NSPoint p = [self convertPoint:downWinLocation fromView:nil];

    // Drag from the center of the image
    p.x = p.x - viewImageSize.width / 2;
    p.y = p.y - viewImageSize.height / 2;

    // Write on the pasteboard
    NSPasteboard *pb = [NSPasteboard pasteboardWithName:NSDragPboard];
    [pb declareTypes:[NSArray arrayWithObject:NSFilenamesPboardType] owner:nil];

    // Assume fileList is the list of files that were read
    NSArray *fileList = [NSArray arrayWithObjects:@"/tmp/ciao.txt", @"/tmp/ciao2.txt", nil];
    [pb setPropertyList:fileList forType:NSFilenamesPboardType];

    [self dragImage:viewImage at:p offset:NSMakeSize(0, 0) event:mouseDownEvent pasteboard:pb source:self slideBack:YES];
}
And this is the function I use to create the snapshot:
- (NSImage *)getSnapshotOfView
{
    NSRect rect = [self bounds];
    NSImage *image = [[[NSImage alloc] initWithSize:rect.size] autorelease];
    NSRect imageBounds;
    imageBounds.origin = NSZeroPoint;
    imageBounds.size = rect.size;
    [self lockFocus];
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:imageBounds];
    [self unlockFocus];
    [image addRepresentation:rep];
    [rep release];
    return image;
}
This is an image of a drag operation on my customView (the one with the icon and the label "drag me").
Why is my drag image just a gray box?

From the screenshot of IB in your comment, it looks like your view is layer-backed. Layer-backed views draw into their own graphics area, separate from the normal window backing store.
This code:
    [self lockFocus];
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:imageBounds];
    [self unlockFocus];
effectively reads pixels from the window backing store. Since your view is layer-backed, its content is not picked up.
Try this without a layer-backed view.
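If the view has to stay layer-backed, an alternative is to snapshot through the view's own display machinery, which renders layer content as well. A minimal sketch using -bitmapImageRepForCachingDisplayInRect:, written as a drop-in replacement for the getSnapshotOfView above:
- (NSImage *)getSnapshotOfView
{
    NSRect bounds = [self bounds];
    // Renders the view hierarchy (including layer-backed content) into a bitmap,
    // instead of reading pixels back from the window backing store.
    NSBitmapImageRep *rep = [self bitmapImageRepForCachingDisplayInRect:bounds];
    [self cacheDisplayInRect:bounds toBitmapImageRep:rep];
    NSImage *image = [[[NSImage alloc] initWithSize:bounds.size] autorelease];
    [image addRepresentation:rep];
    return image;
}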

Related

NSView draw permanently to background

I'm trying to draw some things on the background of my window, so I subclassed the window's NSView and added some drawing code like this:
- (void)drawRect:(NSRect)dirtyRect {
    float color = 0.95;
    [[NSColor colorWithDeviceRed:color green:color blue:color alpha:1.0] set];
    NSRectFill(NSMakeRect(320, 0, 220, NSHeight(dirtyRect) - 60));
}
This works great, but as soon as I open an NSComboBox, or if I activate a checkbox, the background of these elements erases my just-drawn rect.
I don't understand this, because checking the checkbox, for example, causes drawRect to be called (I added an NSLog). Only resizing the window draws my rect again.
EDIT:
here is a screenshot of the problem:
I sometimes face the same problem. Here is what I use.
// .h
@interface BackgroundView1 : NSView {
    NSImage *myImage;
}
@end

// .m
- (void)awakeFromNib {
    [self setupBackgroundImage];
}

- (void)setupBackgroundImage {
    NSColor *c = [NSColor colorWithDeviceRed:0.0f/255.0f green:55.0f/255.0f blue:150.0f/255.0f alpha:1.0f];
    if (myImage == nil)
        myImage = [self createColorImage:NSMakeSize(1, 1) :c];
}

- (void)drawRect:(NSRect)rect {
    [myImage drawInRect:NSMakeRect([self bounds].origin.x, [self bounds].origin.y, [self frame].size.width, [self frame].size.height)
               fromRect:NSMakeRect(0, 0, [myImage size].width, [myImage size].height)
              operation:NSCompositeCopy
               fraction:1.0];
}

// Start Functions //
- (NSImage *)createColorImage:(NSSize)size :(NSColor *)color {
    NSImage *image = [[NSImage alloc] initWithSize:size];
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:size.width
                      pixelsHigh:size.height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0];
    [image addRepresentation:rep];
    [image lockFocus]; // lock focus on the image, making it a destination for drawing
    [color set];
    NSRectFill(NSMakeRect(0, 0, size.width, size.height));
    [image unlockFocus];
    return image;
}
// End Functions //
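As a side note on the original symptom: drawRect: may be called with a dirtyRect covering only the control that changed, so computing the fill height from NSHeight(dirtyRect) produces the wrong (often empty) rectangle on those partial redraws; only a full redraw on resize paints it correctly. Sizing the fill from the view's bounds instead should fix the original code. A minimal sketch:
- (void)drawRect:(NSRect)dirtyRect {
    float color = 0.95;
    [[NSColor colorWithDeviceRed:color green:color blue:color alpha:1.0] set];
    // Size the fill from the view's bounds, not from the (possibly partial) dirtyRect.
    NSRectFill(NSMakeRect(320, 0, 220, NSHeight([self bounds]) - 60));
}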

NSMutableArray not holding UIImages properly?

I have enabled my Cocoa Touch app to be navigable by swiping left or right to move through history. The animation is done in a style like Android's "cards": swiping to the left (<--) just moves the current screen image out of the way, showing the previous view beneath it.
This works fine, but when I want to swipe the other way (-->), to go back, I need to get the previous image and move that over the current view. I had this working when I only stored the previous image, but if I go <-- a few times, I won't have enough images.
So the solution is obvious: use an NSMutableArray and just throw the latest image at the front of the array, and when you swipe the other way, use the first image in the array. However, the image never shows when I start the animation; it just shows nothing. Here are the relevant methods:
-(void)animateBack {
    CGRect screenRect = [self.view bounds];
    UIImage *screen = [self captureScreen];
    [imageArray insertObject:screen atIndex:0]; // insert the screenshot at the first index
    imgView = [[UIImageView alloc] initWithFrame:screenRect];
    imgView.image = screen;
    [imgView setHidden:NO];
    NSLog(@"Center of imgView is x = %f, y = %f", imgView.center.x, imgView.center.y);
    CGFloat startPointX = imgView.center.x;
    CGFloat width = screenRect.size.width;
    NSLog(@"Width = %f", width);
    imgView.center = CGPointMake(imgView.center.x, imgView.center.y);
    [self.view addSubview:imgView];
    [self navigateBackInHistory];
    [UIView animateWithDuration:0.3 animations:^{
        isSwiping = 1;
        imgView.center = CGPointMake(startPointX - width, imgView.center.y);
    } completion:^(BOOL finished) {
        // Your animation is finished
        [self clearImage];
        isSwiping = 0;
    }];
}

-(void)animateForward {
    CGRect screenRect = [self.view bounds];
    //UIImage *screen = [self captureScreen];
    UIImage *screen = [imageArray objectAtIndex:0]; // get the latest image
    imgView = [[UIImageView alloc] initWithFrame:screenRect];
    imgView.image = screen;
    [imgView setHidden:NO];
    NSLog(@"Center of imgView is x = %f, y = %f", imgView.center.x, imgView.center.y);
    CGFloat startPointX = imgView.center.x;
    CGFloat width = screenRect.size.width;
    NSLog(@"Width = %f", width);
    imgView.center = CGPointMake(imgView.center.x - width, imgView.center.y);
    [self.view addSubview:imgView];
    [UIView animateWithDuration:0.3 animations:^{
        isSwiping = 1;
        imgView.center = CGPointMake(startPointX, imgView.center.y);
    } completion:^(BOOL finished) {
        // Your animation is finished
        [self navigateForwardInHistory];
        [self clearImage];
        isSwiping = 0;
    }];
}

-(void)clearImage {
    [imgView setHidden:YES];
    imgView.image = nil;
}

-(void)navigateBackInHistory {
    [self saveItems:self];
    [self alterIndexBack];
    item = [[[LEItemStore sharedStore] allItems] objectAtIndex:currentHistoryIndex];
    [self loadItems:self];
}

-(void)navigateForwardInHistory {
    [imageArray removeObjectAtIndex:0]; // remove the latest image, since we just finished swiping this way
    [self saveItems:self];
    [self alterIndexForward];
    item = [[[LEItemStore sharedStore] allItems] objectAtIndex:currentHistoryIndex];
    [self loadItems:self];
}
Note that imgView is a class-level UIImageView and imageArray is a class-level array.
Any ideas? Thanks.
Edit
Here's the code at the top of my .m to initialize it. It still does not work.
.....
NSMutableArray *imageArray;

- (void)viewDidLoad
{
    [super viewDidLoad];
    imageArray = [imageArray initWithObjects:nil];
It looks like you forgot to create the array. Something like this at the appropriate time would do (assuming ARC):
imageArray = [NSMutableArray array];
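If that file isn't compiled under ARC, an autoreleased array would be deallocated out from under the ivar, so an owned instance would be needed instead:
imageArray = [[NSMutableArray alloc] init]; // release it in dealloc under manual reference counting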
Glad that worked out.

Blurred Screenshot of a view being drawn by UIBezierPath

I'm drawing my graph view using UIBezierPath methods and Core Text. I use the addQuadCurveToPoint:controlPoint: method to draw curves on the graph. I also use CATiledLayer to render a graph with a large data set on the x axis. I draw the whole graph into an image context, and in my view's drawRect: method I draw this image into the whole view. The following is my code.
- (void)drawImage {
    UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
    // Draw curves
    [self drawDiagonal];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    [screenshot retain];
    UIGraphicsEndImageContext();
}

- (void)drawRect:(CGRect)rect {
    NSLog(@"Draw in rect with bounds: %@", NSStringFromCGRect(rect));
    [screenshot drawInRect:self.frame];
}
However, in the screenshot the curves drawn between two points are not smooth. I've also set Render with Antialiasing to YES in my Info.plist. Please see the screenshot.
We'd have to see how you construct the UIBezierPath, but in my experience the key issue for smooth curves is whether the slope of the line between a curve's control point and the end point of that segment equals the slope between the next segment's start point and its control point. I find it easier to draw generally smooth curves using addCurveToPoint: rather than addQuadCurveToPoint:, because I can then adjust the starting and ending control points to satisfy this criterion.
To illustrate the point, the way I usually draw UIBezierPath curves is to have an array of points on the curve, the angle the curve should take at each point, and the "weight" of the addCurveToPoint: control points (i.e., how far out the control points should be). I then use those parameters to dictate the second control point of one segment of the UIBezierPath and the first control point of the next segment. So, for example:
@interface BezierPoint : NSObject
@property CGPoint point;
@property CGFloat angle;
@property CGFloat weight;
@end

@implementation BezierPoint

- (id)initWithPoint:(CGPoint)point angle:(CGFloat)angle weight:(CGFloat)weight
{
    self = [super init];
    if (self)
    {
        self.point = point;
        self.angle = angle;
        self.weight = weight;
    }
    return self;
}

@end
And then, an example of how I use that:
- (void)loadBezierPointsArray
{
    // clearly, you'd do whatever is appropriate for your chart;
    // this is just an unclosed loop, but it illustrates the idea
    CGPoint startPoint = CGPointMake(self.view.frame.size.width / 2.0, 50);
    _bezierPoints = [NSMutableArray arrayWithObjects:
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x, startPoint.y)
                                                  angle:M_PI_2 * 0.05
                                                 weight:100.0 / 1.7],
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x + 100.0, startPoint.y + 70.0)
                                                  angle:M_PI_2
                                                 weight:70.0 / 1.7],
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x, startPoint.y + 140.0)
                                                  angle:M_PI
                                                 weight:100.0 / 1.7],
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x - 100.0, startPoint.y + 70.0)
                                                  angle:M_PI_2 * 3.0
                                                 weight:70.0 / 1.7],
                     [[BezierPoint alloc] initWithPoint:CGPointMake(startPoint.x + 10.0, startPoint.y + 10)
                                                  angle:0.0
                                                 weight:100.0 / 1.7],
                     nil];
}
- (CGPoint)calculateForwardControlPoint:(NSUInteger)index
{
    BezierPoint *bezierPoint = _bezierPoints[index];
    return CGPointMake(bezierPoint.point.x + cosf(bezierPoint.angle) * bezierPoint.weight,
                       bezierPoint.point.y + sinf(bezierPoint.angle) * bezierPoint.weight);
}

- (CGPoint)calculateReverseControlPoint:(NSUInteger)index
{
    BezierPoint *bezierPoint = _bezierPoints[index];
    return CGPointMake(bezierPoint.point.x - cosf(bezierPoint.angle) * bezierPoint.weight,
                       bezierPoint.point.y - sinf(bezierPoint.angle) * bezierPoint.weight);
}

- (UIBezierPath *)bezierPath
{
    UIBezierPath *path = [UIBezierPath bezierPath];
    BezierPoint *bezierPoint = _bezierPoints[0];
    [path moveToPoint:bezierPoint.point];
    for (NSInteger i = 1; i < [_bezierPoints count]; i++)
    {
        bezierPoint = _bezierPoints[i];
        [path addCurveToPoint:bezierPoint.point
                controlPoint1:[self calculateForwardControlPoint:i - 1]
                controlPoint2:[self calculateReverseControlPoint:i]];
    }
    return path;
}
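For reference, the same path could also be stroked directly in a custom view's drawRect: instead of going through a layer; a sketch, assuming bezierPath and _bezierPoints are hosted by the view itself (the CAShapeLayer route is shown further below):
- (void)drawRect:(CGRect)rect
{
    UIBezierPath *path = [self bezierPath];
    path.lineWidth = 5.0;
    [[UIColor redColor] setStroke];
    [path stroke];
}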
When I render this into a UIImage (using the code below), I don't see any softening of the image, though admittedly the images are not identical. (I'm comparing the image rendered by capture against one I capture manually with a screen snapshot, pressing the power and home buttons on my physical device at the same time.)
If you're seeing some softening, I would suggest renderInContext: (as shown below). I also wonder if you're writing the image as JPEG (which is lossy); maybe try PNG if you used JPEG.
- (void)drawBezier
{
    UIBezierPath *path = [self bezierPath];
    CAShapeLayer *oval = [[CAShapeLayer alloc] init];
    oval.path = path.CGPath;
    oval.strokeColor = [UIColor redColor].CGColor;
    oval.fillColor = [UIColor clearColor].CGColor;
    oval.lineWidth = 5.0;
    oval.strokeStart = 0.0;
    oval.strokeEnd = 1.0;
    [self.view.layer addSublayer:oval];
}

- (void)capture
{
    UIGraphicsBeginImageContextWithOptions(self.view.frame.size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.view.layer renderInContext:context];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // save the image
    NSData *data = UIImagePNGRepresentation(screenshot);
    NSString *documentsPath = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0];
    NSString *imagePath = [documentsPath stringByAppendingPathComponent:@"image.png"];
    [data writeToFile:imagePath atomically:YES];

    // send it to myself so I can look at the file
    NSURL *url = [NSURL fileURLWithPath:imagePath];
    UIActivityViewController *controller = [[UIActivityViewController alloc] initWithActivityItems:@[url]
                                                                             applicationActivities:nil];
    [self presentViewController:controller animated:YES completion:nil];
}

Display NSImage on a CALayer

I've been trying to display an NSImage on a CALayer. Then I realised that apparently I need to convert it to a CGImage first, then display it.
I have this code, which doesn't seem to be working:
CALayer *layer = [CALayer layer];
NSImage *finderIcon = [[NSWorkspace sharedWorkspace] iconForFileType:NSFileTypeForHFSTypeCode(kFinderIcon)];
[finderIcon setSize:(NSSize){ 128.0f, 128.0f }];
CGImageSourceRef source;
source = CGImageSourceCreateWithData((CFDataRef)finderIcon, NULL);
CGImageRef finalIcon = CGImageSourceCreateImageAtIndex(source, 0, NULL);
layer.bounds = CGRectMake(128.0f, 128.0f, 4, 4);
layer.position = CGPointMake(128.0f, 128.0f);
layer.contents = finalIcon;
// Insert the layer into the root layer
[mainLayer addSublayer:layer];
Why? How can I get this to work?
From the comments: Actually, if you're on 10.6, you can also just set the CALayer's contents to an NSImage rather than a CGImageRef...
If you're on OS X 10.6 or later, take a look at NSImage's CGImageForProposedRect:context:hints: method.
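On 10.6 and later, that might look something like this sketch (layer and finderIcon as in the question):
NSRect proposedRect = NSMakeRect(0, 0, 128.0f, 128.0f);
CGImageRef cgIcon = [finderIcon CGImageForProposedRect:&proposedRect context:nil hints:nil];
layer.contents = (id)cgIcon;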
If you're not, I've got this in a category on NSImage:
-(CGImageRef)CGImage
{
    CGContextRef bitmapCtx = CGBitmapContextCreate(NULL, // data - pass NULL to let CG allocate the memory
                                                   [self size].width,
                                                   [self size].height,
                                                   8, // bitsPerComponent
                                                   0, // bytesPerRow - CG will calculate it for you if it's allocating the data; this might get padded out a bit for better alignment
                                                   [[NSColorSpace genericRGBColorSpace] CGColorSpace],
                                                   kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst);
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:bitmapCtx flipped:NO]];
    [self drawInRect:NSMakeRect(0, 0, [self size].width, [self size].height) fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];
    CGImageRef cgImage = CGBitmapContextCreateImage(bitmapCtx);
    CGContextRelease(bitmapCtx);
    return (CGImageRef)[(id)cgImage autorelease];
}
I think I wrote this myself. But it's entirely possible that I ripped it off from somewhere else like Stack Overflow. It's an older personal project and I don't really remember.
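With that category in place, the snippet from the question could be reduced to something like this sketch (mainLayer as in the question):
CALayer *layer = [CALayer layer];
NSImage *finderIcon = [[NSWorkspace sharedWorkspace] iconForFileType:NSFileTypeForHFSTypeCode(kFinderIcon)];
[finderIcon setSize:NSMakeSize(128.0f, 128.0f)];
layer.bounds = CGRectMake(0, 0, 128.0f, 128.0f);
layer.position = CGPointMake(128.0f, 128.0f);
layer.contents = (id)[finderIcon CGImage]; // the category method above
[mainLayer addSublayer:layer];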
Here's some code which may help you - I sure hope the formatting of this does not get all messed up like it appears is going to happen - all I can offer is that this works for me.
// -------------------------------------------------------------------------------------
- (void)awakeFromNib
{
    // setup our main window 'contentWindow' to use layers
    [[contentWindow contentView] setWantsLayer:YES]; // NSWindow*

    // create a root layer to contain all of our layers
    CALayer *root = [[contentWindow contentView] layer];

    // use constraint layout to allow sublayers to center themselves
    root.layoutManager = [CAConstraintLayoutManager layoutManager];

    // create a new layer which will contain ALL our sublayers
    // -------------------------------------------------------
    mContainer = [CALayer layer];
    mContainer.bounds = root.bounds;
    mContainer.frame = root.frame;
    mContainer.position = CGPointMake(root.bounds.size.width * 0.5,
                                      root.bounds.size.height * 0.5);

    // insert layer on the bottom of the stack so it is behind the controls
    [root insertSublayer:mContainer atIndex:0];

    // make it resize when its superlayer does
    root.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;
    // make it resize when its superlayer does
    mContainer.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;
}
// -------------------------------------------------------------------------------------
// -------------------------------------------------------------------------------------
- (void)loadMyImage:(NSString *)path
                  n:(NSInteger)num
                  x:(NSInteger)xpos
                  y:(NSInteger)ypos
                  h:(NSInteger)hgt
                  w:(NSInteger)wid
                  b:(NSString *)blendstr
{
#ifdef __DEBUG_LOGGING__
    NSLog(@"loadMyImage - ENTER [%@] num[%d] x[%d] y[%d] h[%d] w[%d] b[%@]",
          path, num, xpos, ypos, hgt, wid, blendstr);
#endif

    NSInteger xoffset = ((wid / 2) + xpos); // use CORNER versus CENTER for location
    NSInteger yoffset = ((hgt / 2) + ypos);
    CIFilter *filter = nil;
    CGRect cgrect = CGRectMake((CGFloat)xoffset, (CGFloat)yoffset,
                               (CGFloat)wid, (CGFloat)hgt);

    if (nil != blendstr) // would be equivalent to @"CIMultiplyBlendMode" or similar
    {
        filter = [CIFilter filterWithName:blendstr];
    }

    // read image file via supplied path
    NSImage *theimage = [[NSImage alloc] initWithContentsOfFile:path];
    if (nil != theimage)
    {
        [self setMyImageLayer:[CALayer layer]];   // create layer
        myImageLayer.frame = cgrect;              // locate & size image
        myImageLayer.compositingFilter = filter;  // nil is OK if no filter
        [myImageLayer setContents:(id)theimage];  // deposit image into layer
        // add new layer into our main layer [see awakeFromNib above]
        [mContainer insertSublayer:myImageLayer atIndex:0];
        [theimage release];
    }
    else
    {
        NSLog(@"ERROR loadMyImage - no such image [%@]", path);
    }
}
+ (CGImageRef)getCachedImage:(NSString *)imageName
{
    NSGraphicsContext *context = [NSGraphicsContext currentContext];
    NSImage *img = [NSImage imageNamed:imageName];
    NSRect rect = NSMakeRect(0, 0, [img size].width, [img size].height);
    return [img CGImageForProposedRect:&rect context:context hints:NULL];
}

+ (CGImageRef)getImage:(NSString *)imageName withExtension:(NSString *)extension
{
    NSGraphicsContext *context = [NSGraphicsContext currentContext];
    NSString *imagePath = [[NSBundle mainBundle] pathForResource:imageName ofType:extension];
    NSImage *img = [[NSImage alloc] initWithContentsOfFile:imagePath];
    NSRect rect = NSMakeRect(0, 0, [img size].width, [img size].height);
    CGImageRef imgRef = [img CGImageForProposedRect:&rect context:context hints:NULL];
    [img release];
    return imgRef;
}
then you can set it:
yourLayer.contents = (id)[self getCachedImage:@"myImage.png"];
or
yourLayer.contents = (id)[self getImage:@"myImage" withExtension:@"png"];

cocoa hello world screensaver

I have been studying NSView, so I thought I would give a screen saver a shot. I have been able to display an image in an NSView, but I can't seem to modify this example code to display a simple picture in a ScreenSaverView.
http://www.mactech.com/articles/mactech/Vol.20/20.06/ScreenSaversInCocoa/
BTW great tutorial that works with Snow Leopard.
I would think that to simply display an image, I would need something like the following.
What am I doing wrong?
//
//  try_screensaverView.m
//  try screensaver
//
#import "try_screensaverView.h"

@implementation try_screensaverView

- (id)initWithFrame:(NSRect)frame isPreview:(BOOL)isPreview
{
    self = [super initWithFrame:frame isPreview:isPreview];
    if (self) {
        [self setAnimationTimeInterval:1]; // refresh once per sec
    }
    return self;
}

- (void)startAnimation
{
    [super startAnimation];
    NSString *path = [[NSBundle mainBundle] pathForResource:@"leaf" ofType:@"JPG" inDirectory:@""];
    image = [[NSImage alloc] initWithContentsOfFile:path];
}

- (void)stopAnimation
{
    [super stopAnimation];
}

- (void)drawRect:(NSRect)rect
{
    [super drawRect:rect];
}

- (void)animateOneFrame
{
    //////////////////////////////////////////////////////////
    // load image and display; this does not scale the image
    NSRect bounds = [self bounds];
    NSSize newSize;
    newSize.width = bounds.size.width;
    newSize.height = bounds.size.height;
    [image setSize:newSize];

    NSRect imageRect;
    imageRect.origin = NSZeroPoint;
    imageRect.size = [image size];
    NSRect drawingRect = imageRect;
    [image drawInRect:drawingRect fromRect:imageRect operation:NSCompositeSourceOver fraction:1];
}

- (BOOL)hasConfigureSheet
{
    return NO;
}

- (NSWindow *)configureSheet
{
    return nil;
}

@end
NSRect bounds = [self bounds];
NSSize newSize;
newSize.width = bounds.size.width;
newSize.height = bounds.size.height;
[image setSize:newSize];
I don't know why you're doing this.
NSRect imageRect;
imageRect.origin = NSZeroPoint;
imageRect.size = [image size];
A.k.a. [self bounds].size.
NSRect drawingRect = imageRect;
[image drawInRect:drawingRect fromRect:imageRect operation:NSCompositeSourceOver fraction:1];
i.e., [image drawInRect:[self bounds] fromRect:[self bounds] operation:NSCompositeSourceOver fraction:1].
If you're trying to draw the image at its natural size, there's no reason to ever send it a setSize: message. Cut out that entire first part, and the rest should work just fine.
If you're trying to fill the screen (which would be scaling, which would contradict the comment), set the drawingRect to [self bounds], not imageRect. This does exactly as it reads:
image,
draw into (the bounds of the view),
from (the image's entire area).
[image
drawInRect:[self bounds]
fromRect:imageRect
⋮
];
Neither the natural-size-fixed-position draw nor the full-screen draw is an effective screen saver. The latter is irredeemable; you can make the former useful by animating the image around the screen.
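A minimal sketch of that idea: bounce the image around the view, using hypothetical imageOrigin and velocity ivars that are not part of the original code (you would also want a much shorter animation interval than one second):
- (void)animateOneFrame
{
    [[NSColor blackColor] set];
    NSRectFill([self bounds]); // erase the previous frame

    // hypothetical ivars: NSPoint imageOrigin; NSSize velocity;
    imageOrigin.x += velocity.width;
    imageOrigin.y += velocity.height;

    NSRect bounds = [self bounds];
    NSSize imageSize = [image size];
    if (imageOrigin.x < NSMinX(bounds) || imageOrigin.x + imageSize.width > NSMaxX(bounds))
        velocity.width = -velocity.width;
    if (imageOrigin.y < NSMinY(bounds) || imageOrigin.y + imageSize.height > NSMaxY(bounds))
        velocity.height = -velocity.height;

    // NSZeroRect means "draw the whole image"
    [image drawAtPoint:imageOrigin fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
}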
I had a similar issue, so I'll post my solution. OP was attempting to load image contents via NSBundle's mainBundle. Instead, you'll have more luck fetching the screensaver's bundle and loading files from there, like so:
NSBundle *saverBundle = [NSBundle bundleForClass:[self class]];
NSImage *image = [[NSImage alloc] initWithContentsOfFile:[saverBundle pathForResource:@"image" ofType:@"png"]];
Since you are doing your drawing in animateOneFrame, try removing your overridden drawRect.