Background image for a window in Cocoa framework - objective-c

I am looking for a good way to set a background image for a window in a Cocoa application. I haven't found a solution to this, and I am new to Objective-C, so any help would be appreciated.

A window in Cocoa has a root-level view called the "content view". This is the view that contains all the others in a window. By default, it's just a plain, blank NSView. But you could easily create your own custom NSView subclass, override the drawRect: method to draw your background image, and use that as the window's content view.
However, it might just be easier to use a plain old NSImageView. The advantage of this is that you can set, for example, autosizing behavior to keep the image pinned to one corner (try this with Installer.app by resizing the installer window). You would also be able to make it semi-opaque so that the background shows through a bit. (Again, I'm thinking of Installer.app; your app could be totally different)
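If you go the NSImageView route, a rough sketch might look like this (the image name is a placeholder; here the image is stretched to fill the window rather than pinned to a corner):
NSView *contentView = [window contentView];
NSImageView *background = [[NSImageView alloc] initWithFrame:[contentView bounds]];
[background setImage:[NSImage imageNamed:@"background"]];
[background setImageScaling:NSImageScaleAxesIndependently];
[background setAutoresizingMask:NSViewWidthSizable | NSViewHeightSizable];
// Put it behind everything else in the content view.
[contentView addSubview:background positioned:NSWindowBelow relativeTo:nil];
[background release];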
Hope that gets you going in the right direction!

Michael Vannorsdel suggests subclassing NSView for the purpose, and I quote:
You'd really be better off making an NSView subclass and having it draw the image you want in drawRect:.
- (void)awakeFromNib
{
    myImage = [[NSImage alloc] init....
    [self setNeedsDisplay:YES];
}

- (void)drawRect:(NSRect)rect
{
    NSSize isize = [myImage size];
    [myImage drawInRect:[self bounds]
               fromRect:NSMakeRect(0.0, 0.0, isize.width, isize.height)
              operation:NSCompositeCopy
               fraction:1.0];
}
Read that whole thread on cocoabuilder, it's quite instructive.
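For completeness, a filled-in version of that sketch might look like the following (the class name and image name are invented for the example, since the original message truncates the alloc/init line):
@interface BackgroundView : NSView
{
    NSImage *myImage;
}
@end

@implementation BackgroundView

- (void)awakeFromNib
{
    // "background" is a placeholder for an image in the app bundle.
    myImage = [[NSImage imageNamed:@"background"] retain];
    [self setNeedsDisplay:YES];
}

- (void)drawRect:(NSRect)rect
{
    NSSize isize = [myImage size];
    [myImage drawInRect:[self bounds]
               fromRect:NSMakeRect(0.0, 0.0, isize.width, isize.height)
              operation:NSCompositeCopy
               fraction:1.0];
}

- (void)dealloc
{
    [myImage release];
    [super dealloc];
}

@end
In Interface Builder you would set this class on the window's content view (or on a view you add behind your other views).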

ObjC - method works on application launch, but not button press [duplicate]

Hello, I am new to Cocoa programming and I have run into a problem with NSRectFill.
There is one button in the window, and the following is my AppDelegate.m file:
@implementation LGAppDelegate

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    [[NSColor redColor] set];
    NSRectFill(NSMakeRect(50, 50, 10, 10));
}

- (IBAction)buttonPressed:(id)sender
{
    [[NSColor greenColor] set];
    NSRectFill(NSMakeRect(60, 60, 10, 10));
}

@end
What I expected to see was one rectangle appearing when the application starts and another appearing after clicking the button. However, only the first rectangle shows up; nothing happens when I click the button.
Please help me solve this. Thank you.
Yours,
Z
Well, you don't have any context; the system has no idea where you want to draw.
If you want to draw into a view or an image, you have to use a lockFocus / unlockFocus pair.
So if you have a view connected to an outlet called redView:
[redView lockFocus];
[[NSColor redColor] set];
NSRectFill(NSMakeRect(50, 50, 10, 10));
[redView unlockFocus];
But this is a really poor model; you generally want your objects to draw themselves.
When a view's drawRect: method is called, focus is already locked for you, so you don't need the lockFocus / unlockFocus pair.
Some background: applicationDidFinishLaunching: is called at the launch of a program, once the program is 'done' loading its resources.
But at that point there could be many windows and many views displayed by the application. Simply calling NSRectFill() is not enough: the application doesn't know where to draw that rectangle (in this window or that one? in this NSView or that one?). Even if there's only one window and it might seem obvious, there could be multiple NSViews displayed that you're unaware of, and with computers you have to be very explicit with your commands.
The bottom line is: there is no 'context' established for where the drawing operations should occur. As d00dle points out, you should read up on the Drawing Guide.
When an NSView's drawRect: is called, a context (the view itself) has already been set up. You could draw directly from your delegate's applicationDidFinishLaunching:, but a 'context' needs to be defined first.
Take a look at Apple's Drawing Guide.
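As a concrete illustration of "let your objects draw themselves", a minimal sketch could look like this (the class name and outlet are invented for the example): a custom NSView fills the rectangles in its drawRect:, and the button action only changes state and requests a redraw.
@interface RectsView : NSView
{
    BOOL showGreenRect;
}
@property (nonatomic, assign) BOOL showGreenRect;
@end

@implementation RectsView
@synthesize showGreenRect;

- (void)drawRect:(NSRect)dirtyRect
{
    // drawRect: is called with a valid current context, so NSRectFill
    // knows exactly which view it is filling.
    [[NSColor redColor] set];
    NSRectFill(NSMakeRect(50, 50, 10, 10));

    if (showGreenRect) {
        [[NSColor greenColor] set];
        NSRectFill(NSMakeRect(60, 60, 10, 10));
    }
}

@end
The button action then becomes:
- (IBAction)buttonPressed:(id)sender
{
    [rectsView setShowGreenRect:YES];   // rectsView is an outlet to the RectsView
    [rectsView setNeedsDisplay:YES];    // schedules drawRect: with a proper context
}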

Rendering NSView containing some CALayers to an NSImage

I have an NSView that contains an NSScrollView containing a CALayer-backed NSView. I've tried all the usual methods of capturing an NSView into an NSImage (using -dataWithPDFInsideRect:, NSBitmapImageRep's -initWithFocusedViewRect:, etc.). However, all of these methods treat the CALayer-backed NSView as if it doesn't exist. I've already seen this StackOverflow post, but it was a question about rendering just a CALayer tree to an image, not an NSView containing both regular NSViews and layer-backed views.
Any help is appreciated, thanks :)
The only way I found to do this is to use the CGWindow APIs, something like:
CGImageRef cgimg = CGWindowListCreateImage(CGRectZero,
                                           kCGWindowListOptionIncludingWindow,
                                           [theWindow windowNumber],
                                           kCGWindowImageDefault);
Then clip out the part of that CGImage that corresponds to your view (CGImageCreateWithImageInRect() does the cropping for a CGImageRef), and make an NSImage from the cropped CGImage.
Be aware this won't work well if parts of that window are offscreen.
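Putting those pieces together, a rough sketch might look like this (assuming `view` and its window are entirely on screen; Retina backing-scale factors are glossed over, and initWithCGImage:size: needs Mac OS X 10.6):
NSWindow *window = [view window];
CGImageRef windowImage = CGWindowListCreateImage(CGRectNull, // bounding box of the window
                                                 kCGWindowListOptionIncludingWindow,
                                                 (CGWindowID)[window windowNumber],
                                                 kCGWindowImageBoundsIgnoreFraming);

// Convert the view's bounds to window coordinates and flip the y axis,
// because the captured CGImage has its origin at the top left.
NSRect rectInWindow = [view convertRect:[view bounds] toView:nil];
CGFloat windowHeight = [window frame].size.height;
CGRect cropRect = CGRectMake(NSMinX(rectInWindow),
                             windowHeight - NSMaxY(rectInWindow),
                             NSWidth(rectInWindow),
                             NSHeight(rectInWindow));

CGImageRef viewImage = CGImageCreateWithImageInRect(windowImage, cropRect);
NSImage *snapshot = [[NSImage alloc] initWithCGImage:viewImage size:rectInWindow.size];
CGImageRelease(viewImage);
CGImageRelease(windowImage);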
This works to draw a view directly to an NSImage, though I haven't tried it with a layer-backed view:
NSImage *i = [[NSImage alloc] initWithSize:[view frame].size];
[i lockFocus];
if ([view lockFocusIfCanDrawInContext:[NSGraphicsContext currentContext]]) {
    [view displayRectIgnoringOpacity:[view frame] inContext:[NSGraphicsContext currentContext]];
    [view unlockFocus];
}
[i unlockFocus];
NSData *d = [i TIFFRepresentation];
[d writeToFile:@"/path/to/my/test.tiff" atomically:YES];
[i release];
Have you looked at the suggestions in the Cocoa Drawing Guide? ("Creating a Bitmap")
To draw directly into a bitmap, create a new NSBitmapImageRep object with the parameters you want and use the graphicsContextWithBitmapImageRep: method of NSGraphicsContext to create a drawing context. Make the new context the current context and draw. This technique is available only in Mac OS X v10.4 and later.
Alternatively, you can create an NSImage object (or an offscreen window), draw into it, and then capture the image contents. This technique is supported in all versions of Mac OS X.
That sounds similar to the iOS solution I'm familiar with (using UIGraphicsBeginImageContext and UIGraphicsGetImageFromCurrentImageContext) so I'd expect it to work for your view.
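For reference, the first technique from that quote might look roughly like this (a sketch only; `view` is a placeholder, and whether layer-backed subviews actually show up is exactly the open question here):
NSRect bounds = [view bounds];
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                                pixelsWide:(NSInteger)NSWidth(bounds)
                                                                pixelsHigh:(NSInteger)NSHeight(bounds)
                                                             bitsPerSample:8
                                                           samplesPerPixel:4
                                                                  hasAlpha:YES
                                                                  isPlanar:NO
                                                            colorSpaceName:NSCalibratedRGBColorSpace
                                                               bytesPerRow:0
                                                              bitsPerPixel:0];

[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[view displayRectIgnoringOpacity:bounds inContext:[NSGraphicsContext currentContext]];
[NSGraphicsContext restoreGraphicsState];

NSImage *result = [[NSImage alloc] initWithSize:bounds.size];
[result addRepresentation:rep];
[rep release];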
Please have a look at the answer on this post: How do I render a view which contains Core Animation layers to a bitmap?
That approach worked for me under similar circumstances to your own.

Optionally set navigationbar background image

I have the need to draw a background image or set a tint color on a navigation bar, but I also need the option to have the navigation bar appear as it normally would. I'm currently using a category to do this. If my app does not specify a background image, what can I do instead to ensure the drawRect: method does what it normally would do?
I.E. -
@implementation UINavigationBar (UINavigationBarCategory)

- (void)drawRect:(CGRect)rect {
    if (hasImage) {
        UIImage *img = [[UIImage alloc] initWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:@"http://myimageurl.com/img.jpg"]]];
        [img drawInRect:CGRectMake(0, 0, self.frame.size.width, self.frame.size.height)];
    } else {
        ??????
    }
}

@end
I actually ended up doing something entirely different, and I'm wondering why nobody has discovered this before. One of the approaches I saw in the course of my Googling on the subject was simply adding an image as a subview of the UINavigationBar. The only problem was that this made the buttons in the bar unclickable. The fix was to disable user interaction on the image view.
myUIImageView.userInteractionEnabled = NO;
[myNavController.navigationBar addSubview:myUIImageView];
[myNavController.navigationBar sendSubviewToBack:myUIImageView];
With that, everything looks/works great and I don't have to override the drawRect method with a category, swizzle methods or any of that funky stuff. Simple and clean.
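Spelled out, the subview approach looks something like this (the image name and navigation controller are placeholders):
UIImageView *barBackground = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"navbar-background.png"]];
barBackground.frame = myNavController.navigationBar.bounds;
barBackground.userInteractionEnabled = NO;   // keep the bar buttons tappable
[myNavController.navigationBar addSubview:barBackground];
[myNavController.navigationBar sendSubviewToBack:barBackground];
[barBackground release];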
Theoretically you could do this by subclassing UINavigationBar, overriding only the drawRect: method, and then calling [super drawRect:rect] when you want the default behavior.
But I don't believe you can in practice because you don't instantiate the UINavigationBar directly.
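For what it's worth, the subclass version would look something like the following (class and property names are invented; as noted, getting the navigation controller to use the subclass is the hard part):
@interface CustomNavigationBar : UINavigationBar
{
    UIImage *backgroundImage;   // nil means "use the default appearance"
}
@property (nonatomic, retain) UIImage *backgroundImage;
@end

@implementation CustomNavigationBar
@synthesize backgroundImage;

- (void)drawRect:(CGRect)rect
{
    if (backgroundImage) {
        [backgroundImage drawInRect:self.bounds];
    } else {
        [super drawRect:rect];   // fall back to the standard look
    }
}

@end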
Solving this is possible but nontrivial, since the category method "replaces" the original method rather than subclassing it from the runtime's standpoint. This is why just using, say, super, won't work.
You should check out this post on "supersequent" implementation: http://cocoawithlove.com/2008/03/supersequent-implementation.html
(That link and some other related ideas are in the answer to this question: Using Super in an Objective C Category? )

Draw an NSView into an NSGraphicsContext?

I have a CGContext, which I can turn into an NSGraphicsContext.
I have an NSWindow with a clipRect for the context.
I want to put a scrollview into the context and then some other view into the scrollview so I can put an image into it... However, I can't figure out how to attach the scrollview into the context.
Eventually the view will probably be coming from a nib, but I don't see how that would matter.
I've seen this thread (http://lists.apple.com/archives/quartz-dev/2006/Nov/msg00010.html), but it seems to leave out the step of how to attach the view to the context, unless there's something obvious I'm missing.
EDIT:
The reason I'm in this situation is that I'm writing a Mozilla Plugin. The browser gives me a CGContext (Quartz) and a WindowRef (QuickDraw). I can turn the CGContext into an NSGraphicsContext, and I can turn the windowRef into an NSWindow. From another data structure I also have the clipping rectangle...
I'm trying to draw an image into that context, with scrollbars as needed, and buttons and other UI elements... so I need (want) an NSView...
You can't put a view into a graphics context. A view goes either into another view, or as the content view of a window.
You can draw a view into a context by setting that context as the current context and telling the view to draw. You might do this as a means of rendering the view to an image, but otherwise, I can't think of a reason to do it. (Edit: OK, being a Netscape plug-in is probably a good reason.)
Normally, a view gets its own graphics context in NSView's implementation of the lockFocus method, which is called for you by display, which is called for you by displayIfNeeded (only if the view needs display, obviously), which is called for you as part of the event loop.
You don't need to create a context for a view except in very rare circumstances, such as the export-to-an-image case I mentioned. Normally, you let the view take care of that itself.
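For the record, "setting that context as the current context and telling the view to draw" can look roughly like this (a sketch; `view` and `context` stand for your view and the NSGraphicsContext wrapped around the plug-in's CGContext):
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:context];
// Render the view's content into the current (i.e. your) context.
[view displayRectIgnoringOpacity:[view bounds] inContext:context];
[NSGraphicsContext restoreGraphicsState];
Note that this only produces pixels; the view isn't "in" the context in any live sense, so it won't receive events or scroll, which is why a real view hierarchy ends up being needed, as discussed below.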
A partial solution?
What I have done currently is create a nib with a button in an IKImageView inside an NSScrollView. I load this in my plugin.
Then, since I have the NSWindow, I can get the contentView of the window and add the scroll view as a subview of that contentView.
It appears, but there seems to be some coordinate confusion about where the origin is (top vs. bottom), and since I'm mucking with the content view of the WHOLE WINDOW, I'm doing some things very globally that perhaps I should be doing more locally. For example, the view never disappears, even when you close the tab or go to another tab. (It does disappear when you close the window, of course.)
So, does this sound like a reasonable way of doing this? It feels a bit... kludgy...
For future generations (and for me, when I forget how I did this and Google leads me back to my own question), here's how I'm doing it:
I have a nib with all my views, which I load on start-up.
On SetWindow, I set the clip rect and actually do the attaching:
NP_CGContext *npContext = (NP_CGContext *) window->window;
NSWindow *browserWindow = [[[NSWindow alloc] initWithWindowRef:npContext->window] autorelease];
NSView *cView = [browserWindow contentView];
NSView *hitView = [cView hitTest:NSMakePoint(window->x + 1, clip.origin.y + 1)];
if (hitView == nil || ![[hitView className] isEqualToString:@"ChildView"])
{
    return;
}
superView = [hitView retain];
[superView addSubview:topView];
[superView setNextResponder:topView];
[topView setNextResponder:nil];
[browserWindow makeFirstResponder:topView];
To make sure I only call addSubview: once, I have a flag...
And then in handleEvent I actually draw. Because I'm using an IKImageView, I can use the undocumented method [imageView setImage:image], which takes an NSImage.
So far this seems to be working for me. Hopefully this helps someone else.

How do you position a larger NSImage inside of a smaller NSImageView programmatically?

Let's say I have an NSImage that's 100x100. I also have an NSImageView that's 50x50. Is there a way I can position the NSImage at specific coordinates inside the NSImageView, so I can control which part of it shows? It didn't seem like NSImage had an initWithFrame: method...
I did this in my NSImageView subclass, as Andrew suggested.
- (void)drawRect:(NSRect)rect
{
    [super drawRect:rect];
    NSRect cropRect = NSMakeRect(x, y, w, h);
    [image drawAtPoint:NSZeroPoint
              fromRect:cropRect
             operation:NSCompositeCopy
              fraction:1.0];
}
I don't believe so, but it's trivial to roll your own NSImageView equivalent that supports center/stretch options by drawing the image yourself.
Make your imageview as big as the image, and put it inside a scrollview. Hide the scrollers if you want. No need for subclassing in this case.
NSImageView has a method -setImageAlignment: which lets you control how the image is aligned within the image view. Unfortunately, if you want to display part of the image that doesn't correspond to any of the NSImageAlignment values, you're going to have to draw the image programmatically.
It depends on what your eventual goal is, but the easiest approach to me seems to be to put your NSImageView inside an NSView (or a subclass – it doesn't have to be an NSScrollView as @NSResponder suggests, though that should work well too), set its imageScaling to NSImageScaleProportionallyUpOrDown and its frame size to the image's size. Then you can move your NSImageView freely around the enclosing view using setFrame:. No subclassing, no manual redrawing, etc.
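Concretely, that could look something like this (a sketch; `containerView` stands for the 50x50 NSView and `image` for the 100x100 NSImage):
NSSize imageSize = [image size];
NSImageView *imageView = [[NSImageView alloc] initWithFrame:NSMakeRect(0, 0, imageSize.width, imageSize.height)];
[imageView setImage:image];
[imageView setImageScaling:NSImageScaleProportionallyUpOrDown];
[containerView addSubview:imageView];

// Negative origins slide the image view up/left, so a different part of
// the image falls inside the container's 50x50 bounds.
[imageView setFrame:NSMakeRect(-25.0, -25.0, imageSize.width, imageSize.height)];
[imageView release];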