I'm a beginner to Cocoa and Objective-C.
I want to make a Cocoa application that will generate a grid of boxes (used for practicing Chinese calligraphy) to export as a PDF, similar to this online generator: http://incompetech.com/graphpaper/chinesequarter/.
How should I generate the grid? I've tried to use Quartz with a CustomView, but didn't manage to get very far. Also, once the grid is drawn in the CustomView, what is the method for "printing" that to a PDF?
Thanks for the help.
How should I generate the grid?
Implement a custom view that draws it.
I've tried to use Quartz with a CustomView, …
That's one way; AppKit drawing is the other. The two are very similar in most respects; AppKit's drawing model is based directly on PostScript, while Quartz's is based on it indirectly (by way of PDF).
… but didn't manage to get very far.
You should ask a more specific question about your problem.
Also, once the grid is drawn in the CustomView, what is the method for "printing" that to a PDF?
Send it a dataWithPDFInsideRect: message, passing its bounds.
Note that there is no “once the grid is drawn in the CustomView”. Though there may be some internal caching, conceptually, a view does not draw once and hold onto it; it draws when needed, every time it's needed, into where it's needed. When the window needs to be redrawn, Cocoa will tell any views that are in the dirty area to (re)draw, and they will draw ultimately to the screen. When you ask for PDF data, that will also tell the view to draw, and it will draw into a context that records PDF data. This allows the view both to be lazy (draw only when needed) and to draw differently in different contexts (e.g., when printing).
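A minimal sketch of both halves, assuming a hypothetical NSView subclass called GridView, an arbitrary box size, and an arbitrary output path:
// GridView.m -- hypothetical custom view that draws a grid of boxes in drawRect:
- (void)drawRect:(NSRect)dirtyRect
{
    [[NSColor whiteColor] setFill];
    NSRectFill(dirtyRect);
    const CGFloat boxSize = 40.0; // assumed size of one practice box, in points
    NSBezierPath *grid = [NSBezierPath bezierPath];
    for (CGFloat x = 0; x <= NSWidth(self.bounds); x += boxSize) {
        [grid moveToPoint:NSMakePoint(x, 0)];
        [grid lineToPoint:NSMakePoint(x, NSHeight(self.bounds))];
    }
    for (CGFloat y = 0; y <= NSHeight(self.bounds); y += boxSize) {
        [grid moveToPoint:NSMakePoint(0, y)];
        [grid lineToPoint:NSMakePoint(NSWidth(self.bounds), y)];
    }
    [[NSColor grayColor] setStroke];
    [grid stroke];
}
// In a controller: the same drawing is recorded into PDF data and written out.
NSData *pdfData = [gridView dataWithPDFInsideRect:[gridView bounds]];
[pdfData writeToFile:@"/tmp/grid.pdf" atomically:YES];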
Oops, you were asking about Cocoa and this is Cocoa Touch, but I'll leave it here as it may be some use (at least to others who find this later).
You can draw things in the view and then put what's there into a PDF.
This code will take what's drawn in a UIView (called sheetView here), put it into a PDF, then attach that PDF to an email (so you can see it for now). You'll need to adopt the MFMailComposeViewControllerDelegate protocol in your header.
if ([MFMailComposeViewController canSendMail]) {
//set up PDF rendering context
NSMutableData *pdfData = [NSMutableData data];
UIGraphicsBeginPDFContextToData(pdfData, sheetView.bounds, nil);
UIGraphicsBeginPDFPage();
//tell our view to draw (would normally use setNeedsDisplay, but need drawn now).
[sheetView drawRect:sheetView.bounds];
//remove PDF rendering context
UIGraphicsEndPDFContext();
//send PDF data in mail message as an attachment
MFMailComposeViewController *mailComposer = [[[MFMailComposeViewController alloc] init] autorelease];
mailComposer.mailComposeDelegate = self;
[mailComposer addAttachmentData:pdfData mimeType:@"application/pdf" fileName:@"SheetView.pdf"];
[self presentModalViewController:mailComposer animated:YES];
}
else {
if (WARNINGS) NSLog(@"Device is unable to send email in its current state.");
}
You'll also need this method...
#pragma mark -
#pragma mark MFMailComposeViewControllerDelegate protocol method
//also need to implement the following method, so that the email composer can let
//us know that the user has clicked either Send or Cancel in the window.
//It's our duty to end the modal session here.
-(void)mailComposeController:(MFMailComposeViewController *)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError *)error {
[self dismissModalViewControllerAnimated:YES];
}
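If you just want the file rather than an email, a small variation on the same idea (the file name here is arbitrary) writes the same pdfData into the app's Documents directory instead:
// Alternative to the mail composer: save the generated PDF to the Documents directory.
NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *pdfPath = [docsDir stringByAppendingPathComponent:@"SheetView.pdf"];
[pdfData writeToFile:pdfPath atomically:YES];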
Having tried many methods, I still haven't found a good, foolproof way of preventing the usual map from being shown behind the custom map tiles that I am using. Ultimately I want my app to have a map page consisting only of a custom map.
I am really looking for a solution that is pure iOS and doesn't require any 3rd-party software, but that appears to be difficult.
I have tried 3 methods already:
Number 1, hiding the background map via its view:
NSArray *views = [[[self.mapView subviews] objectAtIndex:0] subviews];
[[views objectAtIndex:0] setHidden:YES];
This, however, doesn't work on a certain new operating system coming out very soon! The whole screen goes blank, and the Apple Developer Forums haven't provided a solution either.
Number 2, using another blank overlay (e.g. MKCircle) to cover the background map. This works; however, when scrolling or zooming out quickly, the overlay sometimes flickers off and you can briefly see the background map behind it, so it's not ideal.
Number 3, and this is what I have been working on for a few days now, is simply to prevent the user from zooming out. Most documented methods use regionDidChangeAnimated: or regionWillChangeAnimated:, but these do not stop the map from zooming out while pinching; they only take effect once the pinch gesture has finished, so again the background map can be seen briefly.
So now I am stumped, unless of course I have missed something with these other two methods.
So any help would be much appreciated!
Add this:
-(void)viewDidAppear:(BOOL)animated
{
MKTileOverlay *overlay = [[MKTileOverlay alloc] init]; // or initWithURLTemplate:tileTemplate if you have your own tiles
overlay.canReplaceMapContent = YES;
[map addOverlay:overlay];
overlay = nil;
}
// In an MKTileOverlay subclass: return no tile data, so nothing is drawn over the replaced map content.
-(void)loadTileAtPath:(MKTileOverlayPath)path result:(void (^)(NSData *, NSError *))result
{
result(nil, nil);
}
-(MKOverlayRenderer *)mapView:(MKMapView *)mapView rendererForOverlay: (id<MKOverlay>)overlay
{
if ([overlay isKindOfClass:[MKTileOverlay class]])
{
MKTileOverlayRenderer *renderer = [[MKTileOverlayRenderer alloc] initWithOverlay:overlay];
[renderer setAlpha:0.5];
return renderer;
}
return nil;
}
This replaces the map content underneath. It worked very well in my case, where I am adding an overlay over the whole map and hiding the real map from the user.
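If you do have your own tiles to show, the same overlay can serve them; here is a sketch assuming a made-up tile server URL template:
// Hypothetical URL template -- substitute your own tile source.
NSString *tileTemplate = @"https://tiles.example.com/{z}/{x}/{y}.png";
MKTileOverlay *tileOverlay = [[MKTileOverlay alloc] initWithURLTemplate:tileTemplate];
tileOverlay.canReplaceMapContent = YES; // still hides Apple's base map underneath
[self.mapView addOverlay:tileOverlay level:MKOverlayLevelAboveLabels];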
You can't do this in current releases without a third-party library like MapBox. However, the future OS release that you speak of lets you do this.
I'm building a small Mac application that gets continuously supplied with data via a web socket. After processing each data segment, the data is displayed in a WebView. New data never replaces existing data in the WebView; instead, new data is always appended to the WebView's content using DOM manipulation. The code sample below should give you an idea of what I'm doing.
DOMDocument *doc = self.webview.mainFrame.DOMDocument;
DOMHTMLElement *container = (DOMHTMLElement *)[doc createElement:@"div"];
NSString *html = [NSString stringWithFormat:@"... omitted for the sake of brevity ... "];
[container setInnerHTML:html];
[doc.body appendChild:container];
The rendering of the WebView apparently happens asynchronously. Is there a way to tell when the DOM manipulation finished and the content has been drawn? I want to use something like [webview scrollToEndOfDocument:self] to implement auto scrolling. Listening for DOM Events didn't help since they seem to be triggered when the DOM was modified but before these changes have been rendered. The code I'm using so far is similar to the following
[self.webview.mainFrame.DOMDocument addEventListener:@"DOMSubtreeModified" listener:self useCapture:NO];
in conjunction with
- (void)handleEvent:(DOMEvent *)event
{
[self.webview scrollToEndOfDocument:self];
}
The problem with this code is that the scrolling happens too early. I'm basically always one data segment behind. Can I register for a callback / notification of any kind that is triggered when the content was drawn?
Using timers
Auto scrolling can be implemented using an NSTimer. The challenge with this solution is figuring out when to disable the timer so that manual scrolling is still possible, which I wasn't able to solve. Anyway, here is the code that enables auto scrolling using a timer:
self.WebViewAutoScrollTimer =
[NSTimer scheduledTimerWithTimeInterval:1.0/30.0
target:self
selector:@selector(scrollWebView:)
userInfo:nil
repeats:YES];
scrollWebView: simply being a method that calls scrollToEndOfDocument: on the web view.
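For completeness, a sketch of that method plus the point where the timer would be torn down (deciding when to call the latter is exactly the unsolved part):
// Called by the timer roughly 30 times per second.
- (void)scrollWebView:(NSTimer *)timer
{
    [self.webView scrollToEndOfDocument:self];
}
// Call this once auto scrolling should stop, so manual scrolling works again.
- (void)stopAutoScrolling
{
    [self.WebViewAutoScrollTimer invalidate];
    self.WebViewAutoScrollTimer = nil;
}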
Using notifications
Listening for the NSViewFrameDidChangeNotification posted by the content view of the web view's scroll view lets you scroll only when the frame size changes. These changes occur when new content is added, but also when the size of the enclosing view changes, e.g. when the window is resized. My implementation does not distinguish between those two scenarios.
NSClipView *contentView = self.webView.enclosingScrollView.contentView;
[contentView setPostsFrameChangedNotifications:YES];
[[NSNotificationCenter defaultCenter] addObserverForName:NSViewFrameDidChangeNotification
object:contentView
queue:nil
usingBlock:^(NSNotification *notification) {
[self.webView scrollToEndOfDocument:self];
}];
Note: It is important that you instruct the content view of the web view's scroll view – think about this a couple of times and it will start to make sense – to post notifications when its frame size changes because NSView instances do not do this by default. This is accomplished using the first two lines of code in the example above.
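One more detail: addObserverForName:object:queue:usingBlock: returns an opaque observer object that you are responsible for removing. A sketch, with frameObserver being a hypothetical property you would add:
// Keep the returned token so the observer can be removed later.
self.frameObserver = [[NSNotificationCenter defaultCenter]
    addObserverForName:NSViewFrameDidChangeNotification
                object:contentView
                 queue:nil
            usingBlock:^(NSNotification *notification) {
                [self.webView scrollToEndOfDocument:self];
            }];
// Later, e.g. in dealloc or when auto scrolling is no longer wanted:
[[NSNotificationCenter defaultCenter] removeObserver:self.frameObserver];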
I am developing an app which allows you to select photos and place them into different positions. The workflow is basically:
Tap an area of the screen
UIImagePickerController displays
Select a photo
Photo displays in the tapped area of the screen
I would like it so that if the user goes through this workflow for a second time, the UIImagePickerController when displayed will be showing the same album, and position within that album, that the user was last at.
I've tried saving a reference to the UIImagePickerController, as well as the UIPopoverController, so that they are created only once. However, every time I present the popover containing the UIImagePickerController, it is back at the main photos menu (e.g. Camera Roll, Photo Library, My Photo Stream).
Any ideas for how to achieve what I'm after?
You can use ALAssetsLibrary, but it will cost you a bit more effort. The first time, use enumerateGroupsWithTypes:usingBlock:failureBlock: to list all the albums and remember the user's choice. The second time, just use that album's (an ALAssetsGroup) enumerateAssetsUsingBlock: to list all of its images and videos. Apple has a few demos you can look at: PhotosByLocation and MyImagePicker.
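A sketch of that approach (the assetsLibrary and lastGroupURL properties are placeholders you would add): remember the chosen group's URL the first time, then resolve it directly the second time.
// First time: list the albums and remember the URL of the one the user picks.
[self.assetsLibrary enumerateGroupsWithTypes:ALAssetsGroupAll usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
    if (group) {
        // ...show the group in your UI; when the user picks it, store its URL:
        self.lastGroupURL = [group valueForProperty:ALAssetsGroupPropertyURL];
    }
} failureBlock:^(NSError *error) {
    NSLog(@"Album enumeration failed: %@", error);
}];
// Second time: resolve the saved URL straight back to the same album.
[self.assetsLibrary groupForURL:self.lastGroupURL resultBlock:^(ALAssetsGroup *group) {
    [group enumerateAssetsUsingBlock:^(ALAsset *asset, NSUInteger index, BOOL *stop) {
        // ...build the picker contents from the assets in this album.
    }];
} failureBlock:^(NSError *error) {
    NSLog(@"Could not resolve the saved album URL: %@", error);
}];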
Keep a UIImagePickerController object in your .h file (for example, imagePicker), and alloc it only once (for example, in viewDidLoad):
imagePicker = [[UIImagePickerController alloc] init];
imagePicker.delegate = self;
imagePicker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
[self.view addSubview:imagePicker.view];
imagePicker.view.hidden = YES;
imagePicker.view.frame = CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height);
imagePicker.view.bounds = CGRectMake(0,20,self.view.frame.size.width, self.view.frame.size.height);
In didFinishPickingMediaWithInfo
if([[info valueForKey:UIImagePickerControllerMediaType] isEqualToString:@"public.image"]){
imagePicker.view.hidden = YES;
}
When you want to show the imagePickerView just do
imagePicker.view.hidden = NO;
Just to point you in the right direction: you can use the assets library to show the images as a picker; have a look at Apple's sample code MyImagePicker. The method [assetsLibrary enumerateGroupsWithTypes:ALAssetsGroupAlbum usingBlock:^(ALAssetsGroup *group, BOOL *stop) { ... }] can be used to enumerate the photo albums. Using the assets library you can check which image was selected last and then use the method,
- (void)enumerateAssetsAtIndexes:(NSIndexSet *)indexSet options:(NSEnumerationOptions)options usingBlock:(ALAssetsGroupEnumerationResultsBlock)enumerationBlock;
You can use this method the next time to enumerate from a particular image onwards. It accepts an index set such as [NSIndexSet indexSetWithIndexesInRange:NSMakeRange(index, count)], which should let you indicate the last selected image.
To learn more about how to use the assets library, check this.
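A rough sketch of the enumerateAssetsAtIndexes:options:usingBlock: call above, assuming lastIndex holds the stored index of the previously selected asset and group is the ALAssetsGroup you resolved:
// Resume enumeration from the last selected asset onwards.
NSUInteger lastIndex = /* previously saved index */ 0;
NSRange resumeRange = NSMakeRange(lastIndex, [group numberOfAssets] - lastIndex);
[group enumerateAssetsAtIndexes:[NSIndexSet indexSetWithIndexesInRange:resumeRange]
                        options:0
                     usingBlock:^(ALAsset *asset, NSUInteger index, BOOL *stop) {
    if (asset) {
        // ...add the asset to the picker UI.
    }
}];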
It should be possible to reach into the resulting UITableView and then find its content offset. You can do this by searching the subviews of the UIImagePickerController's view property for a table view.
for (UIView *view in controller.view.subviews) {
if ([view isKindOfClass:[UITableView class]]) {
contentOffset = [(UITableView *)view contentOffset];
}
}
When you re-present the view controller, you will want to restore the content offset in a similar fashion.
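Restoring it when the picker comes back would then be the mirror image, under the same assumption about the view hierarchy:
// Push the previously saved contentOffset back into the picker's table view.
for (UIView *view in controller.view.subviews) {
    if ([view isKindOfClass:[UITableView class]]) {
        [(UITableView *)view setContentOffset:contentOffset animated:NO];
    }
}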
Note, I haven't actually tested to see the view hierarchy of the UIImagePickerController. Verify its structure by printing its subviews. There is also no guarantee that the structure will stay the same, since you are diving into the private implementation (though it's important to note you are not actually using any private APIs so this is okay).
Use ALAssetsLibrary. It gives you access to the images and videos available to the application. There's a demo from Apple:
https://developer.apple.com/library/ios/#documentation/AssetsLibrary/Reference/ALAssetsLibrary_Class/Reference/Reference.html
Look at that, or if you want to make a customized album for images and videos, here's a great example:
https://github.com/Kjuly/ALAssetsLibrary-CustomPhotoAlbum
When an NSTextView is a subview of an NSView that is layer-backed (-wantsLayer == YES), it does not render the squiggly red underlines for misspelled words. All it takes to reproduce this is to make an empty Cocoa project, open the nib, drag NSTextView into the window, and toggle the window's content view to want a layer. Boom - no more red underlines.
I've done some searching, and this appears to be a known situation and has been true since 10.5. What I cannot find, though, is a workaround for it. Is there no way to get the underlines to render when NSTextView is in a layer-backed view?
I can imagine overriding NSTextView's drawRect: and using the layout manager to find the proper rects with the proper temporary attributes set that indicate misspellings and then drawing red squiggles myself, but that is of course a total hack. I also can imagine Apple fixing this in 10.7 (perhaps) and suddenly my app would have double underlines or something.
[update] My Workaround
My current workaround was inspired by the spell-checking delegate method nptacek mentioned, which prompted me to dig deeper down a path I hadn't noticed before, so I'm going to accept that answer but post what I've done for posterity and/or further discussion.
I am running 10.6.5. I have a subclass of NSTextView which is the document view of a custom subclass of NSClipView which in turn is a subview of my window's contentView which has layers turned on. In playing with this, I eventually had all customizations commented out and still the spelling checking was not working correctly.
I isolated what, I believe, are two distinct problems:
#1 is that NSTextView, when hosted in a layer-backed view, doesn't even bother to draw the misspelling underlines. (I gather based on Google searches that there may have been a time in the 10.5 days when it drew the underlines, but not in the correct spot - so Apple may have just disabled them entirely to avoid that problem in 10.6. I am not sure. There could also be some side effect of how I'm positioning things, etc. that caused them not to appear at all in my case. Presently unknown.)
#2 is that when NSTextView is in this layer-related situation, it appears not to correctly mark text as misspelled while you're typing it, even when -isContinuousSpellCheckingEnabled is set to YES. I verified this by implementing some of the spell-checking delegate methods and watching as NSTextView sent messages about changes but never any notification to mark any text ranges as misspelled, even with obviously misspelled words that would show the red underline in TextEdit (and other text views in other apps). I also overrode NSTextView's -handleTextCheckingResults:forRange:types:options:orthography:wordCount: to see what it was seeing, and it saw the same thing there. It was as if NSTextView was actively marking the word under the cursor as not misspelled, and then, when the user typed a space or moved away from it, never re-checked for misspellings. I'm not entirely sure, though.
Okay, so to work around #1, I overrode -drawRect: in my custom NSTextView subclass to look like this:
- (void)drawRect:(NSRect)rect
{
[super drawRect:rect];
[self drawFakeSpellingUnderlinesInRect:rect];
}
I then implemented -drawFakeSpellingUnderlinesInRect: to use the layoutManager to get the text ranges that contain the NSSpellingStateAttributeName as a temporary attribute and render a dot pattern reasonably close to the standard OSX misspelling dot pattern.
- (void)drawFakeSpellingUnderlinesInRect:(NSRect)rect
{
CGFloat lineDash[2] = {0.75, 3.25};
NSBezierPath *underlinePath = [NSBezierPath bezierPath];
[underlinePath setLineDash:lineDash count:2 phase:0];
[underlinePath setLineWidth:2];
[underlinePath setLineCapStyle:NSRoundLineCapStyle];
NSLayoutManager *layout = [self layoutManager];
NSRange checkRange = NSMakeRange(0,[[self string] length]);
while (checkRange.length > 0) {
NSRange effectiveRange = NSMakeRange(checkRange.location,0);
id spellingValue = [layout temporaryAttribute:NSSpellingStateAttributeName atCharacterIndex:checkRange.location longestEffectiveRange:&effectiveRange inRange:checkRange];
if (spellingValue) {
const NSInteger spellingFlag = [spellingValue intValue];
if ((spellingFlag & NSSpellingStateSpellingFlag) == NSSpellingStateSpellingFlag) {
NSUInteger count = 0;
const NSRectArray rects = [layout rectArrayForCharacterRange:effectiveRange withinSelectedCharacterRange:NSMakeRange(NSNotFound,0) inTextContainer:[self textContainer] rectCount:&count];
for (NSUInteger i=0; i<count; i++) {
if (NSIntersectsRect(rects[i], rect)) {
[underlinePath moveToPoint:NSMakePoint(rects[i].origin.x, rects[i].origin.y+rects[i].size.height-1.5)];
[underlinePath relativeLineToPoint:NSMakePoint(rects[i].size.width,0)];
}
}
}
}
checkRange.location = NSMaxRange(effectiveRange);
checkRange.length = [[self string] length] - checkRange.location;
}
[[NSColor redColor] setStroke];
[underlinePath stroke];
}
So after doing this, I can see red underlines but it doesn't seem to update the spelling state as I type. To work around that problem, I implemented the following evil hacks in my NSTextView subclass:
- (void)setNeedsFakeSpellCheck
{
if ([self isContinuousSpellCheckingEnabled]) {
[NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(forcedSpellCheck) object:nil];
[self performSelector:@selector(forcedSpellCheck) withObject:nil afterDelay:0.5];
}
}
- (void)didChangeText
{
[super didChangeText];
[self setNeedsFakeSpellCheck];
}
- (void)updateInsertionPointStateAndRestartTimer:(BOOL)flag
{
[super updateInsertionPointStateAndRestartTimer:flag];
[self setNeedsFakeSpellCheck];
}
- (void)forcedSpellCheck
{
[self checkTextInRange:NSMakeRange(0,[[self string] length]) types:[self enabledTextCheckingTypes] options:nil];
}
It doesn't work quite the same way as the real, expected OSX behavior, but it's sorta close and it gets the job done for now. Hopefully this is helpful for someone else, or, better yet, someone comes here and tells me I was missing something incredibly simple and explains how to fix it. :)
Core Animation is awesome, except when it comes to text. I experienced this firsthand when I found out that subpixel antialiasing was not a given when working with layer-backed views (which you can technically get around by setting an opaque backgroundColor and making sure to draw the background). Subpixel anti-aliasing is just one of the many caveats encountered while working with text and layer-backed views.
In this case, you've got a couple of options. If at all possible, move away from layer-backed views for the parts of your program that utilize the text views. If you've already tried this, and can't avoid it, there is still hope!
Without going so far as overriding drawRect, you can achieve something that is close to the standard behavior with the following code:
- (NSArray *)textView:(NSTextView *)view didCheckTextInRange:(NSRange)range types:(NSTextCheckingTypes)checkingTypes options:(NSDictionary *)options results:(NSArray *)results orthography:(NSOrthography *)orthography wordCount:(NSInteger)wordCount
{
for(NSTextCheckingResult *myResult in results){
if(myResult.resultType == NSTextCheckingTypeSpelling){
NSMutableDictionary *attr = [[NSMutableDictionary alloc] init];
[attr setObject:[NSColor redColor] forKey:NSUnderlineColorAttributeName];
[attr setObject:[NSNumber numberWithInt:(NSUnderlinePatternDot | NSUnderlineStyleThick | NSUnderlineByWordMask)] forKey:NSUnderlineStyleAttributeName];
[[view layoutManager] setTemporaryAttributes:attr forCharacterRange:myResult.range];
[attr release];
}
}
return results;
}
We're basically doing a quick-and-dirty delegate method for NSTextView (make sure to set the delegate in IB!) which checks to see if a word is flagged as incorrect, and if so, sets a colored underline.
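If you would rather not wire the delegate up in IB, setting it in code works too; textView here stands for whatever outlet or ivar points at your NSTextView.
// Equivalent to connecting the delegate outlet in Interface Builder.
[self.textView setDelegate:self];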
Note that there are some issues with this code, namely that characters with descenders (g, j, p, q, y, for example) won't display the underline correctly, and it has only been tested for spelling errors (no grammar checking here!). The underline dot pattern (NSUnderlinePatternDot) does not match Apple's style for spell checking, and the code runs even when layer backing is disabled for the view. Additionally, I'm sure there are other problems, as this code is quick and dirty and hasn't been checked for memory management or anything else.
Good luck with your endeavor, file bug reports with Apple, and hopefully this will someday be a thing of the past!
This is also a bit of a hack, but the only thing I could get working was to put an intermediate delegate on the NSTextView's layer, so that all selectors are passed through except drawLayer:inContext:, which instead calls the NSTextView's drawRect:. This works, and is probably a little more future-proof, although I'm not sure whether it will break any CALayer animations. It also seems you have to fix up the CGContextRef's CTM (based on the backing layer's frame?).
Edit:
You can get the drawing rect as in the drawInContext: documentation, with CGContextGetClipBoundingBox(ctx), but there might be an issue with flipped coordinates in the NSTextView.
I'm not entirely sure how to fix this as calling drawRect: as I did is a bit hackish, but I'm sure someone on the net has a tutorial on doing it. Perhaps I can make one if/when I have time and work it out.
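For what it's worth, a rough sketch of that intermediate-delegate idea (the class and property names are invented, and as noted above the CTM may still need adjusting for your layer geometry):
// Hypothetical layer delegate: drawing is routed through the text view's drawRect:,
// everything else is forwarded back to the text view.
@interface TextViewLayerDelegate : NSObject
@property (assign) NSTextView *textView;
@end

@implementation TextViewLayerDelegate

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    NSGraphicsContext *nsContext =
        [NSGraphicsContext graphicsContextWithGraphicsPort:ctx flipped:[self.textView isFlipped]];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:nsContext];
    // Depending on the layer's frame you may also need CGContextTranslateCTM/CGContextScaleCTM here.
    [self.textView drawRect:[self.textView bounds]];
    [NSGraphicsContext restoreGraphicsState];
}

// Pass any other layer-delegate messages straight through to the text view.
- (BOOL)respondsToSelector:(SEL)aSelector
{
    return [super respondsToSelector:aSelector] || [self.textView respondsToSelector:aSelector];
}

- (id)forwardingTargetForSelector:(SEL)aSelector
{
    return self.textView;
}

@end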
It might be worthwhile looking for an NSCell backing the NSTextView, as it's probably a lot more appropriate to use this instead.
I have a CGContext, which I can turn into an NSGraphicsContext.
I have an NSWindow with a clipRect for the context.
I want to put a scrollview into the context and then some other view into the scrollview so I can put an image into it... However, I can't figure out how to attach the scrollview into the context.
Eventually the view will probably be coming from a nib, but I don't see how that would matter.
I've seen this thread, (http://lists.apple.com/archives/quartz-dev/2006/Nov/msg00010.html) But they seem to leave off the step of how to attach the view into the context, unless there's something obvious I'm missing.
EDIT:
The reason I'm in this situation is that I'm writing a Mozilla Plugin. The browser gives me a CGContext (Quartz) and a WindowRef (QuickDraw). I can turn the CGContext into an NSGraphicsContext, and I can turn the windowRef into an NSWindow. From another data structure I also have the clipping rectangle...
I'm trying to draw an image into that context, with scrollbars as needed, and buttons and other UI elements... so I need (want) an NSView...
You can't put a view into a graphics context. A view goes either into another view, or as the content view of a window.
You can draw a view into a context by setting that context as the current context and telling the view to draw. You might do this as a means of rendering the view to an image, but otherwise, I can't think of a reason to do it. (Edit: OK, being a Netscape plug-in is probably a good reason.)
Normally, a view gets its own graphics context in NSView's implementation of the lockFocus method, which is called for you by display, which is called for you by displayIfNeeded (only if the view needs display, obviously), which is called for you as part of the event loop.
You don't need to create a context for a view except in very rare circumstances, such as the export-to-an-image case I mentioned. Normally, you let the view take care of that itself.
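For the export-to-an-image case, the usual pattern is to let the view render itself into a bitmap rather than building a context by hand; a sketch, where someView is whatever view you want to capture:
// Ask the view to draw itself into a bitmap rep, then save it.
NSBitmapImageRep *rep = [someView bitmapImageRepForCachingDisplayInRect:[someView bounds]];
[someView cacheDisplayInRect:[someView bounds] toBitmapImageRep:rep];
NSData *pngData = [rep representationUsingType:NSPNGFileType properties:nil];
[pngData writeToFile:@"/tmp/view.png" atomically:YES];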
A partial solution?
What I have done currently is create a nib with a button in an IKImageView inside an NSScrollView. I load this in my plugin.
Then, since I have the NSWindow, I can get the contentView of the window and add the scroll view as a subview of that contentView.
It appears, but there seems to be some coordinate confusion about where the origin is (top vs. bottom), and since I'm mucking with the content view of the WHOLE WINDOW, I'm doing some things very globally that perhaps I should be doing more locally. For example, the view never disappears, even when you close the tab or go to another tab (it does go away when you close the window, of course).
So, does this sound like a reasonable way of doing this? It feels a bit... kludgy...
For future generations (and for me, when I forget how I did this and Google leads me back to my own question), here's how I'm doing it:
I have a NIB with all my views, which I load on start-up.
On SetWindow, I set the clip rect and actually do the attaching:
NP_CGContext* npContext = (NP_CGContext*) window->window;
NSWindow* browserWindow = [[[NSWindow alloc] initWithWindowRef:npContext->window] autorelease];
NSView* cView = [browserWindow contentView];
NSView* hitView = [cView hitTest:NSMakePoint(window->x + 1, clip.origin.y + 1)];
if (hitView == nil || ![[hitView className] isEqualToString:@"ChildView"])
{
return;
}
superView = [hitView retain];
[superView addSubview: topView];
[superView setNextResponder: topView];
[topView setNextResponder: nil];
[browserWindow makeFirstResponder: topView];
To make sure I only add the subview once, I have a flag...
And then in handleEvent, I actually draw. Because I'm using an IKImageView, I can use the undocumented method [imageView setImage:image], which takes an NSImage.
So far this seems to be working for me. Hopefully this helps someone else.