NSMutableArray removeAllObjects issue - Objective-C

Here is my scenario:
I am building a location finder using the iPhone MapKit. I have an array stored in the application delegate to hold the information about the location of a store (name, address, etc.). When a certain button is pressed, a view slides in with a text field and a button which performs a lookup of the user's input and returns all of the necessary information.
All of this works fine and the points get plotted onto the map. However, if I go and try to do a search a second time, the application crashes. I am trying to remove all of the objects from the array when the XML parser begins:
- (void)parserDidStartDocument:(NSXMLParser *)parser {
    [dataTempForSearch removeAllObjects];
}
and the debugger simply puts an arrow on the method call with no real explanation as to why...
Has anyone run into a scenario like this before? Any thoughts as to why this might be happening only the second time the action is performed?

It's hard to tell based on just that one line of code.
It's probably a memory management issue, but what specifically I can't say.
Shots in the dark:
I would destroy and re-create my parser each time I did a search.
I would also clear the dataTempForSearch immediately after passing the data to the app delegate, not right when you go to do another search.
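It is hard to say more without seeing how dataTempForSearch is created, but a minimal sketch of those two suggestions might look like this (doSearch:, searchURL, and the app-delegate property are placeholders, not from the question):

- (IBAction)doSearch:(id)sender {
    // Create a fresh parser for every search instead of reusing the old one.
    NSXMLParser *parser = [[NSXMLParser alloc] initWithContentsOfURL:searchURL]; // searchURL is a placeholder
    [parser setDelegate:self];
    [parser parse];
    [parser release]; // under manual reference counting
}

- (void)parserDidEndDocument:(NSXMLParser *)parser {
    // Hand a copy of the results to the app delegate, then clear the temporary
    // array right away rather than at the start of the next search.
    appDelegate.storeLocations = [NSArray arrayWithArray:dataTempForSearch]; // hypothetical property
    [dataTempForSearch removeAllObjects];
}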

MapKit has some very nasty problems. When you get that odd behaviour of the debugger putting an arrow at that line, take a look at the call stack (usually visible on the left side of Xcode while in debug mode). I'll bet it has to do with MapKit.

Related

OS X 10.10 app doesn't get focus or run its event loop

Restarting app development after a 15-year hiatus. The current project is a conversion of an old Windows-style command-line utility into an interactive OS X windowed app.
I created a view delegate inside the main window and can draw and update an NSTableView.
The updates are generated in the app's main loop, which takes a UDP/TCP stream, parses it, and updates the view via the appropriate delegation.
Here's the problem: when I run the app, the main window apparently never gets focus (the window control buttons in the upper left are grey), the menu created from my .xib is inert, and the window itself does not respond to resizing or to mouse hits inside the table view's scroll bars. The mouse pointer also shows the spinning beach ball whenever it is over the app's window.
My guess is that I am not providing time to the Objective C run loop for event processing. I do send a "display" to my window on every iteration of my app loop, but I guess it is not sufficient (Apple is not very clear about what objects get what messages when sending this kind of update message). Am I on the right track?
Is there a way to let the system Event loop run an iteration each time through my app main loop?
Thanks!
Update: I tried explicitly providing event loop time in my app's main loop with:
[[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.1]];
There was no apparent change in app behavior.
My guess is that I am not providing time to the Objective C run loop for event processing.
In this case, I recommend you read about Grand Central Dispatch, which provides concurrency, allowing the GUI to remain responsive.
There's a good explanation of GCD here, and whilst it looks like a large and complicated subject, you'll probably only need a few lines of code to make use of GCD. For example:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    // do some work
});
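Applied to the loop described in the question, the idea would be roughly the following (keepRunning, readNextPacket, parsePacket:, and the tableView property are placeholders for whatever the app actually does):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    while (self.keepRunning) {                     // hypothetical flag
        NSData *packet = [self readNextPacket];    // blocking UDP/TCP read, now off the main thread
        [self parsePacket:packet];                 // update the model
        dispatch_async(dispatch_get_main_queue(), ^{
            [self.tableView reloadData];           // UI updates stay on the main thread
        });
    }
});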
Despite comparing the nibs and connections between my code and a freshly created .xib prototype, I could find no reason for my app not to get focus.
The fix was to move the guts of my code into the project with the new .xib.
This works, but it is unsatisfying.

Catch key events in an NSWindow

I'm trying to build a very basic web browser with 3 WebViews (2 hidden and 1 visible at all times).
I'd like to switch between these 3 WebViews by pressing CMD+1, CMD+2, CMD+3.
I have created a basic Cocoa app, added the 3 WebViews to it, referenced the WebKit framework, and I'm up and running with it; this part is working.
Now I wonder:
1) How do I catch key events? It seems so overly complicated that browsing the event structure docs gave me a headache.
[rant]From someone who has done lots of Windows Forms, GTK, Qt and Java/C#/C++ work, it seems that Xcode gets worse with every release by moving everything around and creating 3 different ways of achieving the same thing, etc. Each time I have to use it, it feels like I have to learn everything all over again.[/rant]
2) How do I specifically catch CMD+number combinations?
This is just for a quick productivity app I'm building to use in conjunction with JIRA (project management).
I'd appreciate if somebody could point me in the good direction.
Every time I stumble upon a good tutorial, it is either outdated or aimed at iOS development, which these days mostly doesn't use the same APIs as OS X.
Sorry about the rant and thanks about your help!
What you are looking to override is the NSResponder method "keyDown:". I would recommend subclassing "WebView" and creating your own "keyDown:" method (make certain to call "[super keyDown: theEvent]" somewhere in your implementation, though).
Within your "keyDown:" implementation, to look for the Command key, "NSEvent" objects respond to the "modifierFlags" method, and one of the flags is "NSCommandKeyMask". Since other modifier bits may be set at the same time, test for it with a bitwise AND rather than a straight equality comparison.
e.g.:
NSUInteger flags = [theEvent modifierFlags] & NSDeviceIndependentModifierFlagsMask;
if (flags & NSCommandKeyMask) {
    // Got it!
}
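Putting it together, a sketch of the keyDown: override in the WebView subclass might look like this (the comment marks where you would tell your controller to swap views; the surrounding logic is illustrative, not from the original question):

- (void)keyDown:(NSEvent *)theEvent
{
    NSUInteger flags = [theEvent modifierFlags] & NSDeviceIndependentModifierFlagsMask;
    NSString *chars = [theEvent charactersIgnoringModifiers];

    if ((flags & NSCommandKeyMask) && [chars length] == 1) {
        unichar c = [chars characterAtIndex:0];
        if (c >= '1' && c <= '3') {
            // Tell your window controller (or delegate) to show web view (c - '1') here.
            return;
        }
    }
    [super keyDown:theEvent]; // pass anything unhandled up the responder chain
}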
Don't worry about what kind of views they are.
Put each one inside a tab of an NSTabView, with or without tab control displayed.
With tab control, set the desired key combo per tab in IB.
With or without it, create a menu item in a menu bar menu for each tab, using the same key combos. This has long been the recommended approach: Apple suggests adding a menu bar item for everything that has a keyboard shortcut, and the menu bar will receive the keys before anything else.
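If you'd rather wire one of those menu items up in code instead of in IB, a sketch might look like this (selectFirstTab: and viewMenu are placeholders):

NSMenuItem *item = [[NSMenuItem alloc] initWithTitle:@"Web View 1"
                                              action:@selector(selectFirstTab:) // hypothetical action
                                       keyEquivalent:@"1"];
[item setKeyEquivalentModifierMask:NSCommandKeyMask];
[viewMenu addItem:item]; // viewMenu is a placeholder for a menu already in the menu bar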

Display waiting screen when user selects UITableView row

When the user selects a row in my table, I need to load a bunch of data with Core Data. This takes several seconds (at least when running on the simulator - I haven't tested on a device yet, but I imagine it will still be pretty long). I want to display a loading popover screen (I'm developing for iPad) when the user selects a row, then have it disappear once the data is loaded.
When working with UIButtons, I've always done this by wiring up both a TouchDown and a TouchUp method. I put the code to display the popover in the TouchDown handler, then do all my actual work (in this case, loading from Core Data) in the TouchUp handler. At the end of TouchUp, I close the popover.
Is there a way to split up touches in a table view the same way as ones on a button?
So while I was typing up the question I came across something in the Apple Docs for willSelectRowAtIndexPath and it looks like that is there to address this very issue. I didn't see any other topics on this on SO, so I figured I'd make use of this "Answer own question - Q&A" feature on here, so maybe people looking for this in the future will find it a little easier.
Huge FYI block:
I should note that willSelectRowAtIndexPath is actually supposed to return an NSIndexPath, unlike didSelectRowAtIndexPath (which returns void). However, I originally had
- (void)tableView:(UITableView *)tableView willSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
    //stuff
}
Note the void return type. Xcode (version 4.5) allowed this without any errors or warnings (and even offered it in the autocomplete list). When I ran the code this way, I always hit an invisible breakpoint in libobjc.A.dylib`objc_autorelease (maybe because I have the "All Exceptions" breakpoint enabled?). I couldn't get past this breakpoint, I never got any debug information in the debugger console, and the app never terminated. I happened to look at the Apple doc again and noticed that the method is supposed to return an NSIndexPath (specifically, the index path the table view should treat as the selected one), but if I hadn't happened to see that, I never would have known what the problem was.
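For reference, a sketch of the corrected split, assuming hypothetical showLoadingPopover / dismissLoadingPopover helpers and a loadDataForIndexPath: method that does the Core Data work:

- (NSIndexPath *)tableView:(UITableView *)tableView willSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
    [self showLoadingPopover];   // present the waiting screen before the selection is made
    return indexPath;            // note the NSIndexPath return type, not void
}

- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
    [self loadDataForIndexPath:indexPath];  // the slow Core Data work
    [self dismissLoadingPopover];
}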

Popup window similar to Xcode's code completion?

I would like to build something similar to the code-completion feature in Xcode 4. (The visual style and behavior, not the data structure type work required for code completion).
As the user is typing, a pop-up window presents other word choices that can be selected.
The feature in action: (screenshot omitted)
I'm not exactly sure where to start. I am mainly concerned with the visual appearance of the window and how I should populate the list with a given set of words. Later I will get into making the window follow the cursor around the screen, and so on.
I am mainly looking for an overview of how to display such data in a "window", and how to customize its appearance so it looks like a nice little informational popup rather than a full-on OS X window.
Just add a subview to your current view that happens to be a table view. Programmatically make it visible on an event (such as mouseDown), and adjust its position based on where you want it. You will need to implement the proper delegate/data source methods, but it should be pretty straightforward. You will also need a source for the words you want to use in your autocomplete; put them in an array or something for your table view data source to go through.
Like I said, it's not terribly hard, provided you are comfortable using table views and adding views to your existing view. If this doesn't explain enough, leave a comment and I can flesh this answer out more.
Add a (completions) subview to your view and set its hidden property to YES. Create a separate AutoComplete object that includes the subview as a property and fills it with potential completions. Your controller can respond to key-typed events and give the last word (the substring from the end of the text back to the first preceding blank) to the AutoComplete object on each event. The basic logic in AutoComplete could be something like:
Given an AutoComplete with a list of known words "dog, spaghetti, minute, horse, spare, speed"
When asked to complete a fragment "sp"
Then the following words should be offered as potential completions, "spaghetti, spare, speed"
Which would imply that you need to instantiate it with a list of words (this could be done in the init of your controller) and create a method "-(NSArray*) completeFragment:(NSString*)fragment;". You could drop this into an OCUnit test case and iterate over your implementation of AutoComplete until you get it right. You would then write a method that takes the completions from AutoComplete and lists them in the subview. Even better, you might create currentWord and potentialCompletions properties in AutoComplete that get updated as you send it "newFragment:(NSString*)fragment;" messages. Throw that into OCUnit and work it out, then use that potentialCompletions property to drive updates of the subview (which is probably best modeled as a custom table view).
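A minimal sketch of that completeFragment: method, assuming the object is handed its list of known words up front (names other than completeFragment: are illustrative):

@interface AutoComplete : NSObject
@property (nonatomic, copy) NSArray *knownWords;
- (NSArray *)completeFragment:(NSString *)fragment;
@end

@implementation AutoComplete
- (NSArray *)completeFragment:(NSString *)fragment
{
    // Return every known word that begins with the typed fragment (case-insensitive),
    // e.g. "sp" -> spaghetti, spare, speed.
    NSPredicate *prefix = [NSPredicate predicateWithFormat:@"SELF BEGINSWITH[cd] %@", fragment];
    return [self.knownWords filteredArrayUsingPredicate:prefix];
}
@end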

Why does the execution order of touchesBegan, target-action and touchesEnded change with fast touches of UIButton?

UPDATE: With the blush of shame I discovered that the order had nothing to do with the speed of tapping. I was calling the visual code before the super touchesEnded:withEvent: call, which is why, if you tapped really fast, the display never got the chance to draw the highlighted state before it was dismissed again. When the code happened to block the main thread for a few milliseconds, the highlighted state stayed visible until the main thread unblocked again, whereas if you tapped really fast it looked like nothing happened at all. Moving the super call to the top of the overridden method fixed it all. Sorry, if any moderator sees this post it can be deleted. shame
This problem must have been asked a 1000 times at SO, yet I can't find the explanation to match my specific issue.
I have a UIButton subclass with a custom design. Of course the design is custom enough that I can't just use the regular set…:forState: methods. I need a different background color on touch, for one, and some icons that need to flash.
To implement these view changes, I (counter-intuitively) put the display code in (A) touchesBegan:withEvent: and (Z) touchesEnded:withEvent:, before calling their respective super methods. It feels weird, but it works as intended, or so it seemed at first.
After addTarget:action:forControlEvents: was used to bind UIControlEventTouchUpInside to the method (X) itemTapped:, I expected these methods to always fire in the order (A)(X)(Z). However, if you tap the screen really fast (or click the mouse in the simulator), they fire in the order (A)(Z)(X), where (A) and (Z) follow each other in such rapid succession that the visual feedback for the tap is never seen. This is unwanted behavior, and it can't be the way to go, since so many apps need similar behavior, right?
So my question to you is: what am I doing wrong? One guess is that the visual appearance of the buttons shouldn't be manipulated in touchesBegan:withEvent: and touchesEnded:withEvent:, but then where? Or am I missing some other well-known fact?
Thanks for the nudge,
Eric-Paul.
I don't know why the order is different, but here are two suggestions to help deal with it.
What visual changes are you making to the button? If it's things like changing title/image/background image, you can do all this by modifying the highlighted state of the button. You can set a few properties like title and background image per-state. When the user's finger is down on the button, the highlighted state is turned on, so any changes you make to this state will be visible at this time. Do note that if you're making use of the selected state on the button, then you'll need to also set up the visual appearance for UIControlStateHighlighted|UIControlStateSelected, otherwise it will default back to inheriting from Normal when both highlighted & selected are on.
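For example, a sketch of configuring those states up front (the image names are placeholders):

[button setBackgroundImage:[UIImage imageNamed:@"button-normal"]
                  forState:UIControlStateNormal];
[button setBackgroundImage:[UIImage imageNamed:@"button-pressed"]
                  forState:UIControlStateHighlighted];
// If the selected state is also used, configure the combined state explicitly,
// otherwise it falls back to the Normal appearance when both flags are on.
[button setBackgroundImage:[UIImage imageNamed:@"button-pressed-selected"]
                  forState:UIControlStateHighlighted | UIControlStateSelected];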
The other suggestion is to ditch touchesBegan:withEvent: and touchesEnded:withEvent: and switch over to using the methods inherited from UIControl, namely beginTrackingWithTouch:withEvent: and endTrackingWithTouch:withEvent:. You may also want to implement continueTrackingWithTouch:withEvent: and use the touchInside property to turn off your visual tweaks if the touch leaves the control.
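A sketch of that tracking approach, assuming the visual tweaks live in a hypothetical -setPressedAppearance: method on the button subclass:

- (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    [self setPressedAppearance:YES];   // show the pressed look as soon as tracking starts
    return [super beginTrackingWithTouch:touch withEvent:event];
}

- (BOOL)continueTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    // Drop the pressed look if the finger drags outside the control.
    [self setPressedAppearance:self.touchInside];
    return [super continueTrackingWithTouch:touch withEvent:event];
}

- (void)endTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    [self setPressedAppearance:NO];
    [super endTrackingWithTouch:touch withEvent:event];
}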