I have integrated JTRevealSideBar into my project, and it's working quite well, but I have been trying to figure out whether it's possible to do something like the Facebook app, where tapping the main view controller while the sidebar is revealed dismisses the sidebar. Does anybody know if this is possible to implement?
Never having used it before, I will say: probably.
That said, the GitHub page for the pod you mention says it's no longer supported, since its approach was discouraged at WWDC14. One of the alternatives JT mentions is PKRevealController2, which seems fairly simple to use.
The reason I bring it up at all is that devs will usually give you some hint about how to do what you're asking for in one of the project's main header files. For example, PKRevealController.h lists the property
/// Whether to use the front view's entire visible area to allow pan based reveal.
@property (nonatomic, assign, readwrite) BOOL recognizesPanningOnFrontView;
which is exactly what you'd want to set to YES in your project. I'd recommend taking a look at the header files in the JTRevealSideBar pod to see if there is something similar.
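Setting it would look something like this (a minimal sketch, assuming the revealController accessor that PKRevealController adds to UIViewController):

// Allow pan-based reveal/dismiss anywhere on the front view.
// 'revealController' is PKRevealController's category accessor on UIViewController.
self.revealController.recognizesPanningOnFrontView = YES;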
Now, I have used MMDrawerController before (it's pretty great!), and it similarly has an MMCloseDrawerGestureMode mask, which can include MMCloseDrawerGestureModeBezelPanningCenterView ("The user can close the drawer by starting a pan anywhere within the bezel of the center view.").
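For illustration, configuring that in MMDrawerController looks roughly like this (if memory serves, the tap variant, MMCloseDrawerGestureModeTapCenterView, is the one that matches the Facebook-style tap-to-dismiss you describe):

// Close the drawer on a tap or a bezel pan over the center view.
MMDrawerController *drawer =
    [[MMDrawerController alloc] initWithCenterViewController:centerController
                                    leftDrawerViewController:leftController];
[drawer setCloseDrawerGestureModeMask:MMCloseDrawerGestureModeTapCenterView |
                                      MMCloseDrawerGestureModeBezelPanningCenterView];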
So you see, you'll just have to do a little digging. Otherwise you'll need to implement a gesture recognizer yourself, though I can't really say for sure where you'd put it in your particular implementation.
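If you do roll it yourself, the rough shape would be something like the following; note that frontViewController and dismissSidebar are hypothetical stand-ins for whatever JTRevealSideBar actually exposes:

// While the sidebar is revealed, let a tap on the front view dismiss it.
UITapGestureRecognizer *tap =
    [[UITapGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(frontViewTapped:)];
[self.frontViewController.view addGestureRecognizer:tap];

- (void)frontViewTapped:(UITapGestureRecognizer *)recognizer {
    [self dismissSidebar]; // hypothetical: substitute the pod's real dismiss call
}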
Generally, when editing text in a text field/input/area/editor, pressing ⌘A will select all of the current text in said field/input/area/editor. I've subclassed NSTextField and NSTextFieldCell, and no matter what I try, I can't seem to get basic "Select All" functionality working. I've tried implementing delegate protocols, intercepting events, manipulating commands made by selectors, and every other thing I can think of. What gives? (I can already hear the "Ever heard of Google?" refrains because of how simple this probably is, but I haven't found a single answer out there. I guess I can thank iOS for that.)
And before I forget to mention it, I also dragged a standard NSTextField into my nib to see if a non-subclassed NSTextField implements Select All behavior by default, and to my shock, it doesn't. Am I going crazy here, or am I completely overlooking something? Isn't Select All almost a requirement when implementing a text field? Apple's First Responder proxy handles everything under the sun (including two versions of selectAll: both selectAll and selectAll:), but the n00b is strong with me, and I can't seem to make sense of any of this.
Any help/ideas would be immensely appreciated. Cheers!
⌘A isn't handled by the text field itself: the application's main menu dispatches the keyboard shortcut. The Edit ▸ Select All menu item sends selectAll: down the responder chain to the application's current first responder, so if your nib's main menu is missing that item, or its action isn't wired to the First Responder proxy, the shortcut never reaches the field. That missing connection would explain why your regular, non-subclassed NSTextField objects are missing this functionality as well.
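If you'd rather sanity-check it in code than in Interface Builder, a minimal sketch (assuming editMenu is your existing Edit menu) looks like this:

// A Select All item with a nil target goes down the responder chain,
// which is what lets ⌘A reach the focused field's editor.
NSMenuItem *selectAllItem =
    [[NSMenuItem alloc] initWithTitle:@"Select All"
                               action:@selector(selectAll:)
                        keyEquivalent:@"a"]; // ⌘ is the default modifier
[selectAllItem setTarget:nil]; // nil target = first responder
[editMenu addItem:selectAllItem];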
In my application, we have NSTabViewItems inside an NSTabView, and there can be more than one NSTabViewItem.
To support one of our features, we need drag-and-drop on NSTabViewItems: I should be able to rearrange the NSTabViewItems, and if one is dropped elsewhere, onto another control of our application, some particular action should happen.
I went through the documentation but didn't find any support for this, yet it seems like a fairly obvious use case.
Am I missing something?
To be honest, I have never seen such behaviour. My advice would be to use an NSTabView without tabs and something like AMButtonBar to select a tab view item programmatically via selectTabViewItemAtIndex:. You could then adopt the NSDraggingSource and NSDraggingDestination protocols in that code to make the elements (of the button bar, not the tab view itself) rearrangeable.
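A rough sketch of the selection side, with plain buttons standing in for the AMButtonBar items:

- (void)awakeFromNib {
    // Hide the tab control entirely; selection happens only in code.
    [self.tabView setTabViewType:NSNoTabsNoBorder];
}

// Each "tab button" selects its item by index (each button's tag holds its index).
- (IBAction)tabButtonClicked:(id)sender {
    [self.tabView selectTabViewItemAtIndex:[sender tag]];
}

The buttons (rather than the tab view) would then be the things you make draggable via the two protocols.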
My app uses a 2nd UIWindow to show a special screen when iOS is about to take a screenshot of the app.
By accident I used [UIWindow makeKeyWindow] on my main window when I wanted to remove the 2nd window again. This really should be makeKeyAndVisible instead, but I'm wondering why it worked at all.
I mean: most of the time (99%), my 2nd window was removed as expected and my main window became visible.
I'm asking because I'm wondering if I have really found the problem or if there might still be something else?
Or could it be that the method was incorrectly bound in (previous) MonoTouch versions?
Each method maps to the selector of the same name: makeKeyAndVisible and makeKeyWindow, respectively.
As for whether the method could have been incorrectly bound in (previous) MonoTouch versions: the Git history shows (me ;-) that neither binding has changed since it was first added (more than two years ago).
Documentation about the former states:
You can also hide and reveal a window using the inherited hidden property of UIView.
Maybe this happens in your code (or even within the iOS code).
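For reference, the usual shape of that overlay pattern in Objective-C terms (coverWindow and mainWindow are stand-ins for your two windows):

// Show the cover window before iOS snapshots the app for the switcher...
- (void)applicationDidEnterBackground:(UIApplication *)application {
    self.coverWindow.windowLevel = UIWindowLevelAlert;
    [self.coverWindow makeKeyAndVisible];
}

// ...and tear it down on return. Setting hidden is what actually removes
// the window; making the main window key only transfers key status.
- (void)applicationWillEnterForeground:(UIApplication *)application {
    self.coverWindow.hidden = YES;
    [self.mainWindow makeKeyAndVisible];
}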
I'm trying to learn Objective-C by making a GUI application using Xcode 3. I'm wondering if it is possible to change the text of a button outside of a tab view depending on which tab in the tab view is selected? As I said, I am trying to learn Objective-C, so please act as though I know next to nothing in your answer. I should probably mention that I tried making an NSObject and defining an IBAction in a .h and .m file, but that didn't seem to work. (I have tried setting some breakpoints in those files, none of which were ever reached, leading me to think something isn't wired the way I think it is.)
Sorry for the long winded explanation.
Thanks for all the help!
I actually figured it out. What I needed was object delegation, this page, and of course Google to help get the correct syntax for the actual implementation.
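For anyone who lands here later, the heart of it is the NSTabView delegate method; a minimal sketch (with myButton and the tab view's delegate wired up as outlets in the nib):

// Called whenever the user switches tabs; update the outside button to match.
- (void)tabView:(NSTabView *)tabView didSelectTabViewItem:(NSTabViewItem *)tabViewItem {
    [self.myButton setTitle:[NSString stringWithFormat:@"Do %@", [tabViewItem label]]];
}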
UPDATE: With the blush of shame I discovered that the order had nothing to do with the speed of tapping. I was calling the visual code before the super touchesEnded:withEvent: call, which is why, if you tapped really fast, the display never got a chance to draw the highlighted state before being dismissed again. Because other code was briefly blocking the main thread for a few milliseconds, the highlighted state would normally stay visible until the main thread unblocked, whereas if you tapped really fast, it looked like nothing happened at all. Moving the super call up to the top of the overridden method fixed it all. Sorry, if any moderator sees this post it can be deleted. shame
This problem must have been asked a thousand times on SO, yet I can't find an explanation that matches my specific issue.
I have a UIButton subclass with a custom design. Of course the design is custom enough that I can't just use the regular setSomething:forState: methods. I need a different background color on touch, for one, and some icons that need to flash.
To implement these view changes, I (counter-intuitively) put the display code in (A) touchesBegan:withEvent: and (Z) touchesEnded:withEvent:, before calling their respective super methods. It feels weird, but it works as intended, or so it seemed at first.
After addTarget:action:forControlEvents: was used to bind UIControlEventTouchUpInside to the method (X) itemTapped:, I would expect these methods to always fire in the order (A)(X)(Z). However, if you tap the screen really fast (or click the mouse in the simulator), they fire in the order (A)(Z)(X), where (A) and (Z) follow each other in such rapid succession that the visual feedback for tapping is invisible. This is unwanted behavior, and it can't be the way to go, since so many apps need similar behavior, right?
So my question to you is: what am I doing wrong? One guess is that the buttons' visual appearance shouldn't be manipulated in touchesBegan:withEvent: and touchesEnded:withEvent:, but then where? Or am I missing some other well-known fact?
Thanks for the nudge,
Eric-Paul.
I don't know why the order is different, but here are two suggestions to help deal with it.
What visual changes are you making to the button? If it's things like changing title/image/background image, you can do all this by modifying the highlighted state of the button. You can set a few properties like title and background image per-state. When the user's finger is down on the button, the highlighted state is turned on, so any changes you make to this state will be visible at this time. Do note that if you're making use of the selected state on the button, then you'll need to also set up the visual appearance for UIControlStateHighlighted|UIControlStateSelected, otherwise it will default back to inheriting from Normal when both highlighted & selected are on.
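In code, that per-state setup looks roughly like this (the image variables are placeholders for your own assets):

// UIKit swaps these automatically as the control's state changes.
[button setTitle:@"Tap me" forState:UIControlStateNormal];
[button setBackgroundImage:normalImage forState:UIControlStateNormal];
[button setBackgroundImage:pressedImage forState:UIControlStateHighlighted];
// If you also use 'selected', cover the combined state explicitly:
[button setBackgroundImage:pressedSelectedImage
                  forState:UIControlStateHighlighted | UIControlStateSelected];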
The other suggestion is to ditch touchesBegan:withEvent: and touchesEnded:withEvent: and switch over to using the methods inherited from UIControl, namely beginTrackingWithTouch:withEvent: and endTrackingWithTouch:withEvent:. You may also want to implement continueTrackingWithTouch:withEvent: and use the touchInside property to turn off your visual tweaks if the touch leaves the control.
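A sketch of that second approach in your UIButton subclass (applyPressedLook and applyNormalLook are hypothetical helpers for your color/icon changes):

- (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
    [self applyPressedLook];
    return [super beginTrackingWithTouch:touch withEvent:event];
}

- (BOOL)continueTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
    // Drop the pressed look if the finger drags outside the control.
    if (self.touchInside) [self applyPressedLook];
    else [self applyNormalLook];
    return [super continueTrackingWithTouch:touch withEvent:event];
}

- (void)endTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
    [super endTrackingWithTouch:touch withEvent:event]; // let UIControl fire its actions first
    [self applyNormalLook];
}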