How does "Cinch App" do it? - objective-c

If you aren't familiar with Cinch, it's an application on the Mac App Store that allows you to resize ANY window to half/full screen size when you drag the window to the edge of the screen, exactly like the functionality in Windows 7.
Now my question is: how is it done? I have looked all over the Cocoa APIs for notifications/delegate methods that fire whenever a window is being dragged (ALL windows, not just windows owned by the app the code is running in) but can't find them. I've looked in the Core Graphics API and Quartz Display Services, but can't find it there either.
Any help will be greatly appreciated, as I have been looking for the past week. Thanks!
Edit: Resizing the window is easy, since it can be done through the AppleScript bridge.

Are you the developer behind i-Snap or some other Mac App Store clone of Cinch?
I'm the developer behind Cinch, and while I try to maintain an "abundance mentality" (which basically says "there's enough out there for everyone"), I've been upset by the Mac App Store lowering the barrier to entry for this market, which has produced a number of half-baked competitors.
I would be thrilled to see some real innovation around the work I have done, and not just clones looking to make a quick buck.
Anyway, you want to look at the Accessibility APIs. It's a Carbon C API. This is probably your best reference: http://developer.apple.com/library/mac/#samplecode/UIElementInspector/Introduction/Intro.html%23//apple_ref/doc/uid/DTS10000728
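To give a rough idea of the shape of that API (this is a minimal sketch, not Cinch's actual code; the function names are mine), you create an AXObserver for the target process and register for kAXWindowMovedNotification:

#import <Cocoa/Cocoa.h>
#import <ApplicationServices/ApplicationServices.h>

// Called whenever one of the observed app's windows moves.
static void WindowMoved(AXObserverRef observer, AXUIElementRef element,
                        CFStringRef notification, void *refcon) {
    NSLog(@"a window of the observed app moved");
}

// Start observing window moves in the app with the given pid.
// Requires "Enable access for assistive devices" to be turned on.
void ObserveWindowMoves(pid_t pid) {
    AXUIElementRef app = AXUIElementCreateApplication(pid);
    AXObserverRef observer = NULL;
    if (AXObserverCreate(pid, WindowMoved, &observer) == kAXErrorSuccess) {
        AXObserverAddNotification(observer, app,
                                  kAXWindowMovedNotification, NULL);
        CFRunLoopAddSource(CFRunLoopGetCurrent(),
                           AXObserverGetRunLoopSource(observer),
                           kCFRunLoopDefaultMode);
    }
    CFRelease(app);
}

Moving or resizing a window through the same API is AXUIElementSetAttributeValue with kAXPositionAttribute or kAXSizeAttribute, wrapping a CGPoint/CGSize in an AXValue.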

I've not used the Cinch app, but if I were to do this I'd expect to use Cocoa events (also see here), specifically the mouse-handling events, combined with where the mouse currently is on-screen. They probably set a flag when a window is grabbed and then track the mouse pointer until it hits a screen edge or until the mouse button is released.
Events are very powerful and provide very low level access to what is happening, but can also be very complex. Good luck!
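To illustrate the idea, here is a rough sketch using the global event monitor added in 10.6 (the edge-test helper IsAtScreenEdge is hypothetical, and this is not Cinch's actual code):

// Keep the returned monitor object around (remove it later with
// +removeMonitor:). Global monitors observe but cannot consume events,
// and they do not see events delivered to your own application.
id monitor = [NSEvent addGlobalMonitorForEventsMatchingMask:
                 (NSLeftMouseDraggedMask | NSLeftMouseUpMask)
              handler:^(NSEvent *event) {
    NSPoint p = [NSEvent mouseLocation]; // pointer in screen coordinates
    if ([event type] == NSLeftMouseUp && IsAtScreenEdge(p)) {
        // IsAtScreenEdge is hypothetical: decide here whether to
        // snap the window that was being dragged
    }
}];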

I'm not sure. Maybe the developers combine AppleScript and Carbon events. You can create Carbon event handlers to know when the mouse has been clicked or dragged.
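If you'd rather stay below Cocoa, a Quartz event tap does essentially the same job as a Carbon mouse event handler. A hedged sketch (listen-only, so it cannot interfere with other apps):

#import <ApplicationServices/ApplicationServices.h>

// Listen-only tap: the callback sees every click/drag/release system-wide.
static CGEventRef MouseTapCallback(CGEventTapProxy proxy, CGEventType type,
                                   CGEventRef event, void *refcon) {
    if (type == kCGEventLeftMouseDragged) {
        CGPoint where = CGEventGetLocation(event);
        // track the drag here
    }
    return event; // pass the event through unmodified
}

void InstallMouseTap(void) {
    CGEventMask mask = CGEventMaskBit(kCGEventLeftMouseDown) |
                       CGEventMaskBit(kCGEventLeftMouseDragged) |
                       CGEventMaskBit(kCGEventLeftMouseUp);
    CFMachPortRef tap = CGEventTapCreate(kCGSessionEventTap,
                                         kCGHeadInsertEventTap,
                                         kCGEventTapOptionListenOnly,
                                         mask, MouseTapCallback, NULL);
    if (tap != NULL) {
        CFRunLoopSourceRef source =
            CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
        CFRunLoopAddSource(CFRunLoopGetCurrent(), source,
                           kCFRunLoopDefaultMode);
        CGEventTapEnable(tap, true);
    }
}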

Related

In Objective-C, when monitoring other apps' windows using the Accessibility APIs, is there a way to know if a window was moved while the Command key was pressed?

I am writing a Cocoa Objective-C application for macOS, and I am using this very good answer here to be notified of window position changes in other applications. Within my app, I can use the event returned by [NSApp currentEvent] to determine the key modifiers. How can I ascertain the same for another app that I am monitoring? Put simply, I would like to know if a window of an application was moved or resized while the Command (or Option) key was pressed. Many thanks for any help.
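One approach, sketched under the assumption that the AXObserver from the linked answer is already wired up: +[NSEvent modifierFlags] (10.6 and later) reports the current hardware modifier state regardless of which app owns the event, so you can sample it inside the callback. The callback name here is mine:

// Hypothetical callback registered for kAXWindowMovedNotification and
// kAXWindowResizedNotification on the other app's AXUIElement.
static void WindowMovedOrResized(AXObserverRef observer,
                                 AXUIElementRef element,
                                 CFStringRef notification, void *refcon) {
    NSUInteger flags = [NSEvent modifierFlags]; // current keyboard state
    BOOL commandDown = (flags & NSCommandKeyMask) != 0;
    BOOL optionDown  = (flags & NSAlternateKeyMask) != 0;
    NSLog(@"window changed, cmd=%d opt=%d", commandDown, optionDown);
}

If you prefer to stay at the Quartz level, CGEventSourceFlagsState(kCGEventSourceStateCombinedSessionState) gives the same information.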

How to implement the "New Document" window like Pages app does?

As an OS X user, you are probably quite familiar with the "New Document" window that appears when you start some of Apple's apps, like Pages.
This window is so common that I thought it must be a system-provided window. Am I right?
I tried to mimic the window, and it turned out to be quite tedious. As a developer, I would first have to create a Finder-like browser view with list-style selection support. iCloud integration is also quite difficult, not to mention search support.
So, my question is: how can I implement this window? Is there any documentation about it? Big thanks!
Look at NSDocument. That is Apple's suggested way of creating document-based apps. You create a subclass of NSDocument to handle tasks specific to your data format, and the superclass handles all the heavy lifting, including iCloud support. I haven't used it myself, but I know that it handles open and save chores, so that's probably where the panel is coming from.
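To see the minimal shape of such a subclass, here is a hypothetical sketch (the method names are the real NSDocument overrides; the model is a placeholder). With the document types declared in Info.plist, NSDocumentController handles the rest:

// Minimal, hypothetical document class.
@interface MyDocument : NSDocument
@property (nonatomic, strong) NSData *contents; // stand-in for a real model
@end

@implementation MyDocument
- (NSData *)dataOfType:(NSString *)typeName error:(NSError **)outError {
    return self.contents; // serialize the model for saving
}
- (BOOL)readFromData:(NSData *)data ofType:(NSString *)typeName
               error:(NSError **)outError {
    self.contents = data; // rebuild the model when opening
    return YES;
}
@end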
As it turns out, to get this window, all I needed to do was enable iCloud support in the Capabilities tab of the project properties page.
A little heads-up: this doesn't work for Swift in the Xcode 6 beta. I frankly don't know why, and it took me a day of searching around to find out...

Windows Phone 7 WebBrowser control swallows manipulation events?

If I place a WebBrowser control on any page, the page no longer responds to manipulation events under the WebBrowser. Other areas of the page work fine.
It's easily confirmed by overriding OnManipulationCompleted in a page, then placing a WebBrowser control on the page. Try swiping over the WebBrowser, and OnManipulationCompleted is never called.
I can't set the WebBrowser to IsHitTestVisible=false because I need to be able to click on links. But I want the page to respond to left/right swipes.
Anyone got any bright ideas? Or know if this is a bug in the current release?
I'd like to extend what Skeets has already written.
The point is that the MS WP7 dev team has published guidelines in which they strongly discourage putting multiple layout controls that accept and react to the same set of gestures on the same page. For example, you shouldn't try to embed a Pivot inside a Panorama, because the horizontal swipes will clash and it will be hard to distinguish which control should execute its actions. The same goes for the browser: it responds to all swipes and pans, so it should not be put inside almost any scrolling control.
Now, having said that, I want to tell you it is possible to overcome this, although it may not be easy, depending on your actual case.
The most trivial thing to do, if you still want to be notified about the gestures, is to use the GestureService/GestureListener from the Silverlight Toolkit library. Even when the WebBrowser swallows the raw manipulation events, the GestureListener will still be able to notify you, because it apparently listens at some other layer (I won't go into the details here). Just fetch the library, add a reference to it, and do something like:
GestureService.GetListener(targetcontrol).Flick += myBrowserFlickHandler;
and you're done: you get the notification whenever someone flicks on the control, regardless of whether the manipulation events have e.Handled set to true. A small disclaimer here: I don't remember whether this works on 7.0, because the WebBrowser is built a bit differently there. On 7.1 and 7.5 it should work.
However, if you apply this to a WebBrowser, you will get the notification, but the WebBrowser will get it too. That means two controls will react, which can look visually jarring if you start storyboards from within the handler.
On 7.1 and the almost-current 7.5, it is possible to dig deep into the WebBrowser and completely control which manipulation events it sees. By filtering the manipulation events for the WebBrowser, and by using the GestureListener to see the events yourself, you can both block the WebBrowser from doing anything and respond with your own action instead. I've written about that extensively in a response to a similar problem; see WP7 Pivot control and a WebBrowser control for details. It is not a quick/easy/fun thing to do, though.
EDIT: and MOST importantly, it is NOT guaranteed to work in the future. Throughout the 7.1 and 7.5 SDK/OS/API versions, major internal changes are visible inside the WebBrowser control, and I would not be surprised if it changed dramatically in the next few releases. Don't rely on the things I've written about here unless you are prepared to revisit the subject in the next year or two.
This is a consequence of the way we implemented WebBrowser. The touch events are handed off directly to the browser engine. Once that happens Silverlight is basically out of the picture. Unfortunately I can't think of any workarounds that might give you what you want. -Skeets, MS dev
If you really want it:
<Grid>
<phone:WebBrowser Source="http://www.microsoft.com" />
<Rectangle Fill="Transparent" ManipulationCompleted="HandleManipulationCompleted"/>
</Grid>
But of course this completely locks down interaction with the WebBrowser control, and there's no way to echo the manipulation events back to the browser...
If you are on WP7.5 (Mango), there may be a better way of capturing the manipulation events, since the browser controls are completely different, which I read from this link.

Locking a screen in 10.6

How would I go about locking a screen the way Keychain does, meaning preventing all access to the Dock, menu bar, desktop, etc.? Basically just a black screen that I can add a password field to, for the user to return to the desktop. I am well aware of the Carbon method, but I want the NSApplication method because this is an all-Cocoa application.
Thanks~
If you can get away with not writing this code yourself, all for the better. It is usually a terrible idea to write your own code to lock the screen, considering the number of vulnerabilities that have been found in screen locking code over the years. If you have a Carbon call that can do it, go ahead and use that... don't worry about the "purity" of your Cocoa code.
However, if you decide to write this yourself, here's what you do:
First, capture all the screens using CoreGraphics. See: http://developer.apple.com/mac/library/documentation/GraphicsImaging/Conceptual/QuartzDisplayServicesConceptual/Articles/DisplayCapture.html
Next, create a new NSWindow and put it in front of the window that's used for capturing the screens. You'll have to call a CG function to get the "order" of the black window covering each screen, and order the new window in front of that. Normally, the black window has an order so far forward that everything is behind it.
Put a password field in the window. Do NOT use an ordinary text field or write your own code for password input. The password input field has a ton of special code in it so you can't copy text out of it, and other programs can't listen to keystrokes while you're typing into a password field. So use the one that Apple provides.
Last, put the computer in "kiosk mode". This mode allows you to disable alt-tab, user switching, the menubar and dock, and even the ability to force quit. See: http://developer.apple.com/mac/library/technotes/KioskMode/Introduction/Introduction.html
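Compressing the first two steps into a hypothetical sketch (the frame and field placement are placeholders; note that a borderless window needs a small NSWindow subclass returning YES from -canBecomeKeyWindow before the field will accept typing):

// 1. Capture every display; they go black immediately.
CGCaptureAllDisplays();

// 2. A borderless window above the shielding level, holding the
//    system-provided secure text field.
NSRect frame = [[NSScreen mainScreen] frame];
NSWindow *lockWindow = [[NSWindow alloc] initWithContentRect:frame
                                                   styleMask:NSBorderlessWindowMask
                                                     backing:NSBackingStoreBuffered
                                                       defer:NO];
[lockWindow setLevel:CGShieldingWindowLevel() + 1];
NSSecureTextField *passwordField =
    [[NSSecureTextField alloc] initWithFrame:NSMakeRect(20.0, 20.0, 200.0, 24.0)];
[[lockWindow contentView] addSubview:passwordField];
[lockWindow makeKeyAndOrderFront:nil];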
It's not a lot of code, it just uses a few different APIs so you'll spend most of your time bouncing between API docs. I suggest writing the screen lock code as its own application (just add a new application target to your Xcode project) and then put the screen locker inside your application bundle. This used to be (as of 10.4) how Apple Remote Desktop implemented the "Lock Screen" functionality, but I can't find the app anymore.
I believe the Cocoa replacement to the SetSystemUIMode API was not introduced until 10.6.
If you can live with Snow-Leopard-only code, the answer is - setPresentationOptions: on NSApplication.
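For example (10.6-only; this particular combination of options is just a plausible set for a screen lock, so pick the ones your app actually needs):

[NSApp setPresentationOptions:(NSApplicationPresentationHideDock |
                               NSApplicationPresentationHideMenuBar |
                               NSApplicationPresentationDisableProcessSwitching |
                               NSApplicationPresentationDisableForceQuit |
                               NSApplicationPresentationDisableSessionTermination)];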

Multiple windows or "pages" in an application

I am a newbie at Mac application development. I want to write a GUI application in Cocoa using Interface Builder. I want multiple screens, i.e., when a button on one screen is clicked, another screen should be displayed. How can I activate a new screen on a button click event?
I would heartily recommend Aaron Hillegass's book Cocoa Programming for Mac OS X. It took me from feeling like everything was impossible to being relatively competent in the space of a few short weeks. I was very impressed with it.
Apple's documentation is amazingly good, but it takes a while to get used to the style, and you will need to know which objects actually exist before you can look up how to use them, which is where Aaron's book comes in.
Your library may have a copy of it, or be able to order one for you if they don't.
I think you mean windows, not screens. Screens are the displays (monitors) on which all the user's windows from all the user's applications appear.
And I second Jonathan's recommendation of the Hillegass book.
The button has a target. That should link to the new window. As its action you can tell the window to show itself.
Take a look at:
http://developer.apple.com/DOCUMENTATION/Cocoa/Conceptual/WinPanel/WinPanel.html
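In code, that target/action pairing might look like the following sketch, where secondWindow is a hypothetical outlet connected in Interface Builder:

// The button's target is this controller; its action just brings the
// second window forward.
- (IBAction)showSecondWindow:(id)sender {
    [self.secondWindow makeKeyAndOrderFront:sender];
}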
I think what you want is the type of interface seen in Coda or System Preferences, where there is a toolbar at the top of the window that can be used to switch between the window's content.
The simplest method I have found is to use BWToolkit.
Another method is to use a series of views and switch between them when the toolbar is clicked. I've found one description here, but that's not the one I originally used (which may have been in RubyCocoa, IIRC).
NSTabView.
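To expand on that terse answer: a common trick is a tabless, borderless NSTabView whose panes are switched from the toolbar. A sketch, with tabView as a hypothetical outlet and each toolbar item's tag set to the index of the pane it selects:

- (void)awakeFromNib {
    [self.tabView setTabViewType:NSNoTabsNoBorder]; // hide the tab strip
}

- (IBAction)selectPane:(NSToolbarItem *)sender {
    [self.tabView selectTabViewItemAtIndex:[sender tag]];
}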