How do you integrate a Leap Motion Controller into a Mac application? - objective-c

I'm trying to start my own Leap Motion project on the Mac, but I'm having some problems.
I'd like to use Objective-C for this project, and I have read that there is a Leap Motion library for this language. However, I'm not sure how to integrate Leap Motion controls using this library into a Mac application.
Something similar was asked here, only that question was about the Python Leap Motion library.
How can you add Leap Motion controls to an Objective-C Mac application?

I did this recently, so I can provide the steps that I used to add Leap Motion controls to a Mac application. In fact, the source code to my Leap-enabled Molecules application is available on GitHub if you want an example to work from. All you need to build it is the Leap SDK.
The Leap Motion Objective-C headers are basically wrappers around the underlying C++ API, but you don't need to care about that, because you can work with the device entirely from Objective-C.
To add the library to your project, first install the Leap SDK somewhere in your system. From there, add references to the Leap.h, LeapMath.h, LeapObjectiveC.h, and LeapObjectiveC.mm files in your project. Add the libLeap.dylib library to your linked libraries.
To avoid compiler and linker errors during the beta period (which may have since been resolved), I needed to go to my build settings and change the C++ Standard Library to libstdc++.
You need to make sure the Leap library gets bundled with your application, so make sure it is copied over into the bundled frameworks during a build phase. I also needed to use the following Run Script build phase to make sure its internal path was set right (again, not sure if this is needed now):
echo TARGET_BUILD_DIR=${TARGET_BUILD_DIR}
echo TARGET_NAME=${TARGET_NAME}
cd "${TARGET_BUILD_DIR}/${TARGET_NAME}.app/Contents/MacOS"
ls -la
# then remap the loader path
install_name_tool -change @loader_path/libLeap.dylib @executable_path/../Resources/libLeap.dylib "${TARGET_NAME}"
One last caution when setting this up is that if you are sandboxing your Mac application, you need to enable the outgoing network connections entitlement, or the application will not be able to connect to the Leap Motion server application running on your Mac.
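If you're editing the entitlements file directly, that's the com.apple.security.network.client key (the same thing as checking "Outgoing Connections (Client)" in Xcode's Capabilities pane):
<key>com.apple.security.network.client</key>
<true/>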
Once all the setup is completed, you can start getting input from the Leap Motion Controller. A central LeapController object is what delivers Leap Motion events to your listener. You set one up using code like the following:
controller = [[LeapController alloc] init];
[controller addListener:self];
Your delegate needs to satisfy the LeapListener protocol, which has a bunch of optional callback methods:
// Controller object has initialized
- (void)onInit:(NSNotification *)notification;

// Controller has connected
- (void)onConnect:(NSNotification *)notification;

// Controller has disconnected
- (void)onDisconnect:(NSNotification *)notification;

// Exiting your LeapController object
- (void)onExit:(NSNotification *)notification;

// New frame data has arrived from the controller
- (void)onFrame:(NSNotification *)notification;
The connection and disconnection callbacks are obvious, although one thing you'll notice is that the disconnection callback is never triggered while debugging your application in Xcode. You need to run your application outside of a debugger to get that to fire when you disconnect the Leap Motion Controller.
You'll spend most of your time in the -onFrame: callback, because that's where you get your positioning updates. You get your LeapController back as the notification object from that, and you can extract the current frame data using the following:
LeapController *aController = (LeapController *)[notification object];
LeapFrame *frame = [aController frame:0];
The LeapFrame has within it all the scene data observed by the Leap, including hand and finger positions as well as overall scaling of those. The -hands method on LeapFrame gives you an array of all the detected LeapHand objects. In turn, you get each LeapFinger for a LeapHand from its -fingers method. You can also extract the palmPosition, palmNormal, and overall direction of the hand.
Directions are provided as LeapVectors, which are wrapper objects around 3-D vectors. You can extract X, Y, and Z components from them, or perform vector manipulations or comparisons between other LeapVectors.
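As a rough sketch of reading that data inside the -onFrame: callback (handleFingerTipAt: is a hypothetical method of your own, not part of the Leap SDK):
- (void)onFrame:(NSNotification *)notification
{
    LeapController *aController = (LeapController *)[notification object];
    LeapFrame *frame = [aController frame:0];

    for (LeapHand *hand in [frame hands])
    {
        LeapVector *palm = [hand palmPosition];
        NSLog(@"Palm at (%f, %f, %f)", palm.x, palm.y, palm.z);

        for (LeapFinger *finger in [hand fingers])
        {
            // tipPosition is a LeapVector in millimeters relative to the device
            LeapVector *tip = [finger tipPosition];
            [self handleFingerTipAt:tip]; // hypothetical method in your own class
        }
    }
}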
In Molecules, I adjust the scale and orientation of my molecular structures by reading the movement in-and-out or left-to-right relative to the screen. I do this by comparing the position of an open hand from one frame to the next. I store the previous frame and then do a comparison between now and the last time we saw this hand using
LeapVector *handTranslation = [firstHand translation:previousLeapFrame];
You can also compare scale, rotation, etc. between frames for hands or for all objects in the frame as one group. From the frame-to-frame translation I obtained in the above code, I can extract the individual X, Y, and Z components to rotate my model in response to X and Y translations and scale based on the Z.
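Putting that together, the relevant part of -onFrame: might look roughly like this (a sketch; previousLeapFrame is assumed to be an ivar you maintain yourself, and rotateModelByX:y: and scaleModelBy: are hypothetical stand-ins for your own model code):
if ([[frame hands] count] > 0 && previousLeapFrame != nil)
{
    LeapHand *firstHand = [[frame hands] objectAtIndex:0];
    LeapVector *handTranslation = [firstHand translation:previousLeapFrame];

    // X/Y movement rotates the model; Z movement scales it (factors are arbitrary)
    [self rotateModelByX:handTranslation.x y:handTranslation.y]; // hypothetical
    [self scaleModelBy:(1.0 + handTranslation.z * 0.01)];        // hypothetical
}
previousLeapFrame = frame;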
I use gross positioning for my application, but you can obviously go a lot finer by tracking individual fingertips. After you've got your application up and running with the device, I recommend reading through the Leap SDK header files for the Objective-C side to see what other capabilities are exposed to you. Also, experiment with different modes of interaction, because you might be surprised at what does and does not work well in 3-D space.

@Brad Larson's answer pretty much covered everything, so refer to it for more details. However, his answer shows how to use the LeapListener protocol, which works via NSNotifications.
If you don't like NSNotificationCenter :p, or don't care about which thread your updates arrive on, here's an example of how to use LeapDelegate instead.
@interface MyClass () <LeapDelegate>

@property (strong, nonatomic) LeapController *controller;

@end

- (void)startLeapMotion
{
    if (!_controller) {
        _controller = [[LeapController alloc] initWithDelegate:self];
        //
        // Could also be...
        //
        // [_controller addDelegate:self];
        // [_controller removeDelegate];
    }
}

- (void)onInit:(LeapController *)controller
{
    NSLog(@"Init");
}

- (void)onConnect:(LeapController *)controller
{
    NSLog(@"Connect");

    // Some settings that came bundled with the Leap sample project. Use if needed
    //
    // [controller setPolicyFlags:LEAP_POLICY_DEFAULT];
    // [controller enableGesture:LEAP_GESTURE_TYPE_CIRCLE enable:YES];
    // [controller enableGesture:LEAP_GESTURE_TYPE_KEY_TAP enable:YES];
    // [controller enableGesture:LEAP_GESTURE_TYPE_SCREEN_TAP enable:YES];
    // [controller enableGesture:LEAP_GESTURE_TYPE_SWIPE enable:YES];
}

- (void)onFocusGained:(LeapController *)controller
{
    NSLog(@"Focus Gained");
}

- (void)onFrame:(LeapController *)controller
{
    // Write awesome code here!!!
    NSLog(@"Frame");
}

- (void)onFocusLost:(LeapController *)controller
{
    NSLog(@"Focus Lost");
}

- (void)onDisconnect:(LeapController *)controller
{
    NSLog(@"Disconnected");
}

- (void)onExit:(LeapController *)controller
{
    NSLog(@"Exited");
}

Related

How do I re-define size/origin of app window in code, overriding nib/xib file parameters, with Obj-C, Xcode 11.3.1 on Mac OS X 10.15.2 (Catalina)?

I'll try to keep it short. I want to create a 3D FPS game, just for myself, that can run on multiple platforms, but I figured that to keep it simple, perhaps it is best to start off with something that is exclusively for macOS. I opted for Objective-C because
(a) Window Application projects in Xcode can only be coded either in Obj-C or Swift (since we are dealing with Cocoa API) and
(b) Obj-C is closer to old-school than Swift.
But before I learn to draw/render 2D-shapes on the window's canvas by writing code, I have to learn to invoke an application window with its properties set to my liking. I've spent hours doing research and experimenting with chunks of code. This is what I've tried: I open with
#import <Cocoa/Cocoa.h>
int main(int argc, const char * argv[]) {
@autoreleasepool {
Then I go with ...
1)
NSWindow *window = [[[NSApplication sharedApplication] windows] firstObject];
NSRect frame = [window frame];
frame.origin.x = 100;
frame.origin.y = 200;
frame.size.width = 100;
frame.size.height = 500;
[window setFrame: frame display: YES];
... and close with ...
NSApplicationMain(argc, argv); // runs the win display function.
}
return (0) ;
}
But no visible changes. Nothing really gets reset. So instead of (1) I tried ...
2)
NSWindow *window = [[[NSApplication sharedApplication] windows] firstObject];
NSPoint newOrigin;
newOrigin.x = 400;
newOrigin.y = 100;
[window setFrameOrigin : newOrigin];
Still nothing. Then instead of (2) I tried:
3)
NSWindowController *controller = [[NSWindowController alloc] initWithWindowNibName:@"MainMenu"];
[controller showWindow:nil];
Great. Now it's spitting out something I don't understand, especially since I'm new to Obj-C:
2020-02-08 21:53:49.782197-0800 tryout_macApp2[14333:939233] [Nib Loading] Failed to connect (delegate) outlet from (NSWindowController) to (AppDelegate): missing setter or instance variable
I remember messing around with an ApplicationDelegate, with CGSizeMake(), etc., but it just made the experience really overwhelming and frustrating. Nothing happened. Then there are NSView, NSViewController, and other classes, which is really mind-boggling and raises the question: why are there so many classes when all I want to do is override the preset origin and dimensions of the window set by the MainMenu.xib file? (By the way, this project is derived from a Window Application project provided by Xcode.)
I really can't think of anything else to add to give you the entire picture of my predicament, so if you feel that something is missing, please chime in.
[Edit:] Moving forward to phase 2 of my project here: How do I paint/draw/render a dot or color a pixel on the canvas of my window with only a few lines in Obj-C on Mac OS X using Xcode?.
The short answer is that main() is too early to be trying to do this. Instead, implement -applicationDidFinishLaunching: on your app delegate class, and do it there. Leave main() as it was originally created by Xcode's template.
After that, to obtain the window (if there's only going to be one main one), it's better to add an outlet to your app delegate and then, in the NIB, connect that outlet to the window. Then, you can use that outlet whenever you want to refer to the window.
Also, make sure that Visible at Launch is disabled for the window in the NIB. That's so you configure it as you want before showing it.
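A minimal sketch of that approach (the window outlet is an assumption here; the default Xcode template already declares one on the app delegate, connected in MainMenu.xib):
@interface AppDelegate : NSObject <NSApplicationDelegate>
@property (weak) IBOutlet NSWindow *window; // connected to the window in the NIB
@end

@implementation AppDelegate

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    // Configure the window before showing it (Visible at Launch disabled in the NIB)
    [self.window setFrame:NSMakeRect(100, 200, 100, 500) display:NO];
    [self.window makeKeyAndOrderFront:nil];
}

@end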
For a more complex app, it's probably better to not put a window into the Main Menu NIB. Instead, make a separate NIB for the window. Then, load it using a window controller object and ask that for its window.
I love Objective-C but also feel your pain, it has this testy ability to frustrate you endlessly.
I have not really developed a game but let me try and point you in the right direction. I think you need a UIViewController.
Now each UIViewController has a built-in UIView that sort of represents its visible portion. You can use this, or add a separate UIView and use that; which one you choose depends on your implementation. For now I'd suggest adding a separate UIView and using that. Once you're comfortable you can move the implementation to the UIViewController's view if you need to.
Anyhow, for now, create a UIView subclass, say MyGame or something, as for now all your code will end up there.
Doing all of the above is not easy, especially if it's your first time. It will help greatly if you can follow a tutorial. Even if the tutorial just adds a button, you can use it and replace the button with your view.
Anyhow, now that you've got that running and the view you've added shows up in green or some other neon colour just to verify that you can indeed change its properties, you're good to go.
Now you start. In MyGame, implement the
-(void)drawRect:(CGRect)rect
message, grab the context through
UIGraphicsGetCurrentContext()
and start drawing lines and stuff on it, basically the stuff I understand you are interested in doing. You can also, through the same context, change the origin of what you are doing.
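For instance, a minimal drawRect: along those lines (UIKit, matching this answer's suggestion; the coordinates and color are arbitrary):
- (void)drawRect:(CGRect)rect
{
    // Grab the current drawing context for this view
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Stroke a simple line just to verify that drawing works
    CGContextSetStrokeColorWithColor(context, [UIColor greenColor].CGColor);
    CGContextSetLineWidth(context, 2.0);
    CGContextMoveToPoint(context, 10.0, 10.0);
    CGContextAddLineToPoint(context, 100.0, 100.0);
    CGContextStrokePath(context);
}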
Hope this helps.

Cocoa window dragging

I'm using addGlobalMonitorForEventsMatchingMask to listen to events in Cocoa:
[NSEvent addGlobalMonitorForEventsMatchingMask:NSLeftMouseDraggedMask
                                       handler:^(NSEvent *event) {
    NSLog(@"Dragged...");
}];
I would like to know if I'm dragging/moving a window (and which window that is). I can find the focused window, though as far as I know, when holding Command and dragging a window it doesn't get focus.
So, can I detect whether or not I'm dragging a window?
Update:
I now have a class: "SATest : NSObject <NSWindowDelegate>" in which I implement the windowDidMove method (and later perhaps windowWillMove too.) Though, now the next step would be attaching this to a window, right? So my question now is: How do I attach the delegate to all windows of all apps?
Update 2:
I now can find a list of all open windows on the screen:
AXUIElementRef _systemWideElement;
_systemWideElement = AXUIElementCreateSystemWide();
CFArrayRef _windows;
AXUIElementCopyAttributeValues(_systemWideElement, kAXWindowsAttribute, 0, 100, &_windows);
Now I have to iterate over the windows, and for each get its NSWindow so I can add my delegate to it: [window setDelegate:self];
Update 3: To be clear, this question is about detecting the dragging of ALL windows of ALL apps. Not only the windows of my own app.
Also, I'm very new to this event and window management stuff, so no need to keep your answer short; I'm happy to read a lot :P
Thanks!
-P
To find out if a window is being dragged you need to have an object that acts as the delegate of the window by responding to the following messages of the NSWindowDelegate protocol:
windowWillMove - this tells the delegate that the window is about to move.
windowDidMove - this tells the delegate that the window has moved.
You can retrieve the NSWindow object in question by sending the object message to the notification parameter passed to these methods:
e.g.
NSWindow *draggedWindow = [notification object];
More information can be found here.
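For example, a delegate implementing both methods might look like this (a sketch; NSStringFromRect is used only for logging):
- (void)windowWillMove:(NSNotification *)notification
{
    NSLog(@"Window about to move: %@", [notification object]);
}

- (void)windowDidMove:(NSNotification *)notification
{
    NSWindow *draggedWindow = [notification object];
    NSLog(@"Window moved to %@", NSStringFromRect([draggedWindow frame]));
}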
Update:
In response to your request about getting this information for all windows the NSApplication class provides a method which returns an array of all windows owned by the application. The typical way of getting this information is to use one of the NSApplicationDelegate methods to get a reference to your application object.
For example, in your app delegate (pseudocode):
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    NSApplication *app = [aNotification object];
    // you now have a reference to your application.
    // and can iterate over the collection of windows and call
    // [window setDelegate:self]; for each window.
}
Note that you will need to add/remove your delegates as windows are added and removed. The best method for doing this is -applicationDidUpdate:.
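A sketch of that bookkeeping (note this only ever sees your own application's windows, not those of other apps):
- (void)applicationDidUpdate:(NSNotification *)aNotification
{
    NSApplication *app = [aNotification object];
    for (NSWindow *window in [app windows])
    {
        // Claim any window that doesn't have a delegate yet
        if ([window delegate] == nil)
        {
            [window setDelegate:self];
        }
    }
}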
This should be enough to get you started solving your problem.
As suggested by Benjamin, the answer lies in the Accessibility API. I was poking around in it for a while, even before I asked this question, but never got it to do what I wanted. I now have a pretty nice solution.
At a high-level I do the following:
Listen to the mouse down event and remember both which window you clicked on and its location.
Listen to the mouse up event and check whether the location has changed; if so, you know you moved a window.
You can do something similar for the size if you also want to know if you resized. There might be a better solution, but after days of trying stuff this is the only way I got it to work the way I wanted.
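A rough sketch of that mouse-down/mouse-up approach (this assumes your app is trusted for Accessibility; note that AX positions use a top-left origin, unlike AppKit's mouseLocation):
AXUIElementRef systemWide = AXUIElementCreateSystemWide();

[NSEvent addGlobalMonitorForEventsMatchingMask:NSLeftMouseDownMask
                                       handler:^(NSEvent *event) {
    // Hit-test the element under the cursor via the Accessibility API
    NSPoint p = [NSEvent mouseLocation];
    AXUIElementRef element = NULL;
    AXUIElementCopyElementAtPosition(systemWide, p.x, p.y, &element);
    // ... walk up to the containing window (kAXWindowRole) and
    // remember it together with its kAXPositionAttribute value
}];

[NSEvent addGlobalMonitorForEventsMatchingMask:NSLeftMouseUpMask
                                       handler:^(NSEvent *event) {
    // ... re-read the remembered window's kAXPositionAttribute;
    // if it differs from the mouse-down value, the window was dragged
}];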
Hope this helps anyone who was looking for something similar.
-Pablo

Create single .xib for Universal app in Interface Builder? (iOS)

Apologies if this is a silly question, but I've done some googling and searched SO and haven't found anyone asking this exact question.
I have been doing iOS development for some time now, but I am completely new to the Interface Builder. What I want to know is this: is there any way to just create ONE .xib file and then use it for both iPhone and iPad in a Universal application?
It seems silly to me to have to create them separately; why do twice the work laying something out more than once in Interface Builder when I could do it once (with minor adjustments for screen size) in code?
Please let me know if I'm missing/misunderstanding something here. Like I said, I'm a complete Interface Builder newbie :)
EDIT: I have submitted non-interface-builder games to the App Store in the past where the iPhone and iPad versions were identical, so I'm not concerned with making the game look/feel different on each device. I intend for them to look exactly the same, aside from some slight positioning changes due to the difference in aspect ratio.
If you know what the resulting view would look like, based on autoresizing, you can indeed use only one .xib. This may come in handy if the view is just some sort of shared component that autoresizes as you want it to. However, if you need the view to look very different on iPad than on iPhone, just use two .xibs. It's possible then to load the appropriate one as needed, for example in the instance initializer, like this controller's -init:
- (id)init
{
    if ([UIDevice currentDevice].userInterfaceIdiom == UIUserInterfaceIdiomPad)
    {
        self = [super initWithNibName:@"YourNibForPad" bundle:nil];
    }
    else
    {
        self = [super initWithNibName:@"YourNibForPhone" bundle:nil];
    }

    if (self) { /* initialize other ivars */ }
    return self;
}
The main reason that XIBs are separate files is that Apple feels UIs designed for iPhones/iPod touches and iPads should be tailored to each respectively. This is echoed in their iOS App Programming Guide, which says the following:
For views, the main modification is to redesign your view layouts to support the larger screen. Simply scaling existing views may work but often does not yield the best results. Your new interface should make use of the available space and take advantage of new interface elements where appropriate. Doing so is more likely to result in an interface that feels more natural to the user—and not just an iPhone app on a larger screen.
Whilst it can take time to maintain two XIBs for what is effectively one UI, I feel it is more straightforward than using one XIB and then having to connect up most of your UI elements in order to move them around programmatically when that XIB loads. After all, with two XIBs at least you can see what each UI looks like, and then make visual changes easily.
As an aside, don't forget iOS 5's Storyboards (read about them here), which make managing a view/view controller hierarchy much simpler.
Try naming them
MyCell.xib and MyCell~ipad.xib
then:
[self.tableView registerNib:[UINib nibWithNibName:@"MyCell" bundle:nil] forCellReuseIdentifier:@"MyUniqueIdentifier"];
UIKit's device-specific resource loading (the ~ipad suffix) will then pick the iPad nib automatically.
If you're using IB, you need to create two separate xib files for iPhone and iPad. You need a separate iPad xib to make your app comply with the Apple iPad UI guidelines.

How to activate a custom screensaver preview in Cocoa/Obj-C?

I have created a fairly simple screensaver that runs on Mac OS 10.6.5 without issue.
The configuration screen has accumulated quite a few different options and I'm trying to implement my own preview on the configureSheet window so the user (just me, currently) can immediately see the effect of a change without having to OK and Test each change.
I've added an NSView to the configureSheet and set the custom class in Interface Builder to my ScreenSaverView subclass. I know that drawRect: is firing, because I can remove the condition for clearing the view to black, and my custom preview no longer appears with the black background.
Here is that function (based on several fine tutorials on the Internet):
- (void)drawRect:(NSRect)rect
{
    if (shouldDrawBackground)
    {
        [super drawRect:rect];
        shouldDrawBackground = NO;
    }

    if (pausing == NO)
        [spiroForm drawForm];
}
The spiroForm class simply draws itself into the ScreenSaverView frame using NSBezierPath and, as mentioned, is not problematical for the actual screensaver or the built-in System Preferences preview. The custom preview (configureView) frame is passed into the init method for, um, itself (since its custom class is my ScreenSaverView subclass.) The -initWithFrame method is called in configureSheet before returning the configureSheet object to the OS:
[configureView initWithFrame:[configureView bounds] isPreview:YES];
Maybe I don't have to do that? It was just something I tried to see if it was required for drawing.
I eventually added a delegate to the configureSheet to try triggering the startAnimation and stopAnimation functions of my preview via windowWillBeginSheet and windowWillEndSheet notifications, but those don't appear to be getting called for some reason. The delegate is declared as NSObject <NSWindowDelegate> and I set the delegate in the configureSheet method before returning the configureSheet object.
I've been working on this for days, but haven't been able to find anything about how the OS manages the ScreenSaverView objects (which I think is what I'm trying to emulate by running my own copy.)
Does anybody have any suggestions on how to manage this or if Apple documents it somewhere that I haven't found? This isn't really required for the screensaver to work, I just think it would be fun (I also looked for a way to use the OS preview, but it's blocked while the configureSheet is activated.)
OK, there are a couple of 'duh' moments involved with the solution:
First of all, I was setting the delegate for the sheet notifications to the sheet itself. The window that the sheet belongs to gets the notifications.
Secondly, that very window that the sheet belongs to is owned by System Preferences. I don't see any way to set my delegate class as a delegate to that window, so the whole delegate approach doesn't appear to be viable.
I ended up subclassing NSWindow for the configureSheet so that I could start and stop animation on my preview by overriding the makeKeyWindow and close methods.
- (void)makeKeyWindow
{
    if (myPreview != nil)
    {
        if (![myPreview isAnimating])
        {
            [myPreview startAnimation];
        }
    }

    [super makeKeyWindow];
}
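The matching close override is symmetrical (a sketch, assuming the same myPreview outlet):
- (void)close
{
    if (myPreview != nil && [myPreview isAnimating])
    {
        [myPreview stopAnimation];
    }

    [super close];
}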
I also had to add an IBOutlet for my preview object itself and connect it in Interface Builder.
Still working out a couple of issues, but now when I click on my screensaver Options button, my configureSheet drops down and displays its own preview while you set options. Sheesh. The hoops I jump through for these little niceties. Anyway, I like it. Onward and upward.

Problems with an unusual NSOpenGLView setup

I'm trying to set up a subclassed NSOpenGLView in an unusual way and I am running into some problems. Basically, I am writing a program to perform a bioengineering simulation for my PhD, and I need to be able to compile it under both Mac OS X and Unix (my machine is a Mac, but the sim will eventually run on a more powerful Unix machine). Since the code will get longer and longer over the next year and a half, I'd rather not have to keep track of two completely different versions of the program. So, I'm hoping to be able to compile the Objective-C code under Unix by avoiding Objective-C 2.0 and keeping the interface optional (it will mostly be there to perform setup before the long simulations and to monitor things for the short ones during development).
The current version works well without the interface: the simulation is performed correctly and the program is capable of rendering OpenGL frames and exporting them into image and video files without any problems. Since I am now adding the interface (right now just a simple window with an NSOpenGLView subclass and a "start" button) on top of that (so that I can run the code without it, using an alternate version of main()), I have to "wire" OpenGL together in a weird way, since the drawing code is not in the drawRect: function, or even anywhere in the subclassed view, but instead in the "basic" program.
What I've done so far is this:
The main program (using an object called "Lattice") performs all the simulations and rendering, correctly outputting images and video to files. This also contains the NSOpenGLContext and calls [renderContext flushBuffer];
A subclass of NSOpenGLView called PottsView contains an instance of a lattice, which is initialized together with the view like this:
- (id)initWithFrame:(NSRect)frame {
    if (!(self = [super initWithFrame:frame]))
        return nil;

    // code
    frameSize.width = WIN_WIDTH;
    frameSize.height = WIN_HEIGHT;
    [self setFrameSize:frameSize];

    init_genrand64(time(0));

    if (SEED_TYPE) {
        latt = [[Lattice alloc] initWithRandomSites];
    } else {
        latt = [[Lattice alloc] initWithEllipse];
    }

    [[latt context] makeCurrentContext];

    return self;
}
drawRect() is empty.
PottsController is the object instantiated in Interface Builder which connects the start button to the view. The start button simply tells the lattice to run for a number of steps.
Now, pressing start results in the simulation running correctly (i.e. output to files and terminal), but the PottsView is not working correctly. It remains white, but if I Cmd+Tab, parts of it change to sections of a rendered frame. The same happens if I press Exposé (F3).
I've tried several combinations of flushing, setNeedsDisplay, etc, but frankly speaking I'm lost. I haven't done any programming before this April and with this being (as far as I can tell) a completely backwards way of using NSOpenGLView I'm out of ideas. I'm hoping someone can suggest how I can make the current setup work or how to completely rewire the program (while still keeping the interface optional).
It's not clear how you think you have 'wired' the context and the view together. You can have as many openglContexts as you like; just drawing into one won't make its contents show up in a random NSOpenGLView. Apologies if I have missed something.
NSOpenGLView is a fairly simple subclass of NSView that creates the context and pixel format. As you already have those you can do away with NSOpenGLView and use a custom NSView subclass.
You should look at this documentation: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_drawing/opengl_drawing.html
To draw to the screen you must flush the graphics context from -drawRect:.
This will block the main thread while the GPU processes your instructions, which could be a problem if you have many instructions. It also cannot happen more often than about 50 fps.
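In an NSOpenGLView subclass that pattern looks roughly like this (a sketch; renderScene is a hypothetical stand-in for wherever your GL calls live, in your case the Lattice object):
- (void)drawRect:(NSRect)rect
{
    [[self openGLContext] makeCurrentContext];

    // Issue the GL drawing commands for this frame
    [self renderScene]; // hypothetical

    // Push the back buffer to the screen; blocks until the GPU catches up
    [[self openGLContext] flushBuffer];
}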
If you are already rendering your frames to files, wouldn't you be better off observing the output directory and drawing each image as it is added? No OpenGL required.