Tracking tools using Leap Motion

I have just got my hands on a Leap Motion sensor and am trying to use it to track inanimate objects such as pencils and pens without hands being present, but it doesn't seem to recognize any object unless it is held in a hand. Has anyone tried this, and is it possible? I am trying to develop an application that tracks almost-static objects to detect very small movements, and the SDK doesn't seem to provide an option for this.

The Frame object's tools list gives you all the tools, whether or not they are associated with a hand. I just tested it and the Leap detected tools properly even when no hand was in view.
If you are using the JavaScript API, which doesn't provide a separate tools list, use the Frame pointables list and check the tool property.

Related

Media Foundation - Custom Media Source & Sensor Profile

I am writing an application for previewing, capturing and snapshotting camera input. To this end I am using Media Foundation for the input. One of the requirements is that this works with a Blackmagic Intensity Pro 4K capture card, which behaves similarly to a normal camera.
Media Foundation is unfortunately unable to create an IMFMediaSource object from this device. Some research led me to believe that I could implement my own MediaSource.
Then I started looking at samples, and tried to unravel the documentation.
At that point I encountered some questions:
Does anyone know if what I am trying to do is possible?
A Windows example shows a basic implementation of a source, but uses IMFSensorProfile. What is a Sensor Profile, and what should I use it for? There is almost no documentation about this.
Can somebody explain how implementing a custom media source works in terms of what actually happens on the inside? Am I simply creating my own format, or does it allow me to pull my own frames from the camera and process them myself? I tried following the MSDN guide, but no luck so far.
Specifics:
Using WPF with C#, but I can write C++ and use it from C#.
Rendering to screen uses Direct3D9.
The capture card specs can be found on their site (BlackMagic Intensity Pro 4K).
The specific problem that occurs is that I can acquire the IMFActivator for the device, but I am not able to activate it. On activation, an MF_E_INVALIDMEDIATYPE error occurs.
The IMFActivator can tell me that the device should output a UYVY format.
My last resort is using the DeckLinkAPI, but since I am working with several different types of cameras, I do not want to be stuck with another dependency.
Any pointers or help would be appreciated. Let me know if anything is unclear or needs more detail.

RealityKit: How to create custom meshes at runtime?

RealityKit has a bunch of useful functionality like built-in multiuser synchronization over a network to support shared worlds, but I can’t seem to find much documentation regarding mesh / object creation at runtime. RealityKit has some basic mesh generation functions (box, sphere, etc.) but I’d like to create my own procedural meshes at runtime (vertices and indices), and likely regenerate them every frame immediate-mode rendering style.
Firstly, is there a way to do this, or is RealityKit too closed-in without a way to do much custom rendering?
Secondly, would there be an alternative solution that might let me use some of RealityKit's synchronization? For example, is that part really just another library I can use with ARKit 3? What is it called? I'd like to be able to synchronize arbitrary data between users' devices as well, so the built-in system would be helpful.
I can’t really test this because I don’t have any devices that can support the beta software at the moment. I am trying to learn whether I’ll be able to do what I want for my program(s) if I do get the necessary hardware, but the documentation is sparse.
Feb 2022
As of macOS 12 / iOS 15, RealityKit includes APIs that let you provide your own procedurally generated meshes, primarily through the following methods:
generate(from: [MeshDescriptor])
generate(from: MeshResource.Contents)
generateAsync(from: [MeshDescriptor])
generateAsync(from: MeshResource.Contents)
These provide the means to create MeshResource instances, synchronously or asynchronously, either by constructing the models and instances yourself or by providing a list of MeshDescriptor values that you create.
The Apple documentation (as I'm writing this) is non-existent, but the APIs themselves are reasonably well documented if you look into the generated Swift interfaces. Max Cobb has an article on Medium, Getting Started with RealityKit: Procedural Geometries, that describes how to use a MeshDescriptor to build a surface mesh, and he also has a Swift package with additional geometries built on this technique, RealityGeometries, which is not hard to read through for examples of it in action.
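For illustration, here is a minimal sketch of the MeshDescriptor route; the vertex positions and the entity handling are made up for the example, but the RealityKit calls are the ones named above:

```swift
import RealityKit

// A minimal sketch: build a one-triangle MeshResource from raw vertex data.
// The vertex positions here are arbitrary example values.
var descriptor = MeshDescriptor(name: "triangle")
descriptor.positions = MeshBuffers.Positions([
    SIMD3<Float>(-0.1, 0.0, 0.0),
    SIMD3<Float>( 0.1, 0.0, 0.0),
    SIMD3<Float>( 0.0, 0.15, 0.0)
])
descriptor.primitives = .triangles([0, 1, 2])

do {
    // Synchronous generation; generateAsync(from:) is the asynchronous variant.
    let mesh = try MeshResource.generate(from: [descriptor])
    let entity = ModelEntity(mesh: mesh, materials: [SimpleMaterial()])
    // Add `entity` to an anchor in your scene as usual.
} catch {
    print("Mesh generation failed: \(error)")
}
```

To regenerate geometry every frame, you would rebuild the descriptors and call generate(from:) again, though doing so each frame is relatively expensive compared to a true immediate-mode renderer.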
As far as I know, RealityKit can otherwise only use primitives or USDZ files as models. While you can generate USDZ files using Model I/O on device, that isn't feasible for your use case.
The synchronization, however, is built into ARKit, although you have to do a little more work when you are not using RealityKit:
Create a MultipeerConnectivity session between the devices (that's something you need to do for RealityKit as well).
Configure your ARSession and set isCollaborationEnabled, which makes your session output CollaborationData via the session(_:didOutputCollaborationData:) delegate callback.
Send this data using your MultipeerConnectivity session.
When receiving data from other users, integrate it into your session using update(with:); a rough sketch of the whole flow follows below.
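A minimal sketch of those steps, assuming you drive an ARSCNView and already have a connected MCSession (the class and property names here are placeholders; the ARKit and MultipeerConnectivity calls are the standard ones):

```swift
import ARKit
import MultipeerConnectivity

// Sketch of the collaboration data flow between ARKit and MultipeerConnectivity.
final class CollaborationCoordinator: NSObject, ARSessionDelegate {
    let sceneView: ARSCNView   // placeholder: your AR view
    let mcSession: MCSession   // placeholder: an already connected peer session

    init(sceneView: ARSCNView, mcSession: MCSession) {
        self.sceneView = sceneView
        self.mcSession = mcSession
        super.init()
        sceneView.session.delegate = self
    }

    // 1) Enable collaboration on the ARSession.
    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.isCollaborationEnabled = true
        sceneView.session.run(configuration)
    }

    // 2) Forward collaboration data produced by ARKit to the other peers.
    func session(_ session: ARSession, didOutputCollaborationData data: ARSession.CollaborationData) {
        guard let encoded = try? NSKeyedArchiver.archivedData(withRootObject: data,
                                                              requiringSecureCoding: true) else { return }
        try? mcSession.send(encoded, toPeers: mcSession.connectedPeers, with: .reliable)
    }

    // 3) Call this from your MCSessionDelegate when data arrives from a peer.
    func receive(_ data: Data) {
        if let collaborationData = try? NSKeyedUnarchiver.unarchivedObject(
                ofClass: ARSession.CollaborationData.self, from: data) {
            sceneView.session.update(with: collaborationData)
        }
    }
}
```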
To send arbitrary information between users, you can either send it via MultipeerConnectivity independently of ARKit or use custom ARAnchors, which is the preferred option when you're dealing with positional data, e.g. when a user has placed an object at a specific location.
Instead of adding objects directly (using something like scene.rootNode.addChildNode() in SceneKit), you create a special ARAnchor subclass with all the information needed to add your model, and add that anchor to your session.
Then you add the object in the renderer(_:didAdd:for:) callback. This has the benefits of better tracking around your object (because you added an anchor at that position, indicating to ARKit that it should remember it) and of not needing to do anything special for multiuser experiences, because ARKit calls renderer(_:didAdd:for:) for manually added anchors as well as automatically added ones, for example when it receives collaboration data.
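As an illustration, a minimal custom anchor could look like the sketch below. The class name, the modelName property, and the "robot" model are invented for the example; the ARAnchor initializers, the NSSecureCoding overrides (which let the anchor travel inside collaboration data), and the renderer(_:didAdd:for:) delegate method are the standard ARKit/SceneKit pieces:

```swift
import ARKit

// A minimal custom anchor carrying the name of the model to place.
class ModelAnchor: ARAnchor {
    let modelName: String

    init(modelName: String, transform: simd_float4x4) {
        self.modelName = modelName
        super.init(name: "model", transform: transform)
    }

    // Required so ARKit can copy the anchor internally.
    required init(anchor: ARAnchor) {
        self.modelName = (anchor as! ModelAnchor).modelName
        super.init(anchor: anchor)
    }

    // Secure coding is what allows the anchor to be serialized into collaboration data.
    override class var supportsSecureCoding: Bool { true }

    required init?(coder: NSCoder) {
        guard let name = coder.decodeObject(of: NSString.self, forKey: "modelName") as String? else {
            return nil
        }
        self.modelName = name
        super.init(coder: coder)
    }

    override func encode(with coder: NSCoder) {
        super.encode(with: coder)
        coder.encode(modelName as NSString, forKey: "modelName")
    }
}

// Placing an object: add the anchor instead of adding a node directly, e.g.
// sceneView.session.add(anchor: ModelAnchor(modelName: "robot", transform: transform))

// Creating the node when ARKit reports the anchor, whether added locally or by a peer:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let modelAnchor = anchor as? ModelAnchor else { return }
    // Load the model named modelAnchor.modelName and attach it to `node` here.
}
```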

Create Kinect skeleton for comparison

I'm going to build an application where the user is supposed to try to mimic a static pose of a person in a picture. So I'm thinking that a Kinect is the most suitable way to get information about the user's pose.
I have found answers here on Stack Overflow suggesting that the comparison of the two skeletons (the skeleton defining the pose in the picture and the skeleton of the user) is best done by comparing joint angles etc. I was thinking that there would already be some functionality for comparing skeleton poses in the SDK, but I haven't found anything confirming that.
One thing makes me very unsure:
Is it possible to manually define a skeleton so I can create the static pose from the picture somehow? Or do I need to record it with the help of Kinect Studio? I would really prefer some tool for creating the poses by hand...
If you want users to pose and be recognized when they make the correct pose, you can follow these few steps to implement it in C#.
You can refer to the sample project Controls Basics-WPF provided by Microsoft in the SDK Browser v2.0 (Kinect for Windows).
Steps:
1) Record the pose you want to detect in Kinect Studio 2.
2) Open Visual Gesture Builder to train your clips (tagging the sections of each clip that show the correct pose).
3) Build the .vgbsln in Visual Gesture Builder to produce a .gbd file (this is imported into your project as the database that GestureDetector.cs reads).
4) Code your own logic for what should happen when the user matches a pose in GestureResultView.cs.
5) Start with one pose, then move the files into an array you can loop over once you have multiple poses.
I would prefer this approach over coding out the exact skeleton joint comparisons for the poses.
Cheers!

Leap Motion and integration tests

I am currently using Leap Motion alongside Unity3D in a VR-project I am making.
I finished my unit tests, drawing inspiration from this site: https://blogs.unity3d.com/2014/06/03/unit-testing-part-2-unit-testing-monobehaviours/. Now I need to write some integration tests to e.g. ensure that a Leap Motion hand pressing one of my UI buttons in Unity executes successfully.
I am using the Unity Test Tools and have tried to explore the features available to me, but the main problem I am seeing is that in order to use the Integration Test Tool you have to "drag" your Game Objects into the test scene. This is, however, not possible with the Leap Motion, because its Game Objects are only created when the hands enter the scene.
I was thinking the best way to get around this was to simply trigger the onClick event on the Leap UI button, but I can't find a way to do this in Unity Test Tools, which leads me to the overall question:
What is the general approach to integration testing when using Leap Motion?
I would love some input on this matter.

3D files in VB.NET

I know this will be a difficult question, so I am not necessarily looking for a direct answer but maybe a tutorial or a point in the right direction.
What I am doing is programming a robot that will be controlled by a remote operator. We have a 3D rendering of the robot in SolidWorks. What I am looking to do is get the 3D file into VB (probably using DX9) and be able to manipulate it using code so that the remote operator will have a better idea of what the robot is doing. The operator will also have live video to look at, but that doesn't really matter for this question.
Any help would be greatly appreciated. Thanks!
Sounds like a tough idea to implement. Well, for VB you are stuck with MDX 1.1 (comes with the DirectX SDK) or SlimDX (or another third-party managed DirectX wrapper). The latest XNA (the replacement for MDX 1.1/2.0b) is only available to C# coders. You can try some workarounds, but it's not recommended and you won't get much community support. That is the least you need to get your VB app to display some 3D content.
If you want to save yourself some trouble, you could use a ready-made game engine to simplify the job. Try Ogre and its managed wrapper MOgre. It was one of the candidates for my project, but I ended up with SlimDX because Ogre doesn't support video very well. Since video is not a requirement for you, you can seriously consider it. Most samples are in C#, so you will need to convert them to VB.Net, but that won't be hard.
Here comes the harder part: you need to export your model from SolidWorks to the DirectX format (*.x). I did a quick search on Google and only found a few paid tools for that. You might need to spend a bit on one of them, or spend more time looking for a free converter.
That's about it. If you have more questions, post again. Good luck.
I'm not sure what the real question is, but I suspect that you are trying to manipulate a SolidWorks model of a robot with some sort of manual input. Assuming that is the correct question, there are two aspects that need to be dealt with:
1) The SolidWorks module: Once the model of the robot is working properly in SolidWorks, a program can be written in VB.Net that manipulates the positional mates for each of the joints. Also using VB, a window can be programmed with slide bars etc. that allow the operator to "remotely" control the robot. Once this is done, there is a great opportunity to set up a table that stores the sequential steps. When completed, the VB program could be further developed to allow the robot to "cycle" through a sequence of moves. If any obstacles are also added to the model, this would be a great tool for collision detection and offline training.
2) If the question also includes the incorporation of a physical operator pendant, there are a number of potential solutions. Ideally the robot software would provide a VB library for communicating with and commanding the robot programmatically. If that is the case, the VB code could be developed with a "run" mode where the SolidWorks robot is controlled by the operator pendant instead of the controls in the VB window (as mentioned above). This would allow the operator to work "offline" with a virtual robot.
Hope this helps.