Considering engagement - Kinect

I've just read the Kinect SDK 1.7 Human Interface Guidelines and was very impressed that they consider user engagement (page 86).
I was wondering how we're supposed to implement this. I could not find anything about it in the SDK documentation, which confused me. Are we supposed to do all this by hand?
Since 1.7 is pretty new I don't expect many others to have much experience with this, but maybe someone has stumbled upon something useful.

Page 93 of the Kinect for Windows Human Interface Guidelines describes an example of implementing an engagement philosophy. The example applications it refers to are "Control Basics-WPF" and "Interaction Gallery-WPF" (SDK 1.7), or the older "Basic Interactions" example (SDK 1.6). You can run the examples to see them in action and examine the code to see how it is implemented.
Are we supposed to do all this by hand?
Microsoft provides the EngagementStateManager in Microsoft.Samples.Kinect.InteractionGallery.Utilities, which is demonstrated in the "Interaction Gallery-WPF" example. It gives you the engagement logic defined by Microsoft. If you want a different engagement model, you need to write your own.
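If you do go down the custom route, an engagement model is usually just a small amount of per-frame state computed over the skeleton stream. As a rough illustration only (the frame structure and thresholds below are hypothetical, not part of the SDK), a "hand raised while standing in range" heuristic could look like this:

```
#include <cstddef>

// Hypothetical per-frame skeleton sample; in a real application these values
// would be filled in from the Kinect skeleton stream.
struct SkeletonSample {
    bool  tracked;        // is a user currently tracked?
    float headZ;          // distance of the head from the sensor, in meters
    float handY, elbowY;  // vertical positions of the right hand and elbow
};

// Simple engagement heuristic: a tracked user within range who keeps a hand
// raised above the elbow for enough consecutive frames counts as "engaged".
class EngagementTracker {
public:
    bool Update(const SkeletonSample& s) {
        const bool candidate = s.tracked && s.headZ < 2.5f && s.handY > s.elbowY;
        framesEngaged_ = candidate ? framesEngaged_ + 1 : 0;
        engaged_ = framesEngaged_ >= kRequiredFrames;  // debounce: ~1 second at 30 fps
        return engaged_;
    }
    bool IsEngaged() const { return engaged_; }

private:
    static constexpr std::size_t kRequiredFrames = 30;
    std::size_t framesEngaged_ = 0;
    bool engaged_ = false;
};
```

The sample's EngagementStateManager plays a comparable role at a higher level, deciding which tracked user (if any) currently counts as engaged.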

Alternative to the HID API for implementing game controller support in a macOS application?

I am working on a game targeting Mac and iOS, to which game controller support was recently added.
I use the Game Controller (GC) framework for PS4/Xbox controllers and the HID API included in IOKit for the others (SteelSeries Nimbus and non-mainstream brands), simply because those controllers do not seem to work with the GC framework.
After submitting it for review, the game was rejected with the reason "The app references non-public symbols", listing all of the HID API functions used in the game, including officially documented functions that are essential to the implementation, such as:
https://developer.apple.com/documentation/iokit/1438383-iohidmanagercreate?language=objc
https://developer.apple.com/documentation/iokit/1438369-iohidmanageropen?language=objc
https://developer.apple.com/documentation/iokit/1438399-iohidmanagerregisterdevicematchi?language=objc
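For reference, these calls are the standard documented way to discover HID devices; what I do in the game is essentially the textbook setup sequence, roughly like this sketch (device-matching criteria and input-value handling are omitted here):

```
#include <IOKit/hid/IOHIDManager.h>
#include <CoreFoundation/CoreFoundation.h>
#include <cstdio>

// Called whenever a matching HID device (e.g. a gamepad) is connected.
static void DeviceMatched(void *context, IOReturn result, void *sender, IOHIDDeviceRef device) {
    std::printf("HID device connected: %p\n", (void *)device);
}

int main() {
    // Create the HID manager and match every device (a real game would pass a
    // matching dictionary restricted to the GamePad/Joystick usage pages).
    IOHIDManagerRef manager = IOHIDManagerCreate(kCFAllocatorDefault, kIOHIDOptionsTypeNone);
    IOHIDManagerSetDeviceMatching(manager, NULL);
    IOHIDManagerRegisterDeviceMatchingCallback(manager, DeviceMatched, NULL);

    // Deliver callbacks on the current run loop and start the manager.
    IOHIDManagerScheduleWithRunLoop(manager, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);
    if (IOHIDManagerOpen(manager, kIOHIDOptionsTypeNone) != kIOReturnSuccess) {
        std::fprintf(stderr, "IOHIDManagerOpen failed\n");
        return 1;
    }

    CFRunLoopRun();  // process device/input callbacks until stopped
    return 0;
}
```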
I have contacted Apple Developer Support about this, but all I received were auto-reply emails stating that they can only provide administrative-level support and advising me to direct this question toward the forums and the community; hence, I'm here.
If you happen to have an alternative solution or any experience with similar issues, please consider leaving an answer or a comment.
Thank you,
P.S.: Please also comment if you have a source listing the public APIs that developers can use. I have been following the idea "if it's documented, then I can use it", but after this incident that no longer seems to be the case.

No skeletal data with RealSense SDK 2.0?

Does anyone have info about support for human skeletal data on the SDK 2.0 (or lack thereof)?
The Intel site is oddly silent on the matter.
Would love to know if it's at least in the works. The D400 series cameras look promising.
You could look into Nuitrack, which offers sensor-independent skeleton tracking.
Note: I'm not affiliated, but I'm also trying to transition from Kinect v2 to RealSense.
To date, Intel does not have any formal plans to release skeletal tracking for the RealSense sensors, though they might provide this via third-party partners in the future.
As #zeFrenchy's answer mentioned, Nuitrack is the only skeleton tracking middleware available for the RealSense at the moment.
(If you want to know for sure, you'll have to find a contact at Intel.)
As for official online sources, here are some hints from Intel Customer support.
March 2018 - GitHub librealsense issue #1376
The RealSense SDK 2.0 is focused on providing depth across multiple operating systems and wrapper. All open source. We provide a few code samples which we hope the community will add to. We also provide tools like the viewer. What we are hoping to do is expand into middleware like person tracking or scanning via third-party partners. Please watch our site as we bring these partners on board.
Oct 2017 - GitHub librealsense issue #743
We cannot comment on development roadmaps at the moment but please provide more details on your use case and requirements are so that we may scope this feature for the communities needs.
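In practice that means the SDK gives you the raw streams and person/skeleton tracking has to come from you or from middleware such as Nuitrack built on top of them. For what it's worth, a minimal depth-only sketch with librealsense2, assuming the standard rs2::pipeline API, looks roughly like this:

```
#include <librealsense2/rs.hpp>
#include <iostream>

int main() try {
    // Start streaming with the default configuration.
    rs2::pipeline pipe;
    pipe.start();

    // Grab one set of frames and read the distance at the center of the depth image.
    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depth = frames.get_depth_frame();
    const float meters = depth.get_distance(depth.get_width() / 2, depth.get_height() / 2);

    std::cout << "Distance at image center: " << meters << " m\n";
    // Skeleton/person tracking is not part of this API; middleware such as
    // Nuitrack builds on top of streams like the one above.
    return 0;
} catch (const rs2::error &e) {
    std::cerr << "RealSense error: " << e.what() << std::endl;
    return 1;
}
```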

Evernote API in Unity3D

Since I haven't gotten any response on the Unity3D or Evernote forums, I'll try it here.
Over the last year I have worked a lot with Unity3D, mostly because of its good integration with the Vuforia augmented reality library and the fact that publishing for multiple platforms is a piece of cake.
Now I want to show notes in an AR setting and am looking at the Evernote API for this. I couldn't find anything about using it with Unity, though I can see why this is not the most common combination.
My question is: do you think I can access the Evernote API through Unity? If so, how should I do this? Or would it be wiser for this purpose to build (parts of) the application with Eclipse/Xcode?
Hope to hear from you!
Link to Evernote API: http://dev.evernote.com/doc/
The Evernote API has a C# SDK which you should be able to call from Unity. In terms of how to do it, you will probably need to download the SDK and follow the instructions yourself. Their GitHub seems like a good starting point.
One thing to note is that Unity's .NET library support for mobile clients is quite limited, and with the web player you will need to deal with sandbox security issues. But start with the standalone build first and see how you go.

CryptoAPI for dummies

Can someone point me to some books or online resources to help me learn about the Windows CryptoAPI package? I did find "Cryptography for Visual Basic" by Richard Bondi, but I'd be more interested in something aimed at C++ or at the package in general. MSDN is overwhelming!
Here is a simple tutorial that could point you in the right direction. I hope it helps.
http://www.codeproject.com/KB/security/EncryptionCryptoAPI.aspx
MSDN can be overwhelming; however, there are some rays of light. This page should give you some context:
http://msdn.microsoft.com/en-us/library/ms867086.aspx
In any case, it really depends on what you are planning to do. If you're just using CryptoAPI to perform cryptographic operations, you're fine with MSDN, or just have a look at Wincrypt.h (there's a lot of info inside that header).
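To give you a feel for the "just performing cryptographic operations" case, here is a rough, untested sketch that SHA-256 hashes a buffer using only Wincrypt.h calls (error handling kept to a minimum):

```
#include <windows.h>
#include <wincrypt.h>
#include <cstdio>
#pragma comment(lib, "advapi32.lib")

int main() {
    const BYTE data[] = "hello cryptoapi";
    HCRYPTPROV prov = 0;
    HCRYPTHASH hash = 0;

    // Acquire a context from the RSA/AES provider (no named key container needed).
    if (!CryptAcquireContext(&prov, NULL, NULL, PROV_RSA_AES, CRYPT_VERIFYCONTEXT))
        return 1;

    // Create a SHA-256 hash object and feed it the data.
    if (!CryptCreateHash(prov, CALG_SHA_256, 0, 0, &hash) ||
        !CryptHashData(hash, data, sizeof(data) - 1, 0)) {
        CryptReleaseContext(prov, 0);
        return 1;
    }

    // Read back the 32-byte digest and print it as hex.
    BYTE digest[32];
    DWORD len = sizeof(digest);
    if (CryptGetHashParam(hash, HP_HASHVAL, digest, &len, 0)) {
        for (DWORD i = 0; i < len; ++i) std::printf("%02x", digest[i]);
        std::printf("\n");
    }

    CryptDestroyHash(hash);
    CryptReleaseContext(prov, 0);
    return 0;
}
```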
However, if you're planning to develop your own CSP (cryptographic service provider), with or without hardware, you will need further information.
If you give me more details I can point you to the appropriate place (I did both things a while ago).
Regards

Is there any Subtext IDE or equivalent Example-driven Visual Programming Language/Interface published on the Internet?

I'm really excited about this new and experimental language named Subtext. But its author hasn't released anything about it besides some papers and videos. Should I clone it? Are there similar alternatives?
UPDATE: I'm looking for an example-driven VPL, not just a VPL.
As Edwards says in his related-work section, the Self programming language is very similar. It shares Subtext's emphasis on directness, uniformity, and liveness, but doesn't emphasize a tabular format (schematic tables).
A lot of work went into the Solaris version:
http://research.sun.com/self/papers/papers.html
It seems there's a Mac & Linux version too; I'm not sure how mature it is:
http://selflanguage.org/
Here's a video demo'ing Self, where they emphasize directness, uniformity, and liveness:
http://www.smalltalk.org.br/movies/
When you say "any VPL", do you mean none at all, or not a run-of-the-mill one? From the wording of the title question, I'll assume the latter. Here are a couple with some serious programming theory behind them:
Morphic is/was a/the UI piece of Self, and is now ported to Squeak:
http://wiki.squeak.org/squeak/2139
Prograph was a way-cool system, but I don't know of an available version.
A bit further out there is Kahn's Toontalk, based on Pictorial Janus:
http://www.toontalk.com/
I am sure you are aware of the VPL page on Wikipedia, which lists many different VPL languages. You have not supplied information on what you are trying to achieve, but another site to look at is Synopsis, a commercial product.
From their website:
Synopsis is a completely visual RAD tool for Windows that frees you from having to write textual code and learning unnecessary programming details. With Synopsis you can concentrate on creating software instead of wrestling with mundane and complex low-level development tasks.
[Screenshot showing how this application looks (source: codemorphis.com)]
Granted, my knowledge of this subject is limited, but I do follow it to see if something really powerful can be created. I did see a project on CodeProject or CodePlex, written in C#, that allowed VPL, but I can't find that URL.
If I ever do find that application I will edit this post!
You haven't provided more information about the features you expect from such a VPL environment, but I think "Tersus" could be an interesting thing to look at. There are many VPLs, but they are mainly targeted as educational tools or as additions to particular technologies (e.g. the VPL for Microsoft Robotics Studio) to simplify programming of common tasks. Tersus is a full-blown application development platform. It's open source and free to download for many OSes.
http://www.tersus.com
Coherence — The Director’s Cut
The Coherence home page is up at http://coherence-lang.org. The submitted version of the paper is there, with a new intro and a surprise ending.
Coherence claims to be an experimental programming language, a continuation of Subtext using other means.
Intentional has shipped, but it is still kind of alpha, with limited distribution and testing. You can make example-driven DSLs, but I don't know if the environment itself works that way.
http://lambda-the-ultimate.org/node/3287
You could also look at the work that is happening on Eve:
http://incidentalcomplexity.com/