How to stop head rotation tracking in Google VR SDK v1.5 in Unity 5.6? - google-vr-sdk

SDK v0.8 had a trackRotation variable in the GvrHead script. Setting it to false would do the job.

I don't know of a direct interface either, but you can get a similar effect via InputTracking.Recenter().
The trackRotation variable was only useful when Google VR was not natively integrated into Unity, i.e. in versions before Unity 5.6.
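If you need to fully cancel head rotation under Unity 5.6's native VR integration, one common workaround is to counter-rotate a parent of the camera every frame. Here is a minimal sketch, assuming the Main Camera sits under a dedicated rig transform; the script name and setup are illustrative, not part of the Google VR SDK:

```csharp
using UnityEngine;
using UnityEngine.VR; // Unity 5.6: VR tracking APIs live in UnityEngine.VR

// Illustrative workaround: attach this to the PARENT of the VR camera.
// Each frame we apply the inverse of the HMD rotation to the parent,
// so the camera's net world rotation stays fixed.
public class LockHeadRotation : MonoBehaviour
{
    void LateUpdate()
    {
        Quaternion head = InputTracking.GetLocalRotation(VRNode.CenterEye);
        transform.localRotation = Quaternion.Inverse(head);
    }
}
```

The counter-rotation has to go on a parent object because Unity overwrites the VR camera's own transform with the tracked pose each frame.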

Related

Setting near mode for depth cam in Kinect using OpenNI

I have been unable to find out how I can set the Kinect depth mode to near (40-300mm) using OpenNI. I am programming in C++ and would like to keep using the OpenNI libraries for retrieving the depth data; however, I cannot find any property or setting that I should use for changing modes.
I too wanted to do the same thing. I googled a bit and found libfreenect, which is a fork from OpenKinect.
This fork's "nearMode" branch provides the capability to work in NEAR_MODE. This is a compile-time solution only. I plan to compile and use it.

Evernote API in Unity3D

Since I haven't got any response on the Unity3d or Evernote forums, I'll try it here.
The last year I have worked a lot with Unity3D, mostly because of its good integration with the Vuforia augmented reality library and the fact that publishing for multiple platforms is a piece of cake.
Now I want to show notes in an AR setting and am looking at the Evernote API for this. I couldn't find anything about using it with Unity, though I can see why this is not the most common combination.
My question is: do you think I can access the Evernote API through Unity? If so, how should I do this? Or would it be wiser for this purpose to make (parts of) the application with Eclipse/Xcode?
Hope to hear from you!
Link to Evernote API: http://dev.evernote.com/doc/
The Evernote API has a C# SDK which you should be able to call through Unity. In terms of how to do it, you will probably need to download the SDK and follow the instructions yourself. Their GitHub seems like a good starting point.
One thing to note is that Unity's .NET libraries for mobile clients are quite limited, and with the web player you will need to deal with sandbox security issues. But start with the standalone build first and see how you go.
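For a rough idea of what that looks like, here is a minimal sketch using the Thrift-based Evernote C# SDK; the class and method names follow the SDK's sample code, but treat the exact API surface as something to verify against the version you download, and the token and URL values are placeholders:

```csharp
using System;
using System.Collections.Generic;
using Thrift.Protocol;   // the Thrift runtime ships with the Evernote C# SDK
using Thrift.Transport;
using Evernote.EDAM.NoteStore;
using Evernote.EDAM.Type;

public static class EvernoteExample
{
    public static void ListNotebooks()
    {
        // Placeholders: get a real developer token from dev.evernote.com;
        // the note store URL is normally obtained via UserStore.getNoteStoreUrl().
        string authToken    = "YOUR-DEVELOPER-TOKEN";
        string noteStoreUrl = "YOUR-NOTE-STORE-URL";

        TTransport transport = new THttpClient(new Uri(noteStoreUrl));
        TProtocol protocol   = new TBinaryProtocol(transport);
        var noteStore        = new NoteStore.Client(protocol);

        // One round trip: list all notebooks for the authenticated user.
        List<Notebook> notebooks = noteStore.listNotebooks(authToken);
        foreach (Notebook nb in notebooks)
            Console.WriteLine(nb.Name);
    }
}
```

On mobile targets, remember the .NET subset caveat above: the Thrift HTTP transport may require the full .NET 2.0 API compatibility level in Unity's player settings rather than the subset profile.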

Kinect motor control via Processing

I'm hacking the Kinect using some simple-openni based Processing apps for a talk I plan to give soon, and I found an API that appears to control the motor. There is a moveKinect method that appears to have been added to the main ContextWrapper interface, but I can't seem to get it to work. Looking through the SVN history and release notes, it appears to have been added last year with a note explaining that it doesn't work with the newest drivers (5.1.02, Linux64). I've tried calling the method with values in degrees and in radians, but nothing happens: I get no error and no movement. Has anyone else played with this? I'm running the second-to-latest Processing 2.0 build (the link to Processing 2.0.1 doesn't work) and the latest SimpleOpenNI package I could download.
SimpleOpenNI is the wrapper for OpenNI which allows access to the RGB/IR/Depth streams and the middleware for body/hand detection, but does not allow access to hardware like the LED, accelerometer or motor.
You should try Kinect P5, which uses libfreenect behind the scenes and supports motor control. Bear in mind you won't have support for the middleware.
If you need both middleware and hardware access, you can try OpenFrameworks with the ofxOpenNI addon. It has a hardware class that works on OSX and Linux (as sudo), allowing use of both the middleware and the motor.

Guidelines for Gesture Recognition using Kinect, OpenNI,NITE

I know this has been all over the net. I browsed a lot and found plenty of information about gesture recognition; however, I only got more confused after reading it, since most approaches seem to build on the official Microsoft Kinect SDK, and I don't want to use that.
I need to develop for Linux and was wondering if OpenNI and NITE give me the flexibility to recognize gestures.
I am trying to develop a sign language program which recognizes gestures and draws stuff (spheres, cubes, etc.) on screen.
Could anyone give me clear guidelines to begin this project? I have no clue where to start.
Any help is welcome.
Thanks
For getting started with understanding gestures, I suggest checking out a blog post I wrote a while back:
http://www.exceptontuesdays.com/gestures-with-microsoft-kinect-for-windows-sdk-v1-5/
This, plus the post it links to, go into what makes up a "gesture" and how to implement them using Kinect for Windows SDK.
You can also check out the Kinect Toolbox, which also does gestures for the official Kinect SDK:
http://kinecttoolbox.codeplex.com
These will give you a good understanding of how to deal with gestures. You can then move those concepts and ideas into an OpenNI environment.
If in your sign language system you want to use hand gestures, have a look at Kinect 3D handtracking. If it includes whole-body movement, have a look at KineticSpace. Both tools work on Linux, but the first also requires a CUDA-enabled GPU.
I think gesture recognition does not depend on whether you use the official Kinect SDK or OpenNI: if you can get the skeleton data or the depth image from the Kinect, you can extract a gesture or posture from the relations between skeleton joints over time. As far as I know, all of those SDKs provide that information.
I developed for the Kinect on Windows, but the principle is the same. I still suggest learning the principles of gesture recognition first; then you can find a way to do the recognition using other SDKs. A sketch of the basic idea follows.
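To make that principle concrete, here is a small SDK-agnostic C# sketch: feed it the hand joint position every frame (from OpenNI/NITE skeleton data or any other source) and it detects a simple right-swipe from how the positions relate over time. The thresholds are illustrative values, not tuned ones:

```csharp
using System;
using System.Collections.Generic;

// Minimal swipe detector: a gesture is read from the relation between
// joint positions over a short sliding window of time.
public class SwipeDetector
{
    struct Sample { public float X, Y; public double Time; }

    readonly Queue<Sample> window = new Queue<Sample>();
    const double WindowSeconds = 0.5;   // how much history to keep
    const float  MinDistanceX  = 0.25f; // metres the hand must travel right
    const float  MaxDriftY     = 0.10f; // allowed vertical wobble in metres

    // Call once per frame with the tracked hand position; returns true
    // on the frame a right-swipe is recognized.
    public bool Update(float handX, float handY, double timeSeconds)
    {
        window.Enqueue(new Sample { X = handX, Y = handY, Time = timeSeconds });

        // Drop samples older than the sliding window.
        while (timeSeconds - window.Peek().Time > WindowSeconds)
            window.Dequeue();

        // A swipe is a large horizontal move with little vertical drift.
        Sample oldest = window.Peek();
        bool swipe = handX - oldest.X > MinDistanceX
                  && Math.Abs(handY - oldest.Y) < MaxDriftY;

        if (swipe)
            window.Clear(); // avoid re-reporting the same swipe next frame

        return swipe;
    }
}
```

A sign language recognizer would replace this hard-coded rule with comparisons against recorded templates (e.g. dynamic time warping) or a trained classifier, but the input stays the same: joint positions over time, regardless of which SDK supplies them.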

How to check the availability of every Objective-C function in source code for Cocoa

When you read the class reference for any iOS object, you will find something like:
Available in iOS 2.0 and later.
Is there a program or a way to list every function used along with its minimum iOS version?
How can I know whether an iPhone with iOS 3.0 will run all the iOS functions I use? I can check at runtime with respondsToSelector:, but can it be done more easily from the source code?
Set your project's base SDK to iOS 3, and see if it builds.
AFAIK there is no way to gather all the APIs you use in your app into one list and check that you are building past the minimum for all of them. You will just have to check each one, one by one: highlight the API in Xcode, press Escape, and it will tell you very easily.
That said, this shouldn't be strictly necessary, since you should be testing on the minimum OS you are building for; if it crashes at any point, then you have found your issue with that particular API.