Test methodology for a hand gesture recognition application with Kinect

I am developing an immersive image navigation project with Kinect, which uses hands-free gestures. I have decided on a set of gestures that I will use in the project.
I am now working on the algorithms. What I want to know is the general methodology for testing the various components of such Kinect-related projects.
How should I design the test suite, and what fields should it contain? How should the different gesture recognition algorithms be tested? What is a reasonable number of tests to produce data worth presenting? How many participants should be involved? And what data should be collected to get the required coverage of test information?

With the Kinect SDK you can record gestures and then replay them to verify that your code detects them properly.
Doing gesture recognition is hard, though - try to record different people doing different gestures and then see if your code can recognize them all.
Most of the time machine learning is involved in creating such recognizers. For machine learning to work you need to record many, many people to train the recognizer, so I would advise against building your own and instead reuse the gestures offered by the Kinect SDK (grip and release, for example) unless you know what you are doing.
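To make the replay idea concrete, a minimal test harness can simply loop over recorded clips, run your recognizer on each one, and compare the result against a hand-assigned label. A rough C++ sketch follows - RecordedClip, loadRecordedClips and recognize are placeholders for your own code, not Kinect SDK types:

    #include <iostream>
    #include <string>
    #include <vector>

    // Placeholder for one captured skeleton frame (joint positions etc.).
    struct SkeletonFrame { /* ... */ };

    struct RecordedClip {
        std::string expectedGesture;          // ground-truth label, e.g. "swipe_left"
        std::vector<SkeletonFrame> frames;    // frames captured with Kinect Studio
    };

    // Supplied by your project: the recognizer under test and the clip loader.
    std::string recognize(const std::vector<SkeletonFrame>& frames);
    std::vector<RecordedClip> loadRecordedClips(const std::string& directory);

    int main() {
        int passed = 0, total = 0;
        for (const RecordedClip& clip : loadRecordedClips("recordings/")) {
            ++total;
            const std::string detected = recognize(clip.frames);
            if (detected == clip.expectedGesture)
                ++passed;
            else
                std::cout << "FAIL: expected " << clip.expectedGesture
                          << " but detected " << detected << "\n";
        }
        std::cout << passed << "/" << total << " clips recognized correctly\n";
    }

Recording several people per gesture and reporting the pass rate per gesture and per person gives you exactly the kind of coverage numbers you can present.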

Related

Relocalize a smartphone on a preloaded point cloud

Being a novice, I need advice on how to solve the following problem.
Say, with photogrammetry I have obtained a point cloud of part of my room. Then I upload this point cloud to an Android phone and I want it to track its camera pose relative to this point cloud in real time.
As far as I know, there can be problems with different cameras' intrinsics (a simple camera or another phone's camera vs. my phone's camera) that can affect the precision of localisation, right?
Actually, it's supposed to be an AR app, so I've tried existing SDKs - Vuforia, Wikitude, Placenote (I haven't tried ARCore yet because my device most likely won't support it). The problem is they all use their own clouds for their services and I don't want to depend on them. Ideally, it's my own PC where I perform the 3D reconstruction and from where my phone downloads the point cloud.
Do I need SLAM (with IMU fusion) or VIO on my phone? Are there any ready-to-go implementations in libraries like ARToolKit or, maybe, PCL? Will any existing SLAM system pick up a map reconstructed with other algorithms, or should I use one and the same SLAM for both mapping and localisation?
So, the main question is how to do everything ARCore and Vuforia do without using third-party servers. (I suspect the answer is to devise the same underlying layer which Vuforia and the other SDKs use to employ all available hardware.)
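For reference, the per-frame alignment step I have in mind (the current frame's cloud against the preloaded map) looks like something PCL's ICP could express, assuming a rough initial pose is already available from elsewhere - plain ICP alone won't solve the global relocalisation. The file names here are placeholders:

    #include <iostream>
    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/io/pcd_io.h>
    #include <pcl/registration/icp.h>
    #include <Eigen/Core>

    int main() {
        pcl::PointCloud<pcl::PointXYZ>::Ptr map(new pcl::PointCloud<pcl::PointXYZ>);
        pcl::PointCloud<pcl::PointXYZ>::Ptr frame(new pcl::PointCloud<pcl::PointXYZ>);

        // The preloaded reconstruction and a cloud lifted from the current camera frame.
        pcl::io::loadPCDFile("room_map.pcd", *map);
        pcl::io::loadPCDFile("current_frame.pcd", *frame);

        pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
        icp.setInputSource(frame);
        icp.setInputTarget(map);
        icp.setMaximumIterations(50);

        pcl::PointCloud<pcl::PointXYZ> aligned;
        icp.align(aligned);   // an initial guess can be passed as a second argument

        if (icp.hasConverged()) {
            // 4x4 transform taking the frame cloud into map coordinates,
            // i.e. the camera pose relative to the preloaded point cloud.
            Eigen::Matrix4f pose = icp.getFinalTransformation();
            std::cout << pose << std::endl;
        }
    }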

Advice on accessing Logitech BRIO features

For my university project I must develop a Windows app which recognises a user based on two biometrics - fingerprint and facial heat signature. This is very new and exciting territory for me, as I will encounter difficult challenges that I have not yet faced; the learning curve will be very steep but fruitful.
My question relates to the camera which I will attempt to use for facial heat signature recognition. This is it: http://support.logitech.com/en_us/product/brio
It is relatively new and Logitech have not released any developer SDK for it, so I am stuck on how to get under its hood/bonnet and integrate it with my app. I am looking for advice on how I can go about doing this and on whether it is feasible at all. If it is not, then I cannot afford to waste my time on it and will have to come up with new ideas.
As an aside, it can be used for Windows Hello.
In short, I am looking for advice on how I can approach this challenge or whether I should at all. Thank you.
Try to access it through the MediaCapture and MediaFrameSource classes. It works for me, but it's only a 340x340, 30 fps camera, and the IR diode blinks at about 15-20 Hz, so there are blinks in the IR frames.
C# was used.
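The same WinRT classes are reachable from C++/WinRT too. A rough, untested sketch of finding the camera's infrared source and reading frames might look like this - exact behaviour varies per device, and in a real app you would keep the MediaCapture and the reader alive as members rather than locals:

    #include <winrt/Windows.Foundation.h>
    #include <winrt/Windows.Foundation.Collections.h>
    #include <winrt/Windows.Media.Capture.h>
    #include <winrt/Windows.Media.Capture.Frames.h>

    using namespace winrt;
    using namespace Windows::Foundation;
    using namespace Windows::Media::Capture;
    using namespace Windows::Media::Capture::Frames;

    IAsyncAction ReadInfraredFramesAsync()
    {
        // Find a source group that exposes an infrared stream
        // (on the BRIO this is the stream Windows Hello uses).
        auto groups = co_await MediaFrameSourceGroup::FindAllAsync();
        for (auto const& group : groups)
        {
            for (auto const& info : group.SourceInfos())
            {
                if (info.SourceKind() != MediaFrameSourceKind::Infrared)
                    continue;

                MediaCaptureInitializationSettings settings;
                settings.SourceGroup(group);
                settings.StreamingCaptureMode(StreamingCaptureMode::Video);

                MediaCapture capture;
                co_await capture.InitializeAsync(settings);

                auto source = capture.FrameSources().Lookup(info.Id());
                auto reader = co_await capture.CreateFrameReaderAsync(source);
                reader.FrameArrived([](MediaFrameReader const& r, MediaFrameArrivedEventArgs const&)
                {
                    if (auto frame = r.TryAcquireLatestFrame())
                    {
                        // frame.VideoMediaFrame().SoftwareBitmap() holds the IR image.
                    }
                });
                co_await reader.StartAsync();
                co_return;
            }
        }
    }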

How to modify DirectX camera

Suppose I have a 3D (but not stereoscopic) DirectX game or program. Is there a way for a second program (or a driver) to change the camera position in the game?
I'm trying to build a head-tracking plugin or driver that I can use with my DirectX games/programs. An inertial motion sensor will give me the position of my head; my problem is using that position data to change the camera position, not the hardware/math concerns of head tracking.
I haven't been able to find anything on how to do this so far, but iZ3D was able to create two cameras near the original camera and use them for stereoscopic rendering, so I know there exists some hook into DirectX that makes camera manipulation by a second program possible.
If I am able to get this to work I'll release the code.
-Shane
Hooking Direct3D calls is by its nature just hooking DLL calls, i.e. it's not something special to D3D but a generic technique. Try googling for "hook dll" or start from here: [C++] Direct3D hooking sample. As always happens with hooks, there are many caveats and you'll have to write a fair amount of boilerplate to satisfy all the needs of the hooked application.
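To show the shape of it, here is a rough, untested sketch in the Direct3D 9 fixed-function style, using Microsoft Detours for the redirection. Obtaining the address of IDirect3DDevice9::SetTransform from the device's vtable (usually via a dummy device created inside the injected DLL) is omitted, and GetHeadPoseOffset / MultiplyMatrices stand in for your head-tracking code:

    #include <windows.h>
    #include <d3d9.h>
    #include <detours.h>

    // Must be set to the address read from the device's vtable before InstallHook() is called.
    typedef HRESULT (APIENTRY *SetTransform_t)(IDirect3DDevice9*, D3DTRANSFORMSTATETYPE, const D3DMATRIX*);
    static SetTransform_t RealSetTransform = nullptr;

    // Supplied by the head-tracking code: a small rotation/translation for the current head pose.
    D3DMATRIX GetHeadPoseOffset();
    void MultiplyMatrices(D3DMATRIX& out, const D3DMATRIX& a, const D3DMATRIX& b);

    static HRESULT APIENTRY HookedSetTransform(IDirect3DDevice9* device,
                                               D3DTRANSFORMSTATETYPE state,
                                               const D3DMATRIX* matrix)
    {
        if (state == D3DTS_VIEW && matrix)
        {
            // Nudge the game's view matrix by the current head pose before it reaches D3D.
            D3DMATRIX adjusted;
            MultiplyMatrices(adjusted, *matrix, GetHeadPoseOffset());
            return RealSetTransform(device, state, &adjusted);
        }
        return RealSetTransform(device, state, matrix);
    }

    void InstallHook()
    {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(reinterpret_cast<PVOID*>(&RealSetTransform),
                     reinterpret_cast<PVOID>(HookedSetTransform));
        DetourTransactionCommit();
    }

Note that SetTransform only covers fixed-function titles; shader-based games upload their view matrices as shader constants instead, which is part of why the caveats below matter.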
That said, manipulating the camera in games usually does not give good results. There are at least two features of modern PC games which will severely limit your idea:
Pre-clipping. Almost any game engine culls objects that fall outside the view frustum, so when you rotate the camera to the side you won't see the objects you'd expect to see in the real world - they were simply never sent to D3D, because the game doesn't know the view has changed.
Multi-pass rendering. Many popular post-processing effects are done in extra passes (either over the whole scene or just part of it); mirrors and in-game "screens" are the best-known examples. Without knowing which camera you are manipulating, you'll most likely just break the scene.
By the way, the second point is the reason why stereoscopic mode is not 100% compatible with all games. For example, in the Source engine HDR scenes are rendered in three passes, and if you don't know how to distinguish them you'll do nothing but break the game. Take a look at how nVidia implements their stereoscopic mode: they make a separate hook for every popular game, and even with this approach it's not always possible to get the expected result.

How to detect heart pulse rate without using any instrument in iOS sdk?

I am currently working on an application where I need to find the user's heart rate. I found plenty of applications that do this, but I have not been able to find a single private or public API that supports it.
Is there any framework available that can help with this? I was also wondering whether the UIAccelerometer class can be useful here, and what level of accuracy it could give.
How could the feature be implemented: by putting a finger on the iPhone camera, by putting the microphone on the jaw or wrist, or some other way?
Is there any way to detect changes in blood circulation and find the heart rate from that, or from UIAccelerometer? Any API or some code? Thank you.
There is no API for detecting heart rate; these apps do so in a variety of ways.
Some use the accelerometer to measure how the device shakes with each pulse. Others use the camera lens, with the flash on, and detect blood moving through the finger from the changes in the light level that comes back.
Various DSP techniques can be used to pull a very low-level periodic signal out of a long enough set of samples taken at an appropriate sample rate (accelerometer readings or reflected light color).
Some of the advanced math functions in the Accelerate framework API can be used as building blocks for these DSP techniques. A full explanation would require several chapters of a digital signal processing textbook, so that might be a good place to start.
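As a very rough sketch of the idea (plain C++, not the Accelerate API): given a time series of, say, the average red value of each camera frame with a finger over the lens, you can remove the slow drift and find the dominant period with autocorrelation, then convert that period to beats per minute. The rates and bounds here are illustrative only:

    #include <cstddef>
    #include <vector>

    // Estimate heart rate from brightness samples (e.g. mean red value per camera frame
    // with the finger over the lens), taken at sampleRateHz. In practice you would also
    // band-pass filter the signal and use at least ~10 seconds of samples.
    double EstimateBpm(const std::vector<double>& samples, double sampleRateHz)
    {
        const std::size_t n = samples.size();
        if (n == 0) return 0.0;

        // Remove the mean so slow lighting drift doesn't dominate the autocorrelation.
        double mean = 0.0;
        for (double s : samples) mean += s;
        mean /= static_cast<double>(n);

        // Search lags corresponding to plausible heart rates (40-200 bpm).
        const std::size_t minLag = static_cast<std::size_t>(sampleRateHz * 60.0 / 200.0);
        const std::size_t maxLag = static_cast<std::size_t>(sampleRateHz * 60.0 / 40.0);

        double bestScore = -1.0;
        std::size_t bestLag = minLag > 0 ? minLag : 1;
        for (std::size_t lag = bestLag; lag <= maxLag && lag < n; ++lag)
        {
            double score = 0.0;
            for (std::size_t i = 0; i + lag < n; ++i)
                score += (samples[i] - mean) * (samples[i + lag] - mean);
            if (score > bestScore) { bestScore = score; bestLag = lag; }
        }
        return 60.0 * sampleRateHz / static_cast<double>(bestLag);
    }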

Simulation framework

I am working on embedded software for an industrial system. The system consists of several stepper motors, sensors, cameras, etc. Currently, neither the mechanics nor the electronics are available - only the specification.
I've implemented a simulation for some parts of the mechanics/electronics, but it takes a considerable amount of effort. So my question:
Are there good, portable (Win/Linux) hardware simulation frameworks? Easy to install/use and affordable in price?
My basic requirements are:
- Send a command to a stepper and get an interrupt from a light barrier
- Recognize an object with a camera (not strictly necessary)
- Mechanical parts should move according to the steppers but stop on obstacles
- Objects should fall if there is no ground underneath
- Fluids should increase/decrease their volume in basins according to physical laws
My application is in C++/Qt, so it would be best if such a framework had C/C++ bindings.
Thank you for any advice!
I faced the same problem, as I have to develop systems that interface with several types of automation devices (robots, firmware devices, etc.). I still wanted unit tests for my code, but after writing 3 or 4 simulated devices I figured there had to be a better way.
Fortunately in my case the code was all in C#, and the final solution was to use Moq to create simple mocks of those devices. I'm not familiar with mocking frameworks for C++/Qt, but a quick search turned up a couple of results, including one made by Google (googlemock).
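For the C++ side, a minimal googlemock sketch of the same idea might look like this (the interface and test names are invented; link against gmock_main):

    #include <gmock/gmock.h>
    #include <gtest/gtest.h>

    // Abstract interface the application code talks to; the real implementation
    // would drive the actual stepper hardware.
    class IStepperMotor {
    public:
        virtual ~IStepperMotor() = default;
        virtual void moveTo(int position) = 0;
        virtual int currentPosition() const = 0;
    };

    class MockStepperMotor : public IStepperMotor {
    public:
        MOCK_METHOD(void, moveTo, (int position), (override));
        MOCK_METHOD(int, currentPosition, (), (const, override));
    };

    // Example unit under test: homing logic that should always drive the motor to 0.
    void HomeAxis(IStepperMotor& motor) { motor.moveTo(0); }

    TEST(HomingTest, MovesMotorToZero) {
        MockStepperMotor motor;
        EXPECT_CALL(motor, moveTo(0)).Times(1);
        HomeAxis(motor);
    }

The key design move is the same as with Moq: put an abstract interface between your logic and each device, so the hardware can be swapped for a mock (or later a full simulator) without touching the logic.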