Green screen Kinect

The green screen Kinect sample that ships with SDK 1.5 does not show the head fully; the hair comes out with a blur effect.
I'm unable to see the hair portion in the sample. Is there an improved version of the Kinect green screen sample available?

That is not a problem with the sample. See this: http://www.youtube.com/watch?v=6BaWwx5x7nM
The Kinect projects an IR grid to measure depth, but some "rays" pass through the gaps between the hairs, so it cannot detect that part.
It's possible to "fill" in that missing part, but you have to implement it yourself.

Related

Light problems with nwjs and threejs

I'm having problems with the lighting in my project; I'm just using a normal direct light:
light = new THREE.PointLight(0xfefffe);
But the problem is that with nwjs version 0.12.3 the objects in the scene are black (as if there were no lights), and sometimes they start flickering in red, black and green.
If I replace the original libEGL.dll and libGLESv2.dll with the ones from nwjs version 0.13.0 it works fine, but only on some hardware... I don't know what's going on. What can I do to make everything work?
Thanks
So it seems this is a hardware limitation. I used the PowerVR device driver DLLs (EGL and OpenGL ES) as a solution. I'm curious to know whether this problem also happens on Ubuntu/Linux devices.
Since you are creating a material that takes a light vector as input, check with a basic material first and see the result.
Then try a custom shader material that has ambient and specular (and optionally diffuse) terms and see the result on that machine.
Since those DLLs contain the implementation of the GLES stack on Windows, I believe you only see this issue on Windows itself.
Black output appears when the fragment shader requires a light vector that is never passed in: the texture2D result combined with an undefined light gives you a blackish output.
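To see why the missing light vector reads as black: under a standard Lambert diffuse term the sampled texel is scaled by max(dot(N, L), 0), and a uniform that is never set defaults to zero, which zeroes the whole color. A tiny C++ illustration of that arithmetic (the names are just for the example):

#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

int main() {
    Vec3 texel  = {0.8f, 0.6f, 0.4f};   // colour sampled by texture2D
    Vec3 normal = {0.0f, 0.0f, 1.0f};   // surface normal
    Vec3 light  = {0.0f, 0.0f, 0.0f};   // light vector never passed in

    // Standard Lambert diffuse: colour * max(dot(N, L), 0)
    float lambert = std::max(dot(normal, light), 0.0f);
    std::printf("out = (%.2f, %.2f, %.2f)\n",
                texel.x * lambert, texel.y * lambert, texel.z * lambert);
    // Prints out = (0.00, 0.00, 0.00) -- the "black objects" symptom.
    return 0;
}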

Focusing box on QRCode scanner on iOS 7

I'm investigating whether it's possible to implement the same functionality as the ZBar library with the iOS 7 API.
Everything was good so far thanks to this tutorial.
However, I now want to have a green box shown on the screen whenever the camera detects a QRCode. The green box is supposed to wrap around the QRCode.
From the delegate of AVCaptureMetadataOutput, I can grab an AVMetadataObject, but the bounds I get from this object are always very small, which can't be right given that my QR code is very big on the screen.
Does anyone have any ideas on how to achieve the green focusing box?
P/S: I came across this line in the documentation and couldn't understand it: "If the metadata originates from video, bounds may be expressed as scalar values from 0. - 1.". This is for the bounds property of AVMetadataObject.
You can look at this tutorial for QR code scanning using iOS 7.
I had to do the same thing in my scanner app. Here is a link that I found very useful and pretty much answered all my questions.
He goes step by step from setting up the scanner to adding the bounding box.
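On the "scalar values from 0. - 1." part: the bounds of an AVMetadataObject coming from video are normalized to the frame, which is why they look tiny; you have to map them into your view's coordinates before drawing the green box. On iOS 7 the cleanest route is AVCaptureVideoPreviewLayer's transformedMetadataObjectForMetadataObject:, which also handles rotation and video gravity. If you were to do the scaling by hand, ignoring those complications, the math is just this (a C++ sketch with made-up types):

#include <cstdio>

struct Rect { float x, y, width, height; };

// Map a normalized (0..1) metadata bounds rect into pixel coordinates.
// Real code should prefer the preview layer's own conversion, since it
// also accounts for rotation, mirroring and video gravity.
Rect normalizedToPixels(const Rect& bounds, float viewWidth, float viewHeight) {
    return { bounds.x * viewWidth,      bounds.y * viewHeight,
             bounds.width * viewWidth,  bounds.height * viewHeight };
}

int main() {
    Rect qr  = {0.40f, 0.30f, 0.25f, 0.25f};      // normalized bounds
    Rect box = normalizedToPixels(qr, 320, 568);  // 4-inch screen in points
    std::printf("green box: x=%.0f y=%.0f w=%.0f h=%.0f\n",
                box.x, box.y, box.width, box.height);
    return 0;
}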

Cocos2d - 4-inch screen displaces the game

I developed a game for the iPhone 4. Now I have problems with the iPhone 5 and its 4-inch screen: my game sits on the left side of the 4-inch screen and there is a big black border on the right side, but the buttons of the game are in the middle of the screen, in the same position as on the iPhone 4. I checked everything, but I don't know why the background images and the sprites are on the left side while the buttons are in the middle. I want everything to be in the middle, or everything on the left side. It would be great if anybody could help me! Thanks!
Cocos2D-iPhone:
If you're using the latest beta, the only change you should need to
make is to export all your images at twice the size and use the "-hd"
suffix, similar to Apple's "@2x". The documentation also says you need
to set the content scale factor of the director.
You can find the documentation here,
and more detail here.
Cocos2D-X:
Cocos2D-X has a very easy solution for the multi-resolution problem.
In fact you just need to set your design resolution and then imagine that your target device has this resolution.
If the target device really has this resolution (or another one with the same ratio), cocos2d will handle fitting the screen and your game will look the same on all devices.
And when the ratio of the target device is different, you have many options (in cocos2d terms, policies) to manage that.
For example, if you use the Exact fit policy, cocos2d will force your game (and design) to fit the target device's screen (without that black border).
Exact fit
The entire application is visible in the specified area without trying
to preserve the original aspect ratio. Distortion can occur, and the
application may appear stretched or compressed.
For more detail, take a look at this link: http://www.cocos2d-x.org/projects/cocos2d-x/wiki/Multi_resolution_support
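In cocos2d-x 2.x that is essentially a one-liner at startup; a sketch, with an example design resolution and a made-up helper name:

#include "cocos2d.h"
USING_NS_CC;

// Call this once at startup (typically at the top of
// AppDelegate::applicationDidFinishLaunching() in cocos2d-x 2.x).
void setupMultiResolution() {
    // Design the game once for 480x320; the policy decides how that
    // canvas maps onto the real screen (iPhone 4, iPhone 5, ...).
    // kResolutionExactFit stretches to fill -- no black border, but
    // possible distortion; kResolutionShowAll and kResolutionNoBorder
    // preserve the aspect ratio instead.
    CCEGLView::sharedOpenGLView()->setDesignResolutionSize(
        480.0f, 320.0f, kResolutionExactFit);
}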

Kinect: How to obtain a skeleton from back view?

Why should you ever want something like this?
I want to track a single user who is suspended above the ground in a horizontal position, facing downwards to allow free movement of the legs and arms. Think of swimming, for example.
I mounted the Kinect at the ceiling facing downwards, so I have a free view of all extremities.
The sensor is rotated 90° around the z-axis to get the maximum resolution (you're usually taller than you are wide).
Therefore the user is seen from the back, rotated by 90°. It is impossible to get a proper skeleton from OpenNI 1.5. My tests showed that OpenNI expects the user to face the camera with the head up along the y-axis (see my other answer). Microsoft's SDK behaves the same way, but I excluded it here because it doesn't let you change the source code and cannot be adapted. OpenNI 2.0 does not work with the current SensorKinect to interface the Kinect on Linux. So:
Which class generates the skeleton in OpenNI 1.5.x?
My best guess would be to rotate the prototype skeleton by 180° around y and 90° around z, if someone could tell me where to find it.
EDIT: As I just learned, there is no open-source software that generates a skeleton from depth images, so I fall back to the question in the title:
How can I get a user skeleton from a rotated back view?
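To make the rotation part concrete: what I imagine is rotating each 16-bit depth frame by 90° before it reaches the tracker (and rotating the resulting joints back afterwards), something like this sketch (plain buffer code, not tied to OpenNI):

#include <cstdint>
#include <vector>

// Rotate a 16-bit depth frame 90 degrees clockwise so a user lying
// under a ceiling-mounted, z-rotated sensor appears upright to the
// tracker. The output is height x width instead of width x height.
std::vector<uint16_t> rotateDepth90CW(const std::vector<uint16_t>& depth,
                                      int width, int height)
{
    std::vector<uint16_t> out(depth.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            // (x, y) in the source maps to (height-1-y, x) in the target.
            out[x * height + (height - 1 - y)] = depth[y * width + x];
    return out;
}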

All frames from Kinect at 30FPS

I am using the Microsoft Kinect SDK and I would like to know whether it is possible to get the depth frame, the color frame, and the skeleton data for all frames at 30 fps. Using Kinect Explorer I can see that the color and depth frames arrive at nearly 30 fps, but as soon as I choose to view the skeleton, the rate drops to around 15-20 fps.
Yes, it is possible to capture color/depth at 30fps while also capturing the skeleton.
See the image below, just in case you think me dodgy. :) This is a raw Kinect Explorer running, straight from Visual Studio 2010. My development platform is an i5 Dell laptop.
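If you want to verify this outside Kinect Explorer, here is a rough C++ sketch against the native Kinect 1.x API (NuiApi, link with Kinect10.lib) that opens all three streams and drains them in one loop; error handling and the actual frame-rate counting are trimmed out:

#include <windows.h>
#include <NuiApi.h>   // Kinect for Windows SDK 1.x

int main() {
    // Ask for color, depth + player index and skeleton in one session.
    NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR |
                  NUI_INITIALIZE_FLAG_USES_DEPTH_AND_PLAYER_INDEX |
                  NUI_INITIALIZE_FLAG_USES_SKELETON);

    HANDLE colorEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    HANDLE depthEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    HANDLE skelEvent  = CreateEvent(NULL, TRUE, FALSE, NULL);
    HANDLE colorStream = NULL, depthStream = NULL;

    NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480,
                       0, 2, colorEvent, &colorStream);
    NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH_AND_PLAYER_INDEX,
                       NUI_IMAGE_RESOLUTION_320x240,
                       0, 2, depthEvent, &depthStream);
    NuiSkeletonTrackingEnable(skelEvent, 0);

    for (;;) {
        HANDLE events[] = { colorEvent, depthEvent, skelEvent };
        WaitForMultipleObjects(3, events, FALSE, 100);

        const NUI_IMAGE_FRAME* frame = NULL;
        if (SUCCEEDED(NuiImageStreamGetNextFrame(colorStream, 0, &frame))) {
            // ... process color pixels, count the frame ...
            NuiImageStreamReleaseFrame(colorStream, frame);
        }
        if (SUCCEEDED(NuiImageStreamGetNextFrame(depthStream, 0, &frame))) {
            // ... process depth pixels, count the frame ...
            NuiImageStreamReleaseFrame(depthStream, frame);
        }
        NUI_SKELETON_FRAME skel;
        if (SUCCEEDED(NuiSkeletonGetNextFrame(0, &skel))) {
            NuiTransformSmooth(&skel, NULL);
            // ... use skel.SkeletonData, count the frame ...
        }
    }
    NuiShutdown();
    return 0;
}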