I have a question about the Kinect. I am trying to run the code from here (the second one under the video, the one that uses OpenNI).
The problem is that I have Windows with OpenNI, and this code runs on Linux; I can't convert it to Windows. Do you know how? Or is there other Windows code for finger detection?
When I run it I get errors such as "no variable declared" on lines like:
unsigned short near = depth - 100; // near clipping plane
Note: I tried to run the first code (the Windows one), but it needs the CL NUI drivers, which I didn't install since I prefer OpenNI.
I am trying to use a Kinect 2 and SDK v2 for capturing infrared images/videos.
The Kinect shows the depth and RGB images properly, but when I try to visualize the infrared stream (Infrared Basics in Kinect for Windows), it does not show any image, just a black screen.
What is the reason for this? I reinstalled SDK v2, but still the same problem. In a similar post someone suggested reinstalling a newer version, which I did, but still the same problem. Can anyone suggest a solution?
Thanks
It is better to use the "KinectConfigurationVerifierSetup" tool to test the system requirements. I also suggest you use the Infrared Basics-WPF sample in the SDK Browser; you can install that sample code on your computer. If the infrared data source still does not show, test the Kinect on another computer.
I fixed my problem by updating the GPU driver. The older version had a conflict/bug with the infrared source, which Nvidia has since removed. Once the new driver is installed, it starts showing infrared images.
Also pay attention to your graphics card settings; switching to auto or to the Intel HD Graphics may work.
Using the supplied Android demo from
https://developer.sony.com/downloads/all/sony-camera-remote-api-beta-sdk/
I connected to the WiFi network of a Sony QX1. The sample application finds the camera device and is able to connect to it.
The liveview is not displaying correctly. At most one frame is shown, and the code hits an exception in SimpleLiveViewSlicer.java:
if (commonHeader[0] != (byte) 0xFF) {
throw new IOException("Unexpected data format. (Start byte)");
}
Shooting a photo does not seem to work. Zooming does work (the lens moves). The camera works fine when using the PlayMemories app directly, so it is not a hardware issue.
Hoping for advice from Sony on this one; standard hardware and the demo application should work.
Can you provide some details of your setup?
What version of Android SDK are you compiling with?
What IDE and OS are you using?
Have you installed the latest firmware? (http://www.sony.co.uk/support/en/product/ILCE-QX1#SoftwareAndDownloads)
Edit:
We tested the sample code using a QX1 lens and the same setup as you and were able to run the sample code just fine.
One thing to check is whether the liveview is ready to transfer images. To confirm this, the client can check the "liveviewStatus" status of the "getEvent" API (see the API specification for details). Perhaps a timing issue due to connection speed is causing the crash.
I had a fragment shader working for a long time on every phone I tried. After the Android 5.0 upgrade came out, neither phone could run the app.
Through debugging, I see that the app crashes at GLES20.glLinkProgram(program)
I see the following error after compiling the shader, which only happens when running Android 5.0:
E/Adreno-ES20﹕ : Invalid texture format! Returning error!
E/Adreno-ES20﹕ : Framebuffer color attachment incomplete. Returning GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT!
What I have in my shader is:
#extension GL_OES_EGL_image_external : require
// Image texture.
uniform samplerExternalOES sTexture;
precision mediump float;
. . .
Has anybody seen this issue before?
Some docs:
https://www.khronos.org/registry/gles/extensions/OES/OES_EGL_image_external.txt
Because of these restrictions, it is possible to
bind EGLImages which have internal formats not otherwise supported by
OpenGL ES. For example some implementations may allow EGLImages with
planar or interleaved YUV data to be GLES texture target siblings. It is
up to the implementation exactly what formats are accepted.
Sounds like maybe the accepted formats changed? I'd check your format and see whether support for it was dropped, or whether it's a driver bug.
Ok, so after Android 6.0 update, this problem disappeared.
So, for Android 4.x it works, 5 it doesn't work, and 6.0 it works. I am calling this Android 5.0 issue.
I'm developing a program where a user can move a human mesh, taking the joints from a Kinect sensor.
I started with the Sinbad example, loading it into a project: everything works correctly.
Next step:
- open Sinbad.blend in Blender
- take only the skeleton
- build my own mesh and "attach" it to the skeleton
Problem: when I load MyMesh instead of Sinbad.mesh, my mesh appears totally distorted and a lot of bones move the wrong way!
Is this the right approach to my problem?
I'd like to be able to determine whether the display of the computer my app is running on is currently active or shut down. I need this for media center software, so I know whether I need to activate the display before starting movie playback.
So far I tried to use this code:
CGError err0 = CGDisplayNoErr;
CGError err1 = CGDisplayNoErr;
CGDisplayCount dspCount = 0;
err0 = CGGetActiveDisplayList(0, NULL, &dspCount);
CGDisplayCount onlineCount = 0;
err1 = CGGetOnlineDisplayList(0, NULL, &onlineCount);
// Error handling omitted for clarity ;)
NSLog(@"Found %d active and %d online displays", dspCount, onlineCount);
But this code always outputs the same thing. When I try it on my Mac mini with the display turned off, I get the following output:
Found 1 active and 1 online displays
The display is not in standby mode, as I disconnect its power when it is not in use. I also tried this on my MacBook, which has an internal and an external display. There it returns:
Found 2 active and 2 online displays
Here it is the same: I deactivate the display and disconnect its power, but it still returns as being active.
The display on the Mac mini is a TV set connected with a DVI-to-HDMI cable. The display on the MacBook is connected with a DVI-to-VGA adapter.
I hope somebody has an idea how to solve this. Thanks in advance.
It sounds like you want to know whether any connected display is asleep or not?
Have you looked at the CGDisplayIsAsleep function?
http://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/Quartz_Services_Ref/Reference/reference.html#//apple_ref/c/func/CGDisplayIsAsleep
To close this open question: my final finding was that as soon as an external monitor is connected to the computer, the given methods will report it as present, and they do so even when the monitor is powered off and disconnected from the power source.
So as far as I can tell, there is no way to find out what I'd like to know :(
Since I control the event that activates the monitor from my application (in my case it's a TV that I control with a USB-to-IR box), I can track the monitor's state that way. The only downside is that if the application crashes, I lose the state. But that's the best solution I could find.