We have been trying to get skeleton tracking working in KinectA. We are on a Mac. Our Kinect already works with the camera connected to the computer; we used Homebrew and the Terminal to set it up. We need the Kinect to connect to KinectA, but the camera is not showing up in the program. Is there an alternative option we can use, or does anyone know how to make it work?
Thanks
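Not a KinectA fix, but one thing worth checking first is whether the Kinect is actually delivering frames on the Mac at all, outside of KinectA. Below is a minimal sketch, assuming libfreenect was installed through Homebrew (brew install libfreenect) and its Python bindings (the freenect module) are available; it only reads raw depth and RGB frames and does no skeleton tracking, so treat it as a sanity check rather than an alternative to KinectA.

# Sanity check: does the Kinect v1 deliver frames on macOS via libfreenect?
# Assumes `brew install libfreenect` plus the `freenect` Python bindings.
# Skeleton tracking still needs middleware such as OpenNI/NiTE on top.
import freenect

def main():
    ret = freenect.sync_get_depth()
    if ret is None:
        print("No Kinect found - check the USB/power adapter and the libfreenect install.")
        return
    depth, _ = ret
    print("Depth frame:", depth.shape, "raw range:", depth.min(), "-", depth.max())
    video = freenect.sync_get_video()
    if video is not None:
        print("RGB frame:  ", video[0].shape)
    freenect.sync_stop()

if __name__ == "__main__":
    main()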
So instead of buying another webcam, I decided to try out the Xbox 360 Kinect stored away in our storage. I successfully paired it with my cMP 5,1 running macOS Mojave and Windows 10 in Boot Camp. I bought the adapter to connect the Kinect to USB from Amazon for $12 ("JETEHO 1 Pc Xbox 360 Kinect Sensor USB AV Adapter"). Now, with Kinect for Windows Runtime v1.8 installed, the devices that show up when the Kinect is connected are as follows:
Kinect for Windows recognizes the Kinect as a camera, but this is not what Teams recognizes
So the camera is recognized by the computer, but my problem is that when I open MS Teams it does not show up as one of the devices. The tricky thing is that MS Teams recognizes the Kinect as an audio device, but I need it to be recognized as a video device.
MS Teams recognizes the Kinect as a USB audio device
The Kinect as a USB audio device can also be seen in Device Manager:
Kinect is recognized as a USB audio device, but I need it to be recognized as video as well so I can use it as a webcam in MS Teams
Below is a screenshot of the MS Teams settings not showing the camera: Settings in MS Teams showing no camera detected
So, I wanted to know: could there be something I am missing, perhaps a driver of some sort? I also have the KinectCam.ax file installed and registered, again with no change.
I've tried installing the various SDKs from the Microsoft website, including v1.8 and v2.0, and even tried installing the most recent runtime from Microsoft for Kinect, v2.2, to no avail. The machine still doesn't recognize the Kinect as a video or camera device in Device Manager, and it's driving me bonkers! I have OpenNI Virtual Cam v0.9.5.0 installed and it recognizes the Kinect. The SDKs I've installed also recognize the camera on the Kinect, but MS Teams does not, and that is what I was hoping to use it with as a webcam. I will try any solution at this point. Also, this website proved somewhat useful, but for some reason I am not getting the same results as they did.
https://answers.microsoft.com/en-us/msoffice/forum/all/using-a-kinect-in-teams/3891a30a-2d4e-4d22-8d92-aa7b3ffd779c
GammalSkinka found a solution using a virtual cam to get Teams to recognize the device, but I'm not getting the same results, and neither is cbscript_chris.
It was working for them with just the runtime installed and without the SDK, but not for me =(
Again, any help would be greatly appreciated!
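One way to narrow this down, separate from Teams itself, is to check whether Windows exposes any Kinect-backed camera to ordinary capture APIs at all, since Teams can only list devices that show up there. Below is a rough sketch, assuming Python with the opencv-python package installed; the index range and backend choice are just things to probe with, not anything Teams-specific.

# Probe camera indices to see which ones actually deliver frames.
# If neither the Kinect nor the OpenNI/KinectCam virtual camera ever shows up
# here, the problem is in the driver / virtual-camera layer rather than in Teams.
# Assumes the opencv-python package is installed.
import cv2

for index in range(10):
    cap = cv2.VideoCapture(index, cv2.CAP_DSHOW)  # DirectShow; cv2.CAP_MSMF is also worth trying
    if cap.isOpened():
        ok, frame = cap.read()
        if ok:
            print(f"Camera index {index}: delivers {frame.shape[1]}x{frame.shape[0]} frames")
        else:
            print(f"Camera index {index}: opens but gives no frames")
    cap.release()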
The Azure Kinect Sensor SDK requires the Kinect to connect directly to a PC via USB, but that is not suitable for my setup. Is it possible to use something like a Raspberry Pi to transfer the Kinect sensor data and process it on a remote server? Do you have any suggestions for this?
I found that the Kinect SDK does not support the ARM architecture (Raspberry Pi), so what other device can I use?
Processing the depth camera image requires GPU compute, so we don't currently support headless operation. Some users have successfully enabled headless operation on Linux, but it is not a straightforward path. See https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/810 for more info.
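If the capture machine has to stay small, the usual compromise is a compact x86-64 box that meets the SDK's requirements sitting next to the sensor and relaying frames to the remote server, rather than a Raspberry Pi. Here is a rough sketch of that relay pattern, assuming the pyk4a Python bindings for the Azure Kinect SDK; the server address, frame count, and wire format are placeholders, and in practice you would add compression and error handling.

# Sketch of the "capture box relays frames to a remote server" pattern.
# The Azure Kinect SDK still has to run on the capture box (x86-64 with a
# supported GPU - not a Raspberry Pi), but the heavy processing can live elsewhere.
# Assumes the pyk4a package; the host/port below are placeholders.
import socket
import struct

import pyk4a
from pyk4a import Config, PyK4A

SERVER = ("192.168.1.50", 9000)  # hypothetical remote processing server

k4a = PyK4A(Config(depth_mode=pyk4a.DepthMode.NFOV_UNBINNED))
k4a.start()

with socket.create_connection(SERVER) as sock:
    for _ in range(100):                      # send 100 depth frames as a demo
        capture = k4a.get_capture()
        if capture.depth is None:
            continue
        payload = capture.depth.tobytes()     # uint16 depth map as raw bytes
        header = struct.pack("!III", capture.depth.shape[0],
                             capture.depth.shape[1], len(payload))
        sock.sendall(header + payload)

k4a.stop()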
So I've got a few Kinect v2s and am hoping to set up an array of them to get a 3D recording of an area in space (the eventual goal is to build a 360° image from multiple point clouds), but at the moment I can't even get one working on a machine.
I've installed the official SDK onto a Windows 10 device, and when opening Kinect Studio I get nothing but a grey screen while connected to the Kinect. Running the Kinect Configuration Verifier says the USB controller is unknown and the system is waiting for the Kinect to respond. The Kinect itself does not light up, and its cooling fan does not turn on.
I have reinstalled the SDK, tried three different Kinects, tried various drivers and troubleshooting guides, and still cannot get anything out of the Kinect.
The best answer I've found is that only some USB controllers are compatible, but every PC I have tried (currently five machines) has "Intel(R) USB 3.0 eXtensible Host Controller - 1.0 (Microsoft)". So basically, do I really have to get a PCI USB controller or another machine, or is there any way to get the current system to work with the Kinect v2 at all?
Also, if I do need to buy a new device or PCI card, are there any recommendations for a setup that would ideally run 4-5 Kinects?
Unfortunately, the Kinect v2 SDK prevents you from running more than one Kinect on a system at the same time:
Sensor Acquisition and Startup
Kinect for Windows supports one sensor, which is called the default sensor. The KinectSensor Class has static members to help configure the Kinect sensor and access sensor data.
Kinect API Overview
A workaround that I've used in the past is to have one computer for every Kinect (it doesn't have to be fancy, just enough to run it) and then network all the machines together with a router. Designate one machine as the controlling machine (it handles turning the other Kinects on and off). Depending on what you plan to do with the data, it may be helpful to have those other machines perform some pre-processing and leave the stitching of all the Kinects' feeds to the controlling machine.
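For the controlling-machine part, the coordination can be very thin. Here is a minimal sketch of that control plane, assuming each capture PC runs a small listener on a known port; the node addresses are placeholders, and the per-node Kinect acquisition and pre-processing is just the normal single-sensor SDK code running locally on each machine.

# Controller side of the "one PC per Kinect v2" workaround: it just tells each
# capture node to start or stop recording.  Node addresses are placeholders;
# each node runs its own ordinary single-sensor capture code locally.
import socket

CAPTURE_NODES = [("192.168.0.11", 7000), ("192.168.0.12", 7000)]  # hypothetical

def broadcast(command: str) -> None:
    """Send a one-word command ('start' / 'stop') to every capture node."""
    for host, port in CAPTURE_NODES:
        with socket.create_connection((host, port), timeout=2) as sock:
            sock.sendall(command.encode("ascii"))
            ack = sock.recv(16)
            print(f"{host}:{port} -> {ack.decode('ascii', 'replace')}")

if __name__ == "__main__":
    broadcast("start")
    input("Recording... press Enter to stop.")
    broadcast("stop")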
As far as the USB controller goes, I'm running a Kinect v2 on that exact one, so unless it is malfunctioning, I think you're fine. Have you tried running the Kinect v2 Configuration Verifier to see what it suggests?
I am trying to make hand gestures like grabbing and dragging work, for example to drag windows on my desktop, using the Kinect camera, which I connected to a Raspberry Pi running Raspbian. I have already installed OpenNI+NiTE, freenect, and the PrimeSense driver.
So how do I start? Because I'm rather new to this, I would appreciate it if somebody could give an example or a general idea of how something like this could be done.
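As a very rough starting point, and not real gesture recognition: you can grab depth frames, treat the nearest region as the hand, and track how it moves between frames; anything robust (grab vs. open hand) is better left to NiTE's hand tracker or similar middleware. Below is a sketch assuming libfreenect's Python bindings and numpy are installed on the Pi.

# Very crude starting point: treat the nearest pixel in the depth image as
# the hand and track its position frame to frame.  With the default 11-bit
# depth format, "no reading" comes back as 2047 (the maximum), so argmin
# naturally ignores invalid pixels.  Real grab detection needs a proper hand
# tracker (e.g. NiTE's), but this shows the basic loop.
import freenect
import numpy as np

def nearest_point(depth):
    """Return (row, col, raw_depth) of the closest pixel in the frame."""
    idx = np.unravel_index(np.argmin(depth), depth.shape)
    return idx[0], idx[1], int(depth[idx])

last = None
for _ in range(300):                       # a short tracking demo
    ret = freenect.sync_get_depth()
    if ret is None:
        break
    depth, _ = ret
    row, col, dist = nearest_point(depth)
    if last is not None:
        drow, dcol = row - last[0], col - last[1]
        if abs(drow) > 5 or abs(dcol) > 5:
            print(f"hand moved by ({dcol}, {drow}) px at raw depth ~{dist}")
    last = (row, col)
freenect.sync_stop()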
When using Kinect Studio without a sensor connected, the studio does not find the running instance of the app the way it does when a sensor is connected.
My goal is to be able to test stuff in my program without having to connect the actual sensor.
Does anyone know what I have to do to make my app work offline with the .xed file?
I am currently using Kinect for Windows 1.7.
You can play back Kinect Studio .xed files in Kinect Studio without a Kinect sensor, but you cannot connect to an application or force the recorded data into the application without a Kinect sensor.
If you really need to run your application without a sensor, you would have to create your own data recording and playback feature in your application. For example, after the Kinect SDK hands you data, serialize and optionally compress it to a file or series of files. Later, have your application load and play back those files through the same application code path, without involving the Kinect SDK.
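The idea is independent of language; below is a rough Python-flavoured sketch of the shape of it, where the recording file name and process_frame() are placeholders for whatever your application actually does with each frame, not part of any Kinect API.

# Sketch of a home-grown record/replay layer: the application funnels every
# frame through process_frame(), and only the *source* of the frames changes.
import os
import pickle

RECORDING = "session.kinectrec"   # hypothetical recording file

def record(frames, path=RECORDING):
    """Serialize frames (e.g. depth arrays the SDK handed us) to disk."""
    with open(path, "wb") as f:
        for frame in frames:
            pickle.dump(frame, f)

def replay(path=RECORDING):
    """Yield recorded frames so they can flow through the same code path."""
    with open(path, "rb") as f:
        while True:
            try:
                yield pickle.load(f)
            except EOFError:
                return

def process_frame(frame):
    """Placeholder for the app's normal per-frame logic."""
    print("got a frame:", getattr(frame, "shape", type(frame)))

# Live mode would call process_frame() from the Kinect SDK's frame-ready
# handler (and record() on the side); offline mode pulls from the file instead.
if __name__ == "__main__" and os.path.exists(RECORDING):
    for frame in replay():
        process_frame(frame)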