So I've got a few Kinect v2s and am hoping to set up an array of them to get a 3D recording of an area in space (the eventual goal is to build a 360° image from multiple point clouds). But at the moment I can't even get one working on a single machine.
I've installed the official SDK onto a Windows 10 device, and when I open Kinect Studio I get nothing but a grey screen while connected to the Kinect. Running the Kinect Configuration Verifier says the USB controller is unknown and that the system is waiting for the Kinect to respond. The Kinect itself does not light up, and its cooling fan does not turn on.
I have reinstalled the SDK, tried three different Kinects, and tried various drivers and troubleshooting guides, and still cannot get anything out of the Kinect.
The best answer I've found is that only some USB controllers are compatible, but every PC I have tried (currently five machines) has "Intel(R) USB 3.0 eXtensible Host Controller - 1.0 (Microsoft)". So, basically: do I really have to get a PCI USB controller or another machine, or is there any way to get the current system to work with the Kinect v2 at all?
Also, if I do need to buy a new device or PCI card, are there any recommendations for a setup that would ideally run 4-5 Kinects?
Unfortunately, the Kinect v2 prevents you from simultaneously running more than one Kinect on a system:
Sensor Acquisition and Startup
Kinect for Windows supports one sensor, which is called the default sensor. The KinectSensor Class has static members to help configure the Kinect sensor and access sensor data.
Kinect API Overview
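For reference, acquiring that default sensor in code is only a few lines. Here's a minimal C# sketch against the Kinect for Windows SDK 2.0; note that there is no API for selecting a second sensor:

```csharp
using System;
using Microsoft.Kinect; // Kinect for Windows SDK 2.0

class SensorStartup
{
    static void Main()
    {
        // The SDK exposes exactly one sensor per machine: the default sensor.
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        // Open() returns immediately; IsAvailable only flips to true once
        // the sensor actually responds on the USB 3.0 bus.
        sensor.IsAvailableChanged += (s, e) =>
            Console.WriteLine("Sensor available: " + e.IsAvailable);

        Console.ReadLine();
        sensor.Close();
    }
}
```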
A workaround that I've used in the past is to have one computer for every Kinect (it doesn't have to be fancy, just enough to run it) and then network all the machines together with a router. Designate one machine to be the controlling machine (it handles turning the other Kinects on and off). Depending on what you plan on doing with the data, it may be helpful to have those other machines perform some pre-processing and leave the stitching of all the Kinects' feeds up to the controlling machine.
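If you go that route, each capture node can stay very simple. As a rough sketch (not the actual code I used), something like the following streams raw depth frames to the controlling machine; the host name, port, and missing message framing are placeholders for illustration:

```csharp
using System;
using System.Net.Sockets;
using Microsoft.Kinect;

// Hypothetical capture node: one of these runs on each Kinect's PC
// and pushes raw depth frames to the controlling machine for stitching.
class CaptureNode
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        DepthFrameReader reader = sensor.DepthFrameSource.OpenReader();
        var pixels = new ushort[sensor.DepthFrameSource.FrameDescription.LengthInPixels];

        using (var client = new TcpClient("controller.local", 9500)) // made-up host/port
        {
            NetworkStream stream = client.GetStream();

            reader.FrameArrived += (s, e) =>
            {
                using (DepthFrame frame = e.FrameReference.AcquireFrame())
                {
                    if (frame == null) return;
                    frame.CopyFrameDataToArray(pixels);

                    var bytes = new byte[pixels.Length * sizeof(ushort)];
                    Buffer.BlockCopy(pixels, 0, bytes, 0, bytes.Length);
                    stream.Write(bytes, 0, bytes.Length); // real code needs a frame header
                }
            };

            Console.ReadLine();
        }
        sensor.Close();
    }
}
```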
As far as the USB controller goes, I'm running a Kinect v2 on that exact controller, so unless yours is malfunctioning, I think you're fine. Have you tried running the Kinect v2 Configuration Verifier to see what it suggests? Kinect v2 Configuration Verifier
I'm trying to make a Windows 10 Universal App that provides a third-party tile for my Microsoft Band, but the documentation doesn't say how to get my app to recognize the Band through USB, only how to do it through Bluetooth. The documentation also doesn't tell me how to access the GPS sensor. How do I do all these things?
To answer your first question: USB on the Band is used only for charging. In order to test, you need to go through Bluetooth. Your setup should be:
Visual Studio > launch the app on your device (the mobile is connected to your laptop via USB) > test the app on the mobile (which talks to the Band via Bluetooth).
Note: Make sure the Band is paired with the mobile device you are using to test.
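Code-wise, everything goes through the Band SDK's managed API; there is nothing to enumerate over USB. A minimal connection sketch, assuming the Band is already paired in the phone's Bluetooth settings:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Band; // Microsoft Band SDK NuGet package

class BandConnect
{
    static async Task RunAsync()
    {
        // Enumerates Bands paired in the OS Bluetooth settings.
        IBandInfo[] bands = await BandClientManager.Instance.GetBandsAsync();
        if (bands.Length == 0)
            throw new InvalidOperationException("No paired Band found.");

        using (IBandClient client =
            await BandClientManager.Instance.ConnectAsync(bands[0]))
        {
            string firmware = await client.GetFirmwareVersionAsync();
            Console.WriteLine("Connected, firmware " + firmware);
        }
    }
}
```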
To answer your second question: you cannot subscribe to GPS on the Band. It is not open for third-party app access; GPS is exclusive at this point.
I've been asked to work on a project for Windows 8 where I have to detect:
The type of device inserted into a USB port (mass storage drive, Android phone, Windows phone, etc.)
The port in which the device was inserted (if I have 4 USB ports on the PC, identify which port received the new device)
When the device was ejected from the PC
Are there any managed C# APIs that can be used to query this, or callbacks that can be subscribed to?
Any help or direction will be very useful.
Thanks
You don't specify whether you are writing a desktop app, or a Modern UI app. If it is the latter, I'm afraid you are going to be out of luck as this level of information is simply not passed down to the app's sandbox.
You may have better luck with a desktop app. I don't have any direct experience of doing what you ask for, but I do remember having read that it may be possible through .NET.
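One .NET route I've seen mentioned is WMI event watching via System.Management. A minimal sketch follows; it detects insertion and removal, but mapping an instance back to a specific physical port is not shown, since that is only indirectly encoded in the device IDs:

```csharp
using System;
using System.Management; // add a reference to System.Management.dll

class UsbWatcher
{
    static void Main()
    {
        // Fires when a new Win32_USBHub instance appears (device inserted).
        var insertWatcher = new ManagementEventWatcher(
            new WqlEventQuery("SELECT * FROM __InstanceCreationEvent WITHIN 2 " +
                              "WHERE TargetInstance ISA 'Win32_USBHub'"));
        insertWatcher.EventArrived += (s, e) => Describe(e, "inserted");
        insertWatcher.Start();

        // Fires when the instance disappears (device ejected).
        var removeWatcher = new ManagementEventWatcher(
            new WqlEventQuery("SELECT * FROM __InstanceDeletionEvent WITHIN 2 " +
                              "WHERE TargetInstance ISA 'Win32_USBHub'"));
        removeWatcher.EventArrived += (s, e) => Describe(e, "removed");
        removeWatcher.Start();

        Console.ReadLine();
    }

    static void Describe(EventArrivedEventArgs e, string action)
    {
        var device = (ManagementBaseObject)e.NewEvent["TargetInstance"];
        // DeviceID encodes the vendor/product IDs (VID_xxxx&PID_xxxx),
        // which is a starting point for telling device types apart.
        Console.WriteLine("USB device " + action + ": " + device["DeviceID"]);
    }
}
```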
I'm trying to connect a USB sensor (see Toradex) to an Android phone (Desire Z) running Android 4.0.3.
To test this, I wrote a small app to enumerate the attached device(s).
This phone is supposed to have USB host mode implemented and to power the USB sensor (HID)... but it doesn't.
I got a USB OTG cable, and now when I attach the cable, a small icon appears in the status bar (car mode).
I'm disappointed, since I've been waiting for this feature for a while now...
Any thoughts? I've read almost everything out there related to this (Sven's work and whatnot) but I might have missed something...
Thanks!
I have worked a lot in the past year and a half building custom Android platforms, some under Froyo but mostly on Gingerbread. Most of the hardware I added was on either a UART or on USB, which is what you want to do.

Unfortunately, it is not as easy to add a USB peripheral to an Android device as it is to a PC or a Mac. PCs and Macs have virtually unlimited storage (the hard drive), so they can hold drivers for a very large number of devices, which makes auto-detection and automatic loading of drivers possible. An Android device is much leaner: only the required drivers are stored on the device. Every time I added a new device, I had to compile the driver for my platform and make some modifications to my configuration.

It is also possible to load the driver as a module (built as a file.ko) instead of compiling it into the kernel, though the driver must have been written to support that. You will then have to install it by modifying "init.rc", which requires root privilege.
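For example, loading such a module at boot from "init.rc" looks roughly like this; the trigger, path, and module name are made up for illustration:

```
# hypothetical init.rc fragment -- module path and name are examples only
on boot
    insmod /system/lib/modules/usb_sensor_hid.ko
```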
Here are a few links to questions and answers about drivers in Android that should give you a bit more info:
USB touchscreen driver
Hope it helps, but unfortunately it is quite a lot of work to do.
The joys of multimonitor programming are countless; I think there are about 5 blog posts on Coding Horror on the topic alone!
I often code in Windows on my main machine and have my Mac laptop set up to the side. I use the Mac both to compile Mac builds and as my "reference web browser". There's no KVM or anything.
However a casual conversation at a conference led me to the question, could I use two independent machines to share windows? Literally move some windows from one machine to another, so I could use one PC's display as "overflow" from the other.
Some googling quickly shows that this is possible in some situations:
Synergy and MaxiVista
My question is whether any programmers have tried such a setup. We have unique needs especially with multiple text windows and editors, and this kind of tool may be a huge win or a huge hassle.
This solution feels like a combination of easy KVM switching AND multiple monitors... it sounds like a programming dream! So advice, or especially reports of actual experience in a programming environment, would be greatly useful before I invest in the rather complex setup.
Followup:
Sounds like I'm asking for something that doesn't exist! It's kind of a combination of a software KVM and VNC, but the VNC would need to break out the app windows and allow individual manipulation (like that MaxiVista commercial tool, which is Vista only).
Thanks for all the feedback. Looks like there's demand for a cool app, if anyone has the drive to be first in this new niche!
Synergy doesn't allow you to move windows between machines (that would require a silly amount of work behind the scenes), but it does allow you to share a keyboard and mouse between two machines so they "appear" to be all one machine, but actually run separately.
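For what it's worth, the layout is driven by a small text config. A minimal two-machine example (the screen names "laptop" and "desktop" are placeholders for your actual host names):

```
section: screens
    laptop:
    desktop:
end

section: links
    laptop:
        right = desktop
    desktop:
        left = laptop
end
```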
I personally use Input Director, as I found it more stable than Synergy. I have my laptop with an external monitor to the right, and my desktop to the left as an Input Director slave. My desktop runs a different O/S and is basically my guinea pig box for testing stuff and for anything I need to keep running when I leave the office. Cut + paste is pretty seamless, so I can quite happily fire up an RDP session to a server on my desktop, and cut+paste SQL scripts from that to my laptop.
It's a very useful thing to have if you have a few physical boxes and monitors kicking around :)
I've actually managed to use a spare notebook as a second monitor for a desktop PC. This allows moving windows to the second PC, but not vice versa.
The solution works with basically any OS.
The only requirement is a spare VGA (or DVI-I/DVI-A) port on the server PC.

1. Make a dummy VGA plug: http://www.overclock.net/t/384733/the-30-second-dummy-plug (this also works for a DVI-I/DVI-A port with a DVI-VGA adapter).
2. Let your OS detect the virtual monitor. It will be detected as a very generic monitor, so you can set up any resolution; set it to the slave PC's resolution.
3. Use any remote control software to connect from the slave PC to the server PC, and set it to display only the "virtual" monitor.

That's all. Your slave PC is now a second monitor for the server PC.
I've used this on Windows 7 with TeamViewer. I additionally set up Mouse Without Borders (a Microsoft analog of Synergy) to be able to use the slave PC with the same mouse & keyboard, though this is not required if you intend to use it as a monitor only.
Xdmx - Distributed Multihead X Project (Linux only)
Provides a native X display on external machines, without the drawbacks of VNC.
The following is not exactly what you want, but pretty close:
You can start a VNC server on the Windows machine, which will let you "export" its graphical screen.
Then, unplug the monitor from the Windows machine and use it as an external laptop monitor with your Mac laptop instead.
There, on your Mac, you just connect to the VNC session using Chicken of the VNC, which will give you the graphical screen content of the Windows machine as a Mac window (interactively, so you can actually control the Windows machine as if you were working on it directly). You can put that on the external monitor, and you can also put other windows there, so you really have a shared environment.
I believe this solution also lets you copy and paste content from the Windows screen to Mac windows and vice versa.
I use MaxiVista on WinXP while programming. It works fantastically and lets me add a third screen to my multi-monitor configuration.
There is hope here for Windows users: http://virtualmonitor.github.io/ It looks like a work in progress and only supports Windows 2000 - Windows 7, but he's looking for help with Windows 7 - 8.
Unfortunately, Synergy doesn't currently allow moving windows across screens. It only forwards mouse & keyboard events from one set of physical devices to different computers.
Yes, and I love it. It allows you to get past two screens on a laptop, and really I find three a great amount.
If your main machine is a Mac you want ScreenRecycler. You can then use monitors on other Mac, Windows, and Linux machines (anything with a VNC client). You will want something better than the Mac's crappy windows management though. I suggest Many Tricks' Moom and Witch.
On Windows, as @LachlanG said, MaxiVista works great. And it supports adding monitors from Windows, Mac, and Linux machines.
I am reusing my old laptop as a second monitor to see the live preview while coding. I am using SpaceDesk, which is free.
I use Barrier, an open-source fork of Synergy. It's a little hard to set up but works really well. (To find it, just search Google for 'barrier github'.)