Possible to use the Microsoft Kinect with .NET Gadgeteer?

Based on what's public, is it possible for someone to interact with the Kinect from the .NET Gadgeteer?
What (if anything) would probably need to be done to the drivers?
If you're interested, here is a Channel9 video that shows how to use VS2010 to create an embedded application. Gadgeteer is due for release in spring 2011.

You won't be able to use the Kinect on the .NET Micro Framework, which is the embedded CLR that powers .NET Gadgeteer. You could, however, connect to the Kinect via a TCP socket connection, which is supported both in Gadgeteer (assuming you have a network connection) and in the full .NET stack. Using sockets, you could pass the data you need back and forth.
From experience, you want to pass as little information down this pipe as possible, so if you are looking at something gesture-controlled, I would suggest you compute the gesture at the service end and simply pass an event flag down the socket.
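A minimal sketch of what the PC-side service could look like, assuming a full .NET app with Kinect access on one end and the Gadgeteer board connecting as a TCP client; the port number and the GestureDetected() helper are hypothetical placeholders:

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class KinectEventServer
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9000); // port is arbitrary
        listener.Start();
        Console.WriteLine("Waiting for the Gadgeteer board to connect...");
        using (TcpClient client = listener.AcceptTcpClient())
        using (NetworkStream stream = client.GetStream())
        {
            while (true)
            {
                if (GestureDetected())      // hypothetical: your Kinect gesture logic
                    stream.WriteByte(0x01); // a single-byte event flag is all we send
                Thread.Sleep(50);
            }
        }
    }

    static bool GestureDetected()
    {
        return false; // placeholder: evaluate Kinect skeleton data here
    }
}

The Gadgeteer side would open a matching TCP client socket and react whenever the flag byte arrives.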

Related

Videocast through WebRTC for HoloLens 2 project

I have an assignment to display, in a HoloLens 2 (Unity project), two video feeds (a stereo camera) coming from a LattéPanda. So far I have successfully managed to run the demo from the Mixed-Reality WebRTC project locally, but I am having difficulties with the remote streaming.
The problem is how to make my application, based on Mixed-Reality WebRTC and C# (.NET Core 3.1), connect to my NodeDSS signaler, since the demo uses a NamedPipeSignaler class that can't reach out to localhost. So I looked up the classes they provide in the hope of finding the required methods to implement, along with the interaction they need to have with the PeerConnection object. It started to get complicated, so we looked for other solutions.
One of the solutions we found was the OWT-Server (Open WebRTC Toolkit), which seems to provide an already dockerized application for videocasting on its own. However, the documentation doesn't specify much beyond the fact that we need to link the Docker image to an "application", and it is not clear what that is supposed to be. We don't have any way to specify the STUN/TURN server, nor the signaler IP address for that matter.
So my goal at this point is very simple: just make one feed appear in my Unity project. The LattéPanda's only objective right now is to cast the video without caring much about any interaction (for now): it won't receive, or even need to listen for, any feed coming back, and for now there is no need to interact with other tools. I've been searching for about two weeks now, and my Google-fu is apparently not that good. Is there any tool that could achieve this?
A little disclaimer: I do believe I still lack an understanding of the signaling process. It seems that WebRTC does not enforce any standard in that regard. What I understand is that the communication protocol (WebSocket, HTTP/2) is not standardized; only the messaging is (what messages need to be sent and handled).
EDIT
To be clear, the LattéPanda currently runs a console application written in C# (.NET Core 3.1). The reason is, as I said, that the LattéPanda should not display any of its feed on a monitor connected to it, nor receive or handle any feed from outside. Think of it like a surveillance camera that outputs its feed through WebRTC and never needs to receive one back.
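To make the signaling part concrete, here is a minimal sketch of a WebSocket-based signaling client in .NET Core 3.1. The relay endpoint (ws://signaler:3000) and the JSON envelope are assumptions, since WebRTC only standardizes the SDP/ICE payloads, not how a signaler carries them:

using System;
using System.Net.WebSockets;
using System.Text;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;

class SignalMessage
{
    public string Type { get; set; }      // "offer", "answer", or "ice"
    public string Sdp { get; set; }       // SDP blob for offer/answer
    public string Candidate { get; set; } // serialized ICE candidate
}

class SignalingClient
{
    static async Task Main()
    {
        using var ws = new ClientWebSocket();
        await ws.ConnectAsync(new Uri("ws://signaler:3000"), CancellationToken.None);

        // Send the local offer (the SDP would come from your PeerConnection).
        var offer = new SignalMessage { Type = "offer", Sdp = "<local sdp here>" };
        byte[] payload = JsonSerializer.SerializeToUtf8Bytes(offer);
        await ws.SendAsync(new ArraySegment<byte>(payload),
                           WebSocketMessageType.Text, true, CancellationToken.None);

        // Receive the remote answer and feed it back into the PeerConnection.
        var buffer = new byte[16 * 1024];
        WebSocketReceiveResult result =
            await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
        var answer = JsonSerializer.Deserialize<SignalMessage>(
            Encoding.UTF8.GetString(buffer, 0, result.Count));
        Console.WriteLine($"Got {answer.Type} from remote peer");
    }
}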

How to switch between different Kinect V2 sensors connected to a single PC?

I have two Kinect V2 sensors, connected to two different USB 3.0 ports on a single PC. I know it is not possible to use both concurrently with SDK V2, and I know I should get access to a Kinect V2 using this method:
_sensor = KinectSensor.GetDefault();
However, it always returns one of the sensors as the default, no matter which USB 3.0 port I connect it to.
First of all, is there any method to get the list of Kinect V2 sensors connected to a single PC and turn one on based on our preference?
I want to use one at a time, but I need to switch between them.
There is a workaround, which is annoying but seems to work:
You can enable/disable the USB port/controller each Kinect is connected to. Disable all ports but the one you need, and KinectSensor.GetDefault(); should give you the correct sensor.
You can do this manually in Device Manager, but I'm sure there is also some way to do it automatically in code.
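One way to script that is to shell out to Microsoft's devcon.exe utility (shipped with the Windows Driver Kit), which can enable and disable devices by hardware ID. A hedged sketch, with placeholder IDs you would replace with the ones shown in Device Manager (device properties > Details > Hardware Ids):

using System.Diagnostics;

class KinectSwitcher
{
    // Runs "devcon disable <id>" / "devcon enable <id>"; requires admin rights.
    static void SetUsbDeviceEnabled(string hardwareId, bool enable)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "devcon.exe",
            Arguments = (enable ? "enable " : "disable ") + "\"" + hardwareId + "\"",
            UseShellExecute = true,
            Verb = "runas" // elevation prompt
        };
        using (var p = Process.Start(psi))
        {
            p.WaitForExit();
        }
    }

    static void Main()
    {
        // Placeholder hardware IDs - copy the real ones from Device Manager.
        const string kinectA = "USB\\<hardware id of sensor A>";
        const string kinectB = "USB\\<hardware id of sensor B>";

        SetUsbDeviceEnabled(kinectB, false); // leave only sensor A enabled
        SetUsbDeviceEnabled(kinectA, true);
        // KinectSensor.GetDefault() should now return sensor A.
    }
}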
For more details see the thread Connection to multiple Kinect V2, NOT for synchronous acquisition on the Microsoft support forum.
It's possible to use both Kinects at the same time. I have developed an application that uses two Kinects, with video and skeleton streams from both at the same time. It was developed for SDK 1.8.
So, you can get all available, ready devices like this:
KinectSensor.KinectSensors.Where(kinect => kinect.Status == KinectStatus.Connected)
Why do you need to switch between them? You can just activate the needed stream on both at the same time. The SDK says that two Kinects can't work when they are pointed at the same object, but in practice it works. If you want to switch between them like this:
1. stop the stream from the first
2. activate the stream from the second
3. go to step 1
it's a bad option, because steps 1 and 2 can each take about a second. It's very slow.
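For reference, a fuller sketch of that SDK 1.8 enumeration, assuming Microsoft.Kinect.dll 1.8 is referenced; it starts the color and skeleton streams on every connected sensor at once:

using System.Linq;
using Microsoft.Kinect;

class DualKinect
{
    static void Main()
    {
        var sensors = KinectSensor.KinectSensors
            .Where(kinect => kinect.Status == KinectStatus.Connected)
            .ToList();

        foreach (KinectSensor sensor in sensors)
        {
            sensor.ColorStream.Enable();    // video stream
            sensor.SkeletonStream.Enable(); // skeleton stream
            sensor.SkeletonFrameReady += (s, e) =>
            {
                // handle skeleton frames for this sensor here
            };
            sensor.Start();
        }
    }
}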

How to program the MIDI messages that HUI / Pro Tools uses?

I found some specs online, but they wouldn't work for Play.
I tried
const UInt8 noteOn[] = {0x90, 127}; and it didn't work.
Does anybody know what MIDI messages HUI / Pro Tools uses for Play and Stop?
There are two main protocols out there for controlling DAWs: Logic Control and Mackie Control (HUI). Unfortunately, both are closed protocols. Only recently did Apple add support for TouchOSC (an iOS application) and the OSC protocol (Open Sound Control) in general to Logic Pro; hopefully Pro Tools will follow (maybe it already has and I'm not up to date, so you'd better check).
If you want to reverse engineer the Record/Stop buttons and you own some sort of Mackie Control device, I recommend using MIDI Monitor or LC Xmu to monitor what data comes in. I'm not sure what's available for PC users; in my PC era I used my pro soundcard.
If you don't own a controller and are looking around the internet for the answer, please note that these protocols come in many versions that each manufacturer tweaks a little. On the other hand, there are not that many options, so you can try them ALL :)
Anyhow, I wrote an iOS application that controls Logic Pro without using LC or MC at all. I opened Logic's Key Commands window, turned on the MIDI Listen button for Start/Stop, then sent a MIDI note from my iOS application to calibrate the button. It worked well, but it was not intuitive for users, so I gave it up.
You can send a MIDI Machine Control (MMC) message through your virtual source; I had some success after reading this: http://en.wikipedia.org/wiki/MIDI_Machine_Control
Be sure to enable your virtual source as an MMC device in your DAW. Also, there is a Boolean check in the Core MIDI docs you can use to verify that your program is sending the MMC messages; I believe it is something like kMIDIMachineControlEnable. It is a Core MIDI constant and should not be hard to find.
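For reference, the MMC transport commands described on that Wikipedia page are plain SysEx messages of the form F0 7F <deviceID> 06 <command> F7. A sketch of the Play and Stop bytes (how you send them depends on your MIDI API):

class MmcMessages
{
    // Device ID 0x7F means "broadcast to all devices".
    public static readonly byte[] Stop = { 0xF0, 0x7F, 0x7F, 0x06, 0x01, 0xF7 }; // 0x01 = Stop
    public static readonly byte[] Play = { 0xF0, 0x7F, 0x7F, 0x06, 0x02, 0xF7 }; // 0x02 = Play
}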

Beginner Windows Service / WCF and front end GUI implementation Question

I am trying to figure out the best way to approach this design... Here is some background of what I'm trying to do:
I have a simple digital I/O controller that sends data to my computer via Ethernet, and I have a program that can receive this data. I would like a separate front-end application that presents this data in a GUI. I am trying to figure out the best way to interface the program that grabs the I/O data over Ethernet with the program that displays it as the front end. This interface should run whenever the computer boots and constantly poll the I/O in the background.
I've read about Windows Communication Foundation (WCF), and this seems like a nice way to do it: the Windows service would quietly keep polling the I/O, and any clients that attach to the WCF interface could present the data in a GUI.
Am I going about this all wrong? Does this seem like a good way to do things? How will my front-end clients grab the data from the WCF service?
Thank you in advance.
That's precisely the way I have done it - hosting a WCF service in a Windows service. The Windows service is the process; the WCF service is where the work is done.
In my case, my WCF-based CollectionService is on standby most of the time. I use WCF to start and stop the collector because the WCF programming model makes this easy. However, to get the data from the collector to the UI, I use a TCP socket, not WCF. I know that WCF has a streaming mode, but (1) I've never used it and (2) I believe there is some amount of overhead using WCF this way. The socket is simply a comfortable fallback for me, but I think WCF could be made to work.
If you're just starting, you can refer to these two answers to get your Windows service up and running in C#. From there, you'll just need to create the ServiceHost and close it in the OnStart() and OnStop() callbacks of your Windows service, respectively (see the sketch after these links):
Easiest language for creating a Windows service
How to make a .NET Windows Service start right after the installation?
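A minimal sketch of that pattern, assuming a hypothetical IIoDataService contract; the names and the net.tcp endpoint address are illustrative, not prescribed:

using System;
using System.ServiceModel;
using System.ServiceProcess;

[ServiceContract]
public interface IIoDataService
{
    [OperationContract]
    int GetLatestReading(); // hypothetical operation
}

public class IoDataService : IIoDataService
{
    public int GetLatestReading()
    {
        return 0; // return the most recent polled I/O value here
    }
}

public class IoWindowsService : ServiceBase
{
    private ServiceHost _host;

    protected override void OnStart(string[] args)
    {
        // The ServiceHost lives for the lifetime of the Windows service.
        _host = new ServiceHost(
            typeof(IoDataService),
            new Uri("net.tcp://localhost:8500/IoData")); // endpoint is an assumption
        _host.AddServiceEndpoint(
            typeof(IIoDataService), new NetTcpBinding(), string.Empty);
        _host.Open();
    }

    protected override void OnStop()
    {
        if (_host != null)
        {
            _host.Close();
            _host = null;
        }
    }

    // In Program.Main: ServiceBase.Run(new IoWindowsService());
}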
If you are new to WCF, take a look at this SO question.
Good and easy books/tutorials to learn WCF latest stuff
One more thing. In the course of your work on this, you may find that you want the WCF service to provide events to your UI when certain things occur. For example, you might provide an event that periodically notifies the UI of the number of bytes that have been received. For this, I would strongly recommend this article by Juval Lowy, one of the WCF gods.
What You Need To Know About One-Way Calls, Callbacks, And Events
His Publish-Subscribe Framework is available for free at his website, IDesign.net, along with several other working WCF examples.
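In that spirit, a hedged sketch of a duplex (callback) contract for pushing such events to the UI; the names here are illustrative, not taken from the article:

using System.ServiceModel;

public interface IIoEvents
{
    [OperationContract(IsOneWay = true)]
    void BytesReceived(long totalBytes); // pushed periodically to the UI
}

[ServiceContract(CallbackContract = typeof(IIoEvents))]
public interface IIoSubscription
{
    [OperationContract]
    void Subscribe(); // client registers its callback channel
}

public class IoSubscriptionService : IIoSubscription
{
    public void Subscribe()
    {
        // Capture the caller's callback channel for later event pushes.
        IIoEvents callback =
            OperationContext.Current.GetCallbackChannel<IIoEvents>();
        // Store 'callback' and invoke callback.BytesReceived(...) as data arrives.
    }
}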
Hope this helps.

VB.NET: possible to monitor raised events across applications?

I may have gone crazy... but I am hoping there is a way to do this.
I have a base class that has event handling in it. My console application runs my workflow. Part of that workflow is to raise events at specific intervals, on a separate thread, to broadcast the worker's current state (a heartbeat, as I've heard many call it).
I also have another program in the same solution, a Windows Forms app, that I want to be able to listen to what is going on in the console application so that it can display the worker states. I have tried running both at the same time and verified the events are firing, but the monitor never sees any of the raised events.
I fear there is no way to do this and I will need to fall back on database logging or something else, but on the off chance someone knows how to communicate between applications with event (or event-style) logic, I would appreciate it.
Currently the applications run from the same location. The goal is that the monitor application will eventually be attached to a broadcaster for our network, so that our workstations can monitor for certain worker states without being logged into the machine, and the main monitor will show us the full status of all the workers.
Please let me know if I need to expand/clarify this; I have a 2-year-old watching Star Wars while I type, so I may have missed something.
There are several ways: remoting, custom Windows messages, and named pipes. One way: How to use named pipes for interprocess communication in Visual Basic .NET or in Visual Basic 2005
Here's a remoting example: Simple Inter-Process Communication In VB.Net
Here's an example of custom Windows messages: VB.NET, VB6 and C# Interprocess communication via Window Messaging
Perhaps the most up-to-date way is to use WCF callback channels: Using Callback Contracts in WCF for Asynchronous Publish/Subscribe Event-Style Communication
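To give a flavor of the named-pipe option, a minimal sketch in C# (the same System.IO.Pipes types are callable from VB.NET); the pipe name "WorkerHeartbeat" and the one-line-per-state format are assumptions:

using System.IO;
using System.IO.Pipes;

class HeartbeatPipe
{
    // Worker side: publish one state line per heartbeat.
    public static void PublishState(string state)
    {
        using (var server = new NamedPipeServerStream("WorkerHeartbeat"))
        {
            server.WaitForConnection(); // blocks until the monitor attaches
            using (var writer = new StreamWriter(server))
            {
                writer.WriteLine(state);
            }
        }
    }

    // Monitor side: connect and read the worker's current state.
    public static string ReadState()
    {
        using (var client = new NamedPipeClientStream(".", "WorkerHeartbeat",
                                                      PipeDirection.In))
        {
            client.Connect(5000); // wait up to 5 s for the worker
            using (var reader = new StreamReader(client))
            {
                return reader.ReadLine();
            }
        }
    }
}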