I want to make a simple Kinect application in Processing: when the Kinect detects a skeleton, show a JPEG image, that's all. I wrote some code and it runs, but when someone appears in front of the Kinect, nothing happens. Can anyone help me?
This is my code:
import SimpleOpenNI.*;
SimpleOpenNI kinect;
void setup()
{
// Start SimpleOpenNI
kinect = new SimpleOpenNI(this);
// Enable the RGB camera
kinect.enableRGB();
background(200,0,0);
// Create the window at the size of the RGB image
size(kinect.rgbWidth(), kinect.rgbHeight());
}
void draw()
{
// update the camera
kinect.update();
// display the RGB image
image(kinect.rgbImage(),0,0);
// Number of people to check for
int i;
for (i=1; i<=10; i++)
{
// Check for the person's presence
if(kinect.isTrackingSkeleton(i))
{
mostrarImagem(); // show the image
}
}
}
// Show the image
void mostrarImagem()
{
PImage img;
img = loadImage("proverbio1.jpg");
image(img, 0, 0);
}
You haven't set up the callbacks for OpenNI user events.
Also if you simply want to display an image when someone is detected, you don't actually need to track the skeleton: simply use the scene image. You can get some information about the user's position without tracking the skeleton, like the user's centre of mass.
This way you'd have a simpler and faster application if you don't actually need skeleton data.
Here's a basic example:
import SimpleOpenNI.*;
SimpleOpenNI context;//OpenNI context
PVector pos = new PVector();//this will store the position of the user
int user;//this will keep track of the most recent user added
PImage sample;
void setup(){
size(640,480);
context = new SimpleOpenNI(this);//initialize
context.enableScene();//enable features we want to use
context.enableUser(SimpleOpenNI.SKEL_PROFILE_NONE);//enable user events, but no skeleton tracking, needed for the CoM functionality
sample = loadImage("proverbio1.jpg");
}
void draw(){
context.update();//update openni
image(context.sceneImage(),0,0);
if(user > 0){//if we have a user
context.getCoM(user,pos);//store that user's position
println("user " + user + " is at: " + pos);//print it in the console
image(sample,0,0);
}
}
//OpenNI basic user events
void onNewUser(int userId){
println("detected" + userId);
user = userId;
}
void onLostUser(int userId){
println("lost: " + userId);
user = 0;
}
You can see some handy SimpleOpenNI samples in this Kinect article which is part of a workshop I held last year.
Related
I'm trying to use encoders to track the movement of three wheels on a robot, but as soon as any of the motors move, the robot "locks up": it stops responding to commands, stops printing to the serial monitor, and just keeps spinning its wheels until I turn it off. I cut everything down to just the code that tracks one encoder and tried turning the wheel by hand to suss out the problem, but it still locked up. Even more strangely, it now starts spinning one of the wheels even though I've removed any code that should do that, even by mistake.
I used the Arduino IDE to program the Pico since I have no familiarity with Python, but I can't find any information or troubleshooting tips for using interrupts with the Pico that don't assume you're using MicroPython.
Here's the simplified code I'm using to try to find the problem. All it's meant to do is keep track of how many steps the encoder has made and print that to the serial monitor every second. I've tried removing the serial and having it light up LEDs instead, but that didn't help.
int encA = 10;
int encB = 11;
int count = 0;
int timer = 0;
void setup() {
// put your setup code here, to run once:
Serial.begin(9600);
attachInterrupt(digitalPinToInterrupt(encA),readEncoder,RISING);
timer = millis();
}
void loop() {
// put your main code here, to run repeatedly:
if (timer - millis() > 5000) {
Serial.println(count);
timer = millis();
}
}
void readEncoder() {
int bVal = digitalRead(encB);
if (bVal == 0) {
count--;
}
else{
count++;
}
}
Does the digitalPinToInterrupt mapping function work for the Pi Pico?
Can you try just using the interrupt number that corresponds to the pin?
attachInterrupt(9,readEncoder,RISING); //Or the number 0-25 which maps to that pin
https://raspberrypi.github.io/pico-sdk-doxygen/group__hardware__irq.html
You have the wrong pin-to-encoder mapping in your example (maybe you copied and pasted it incorrectly)?
attachInterrupt(digitalPinToInterrupt(**encA**),readEncoder,RISING);
void readEncoder() {
int bVal = digitalRead(**encB**); ...}
There is similar code on GitHub that you could modify and try instead.
https://github.com/jumejume1/Arduino/blob/master/ROTARY_ENCODER/ROTARY_ENCODER.ino
It might help you find a solution.
Also,
https://www.arduino.cc/reference/en/libraries/rpi_pico_timerinterrupt/
The interrupt number corresponds to the pin (unless you have reassigned it or disabled it), so for pin 11 the code can be:
attachInterrupt(11, buttonPressed, RISING);
This works:
volatile bool buttonPress = false; // volatile: shared with the ISR
volatile unsigned long buttonTime = 0; // for debouncing (also shared with the ISR)
void setup() {
Serial.begin(9600);
pinMode(11, INPUT_PULLUP);
attachInterrupt(11, buttonPressed, RISING);
// can be CHANGE or LOW or RISING or FALLING or HIGH
}
void loop() {
if(buttonPress) {
Serial.println(F("Pressed"));
buttonPress = false;
} else {
Serial.println(F("Normal"));
}
delay(250);
}
void buttonPressed() {
//Set timer to work for your loop code time
if (millis() - buttonTime > 250) {
//button press ok
buttonPress = true;
}
buttonTime = millis();
}
See: https://raspberrypi.github.io/pico-sdk-doxygen/group__hardware__irq.html for disable, enable etc.
I'm trying to develop what seems to be a simple program that uses the Kinect for Xbox 360 to calculate the distance traveled by a person. The room the Kinect will be pointed at is 10 x 10. After the user presses the button, the subject will move about this space. Once the subject reaches their final destination in the area, the user will press the button again. The Kinect will then output how far the subject traveled between the two button presses. Having never developed for the Kinect before, I've found it pretty daunting to get started. My issue is that I'm not entirely sure what I should be using to measure the distance. In my research, I've found ways to calculate the distance an object is FROM the Kinect, but that's about it.
What you have here is a simple question of working with Cartesian coordinates. The Kinect tracks 20 joints that exist in XYZ space, and distance is measured in meters. In order to access these joints, you have these statements inside a "Tracker" class (this is C#... not sure if you're using C# or C++ with the SDK):
public Tracker(KinectSensor sn, MainWindow win, string fileName)
{
window = win;
sensor = sn;
try
{
sensor.Start();
}
catch (IOException)
{
sensor = null;
MessageBox.Show("No Kinect sensor found. Please connect one and restart the application", "*****ERROR*****");
return;
}
sensor.SkeletonFrameReady += SensorSkeletonFrameReady; //Frame handlers
sensor.ColorFrameReady += SensorColorFrameReady;
sensor.SkeletonStream.Enable();
sensor.ColorStream.Enable();
}
These access the color and skeleton streams from the Kinect. The skeleton stream contains the joints, so you focus on that with these statements:
//Start sending skeleton stream
private void SensorSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
//Access the skeleton frame
using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
{
if (skeletonFrame != null)
{
//Check to see if there is any data in the skeleton
if (this.skeletons == null)
//Allocate array of skeletons
this.skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
//Copy skeletons from this frame
skeletonFrame.CopySkeletonDataTo(this.skeletons);
//Find first tracked skeleton, if any
Skeleton skeleton = this.skeletons.Where(s => s.TrackingState == SkeletonTrackingState.Tracked).FirstOrDefault();
if (skeleton != null)
{
//Initialize joints
///<summary>
///Joints to be displayed, projected, recorded, etc.
///</summary>
Joint leftFoot = skeleton.Joints[JointType.FootLeft];
}
}
}
}
So, at the beginning of your program, you want to pick a joint (there are 20... choose one that will ALWAYS be facing towards the Kinect when you are executing the program) and get its location with something like the following statements:
if(skeleton.Joints[JointType.FootLeft].TrackingState == JointTrackingState.Tracked)
{
double xPosition = skeleton.Joints[JointType.FootLeft].Position.X;
double yPosition = skeleton.Joints[JointType.FootLeft].Position.Y;
double zPosition = skeleton.Joints[JointType.FootLeft].Position.Z;
}
At the end, you'll want to have a slight delay before you stop the stream... some time between the click and when you shut off the stream from the Kinect. You will then compute the distance between the two recorded points. If you don't have the delay, you won't be able to get your Cartesian point.
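For that final step, the distance is just the straight-line (Euclidean) distance between the two recorded joint positions. A minimal sketch of only that math, shown here in C++ since the SDK supports both languages; the Point3 struct and the function name are placeholders for whatever you store at each button press:
#include <cmath>
// Hypothetical container for a recorded joint position, in metres.
struct Point3 { double x, y, z; };
// Straight-line distance between the positions captured at the two button presses.
double DistanceTravelled(const Point3& start, const Point3& end)
{
    double dx = end.x - start.x;
    double dy = end.y - start.y;
    double dz = end.z - start.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}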
I'm writing a small program to change the desktop background with one or two mouse clicks.
I know that I can right-click on any image file and set it as the desktop background.
And exactly there is where the problem starts: I can't find the proper entry in any DLL that would contain the entry "Set As Desktop Background" or even "New Desktop Background".
I know how I can create those entries in the registry, but I don't want to edit the registry for this; instead I would like to have it handled right in my tiny program, so that with two clicks I can set any image file on my computer as the desktop background, from any folder or even from any connected drive, without having to return to the Personalization menu.
If any of you knows where I can find the entries for the above-mentioned context menu strings, I would be very thankful.
This is just for personal use, not to sell or give away.
Thank you Chris
P.S. Please forgive my bad English; I'm from a non-English-speaking European country.
If you look at, for example, HKEY_CLASSES_ROOT\SystemFileAssociations\.jpg\Shell\setdesktopwallpaper\Command
You'll notice that it has the DelegateExecute member set. This means that Windows will attempt to use the IExecuteCommand interface in the specified DLL. Reading up on what that does on MSDN, and attempting to emulate Explorer, I came up with this, which works.
I'm not sure why that Sleep() is needed, though; I'd love it if anyone could elaborate on that.
void SetWallpaper(LPCWSTR path)
{
const GUID CLSID_SetWallpaper = { 0xFF609CC7, 0xD34D, 0x4049, { 0xA1, 0xAA, 0x22, 0x93, 0x51, 0x7F, 0xFC, 0xC6 } };
HRESULT hr;
IExecuteCommand *executeCommand = nullptr;
IObjectWithSelection *objectWithSelection = nullptr;
IShellItemArray *shellItemArray = nullptr;
IShellFolder *rootFolder = nullptr;
LPITEMIDLIST idlist = nullptr;
// Initialize COM, probably shouldn't be done in this function
hr = CoInitialize(nullptr);
if (SUCCEEDED(hr))
{
// Get the IExecuteCommand interface of the DLL
hr = CoCreateInstance(CLSID_SetWallpaper, nullptr, CLSCTX_INPROC_SERVER, IID_IExecuteCommand, reinterpret_cast<LPVOID*>(&executeCommand));
// Get the IObjectWithSelection interface
if (SUCCEEDED(hr))
{
hr = executeCommand->QueryInterface(IID_IObjectWithSelection, reinterpret_cast<LPVOID*>(&objectWithSelection));
}
// Get the desktop folder so the file path can be parsed into an ID list
if (SUCCEEDED(hr))
{
hr = SHGetDesktopFolder(&rootFolder);
}
if (SUCCEEDED(hr))
{
hr = rootFolder->ParseDisplayName(nullptr, nullptr, (LPWSTR)path, nullptr, &idlist, NULL);
}
if (SUCCEEDED(hr))
{
hr = SHCreateShellItemArrayFromIDLists(1, (LPCITEMIDLIST*)&idlist, &shellItemArray);
}
if (SUCCEEDED(hr))
{
hr = objectWithSelection->SetSelection(shellItemArray);
}
if (SUCCEEDED(hr))
{
hr = executeCommand->Execute();
}
// There is probably some event, or something to wait for here, but we
// need to wait and relinquish control of the CPU, or the wallpaper won't
// change.
Sleep(2000);
// Release interfaces and memory
if (idlist)
{
CoTaskMemFree(idlist);
}
if (executeCommand)
{
executeCommand->Release();
}
if (objectWithSelection)
{
objectWithSelection->Release();
}
if (shellItemArray)
{
shellItemArray->Release();
}
if (rootFolder)
{
rootFolder->Release();
}
CoUninitialize();
}
}
Edit: After doing some more research on this, for my own sake, I realized that stobject.dll actually just uses the IDesktopWallpaper interface, which is part of CLSID_DesktopWallpaper:
http://msdn.microsoft.com/en-us/library/windows/desktop/hh706946(v=vs.85).aspx
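For reference, a minimal sketch of that approach, assuming Windows 8 or later where IDesktopWallpaper is available (error handling trimmed; this is not the exact code stobject.dll runs, just the documented interface):
#include <windows.h>
#include <shobjidl.h>
// Set the wallpaper on all monitors via the IDesktopWallpaper interface.
HRESULT SetWallpaperViaInterface(LPCWSTR path)
{
    HRESULT hr = CoInitialize(nullptr);
    if (FAILED(hr))
        return hr;
    IDesktopWallpaper *wallpaper = nullptr;
    hr = CoCreateInstance(CLSID_DesktopWallpaper, nullptr, CLSCTX_ALL,
                          IID_PPV_ARGS(&wallpaper));
    if (SUCCEEDED(hr))
    {
        // Passing nullptr as the monitor ID applies the image to all monitors.
        hr = wallpaper->SetWallpaper(nullptr, path);
        wallpaper->Release();
    }
    CoUninitialize();
    return hr;
}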
In Processing, I can successfully draw depth maps from 2 Kinects using SimpleOpenNI, but I'm now trying to draw 2 "scenes" (from enableScene() vs enableDepth()). Both Kinects are detected but when I draw the output, I see the same scene is drawn twice (whereas using enableDepth() always gave me 2 different depth images). Any ideas what I'm doing wrong? Thanks in advance.
/* --------------------------------------------------------------------------
* SimpleOpenNI Multi Camera Test
* --------------------------------------------------------------------------
*/
import SimpleOpenNI.*;
SimpleOpenNI cam1;
SimpleOpenNI cam2;
void setup()
{
size(640 * 2 + 10,480);
// start OpenNI, loads the library
SimpleOpenNI.start();
// init the cameras
cam1 = new SimpleOpenNI(0,this);
cam2 = new SimpleOpenNI(1,this);
// set the camera generators ** HAD TO REVERSE ORDER FOR BOTH KINECTS TO WORK
// enable Scene
if(cam2.enableScene() == false)
{
println("Can't open the scene for Camera 2");
exit();
return;
}
// enable the scene for camera 1
if(cam1.enableScene() == false)
{
println("Can't open the scene for Camera 1");
exit();
return;
}
background(10,200,20);
}
void draw()
{
// update the cams
SimpleOpenNI.updateAll();
image(cam1.sceneImage(),0,0);
image(cam2.sceneImage(),640 + 10,0);
}
I've done another test using the sceneMap() functionality, but it looks like there is indeed an issue with SimpleOpenNI not updating properly internally:
/* --------------------------------------------------------------------------
* SimpleOpenNI Multi Camera Test
* --------------------------------------------------------------------------
*/
import SimpleOpenNI.*;
SimpleOpenNI cam1;
SimpleOpenNI cam2;
int numPixels = 640*480;
int[] sceneM1 = new int[numPixels];
int[] sceneM2 = new int[numPixels];
PImage scene1,scene2;
void setup()
{
size(640 * 2 + 10,480 * 2 + 10);
// start OpenNI, loads the library
SimpleOpenNI.start();
// init the cameras
cam1 = new SimpleOpenNI(0,this);
cam2 = new SimpleOpenNI(1,this);
// set the camera generators ** HAD TO REVERSE ORDER FOR BOTH KINECTS TO WORK
// enable Scene
if(cam2.enableScene() == false)
{
println("Can't open the scene for Camera 2");
exit();
return;
}
// cam2.enableDepth();//this fails when using only 1 bus
// enable the scene for camera 1
if(cam1.enableScene() == false)
{
println("Can't open the scene for Camera 1");
exit();
return;
}
cam1.enableDepth();
scene1 = createImage(640,480,RGB);
scene2 = createImage(640,480,RGB);
background(10,200,20);
}
void draw()
{
// update the cams
SimpleOpenNI.updateAll();
image(cam1.depthImage(),0,0);
image(cam1.sceneImage(),0,0);
cam1.sceneMap(sceneM1);
cam2.sceneMap(sceneM2);
updateSceneImage(sceneM1,scene1);
updateSceneImage(sceneM2,scene2);
image(scene1,0,490);
image(scene2,650,490);
}
void updateSceneImage(int[] sceneMap,PImage sceneImage){
for(int i = 0; i < numPixels; i++) sceneImage.pixels[i] = sceneMap[i] * 255;
sceneImage.updatePixels();
}
Using something like
cam1.update();
cam2.update();
rather than
SimpleOpenNI.updateAll();
doesn't change anything.
An issue was filed, hopefully it will be resolved.
In the meantime, try using OpenNI in a different language/framework.
OpenFrameworks has many similarities to Processing (and many differences
as well to be honest, but it's not rocket science).
Try the experimental ofxOpenNI addon to test multiple cameras, hopefully it will resolve your issue.
Our project (http://www.play4health.com/p4h_eng/) uses Ogre 3D
on Ubuntu 11.04. Except for core services, everything is based on a plugin architecture that takes advantage of Ogre 3D's plugin facilities.
In our plugin architecture plugins can be:
Videogames
Interaction methods
Users configure their session by creating tuples (videogame, interaction
method). The flow of a session is:
* The user loads their session.
* The user clicks one of the tuples for the session and plays the videogame with a specific interaction method.
* Repeat until all activities of the session are completed.
Plugins are loaded/unloaded dynamically on demand.
One of these interaction methods is hand tracking using OpenNI. What is the problem?
* The first time the OpenNI plugin is loaded, everything works perfectly.
* The next time the OpenNI plugin has to be loaded, the system is able to detect gestures but not do hand tracking. Note that all plugins are executed in the same process. Right now the only solution is to reboot the platform.
This is the code to init and release OpenNI in our plugin:
bool IPKinectPlugin::onInitialise()
{
mHandPointer.mId = "KinectHandPointer";
mHandPointer.mHasAbsolute = true;
mHandPointer.mHasRelative = false;
XnStatus nRetVal = XN_STATUS_OK;
nRetVal = gContext.InitFromXmlFile(String(this->getPluginInfo()->getResPath() + "SamplesConfig.xml").c_str());
CHECK_RC(nRetVal, bContext, "InitFromXml");
#if SHOW_DEPTH
nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_DEPTH,gDepthGenerator);
bDepthGenerator = (nRetVal != XN_STATUS_OK);
if (bDepthGenerator)
{
nRetVal = gDepthGenerator.Create(gContext);
CHECK_RC(nRetVal, bDepthGenerator, "Find Depth generator");
}
#endif
nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_USER, gUserGenerator);
bUserGenerator = (nRetVal != XN_STATUS_OK);
if (/*bUserGenerator*/false)
{
nRetVal = gUserGenerator.Create(gContext);
CHECK_RC(nRetVal, bUserGenerator, "Find user generator");
}
nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_GESTURE, gGestureGenerator);
bGestureGenerator = (nRetVal != XN_STATUS_OK);
if (bGestureGenerator)
{
nRetVal = gGestureGenerator.Create(gContext);
CHECK_RC(nRetVal, bGestureGenerator, "Find gesture generator");
XnCallbackHandle hGestureCallbacks;
gGestureGenerator.RegisterGestureCallbacks(gestureRecognized, gestureProcess, 0,
hGestureCallbacks);
}
nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_HANDS,gHandsGenerator);
bHandsGenerator = (nRetVal != XN_STATUS_OK);
if (bHandsGenerator)
{
nRetVal = gHandsGenerator.Create(gContext);
CHECK_RC(nRetVal, bHandsGenerator, "Find hands generator");
XnCallbackHandle hHandsCallbacks;
gHandsGenerator.RegisterHandCallbacks(handsNew, handsMove,handsLost, 0, hHandsCallbacks);
}
nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_DEVICE, gDevice);
bDevice = (nRetVal != XN_STATUS_OK);
gContext.RegisterToErrorStateChange(onErrorStateChanged, NULL, hDummyCallbackHandle);
//Prepare the texture for the webcam
if (bGenerateRGBTexture)
mWebcamTexture = KinectTools::createDepthTexture("KinectWebCamTexture", sPluginName);
return true;
}
//-----------------------------------------------------------------------------
bool IPKinectPlugin::onShutdown()
{
if (bContext)
{
if (bHandsGenerator)
{
gHandsGenerator.StopTrackingAll();
}
if (bGestureGenerator)
{
gGestureGenerator.RemoveGesture(GESTURE_TO_USE);
gGestureGenerator.RemoveGesture(GESTURE_TO_START);
}
gContext.StopGeneratingAll();
gContext.Shutdown();
}
return true;
}
Any ideas about this issue? Anything wrong with this code?
Maybe you already found a solution in the meantime...
I normally work with the Java wrapper, but what I see as a difference from my code is that I call context.startGeneratingAll() after creating the generators (depth, hands and so on). I also had problems when I did this multiple times at startup. Another difference is that I call context.release() at shutdown.
My procedure is normally:
Init config (License, Nodes, settings)
Create generators
Start Generating All
Run your code ...
Stop Generating ALL
Context release
From the OpenNI documentation:
XN_C_API void XN_C_DECL xnShutdown ( XnContext * pContext )
Shuts down an OpenNI context, destroying all its nodes. Do not call
any function of this context or any correlated node after calling this
method. NOTE: this function destroys the context and all the nodes it
holds and so should be used very carefully. Normally you should just
call xnContextRelease()
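To illustrate that order of operations, here is a minimal sketch using the OpenNI 1.x C++ wrapper. It reuses the SamplesConfig.xml idea from the plugin code above; whether releasing instead of shutting down fixes the second-load problem in your plugin is an assumption on my part, not something I have verified:
#include <XnCppWrapper.h>
// Minimal OpenNI 1.x lifecycle: init config, create generators, generate, release.
void RunOnce(const char* configPath)
{
    xn::Context context;
    XnStatus rc = context.InitFromXmlFile(configPath);   // 1. init config (license, nodes, settings)
    if (rc != XN_STATUS_OK) return;
    xn::HandsGenerator hands;
    rc = hands.Create(context);                           // 2. create generators
    if (rc != XN_STATUS_OK) { context.Release(); return; }
    context.StartGeneratingAll();                         // 3. start generating all
    for (int i = 0; i < 100; ++i)                         // 4. run your code...
        context.WaitAndUpdateAll();
    context.StopGeneratingAll();                          // 5. stop generating all
    context.Release();                                    // 6. release the context, not Shutdown()
}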