SimpleOpenNI: Multiple Kinects and enableScene()/sceneImage() in Processing

In Processing I can successfully draw depth maps from two Kinects using SimpleOpenNI, but now I'm trying to draw two "scenes" (using enableScene() instead of enableDepth()). Both Kinects are detected, but when I draw the output I see the same scene drawn twice, whereas enableDepth() always gave me two different depth images. Any ideas what I'm doing wrong? Thanks in advance.
/* --------------------------------------------------------------------------
 * SimpleOpenNI Multi Camera Test
 * -------------------------------------------------------------------------- */
import SimpleOpenNI.*;

SimpleOpenNI cam1;
SimpleOpenNI cam2;

void setup()
{
  size(640 * 2 + 10, 480);
  // start OpenNI, loads the library
  SimpleOpenNI.start();
  // init the cameras
  cam1 = new SimpleOpenNI(0, this);
  cam2 = new SimpleOpenNI(1, this);
  // set up the camera generators ** HAD TO REVERSE ORDER FOR BOTH KINECTS TO WORK
  // enable scene for camera 2
  if(cam2.enableScene() == false)
  {
    println("Can't open the scene for Camera 2");
    exit();
    return;
  }
  // enable scene for camera 1
  if(cam1.enableScene() == false)
  {
    println("Can't open the scene for Camera 1");
    exit();
    return;
  }
  background(10, 200, 20);
}

void draw()
{
  // update the cams
  SimpleOpenNI.updateAll();
  image(cam1.sceneImage(), 0, 0);
  image(cam2.sceneImage(), 640 + 10, 0);
}

I've done another test using the sceneMap() functionality, but it looks like there is indeed an issue with SimpleOpenNI not updating properly internally:
/* --------------------------------------------------------------------------
 * SimpleOpenNI Multi Camera Test
 * -------------------------------------------------------------------------- */
import SimpleOpenNI.*;

SimpleOpenNI cam1;
SimpleOpenNI cam2;
int numPixels = 640 * 480;
int[] sceneM1 = new int[numPixels];
int[] sceneM2 = new int[numPixels];
PImage scene1, scene2;

void setup()
{
  size(640 * 2 + 10, 480 * 2 + 10);
  // start OpenNI, loads the library
  SimpleOpenNI.start();
  // init the cameras
  cam1 = new SimpleOpenNI(0, this);
  cam2 = new SimpleOpenNI(1, this);
  // set up the camera generators ** HAD TO REVERSE ORDER FOR BOTH KINECTS TO WORK
  // enable scene for camera 2
  if(cam2.enableScene() == false)
  {
    println("Can't open the scene for Camera 2");
    exit();
    return;
  }
  // cam2.enableDepth();//this fails when using only 1 bus
  // enable scene for camera 1
  if(cam1.enableScene() == false)
  {
    println("Can't open the scene for Camera 1");
    exit();
    return;
  }
  cam1.enableDepth();
  scene1 = createImage(640, 480, RGB);
  scene2 = createImage(640, 480, RGB);
  background(10, 200, 20);
}

void draw()
{
  // update the cams
  SimpleOpenNI.updateAll();
  image(cam1.depthImage(), 0, 0);
  image(cam1.sceneImage(), 0, 0);
  cam1.sceneMap(sceneM1);
  cam2.sceneMap(sceneM2);
  updateSceneImage(sceneM1, scene1);
  updateSceneImage(sceneM2, scene2);
  image(scene1, 0, 490);
  image(scene2, 650, 490);
}

// crude visualization: scene map labels are small integers,
// so scale them up to get visibly different values
void updateSceneImage(int[] sceneMap, PImage sceneImage)
{
  sceneImage.loadPixels();
  for(int i = 0; i < numPixels; i++) sceneImage.pixels[i] = sceneMap[i] * 255;
  sceneImage.updatePixels();
}
Using something like
cam1.update();
cam2.update();
rather than
SimpleOpenNI.updateAll();
doesn't change anything.
An issue was filed; hopefully it will be resolved.
In the meantime, try using OpenNI in a different language/framework. OpenFrameworks has many similarities to Processing (and many differences as well, to be honest, but it's not rocket science). Try the experimental ofxOpenNI addon to test multiple cameras; hopefully it will solve your issue.

Related

Raspberry Pi Pico locks up when I try to use interrupts

I'm trying to use encoders to track the movement of three wheels on a robot, but as soon as any of the motors move, the robot "locks up": it stops responding to commands, stops printing to the serial monitor, and just keeps spinning its wheels until I turn it off. I cut out everything except the code to track one encoder and tried turning the wheel by hand to suss out the problem, but it still locked up. Even more strangely, it now starts spinning one of the wheels even though I've removed any code that should have it do that, even by mistake.
I used the Arduino IDE to program the Pico since I have no familiarity with Python, but I can't find any information or troubleshooting tips for using interrupts with the Pico that don't assume you're using MicroPython.
Here's the simplified code I'm using to try to find the problem. All it's meant to do is keep track of how many steps the encoder has made and print that to the serial monitor every few seconds. I've tried removing the serial code and having it light up LEDs instead, but that didn't help.
int encA = 10;
int encB = 11;
volatile int count = 0;   // modified inside the ISR, so it must be volatile
unsigned long timer = 0;  // millis() returns unsigned long
void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
  attachInterrupt(digitalPinToInterrupt(encA), readEncoder, RISING);
  timer = millis();
}
void loop() {
  // put your main code here, to run repeatedly:
  if (millis() - timer > 5000) {  // was `timer - millis()`, which never behaves as intended
    Serial.println(count);
    timer = millis();
  }
}
void readEncoder() {
  int bVal = digitalRead(encB);
  if (bVal == 0) {
    count--;
  }
  else {
    count++;
  }
}
Does the mapping function digitalPinToInterrupt() work for the Pi Pico?
Can you try just using the interrupt number that corresponds to the pin?
attachInterrupt(9, readEncoder, RISING); // or the number 0-25 which maps to that pin
https://raspberrypi.github.io/pico-sdk-doxygen/group__hardware__irq.html
Do you have the wrong pin-to-encoder mapping in your example (maybe an incorrect copy and paste)?
attachInterrupt(digitalPinToInterrupt(encA), readEncoder, RISING);
void readEncoder() {
  int bVal = digitalRead(encB); ...}
There is similar code on GitHub that you could modify and try instead.
https://github.com/jumejume1/Arduino/blob/master/ROTARY_ENCODER/ROTARY_ENCODER.ino
It might help you find a solution.
Also,
https://www.arduino.cc/reference/en/libraries/rpi_pico_timerinterrupt/
The interrupt number corresponds to the pin (unless you have reassigned it or disabled it) so for pin 11 the code can be:
attachInterrupt(11, buttonPressed, RISING);
This works:
volatile bool buttonPress = false;  // shared with the ISR, so volatile
unsigned long buttonTime = 0;       // to debounce
void setup() {
  Serial.begin(9600);
  pinMode(11, INPUT_PULLUP);
  attachInterrupt(11, buttonPressed, RISING);
  // can be CHANGE or LOW or RISING or FALLING or HIGH
}
void loop() {
  if (buttonPress) {
    Serial.println(F("Pressed"));
    buttonPress = false;
  } else {
    Serial.println(F("Normal"));
  }
  delay(250);
}
void buttonPressed() {
  // set the timer to work for your loop code time
  if (millis() - buttonTime > 250) {
    // button press ok
    buttonPress = true;
  }
  buttonTime = millis();
}
See: https://raspberrypi.github.io/pico-sdk-doxygen/group__hardware__irq.html for disable, enable etc.
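For completeness, here is a minimal sketch that applies the same pattern to the original encoder, with the A/B signals on GPIO 10 and 11 as in the question. The counter is volatile and read atomically in loop(); INPUT_PULLUP is an assumption that depends on the encoder's output stage:
const int encA = 10;
const int encB = 11;
volatile long count = 0;        // shared with the ISR, so it must be volatile
unsigned long lastPrint = 0;

void setup() {
  Serial.begin(9600);
  pinMode(encA, INPUT_PULLUP);  // assumption: the encoder outputs need pull-ups
  pinMode(encB, INPUT_PULLUP);
  // on the Pico the GPIO number doubles as the interrupt number, as noted above
  attachInterrupt(digitalPinToInterrupt(encA), readEncoder, RISING);
}

void loop() {
  if (millis() - lastPrint > 1000) {  // note the order: millis() - last
    noInterrupts();                   // take an atomic snapshot of the counter
    long snapshot = count;
    interrupts();
    Serial.println(snapshot);
    lastPrint = millis();
  }
}

void readEncoder() {
  // B low on A's rising edge = one direction, B high = the other
  if (digitalRead(encB) == 0) count--;
  else count++;
}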

How to optimize the code for reading SPI through ARDUINO in SLAVE mode

Not important:
I am doing a project to integrate a Bluetooth module into a Pioneer car radio. I understand perfectly well that it would be easier to buy a new one =) but that's not interesting. A byproduct of the project was an Arduino-based adapter for the resistor buttons, which the Pioneer did not understand. The same adapter also controls the Bluetooth board: it can skip the track forward and backward (there is no button on the steering wheel for pause). Now I want the Bluetooth to turn on only in AUX mode. The problem is that the current mode can only be determined by reading the signal from the SPI bus of the commutation chip. I was able to read this data using an Arduino Nano. I do not have a logic analyzer, but I doubt it would tell me much more anyway.
Essence of the question:
By trial and error I found sequences indicating the launch of a particular mode, for example:
10110011
1
111
1000000
I'm sure I'm doing it wrong, but in the meantime I get repeatable results. However, when I try to detect them using IF, the Nano is not fast enough and starts missing data from the chip.
#include "SPI.h"
bool flag01, flag02, flag03, flag11, flag12, flag13, flag31, flag32, flag33;
void setup (void)
{
Serial.begin(9600);
pinMode(MISO, OUTPUT);
SPCR |= _BV(SPE);
SPI.attachInterrupt();
}
// Вызываем функцию обработки прерываний по вектору SPI
// STC - Serial Transfer Comlete
ISR(SPI_STC_vect)
{
// Получаем байт из регистра данных SPI
byte c = SPDR;
Serial.println(c, BIN);
if (c == 0b1) {
Serial.println("1 ok");
flag11 = true;
} else {
flag11 = false;
}
if (c == 0b11 && flag11) {
Serial.println("11 ok");
flag12 = true;
} else {
flag12 = false;
flag11 = false;
}
if (c == 0b1100000 && flag11 && flag12) {
Serial.println("1100000 ok");
flag13 = true;
} else {
flag13 = false;
flag12 = false;
flag11 = false;
}
}
void loop(void)
{}
I am scared to look at this code myself, but I cannot think of anything better. I have heard about some kind of buffer, but I don't know how to bolt it onto this solution. The data packets are framed by the CS signal dropping, and I can't figure out how to determine the beginning and end of a packet from the commands alone in order to write it into a buffer or array and only then go through it with a comparison.
I will be grateful if someone can at least tell me which direction to move in.
There is also an ESP8266, but in slave mode it has a 32-bit limit on the size of a data packet, and I do not know how to get around that correctly.
So, the actual question:
How can I optimize the code so that the Arduino has time to process the data and I can catch the pattern?
Perhaps implementing reads of arbitrary length on the ESP8266, or at least padding the packets to the required length, would help me. But I still can't figure that out with the spi.slave library.
First, you should keep your ISR as short as possible; certainly don't use Serial prints inside the ISR.
Secondly, if you don't know exactly how long the data is, you need a buffer to capture the data, and you should try to determine the data length before you try to analyze it.
volatile uint8_t byteCount = 0;
volatile bool dataReady = false;
byte data[32];

// SPI interrupt routine
ISR (SPI_STC_vect)
{
  if (byteCount < sizeof(data))  // guard against overrunning the buffer
    data[byteCount] = SPDR;
  byteCount++;
  dataReady = true;
}

void setup (void)
{
  // your SPI and Serial setup code
}

void loop (void)
{
  // for determining the data stream length
  if (dataReady) {
    Serial.println(byteCount);
    dataReady = false;
  }
}
Once you know how long the data stream is (let's assume it is 15 bytes long), you can then change your sketch slightly to capture the data and analyze it.
volatile uint8_t byteCount = 0;
volatile bool dataReady = false;
byte data[32];

// SPI interrupt routine
ISR (SPI_STC_vect)
{
  data[byteCount++] = SPDR;
  if (byteCount == 15)
    dataReady = true;
}

void loop (void)
{
  if (dataReady) {
    dataReady = false;
    // do your analysis here
    byteCount = 0;  // reset for the next packet
  }
}
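The question also notes that the packets are framed by the CS line dropping. One possible extension of the approach above, sketched here rather than tested: wire CS to an interrupt-capable pin as well (on a Nano that means pin 2 or 3; pin 2 here is an assumption) and use it to reset the byte counter at the start of each frame, then compare the captured frame against the known pattern in loop(). The example pattern bytes are placeholders, not values from the radio:
#include <SPI.h>

const int csPin = 2;              // assumption: CS also wired to pin 2
volatile uint8_t byteCount = 0;
volatile bool frameDone = false;
byte data[32];

// example only: the real AUX signature must come from your own captures
const byte auxPattern[] = { 0b10110011, 0b1, 0b111, 0b1000000 };

ISR (SPI_STC_vect)
{
  if (byteCount < sizeof(data))
    data[byteCount++] = SPDR;
}

void csChanged()
{
  if (digitalRead(csPin) == LOW)
    byteCount = 0;                // CS dropped: a new frame begins
  else
    frameDone = true;             // CS raised: the frame is complete
}

void setup()
{
  Serial.begin(9600);
  pinMode(MISO, OUTPUT);
  SPCR |= _BV(SPE);               // SPI slave mode
  SPI.attachInterrupt();
  pinMode(csPin, INPUT);
  attachInterrupt(digitalPinToInterrupt(csPin), csChanged, CHANGE);
}

void loop()
{
  if (frameDone) {
    frameDone = false;
    if (byteCount >= sizeof(auxPattern) &&
        memcmp(data, auxPattern, sizeof(auxPattern)) == 0) {
      Serial.println("AUX mode detected");
    }
  }
}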

Problem with comparing screenshots using selenium

I want to cover UI test cases with automation, and it was suggested to simply compare screenshots.
The algorithm is the following:
1. I take a screenshot of the page using the Selenium takeScreenshot method and store it in the expected-results folder.
2. Then I run the test case, which takes a screenshot of the same page and compares it with the expected screenshot from step 1.
I was using the following method:
try {
    // take buffer data from both image files //
    BufferedImage biA = ImageIO.read(fileA);
    DataBuffer dbA = biA.getData().getDataBuffer();
    int sizeA = dbA.getSize();
    BufferedImage biB = ImageIO.read(fileB);
    DataBuffer dbB = biB.getData().getDataBuffer();
    int sizeB = dbB.getSize();
    // compare data-buffer objects //
    if (sizeA == sizeB) {
        for (int i = 0; i < sizeA; i++) {
            if (dbA.getElem(i) != dbB.getElem(i)) {
                return false;
            }
        }
        return true;
    } else {
        return false;
    }
} catch (Exception e) {
    LOGGER.error("Failed to compare image files");
    return false;
}
It was working well last week, but today I ran the same test and it failed. I opened the image properties and saw that today's screenshot is larger by 0.1 KB than the expected one.
I can't understand the reason.
Could it be related to Chrome's constant background updates? For example, the browser received an update over the weekend, so now the screenshot is slightly different and this way of comparing is simply wrong?
And if so, how should we do this? I tried the popular library AShot and it also tells me that the screenshots are different.
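For what it's worth, a byte-for-byte comparison of the files will fail whenever the browser renders even one pixel differently (or the PNG encoder emits slightly different metadata, which alone can account for a 0.1 KB size change). Below is a minimal sketch of a more tolerant check that compares pixels and allows a small fraction of them to differ; the file paths and the 0.1% threshold are arbitrary assumptions, not recommended values:
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class ScreenshotComparer {

    // returns true when the images have equal dimensions and at most
    // `tolerance` (a fraction, e.g. 0.001 = 0.1%) of the pixels differ
    static boolean nearlyEqual(File expected, File actual, double tolerance) throws Exception {
        BufferedImage a = ImageIO.read(expected);
        BufferedImage b = ImageIO.read(actual);
        if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
            return false; // different size: the layout changed, fail immediately
        }
        long diff = 0;
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    diff++;
                }
            }
        }
        return diff <= (long) (tolerance * a.getWidth() * a.getHeight());
    }

    public static void main(String[] args) throws Exception {
        // hypothetical paths; substitute your expected-results folder
        boolean ok = nearlyEqual(new File("expected/page.png"),
                                 new File("actual/page.png"), 0.001);
        System.out.println(ok ? "screenshots match" : "screenshots differ");
    }
}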

Processing, Simple kinect app don't start a event

I want to make a simple Kinect application in Processing: when the Kinect detects a skeleton, it should show a simple JPEG image, just that. I wrote some code and everything runs, but when someone appears in front of the Kinect, nothing happens. Can anyone help me?
This is my code:
import SimpleOpenNI.*;
SimpleOpenNI kinect;

void setup()
{
  // start the context
  kinect = new SimpleOpenNI(this);
  // enable the RGB camera
  kinect.enableRGB();
  background(200, 0, 0);
  // create the window at the size of the RGB image
  size(kinect.rgbWidth(), kinect.rgbHeight());
}

void draw()
{
  // update the camera
  kinect.update();
  // show the RGB image
  image(kinect.rgbImage(), 0, 0);
  // check up to 10 possible users
  int i;
  for (i = 1; i <= 10; i++)
  {
    // check whether this user is being tracked
    if (kinect.isTrackingSkeleton(i))
    {
      mostrarImagem(); // show the image
    }
  }
}

// show the image
void mostrarImagem()
{
  PImage img;
  img = loadImage("proverbio1.jpg"); // note: loading from disk every frame is slow
  image(img, 0, 0);
}
You haven't set up the callbacks for OpenNI user events.
Also, if you simply want to display an image when someone is detected, you don't actually need to track the skeleton: simply use the scene image. You can get some information about the user's position without tracking the skeleton, such as the user's centre of mass (CoM).
This way you'd have a simpler and faster application, since you don't actually need skeleton data.
Here's a basic example:
import SimpleOpenNI.*;

SimpleOpenNI context;        // OpenNI context
PVector pos = new PVector(); // this will store the position of the user
int user;                    // this will keep track of the most recent user added
PImage sample;

void setup(){
  size(640, 480);
  context = new SimpleOpenNI(this); // initialize
  context.enableScene();            // enable the features we want to use
  context.enableUser(SimpleOpenNI.SKEL_PROFILE_NONE); // enable user events without skeleton tracking, needed for the CoM functionality
  sample = loadImage("proverbio1.jpg");
}

void draw(){
  context.update(); // update openni
  image(context.sceneImage(), 0, 0);
  if(user > 0){ // if we have a user
    context.getCoM(user, pos); // store that user's position
    println("user " + user + " is at: " + pos); // print it in the console
    image(sample, 0, 0);
  }
}

// OpenNI basic user events
void onNewUser(int userId){
  println("detected: " + userId);
  user = userId;
}
void onLostUser(int userId){
  println("lost: " + userId);
  user = 0;
}
You can see some handy SimpleOpenNI samples in this Kinect article which is part of a workshop I held last year.

hand tracking not working after a reload of openni dynamic library

Our project (http://www.play4health.com/p4h_eng/) uses Ogre 3D on Ubuntu 11.04. Except for core services, everything is based on a plugin architecture that takes advantage of Ogre 3D's plugin facilities.
In our plugin architecture, plugins can be:
Videogames
Interaction methods
Users configure their session by creating (videogame, interaction method) tuples. The flow of a session is:
* The user loads his session.
* The user clicks one of the tuples for the session and plays the videogame with a specific interaction method.
* Repeat until all activities of the session are done.
Plugins are loaded/unloaded dynamically on demand.
One of these interaction methods is hand tracking using OpenNI. What is the problem?
* The first time the OpenNI plugin is loaded, everything works perfectly.
* The next time the OpenNI plugin has to be loaded, the system is able to detect gestures but cannot do hand tracking. Note that all plugins are executed in the same process. Right now the only solution is to reboot the platform.
This is the code that initialises and releases OpenNI in our plugin:
bool IPKinectPlugin::onInitialise()
{
  mHandPointer.mId = "KinectHandPointer";
  mHandPointer.mHasAbsolute = true;
  mHandPointer.mHasRelative = false;

  XnStatus nRetVal = XN_STATUS_OK;
  nRetVal = gContext.InitFromXmlFile(String(this->getPluginInfo()->getResPath() + "SamplesConfig.xml").c_str());
  CHECK_RC(nRetVal, bContext, "InitFromXml");

#if SHOW_DEPTH
  nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_DEPTH, gDepthGenerator);
  bDepthGenerator = (nRetVal != XN_STATUS_OK);
  if (bDepthGenerator)
  {
    nRetVal = gDepthGenerator.Create(gContext);
    CHECK_RC(nRetVal, bDepthGenerator, "Find Depth generator");
  }
#endif

  nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_USER, gUserGenerator);
  bUserGenerator = (nRetVal != XN_STATUS_OK);
  if (/*bUserGenerator*/false)
  {
    nRetVal = gUserGenerator.Create(gContext);
    CHECK_RC(nRetVal, bUserGenerator, "Find user generator");
  }

  nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_GESTURE, gGestureGenerator);
  bGestureGenerator = (nRetVal != XN_STATUS_OK);
  if (bGestureGenerator)
  {
    nRetVal = gGestureGenerator.Create(gContext);
    CHECK_RC(nRetVal, bGestureGenerator, "Find gesture generator");
    XnCallbackHandle hGestureCallbacks;
    gGestureGenerator.RegisterGestureCallbacks(gestureRecognized, gestureProcess, 0, hGestureCallbacks);
  }

  nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_HANDS, gHandsGenerator);
  bHandsGenerator = (nRetVal != XN_STATUS_OK);
  if (bHandsGenerator)
  {
    nRetVal = gHandsGenerator.Create(gContext);
    CHECK_RC(nRetVal, bHandsGenerator, "Find hands generator");
    XnCallbackHandle hHandsCallbacks;
    gHandsGenerator.RegisterHandCallbacks(handsNew, handsMove, handsLost, 0, hHandsCallbacks);
  }

  nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_DEVICE, gDevice);
  bDevice = (nRetVal != XN_STATUS_OK);

  gContext.RegisterToErrorStateChange(onErrorStateChanged, NULL, hDummyCallbackHandle);

  // prepare the texture for the webcam
  if (bGenerateRGBTexture)
    mWebcamTexture = KinectTools::createDepthTexture("KinectWebCamTexture", sPluginName);

  return true;
}
//-----------------------------------------------------------------------------
bool IPKinectPlugin::onShutdown()
{
  if (bContext)
  {
    if (bHandsGenerator)
    {
      gHandsGenerator.StopTrackingAll();
    }
    if (bGestureGenerator)
    {
      gGestureGenerator.RemoveGesture(GESTURE_TO_USE);
      gGestureGenerator.RemoveGesture(GESTURE_TO_START);
    }
    gContext.StopGeneratingAll();
    gContext.Shutdown();
  }
  return true;
}
Any ideas about this issue? Anything wrong with this code?
Maybe you have already found a solution in the meantime...
I normally work with the Java wrapper, but one difference from my code is that I call context.startGeneratingAll() after creating the generators (depth, hands and so on). I also had problems when I did this multiple times at start-up. Another difference is that I call context.release() at shutdown.
My procedure is normally:
Init config (License, Nodes, settings)
Create generators
Start Generating All
Run your code ...
Stop Generating All
Context release
From the OpenNI documentation:
XN_C_API void XN_C_DECL xnShutdown ( XnContext * pContext )
Shuts down an OpenNI context, destroying all its nodes. Do not call any function of this context or any correlated node after calling this method. NOTE: this function destroys the context and all the nodes it holds and so should be used very carefully. Normally you should just call xnContextRelease().
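Following that recommendation, here is a minimal sketch of how the plugin's init/shutdown could change: the key differences are StartGeneratingAll() after the generators are created and Release() instead of Shutdown(). Whether this alone fixes the re-load behaviour is an assumption to verify; the member names are the ones from the question:
bool IPKinectPlugin::onInitialise()
{
  XnStatus nRetVal = gContext.InitFromXmlFile(
      String(this->getPluginInfo()->getResPath() + "SamplesConfig.xml").c_str());
  CHECK_RC(nRetVal, bContext, "InitFromXml");

  // ... find/create the gesture and hands generators and register
  //     their callbacks, exactly as in the original code ...

  // start generating only after all nodes have been created
  nRetVal = gContext.StartGeneratingAll();
  CHECK_RC(nRetVal, bContext, "StartGeneratingAll");
  return true;
}

bool IPKinectPlugin::onShutdown()
{
  if (bContext)
  {
    if (bHandsGenerator)
      gHandsGenerator.StopTrackingAll();
    gContext.StopGeneratingAll();
    // release the context instead of destroying it; Shutdown() tears down
    // every node in the process, which is the suspected reason hand
    // tracking never comes back when the plugin is reloaded
    gContext.Release();
  }
  return true;
}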