Kinect V2 IR Video Stream

I'm working on a new application that accesses the video streams from the Kinect V2 sensor. I've got the application working with the standard RGB and Depth video streams, but I am running into an issue with the IR video stream. I've modified the example found here to fit my application, but the pixel values that end up in my bitmap are always black (i.e. value = 0). Here's the part of the code that runs in my MultiSourceFrameArrived event handler:
using (InfraredFrame IRFrame = framew.InfraredFrameReference.AcquireFrame()) // framew is the MultiSourceFrame acquired from the event args
{
    if (IRFrame != null)
    {
        FrameDescription FrameDesc = IRFrame.FrameDescription;
        ushort[] IRData = new ushort[FrameDesc.Width * FrameDesc.Height];
        IRImgBuffer = new byte[4 * FrameDesc.Width * FrameDesc.Height];
        IRFrame.CopyFrameDataToArray(IRData);
        int colorIndex = 0;
        for (int IRIndex = 0; IRIndex < IRData.Length; ++IRIndex)
        {
            ushort ir = IRData[IRIndex];
            byte intensity = (byte)(ir >> 8);          // keep the high byte of the 16-bit IR value
            IRImgBuffer[colorIndex++] = intensity;     // Blue
            IRImgBuffer[colorIndex++] = intensity;     // Green
            IRImgBuffer[colorIndex++] = intensity;     // Red
            ++colorIndex;                              // skip the alpha byte
        }
        gotframe = true;
    }
}
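When gotframe is set, the buffer gets turned into a bitmap elsewhere, roughly like this (a simplified sketch; the WriteableBitmap setup, the width/height variables, and the IRImage control are illustrative placeholders rather than my exact code):
// Illustrative only: displaying a BGRA buffer like IRImgBuffer in WPF
var bmp = new WriteableBitmap(width, height, 96.0, 96.0, PixelFormats.Bgr32, null);
bmp.WritePixels(new Int32Rect(0, 0, width, height), IRImgBuffer, width * 4, 0); // stride = 4 bytes per pixel
IRImage.Source = bmp;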
It's even more frustrating because I can't seem to inspect this in the debugger: if I put a breakpoint in the code to see what the Blue pixel value is (for example), the debugger never seems to hit it (I'm not entirely sure why). Can anyone help me understand why the intensity value is always 0?

It appears that the issue was happening because of an outdated graphics driver. See this thread: Kinect Infrared Camera Not working. I have a relatively new laptop, but once I updated my Nvidia drivers it started working.

Related

How can I convert "rs2::video_frame" to "CvCapture*"?

I'm a newbie to the Intel RealSense SDK, coding in Visual Studio 2017 (C or C++) for the Intel RealSense D435 camera.
In my example I have the following,
static rs2::frameset current_frameset;
auto color = current_frameset.get_color_frame();
frame = cvQueryFrame(color);
I get an error on the third line: "cannot convert 'rs2::video_frame' to 'CvCapture'".
I haven't been able to find a solution to this issue; it's proving difficult, and my attempts have only resulted in more errors.
Does anyone know how I can overcome this problem?
Thanks for the help!
cvQueryFrame accepts a CvCapture instance and is used to retrieve a frame from a camera. In librealsense (LibRS), the frame you have already retrieved can be used directly; you don't have to fetch it again. Below is a snippet of the OpenCV example from LibRS; you can refer to the complete code here:
#include <librealsense2/rs.hpp>   // librealsense C++ API
#include <opencv2/opencv.hpp>     // OpenCV

rs2::colorizer color_map;         // maps raw depth values to a colorized image
rs2::pipeline pipe;
// Start streaming with default recommended configuration
pipe.start();
using namespace cv;
const auto window_name = "Display Image";
namedWindow(window_name, WINDOW_AUTOSIZE);
while (waitKey(1) < 0 && cvGetWindowHandle(window_name))
{
    rs2::frameset data = pipe.wait_for_frames(); // Wait for the next set of frames from the camera
    rs2::frame depth = color_map(data.get_depth_frame());
    // Query frame size (width and height)
    const int w = depth.as<rs2::video_frame>().get_width();
    const int h = depth.as<rs2::video_frame>().get_height();
    // Create an OpenCV matrix of size (w,h) from the colorized depth data
    Mat image(Size(w, h), CV_8UC3, (void*)depth.get_data(), Mat::AUTO_STEP);
    // Update the window with new data
    imshow(window_name, image);
}
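To answer the original question more directly: the same pattern works for the RGB stream, so there is no need for CvCapture at all; just take the color frame out of the frameset and wrap its buffer in a cv::Mat. A minimal sketch, assuming the color stream is configured as BGR8 so the channel order already matches what OpenCV expects:
rs2::config cfg;
cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30); // request BGR so OpenCV can use the buffer directly
rs2::pipeline color_pipe;
color_pipe.start(cfg);
for (;;)
{
    rs2::frameset frames = color_pipe.wait_for_frames();
    rs2::video_frame color = frames.get_color_frame();
    // Wrap the frame's buffer in an OpenCV matrix (no copy is made)
    Mat color_image(Size(color.get_width(), color.get_height()),
                    CV_8UC3, (void*)color.get_data(), Mat::AUTO_STEP);
    imshow("Color", color_image);
    if (waitKey(1) >= 0) break;
}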

NAudio panning is not working

I can't get the panning to work in NAudio.
Here is my code:
void Play(double Amp, double Left, double Right)
{
    BBeats = new binaural_beats();
    BBeats.Amplitude = Amp;
    BBeats.Amplitude2 = Amp;
    BBeats.Frequency = Left;
    BBeats.Frequency2 = Right;
    BBeats.Bufferlength = 44100 * 2 * 3; // will play for 3 sec

    waveout = new WaveOut();
    WaveChannel32 temp = new WaveChannel32(BBeats);
    temp.PadWithZeroes = false;
    temp.Pan = 0.0f;
    waveout.Init(temp);
    waveout.Play();
}
I tried 0.0F, 1.0F, and 100F, but it is not working.
I want it to play completely from one speaker (or channel) and not from the other.
I just spent the entire night with the same problem, and the solution was in a whole different place than expected. I tried using Pan, PanningSampleProvider, and MultiplexingWaveProvider to obtain control over the pan, but I could only hear a minor change in the sound, not a real pan. On my output meters, I could see maybe 10% variation.
Now I must translate from Danish, so it might not be 100% accurate. But under your sound device in Windows, select the playback device you are using, press Properties, press Extensions, and tick "Deactivate all sound effects". BAM, 100% control over the pan.
I guess Windows has some kind of auto-level algorithm between the stereo channels enabled by default; I don't know why, or what it is supposed to do.
The Pan setting on WaveChannel32 goes from -1 (left only) to 1 (right only).
For more control over panning strategies, look at the PanningSampleProvider class.
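For example, with the code from the question (a minimal sketch reusing its BBeats and waveout fields), hard-left panning would be:
waveout = new WaveOut();
WaveChannel32 temp = new WaveChannel32(BBeats);
temp.PadWithZeroes = false;
temp.Pan = -1.0f; // -1 = left only, 0 = centre, +1 = right only
waveout.Init(temp);
waveout.Play();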
I had the same problem. I tried to use PanningSampleProvider (NAudio) but it didn't work. I found out the cause was a Windows system setting: just turn off mono audio in the Windows audio settings.
Here is my source code:
var _audioFile = new AudioFileReader("E://CShap/Test/speaker.wav");
var monofile = new StereoToMonoSampleProvider(_audioFile);
var panner = new PanningSampleProvider(monofile);
panner.PanStrategy = new SquareRootPanStrategy();
panner.Pan = -1.0f; // pan fully left
WaveFileWriter.CreateWaveFile16("E://CShap/Test/speaker_resampler_L.wav", panner);

createMediaElementSource plays but getByteFrequencyData returns all 0's

I am attempting to visualize audio coming out of an <audio> element on a webpage. The source for that element is a WebRTC stream connecting to an Asterisk call via sip.js. The audio works as intended.
However, when I attempt to get the frequency data using the Web Audio API, it returns an array of all 0's, even though the audio is working. This seems to be a problem with createMediaElementSource. If I call getUserMedia and use createMediaStreamSource to connect my microphone to the input, I do get frequency data back.
This was attempted in both Chrome 40.0 and Firefox 31.4. In my search I found similar errors with Android Chrome, but my versions of desktop Chrome and Firefox seem like they should be functioning correctly. So far I have a feeling that the error may be due to the audio player getting its audio from another AudioContext in sip.js, or something to do with CORS. All of the demos that I have tried work correctly, but they only use createMediaStreamSource to get mic audio, or use createMediaElementSource to play a file (rather than streaming to an element).
My Code:
var context = new (window.AudioContext || window.webkitAudioContext)();
var analyser = context.createAnalyser();
analyser.fftSize = 64;
analyser.minDecibels = -90;
analyser.maxDecibels = -10;
analyser.smoothingTimeConstant = 0.85;

var frequencyData = new Uint8Array(analyser.frequencyBinCount);
var visualisation = $("#visualisation");
var barSpacingPercent = 100 / analyser.frequencyBinCount;
for (var i = 0; i < analyser.frequencyBinCount; i++) {
    $("<div/>").css("left", i * barSpacingPercent + "%").appendTo(visualisation);
}
var bars = $("#visualisation > div");

function update() {
    window.requestAnimationFrame(update);
    analyser.getByteFrequencyData(frequencyData);
    bars.each(function (index, bar) {
        bar.style.height = frequencyData[index] + 'px';
        console.debug(frequencyData[index]);
    });
}

$("audio").bind('canplay', function() {
    source = context.createMediaElementSource(this);
    source.connect(analyser);
    update();
});
Any help is greatly appreciated.
Chrome doesn't support WebAudio processing of RTCPeerConnection output streams (remote streams); see this question. Their bug is tracked here.
Edit: this is now supported as of Chrome 50.
See the test code for Firefox that is about to land as part of Bug 1081819. That bug adds WebAudio input to RTCPeerConnections in Firefox; WebAudio processing of output MediaStreams has worked for some time. The test code there exercises both sides; note that it depends heavily on the test framework, so just use it as a guide for hooking up WebAudio.
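In browsers where WebAudio can process remote streams (Firefox, and Chrome 50+ as noted above), one way to hook things up is to skip the <audio> element and feed the remote MediaStream from the peer connection straight into the analyser. A minimal sketch, where remoteStream is a placeholder for however your sip.js / RTCPeerConnection code exposes the incoming stream:
// remoteStream: the incoming MediaStream from the RTCPeerConnection (placeholder name)
var source = context.createMediaStreamSource(remoteStream);
source.connect(analyser);
// A MediaStreamAudioSourceNode is not routed to the speakers automatically, so keep
// the <audio> element for playback (or connect the analyser to context.destination).
update();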

How to detect an image border programmatically?

I'm searching for a program which detects the border of an image;
for example, I have a square and the program detects its X/Y coordinates.
Example:
http://img709.imageshack.us/img709/1341/22444641.png
This is a very simple edge detector, suitable for binary images. It just calculates the differences between horizontal and vertical pixels, like image.pos[1,1] = image.pos[1,1] - image.pos[1,2], and the same for the vertical differences. Bear in mind that you also need to normalize the result to the range 0..255.
But if you just need a program, use Adobe Photoshop.
Code written in C#.
public void SimpleEdgeDetection()
{
    BitmapData data = Util.SetImageToProcess(image);
    if (image.PixelFormat != PixelFormat.Format8bppIndexed)
        return;
    unsafe
    {
        byte* ptr1 = (byte*)data.Scan0;
        byte* ptr2;
        int offset = data.Stride - data.Width;
        int height = data.Height - 1;
        int px;
        for (int y = 0; y < height; y++)
        {
            ptr2 = (byte*)ptr1 + data.Stride;
            for (int x = 0; x < data.Width; x++, ptr1++, ptr2++)
            {
                // Sum of the horizontal and vertical differences, clamped to the maximum gray level
                px = Math.Abs(ptr1[0] - ptr1[1]) + Math.Abs(ptr1[0] - ptr2[0]);
                if (px > Util.MaxGrayLevel) px = Util.MaxGrayLevel;
                ptr1[0] = (byte)px;
            }
            ptr1 += offset;
        }
    }
    image.UnlockBits(data);
}
Method from Util Class
static public BitmapData SetImageToProcess(Bitmap image)
{
    if (image != null)
        return image.LockBits(
            new Rectangle(0, 0, image.Width, image.Height),
            ImageLockMode.ReadWrite,
            image.PixelFormat);
    return null;
}
If you need more explanation of the algorithm, just ask, and include more specific information rather than such a general question.
It depends what you want to do with the border. If you just want the values along the edges of the region, use a connected-components algorithm; you must know the value of the region before using it, and it will navigate around the border and collect the outside of the region. If you are trying to detect just the outside lines, take the gradient of the image and it will reveal where the lines are; to do this, convolve the image with an edge-detection filter such as Prewitt or Sobel.
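A minimal, unoptimized sketch of the gradient approach with a 3x3 Sobel kernel (Bitmap/GetPixel are used only for brevity, and the method and variable names here are illustrative):
static Bitmap SobelMagnitude(Bitmap src)
{
    // 3x3 Sobel kernels for the horizontal (gx) and vertical (gy) gradients
    int[,] kx = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
    int[,] ky = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };
    Bitmap dst = new Bitmap(src.Width, src.Height);
    for (int y = 1; y < src.Height - 1; y++)
    {
        for (int x = 1; x < src.Width - 1; x++)
        {
            int gx = 0, gy = 0;
            for (int j = -1; j <= 1; j++)
                for (int i = -1; i <= 1; i++)
                {
                    int p = src.GetPixel(x + i, y + j).R; // use the red channel as a grayscale value
                    gx += p * kx[j + 1, i + 1];
                    gy += p * ky[j + 1, i + 1];
                }
            int mag = Math.Min(255, Math.Abs(gx) + Math.Abs(gy)); // gradient magnitude, clamped to 0..255
            dst.SetPixel(x, y, Color.FromArgb(mag, mag, mag));
        }
    }
    return dst;
}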
You can use any image processing library, such as OpenCV (available in C++ or Python).
You should look for edge-detection functions such as Canny edge detection.
Of course, this will require some diving into image processing.
The example image you gave should be straightforward to detect; how noisy/varied are the images going to be?
A shape recognition algorithm might help you out, provided the shape has a solid border of some kind and the background colour is solid.
From the sounds of it, you just want a blob extraction algorithm. After that, the lowest/highest values for x/y will give you the coordinates of the corners.
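A minimal sketch of that idea for a dark shape on a light background (the threshold value is an assumption you would tune for your images):
// Find the bounding box of all "shape" pixels in a Bitmap.
Rectangle FindBoundingBox(Bitmap bmp, int threshold)
{
    int minX = bmp.Width, minY = bmp.Height, maxX = -1, maxY = -1;
    for (int y = 0; y < bmp.Height; y++)
        for (int x = 0; x < bmp.Width; x++)
        {
            // Treat sufficiently dark pixels as part of the shape
            if (bmp.GetPixel(x, y).GetBrightness() * 255 < threshold)
            {
                if (x < minX) minX = x;
                if (y < minY) minY = y;
                if (x > maxX) maxX = x;
                if (y > maxY) maxY = y;
            }
        }
    return maxX < 0 ? Rectangle.Empty
                    : new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
}
The corners of the returned rectangle are then the X/Y coordinates the question asks for.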

Glut glLoadMatrixf camera equivalent

In my GLUT application I'm simulating a plane with the camera. When the plane's speed is low, I intend to have the nose start to point towards the ground as the camera falls. My first instinct was to just change the pitch until it pointed downwards at -90 degrees. However, I can't simply change the pitch, because if the plane is tilted on its side or upside down it would not be turning towards the ground.
Now I'm trying to do a rough simulation of this by shifting 'lookAt.y' downwards. To do this, I am trying to get all the current camera coordinates that I use to set the camera
(eye.x, eye.y, eye.z, look.x, look.y, look.z, up.x, up.y, up.z), and then call set again with the modified values.
I've been working with Camera.cpp and Camera.h to control my camera functions; they can be found here.
After adding methods to get all the values, only the eye values are actually updated when various camera motions are made. I guess my question is: how do I retrieve these values?
The glLoadMatrixf call is in this function:
void Camera::setModelViewMatrix(void)
{
    // load the modelview matrix with the existing camera values
    float m[16];
    Vector3 eVec(eye.x, eye.y, eye.z);
    m[0] = u.x; m[4] = u.y; m[8]  = u.z; m[12] = -eVec.dot(u);
    m[1] = v.x; m[5] = v.y; m[9]  = v.z; m[13] = -eVec.dot(v);
    m[2] = n.x; m[6] = n.y; m[10] = n.z; m[14] = -eVec.dot(n);
    m[3] = 0;   m[7] = 0;   m[11] = 0;   m[15] = 1.0;
    look.x = u.y; look.y = v.y; look.z = n.y;
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m);
}
Is there a way to get the 'eye', 'lookAt', and 'up' values from the matrix here? Or should I do something else to get these values?
Thanks in advance for your help.
The camera class you link to is not an actual OpenGL class, but it should be simple enough to work with.
The function quoted just takes the current values of the camera object and sends them to OpenGL. If you look at the camera's set function, you can see how the program calculates the values it actually stores.
The eye value is stored directly. The lookAt value is just the value of (eye - n), by vector math. The up value is the hardest, but if I remember my vector math correctly, I believe that up = (n cross u).
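A rough sketch of what that looks like in code, assuming you have access to (or add accessors for) the camera's eye, u, v, and n members; the names follow the linked Camera class, but treat this as illustrative rather than exact:
// lookAt: a point one unit in front of the eye, along the viewing direction (-n)
Vector3 getLook(const Camera &cam)
{
    return Vector3(cam.eye.x - cam.n.x, cam.eye.y - cam.n.y, cam.eye.z - cam.n.z);
}

// up: the camera's local vertical axis v, computed component-wise as n x u
Vector3 getUp(const Camera &cam)
{
    return Vector3(cam.n.y * cam.u.z - cam.n.z * cam.u.y,
                   cam.n.z * cam.u.x - cam.n.x * cam.u.z,
                   cam.n.x * cam.u.y - cam.n.y * cam.u.x);
}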