Accessing Kinect Accelerometer

I want to access the Kinect accelerometer to determine whether the device carrier is moving or not. Is this possible with the Kinect accelerometer? If so, how can I do that? Is the ofxKinect framework useful for this purpose?

The Microsoft SDK contains a method to read the accelerometer: KinectSensor.AccelerometerGetCurrentReading
private void OnAllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    if (this.KinectSensor == null)
    {
        return;
    }

    Vector4 reading = this.KinectSensor.AccelerometerGetCurrentReading();
}
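To detect whether the device carrier is moving, one possible approach (not shown in the SDK docs) is to compare consecutive readings against a small threshold. A minimal sketch, where IsMoving, lastReading, and MovementThreshold are hypothetical names and the threshold value must be tuned empirically:

private Vector4? lastReading;
private const float MovementThreshold = 0.01f; // assumption: tune for your setup

private bool IsMoving(KinectSensor sensor)
{
    Vector4 current = sensor.AccelerometerGetCurrentReading();
    bool moving = false;
    if (lastReading.HasValue)
    {
        // Magnitude of the change between two consecutive readings.
        double dx = current.X - lastReading.Value.X;
        double dy = current.Y - lastReading.Value.Y;
        double dz = current.Z - lastReading.Value.Z;
        moving = Math.Sqrt(dx * dx + dy * dy + dz * dz) > MovementThreshold;
    }
    lastReading = current;
    return moving;
}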
ofxKinect has a few methods that sound promising:
/// get the XYZ accelerometer values
///
/// ... yes, the kinect has an accelerometer
/// raw axis values
ofPoint getRawAccel();

/// axis-based gravity adjusted accelerometer values
///
/// from libfreenect:
///
/// as laid out via the accelerometer data sheet, which is available at
/// http://www.kionix.com/Product%20Sheets/KXSD9%20Product%20Brief.pdf
///
ofPoint getMksAccel();

/// get the current pitch (x axis) & roll (z axis) of the kinect in degrees
///
/// useful to correct the 3d scene based on the camera inclination
///
float getAccelPitch();
float getAccelRoll();
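A rough ofxKinect sketch of the same movement check, assuming an initialized ofxKinect instance named kinect; prevAccel and threshold are hypothetical members you would add and tune:

// In ofApp::update(): compare the current gravity-adjusted reading
// with the previous frame's reading; a large change suggests motion.
ofPoint accel = kinect.getMksAccel();
if ((accel - prevAccel).length() > threshold) {
    // the sensor (or whatever carries it) is probably moving
}
prevAccel = accel;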

Related

Rust bindgen clang error, incompatible constant for this __builtin_neon function

I'm trying to generate bindings for an Obj-C++ header, part of the Superpowered cross-platform audio library, with rust bindgen.
I am on Catalina 10.15.6, with:
clang version 10.0.1
Target: x86_64-apple-darwin19.6.0
Thread model: posix
InstalledDir: /usr/local/opt/llvm/bin
I got stuck on the error below. I searched a lot, but it seems to be rare. I also tried the clang provided in the Xcode app bundle, but couldn't get it to work.
The library itself builds fine in Xcode; I don't get the same error there.
The header:
#import <AVFoundation/AVFoundation.h>

/// @brief Output channel mapping for iOS audio I/O.
/// This structure maps the channels you provide in the audio processing callback to the appropriate output channels.
/// You can have multi-channel (more than a single stereo channel) output if a HDMI or USB audio device is connected. iOS does not provide multi-channel output for other audio devices, such as wireless audio accessories.
/// @em Example:
/// Let's say you have four output channels, and you'd like the first stereo pair on USB 3+4, and the other stereo pair on the iPad's headphone socket.
/// 1. Set deviceChannels[0] to 2, deviceChannels[1] to 3.
/// 2. Set USBChannels[2] to 0, USBChannels[3] to 1.
/// @em Explanation:
/// - Your four output channels have the identifiers: 0, 1, 2, 3.
/// - Every iOS device has just one stereo built-in output. This is represented by deviceChannels, and (1.) sets your second stereo pair (2, 3) to these.
/// - Your other stereo pair (0, 1) is mapped to USBChannels. USBChannels[2] represents the third USB channel.
/// @since Multi-channel output is available in iOS 6.0 and later.
typedef struct multiOutputChannelMap {
    int deviceChannels[2]; ///< The iOS device's built-in output channels. Write only.
    int HDMIChannels[8]; ///< HDMI output channels. Write only.
    int USBChannels[32]; ///< USB output channels. Write only.
    int numberOfHDMIChannelsAvailable; ///< Number of available HDMI output channels. Read only.
    int numberOfUSBChannelsAvailable; ///< Number of available USB output channels. Read only.
    bool headphoneAvailable; ///< Something is plugged into the iOS device's headphone socket or not. Read only.
} multiOutputChannelMap;

/// @brief Input channel mapping for iOS audio I/O.
/// Similar to the output channels, you can map the input channels to channels in the audio processing callback. This feature works with USB only.
/// Let's say you set the channel count to 4, so RemoteIO provides 4 input channel buffers. Using this struct, you can map which USB input channel appears on the specific buffer positions.
/// @since Available in iOS 6.0 and later.
/// @see @c multiOutputChannelMap
typedef struct multiInputChannelMap {
    int USBChannels[32]; ///< Example: set USBChannels[0] to 3, to receive the input of the third USB channel on the first buffer. Write only.
    int numberOfUSBChannelsAvailable; ///< Number of USB input channels. Read only.
} multiInputChannelMap;

@protocol SuperpoweredIOSAudioIODelegate;

/// @brief The audio processing callback prototype.
/// @return Return false for no audio output (silence).
/// @param clientData A custom pointer your callback receives.
/// @param inputBuffers Input buffers.
/// @param inputChannels The number of input channels.
/// @param outputBuffers Output buffers.
/// @param outputChannels The number of output channels.
/// @param numberOfFrames The number of frames requested.
/// @param samplerate The current sample rate in Hz.
/// @param hostTime A mach timestamp, indicates when this buffer of audio will be passed to the audio output.
typedef bool (*audioProcessingCallback) (void *clientData, float **inputBuffers, unsigned int inputChannels, float **outputBuffers, unsigned int outputChannels, unsigned int numberOfFrames, unsigned int samplerate, unsigned long long hostTime);

/// @brief Handles all audio session, audio lifecycle (interruptions), output, buffer size, samplerate and routing headaches.
/// @warning All methods and setters should be called on the main thread only!
@interface SuperpoweredIOSAudioIO: NSObject {
    int preferredBufferSizeMs;
    int preferredSamplerate;
    bool saveBatteryInBackground;
    bool started;
}

@property (nonatomic, assign) int preferredBufferSizeMs; ///< The preferred buffer size in milliseconds. Recommended: 12.
@property (nonatomic, assign) int preferredSamplerate; ///< The preferred sample rate in Hz.
@property (nonatomic, assign) bool saveBatteryInBackground; ///< Save battery if output is silence and the app runs in background mode. True by default.
@property (nonatomic, assign, readonly) bool started; ///< Indicates if the instance has been started.

/// @brief Constructor.
/// @param delegate The object fully implementing the SuperpoweredIOSAudioIODelegate protocol. Not retained.
/// @param preferredBufferSize The initial value for preferredBufferSizeMs. 12 is good for every iOS device (512 frames).
/// @param preferredSamplerate The preferred sample rate. 44100 or 48000 are recommended for good sound quality.
/// @param audioSessionCategory The audio session category. Audio input is enabled for the appropriate categories only!
/// @param channels The number of output channels in the audio processing callback regardless of the actual hardware capabilities. The number of input channels in the audio processing callback will reflect the actual hardware configuration.
/// @param callback The audio processing callback.
/// @param clientdata Custom data passed to the audio processing callback.
- (id)initWithDelegate:(id<SuperpoweredIOSAudioIODelegate>)delegate preferredBufferSize:(unsigned int)preferredBufferSize preferredSamplerate:(unsigned int)preferredSamplerate audioSessionCategory:(NSString *)audioSessionCategory channels:(int)channels audioProcessingCallback:(audioProcessingCallback)callback clientdata:(void *)clientdata;

/// @brief Starts audio I/O.
/// @return True if successful, false if failed.
- (bool)start;

/// @brief Stops audio I/O.
- (void)stop;

/// @brief Call this to re-configure the channel mapping.
- (void)mapChannels;

/// @brief Call this to re-configure the audio session category (such as enabling/disabling recording).
- (void)reconfigureWithAudioSessionCategory:(NSString *)audioSessionCategory;

@end

/// @brief You MUST implement this protocol to use SuperpoweredIOSAudioIO.
@protocol SuperpoweredIOSAudioIODelegate

/// @brief The audio session may be interrupted by a phone call, etc. This method is called on the main thread when the interrupt starts.
@optional
- (void)interruptionStarted;

/// @brief The audio session may be interrupted by a phone call, etc. This method is called on the main thread when audio resumes.
@optional
- (void)interruptionEnded;

/// @brief Called if the user did not grant a recording permission for the app.
@optional
- (void)recordPermissionRefused;

/// @brief This method is called on the main thread when a multi-channel audio device is connected or disconnected.
/// @param outputMap Map the output channels here.
/// @param inputMap Map the input channels here.
/// @param externalAudioDeviceName The name of the attached audio device, such as the model of the sound card.
/// @param outputsAndInputs A human readable description about the available outputs and inputs.
@optional
- (void)mapChannels:(multiOutputChannelMap *)outputMap inputMap:(multiInputChannelMap *)inputMap externalAudioDeviceName:(NSString *)externalAudioDeviceName outputsAndInputs:(NSString *)outputsAndInputs;

@end
build.rs:
use std::env;
use std::path::PathBuf;

fn main() {
    let manifest_dir = env::var("CARGO_MANIFEST_DIR").unwrap();
    let out_dir = env::var("OUT_DIR").unwrap();

    println!(
        "cargo:rustc-flags=-l static=libSuperpoweredAudioIOS -L native={}/Superpowered",
        manifest_dir
    );

    let bindings = bindgen::Builder::default()
        .header("Superpowered/OpenSource/SuperpoweredIOSAudioIO.h")
        .clang_arg("-ObjC++")
        .clang_arg("-v")
        .clang_arg("--target=arm64-apple-ios")
        .clang_arg("--sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS13.6.sdk")
        .parse_callbacks(Box::new(bindgen::CargoCallbacks))
        .generate()
        .expect("Unable to generate bindings");

    let out_path = PathBuf::from(out_dir);
    bindings
        .write_to_file(out_path.join("SuperpoweredIOSAudioIOBindings.rs"))
        .expect("Couldn't write bindings!");
}
Clang errors with:
--- stderr
clang version 5.0.2 (tags/RELEASE_502/final)
Target: arm64-apple-ios
Thread model: posix
InstalledDir:
ignoring nonexistent directory "/usr/include"
ignoring nonexistent directory "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS13.6.sdk/usr/include/c++/v1"
ignoring nonexistent directory "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS13.6.sdk/usr/local/include"
ignoring nonexistent directory "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS13.6.sdk/Library/Frameworks"
ignoring duplicate directory "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS13.6.sdk/usr/include"
#include "..." search starts here:
#include <...> search starts here:
/usr/local/Cellar/llvm/10.0.1_1/lib/clang/10.0.1/include
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS13.6.sdk/usr/include
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS13.6.sdk/System/Library/Frameworks
/usr/local/opt/llvm@5/include/c++/v1
/usr/local/opt/llvm@5/lib/clang/5.0.2/include
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS13.6.sdk/System/Library/Frameworks (framework directory)
End of search list.
/usr/local/Cellar/llvm/10.0.1_1/lib/clang/10.0.1/include/arm_neon.h:32395:25: error: incompatible constant for this __builtin_neon function
/usr/local/Cellar/llvm/10.0.1_1/lib/clang/10.0.1/include/arm_neon.h:32416:25: error: incompatible constant for this __builtin_neon function
/usr/local/Cellar/llvm/10.0.1_1/lib/clang/10.0.1/include/arm_neon.h:33990:23: error: use of undeclared identifier '__builtin_neon_vrndns_f32'
/usr/local/Cellar/llvm/10.0.1_1/lib/clang/10.0.1/include/arm_neon.h:32395:25: error: incompatible constant for this __builtin_neon function, err: true
/usr/local/Cellar/llvm/10.0.1_1/lib/clang/10.0.1/include/arm_neon.h:32416:25: error: incompatible constant for this __builtin_neon function, err: true
/usr/local/Cellar/llvm/10.0.1_1/lib/clang/10.0.1/include/arm_neon.h:33990:23: error: use of undeclared identifier '__builtin_neon_vrndns_f32', err: true
Got stuck here, any ideas?
Found this in clang's diagnostic definitions:
def err_invalid_neon_type_code : Error<
"incompatible constant for this __builtin_neon function">
Latest news :)
--target=armv7-apple-ios builds fine.
--target=arm64-apple-ios has the error.
I have updated to Xcode 12.0, will try and report.
Try --target=aarch64-apple-ios. At least, rustup target list doesn't have arm64-apple-ios; it has aarch64-apple-ios.
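For reference, a sketch of the builder call from the question with only the target triple changed (everything else stays as-is):

// aarch64 and arm64 name the same architecture, but the Rust toolchain
// consistently uses the "aarch64" spelling for 64-bit iOS targets.
let bindings = bindgen::Builder::default()
    .header("Superpowered/OpenSource/SuperpoweredIOSAudioIO.h")
    .clang_arg("-ObjC++")
    .clang_arg("--target=aarch64-apple-ios")
    .clang_arg("--sysroot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS13.6.sdk")
    .parse_callbacks(Box::new(bindgen::CargoCallbacks))
    .generate()
    .expect("Unable to generate bindings");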

Agora many-to-one live streaming

I have a requirement in which different users will stream video from their cameras to a server, and there will be a dashboard in which an admin can view all the streams in real time, something like how surveillance works. I think video broadcasting can help, but the documentation says it enables live streaming one-to-many and many-to-many; there is no mention of the many-to-one case. How can I achieve this?
The use-case you have described would be implemented the same way as a many-to-many broadcast.
For your use-case you would have all of the camera streams join the channel as broadcasters, and the "surveillance" user would join as an audience member. The audience member subscribes to all the remote streams without having to broadcast a stream of their own.
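As a rough sketch of the dashboard side with the Agora Android SDK (the rtcEngine instance, token, and channel name are placeholders for your own setup):

// Join as audience in a live-broadcasting channel, so this client
// only receives the remote camera streams and publishes nothing.
rtcEngine.setChannelProfile(Constants.CHANNEL_PROFILE_LIVE_BROADCASTING);
rtcEngine.setClientRole(Constants.CLIENT_ROLE_AUDIENCE);
rtcEngine.joinChannel(token, "surveillance-room", "", 0);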
[Update]
With Agora's SDK you can use an external video source; you just have to manage it yourself. If you are using a custom video source, you don't need to use RTMP.
IVideoFrameConsumer mConsumer;
boolean mHasStarted;

// Create a VideoSource instance.
VideoSource source = new VideoSource() {
    @Override
    public int getBufferType() {
        // Get the current frame type.
        // The SDK uses different methods to process different frame types.
        // If you want to switch to another VideoSource type, create another instance.
        // There are three video frame types: BYTE_BUFFER(1); BYTE_ARRAY(2); TEXTURE(3)
        return MediaIO.BufferType.BYTE_ARRAY.intValue();
    }

    @Override
    public boolean onInitialize(IVideoFrameConsumer consumer) {
        // The consumer was created by the SDK.
        // Save it in the lifecycle of the VideoSource.
        mConsumer = consumer;
        return true;
    }

    @Override
    public boolean onStart() {
        mHasStarted = true;
        return true;
    }

    @Override
    public void onStop() {
        mHasStarted = false;
    }

    @Override
    public void onDispose() {
        // Release the consumer.
        mConsumer = null;
    }
};

// Change the input video stream to the VideoSource instance.
rtcEngine.setVideoSource(source);

// After receiving the video frame data, use the consumer class to send the data.
// Choose different methods according to the frame type.
// For example, the current frame type is byte array, i.e. NV21.
if (mHasStarted && mConsumer != null) {
    mConsumer.consumeByteArrayFrame(data, AgoraVideoFrame.NV21, width, height, rotation, timestamp);
}
full guide: https://docs.agora.io/en/Video/custom_video_android?platform=Android

Reading/Writing ElevationAngle in Kinect throws InvalidOperationException

Any idea how to move the Kinect up and down? Theoretically,
sensor.ElevationAngle = 20;
should do the job, but I am getting the following error:
InvalidOperationException
This API has returned an exception from an HRESULT: 0x8007000D
It breaks even if, for example, reading the current ElevationAngle is the first thing done after starting the Kinect sensor. (The answer to the question here suggests it is caused by too many movement operations, but it happens even when the Kinect has not adjusted its position for some time. If this is a duplicate, I am sorry; I am unable to comment on the question mentioned above.)
** edit ** code:
using System.Windows;
using Microsoft.Kinect;

namespace pro02_01_streams.tilt
{
    /// <summary>
    /// Interaction logic for Tilt_test.xaml
    /// </summary>
    public partial class Tilt_test : Window
    {
        private KinectSensor sensor;

        public Tilt_test()
        {
            InitializeComponent();
            Test();
        }

        public void Test()
        {
            if (KinectSensor.KinectSensors.Count == 0)
            {
                MessageBox.Show("No Kinects present", "Error");
                Application.Current.Shutdown();
            }

            try
            {
                sensor = KinectSensor.KinectSensors[0];
                sensor.DepthStream.Enable();
                sensor.ColorStream.Enable();
                sensor.Start();
                sensor.ElevationAngle = 1;
            }
            catch
            {
                MessageBox.Show("Failed to initialize Kinect", "Error");
                Application.Current.Shutdown();
            }
        }
    }
}
Your code works fine for me. To find out whether your Kinect works properly, open the Kinect Developer Toolkit Browser and run Kinect Explorer-WPF. In the application, go to Sensor Settings and move your Kinect to the desired angle.
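If the hardware checks out, it can also help to guard the writes, since the SDK documents that the tilt motor should be moved sparingly (no more than once per second, and at most 15 times in any 20-second period). A minimal sketch, where SetTiltAngle is a hypothetical helper:

private void SetTiltAngle(KinectSensor sensor, int angle)
{
    // Clamp to the supported range (-27 to +27 degrees on Kinect v1).
    int clamped = Math.Max(sensor.MinElevationAngle,
                  Math.Min(sensor.MaxElevationAngle, angle));
    try
    {
        sensor.ElevationAngle = clamped;
    }
    catch (InvalidOperationException)
    {
        // The motor may still be busy; wait and retry once.
        System.Threading.Thread.Sleep(1000);
        sensor.ElevationAngle = clamped;
    }
}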

Using Kinect to calculate distance traveled

I'm trying to develop what seems like a simple program that uses the Kinect for Xbox 360 to calculate the distance traveled by a person. The room that the Kinect will be pointed at will be 10 x 10. After the user presses the button, the subject will move about this space. Once the subject reaches their final destination in the area, the user will press the button again. The Kinect will then output how far the subject traveled between the two button presses. Having never developed for the Kinect before, I've found it pretty daunting to get started. My issue is that I'm not entirely sure what I should be using to measure the distance. In my research, I've found ways to calculate the distance an object is FROM the Kinect, but that's about it.
What you have here is a simple question of dealing with a Cartesian plane. The Kinect has 20 joints that exist in XYZ space, and distance is measured in meters. In order to access these joints, you have these statements inside a "Tracker" class (this is C#... not sure if you're using C# or C++ with the SDK):
public Tracker(KinectSensor sn, MainWindow win, string fileName)
{
    window = win;
    sensor = sn;
    try
    {
        sensor.Start();
    }
    catch (IOException)
    {
        sensor = null;
        MessageBox.Show("No Kinect sensor found. Please connect one and restart the application", "*****ERROR*****");
        return;
    }
    sensor.SkeletonFrameReady += SensorSkeletonFrameReady; // Frame handlers
    sensor.ColorFrameReady += SensorColorFrameReady;
    sensor.SkeletonStream.Enable();
    sensor.ColorStream.Enable();
}
These access the color and skeleton streams from the Kinect. The skeleton stream contains the joints, so you focus on that with these statements:
// Start sending skeleton stream
private void SensorSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    // Access the skeleton frame
    using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
    {
        if (skeletonFrame != null)
        {
            // Check to see if there is any data in the skeleton
            if (this.skeletons == null)
                // Allocate array of skeletons
                this.skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];

            // Copy skeletons from this frame
            skeletonFrame.CopySkeletonDataTo(this.skeletons);

            // Find first tracked skeleton, if any
            Skeleton skeleton = this.skeletons.Where(s => s.TrackingState == SkeletonTrackingState.Tracked).FirstOrDefault();
            if (skeleton != null)
            {
                // Joints to be displayed, projected, recorded, etc.
                Joint leftFoot = skeleton.Joints[JointType.FootLeft];
            }
        }
    }
}
So, at the beginning of your program, you want to pick a joint (there are 20... choose one that will ALWAYS be facing the Kinect while the program is executing) and get its location with something like the following statements:
if (skeleton.Joints[JointType.FootLeft].TrackingState == JointTrackingState.Tracked)
{
    double xPosition = skeleton.Joints[JointType.FootLeft].Position.X;
    double yPosition = skeleton.Joints[JointType.FootLeft].Position.Y;
    double zPosition = skeleton.Joints[JointType.FootLeft].Position.Z;
}
At the end, you'll want a slight delay before you stop the stream... some time between the click and when you shut off the stream from the Kinect; without the delay, you won't be able to capture your final Cartesian point. You will then do the math you need to get the distance between the two points.
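The math itself is just the Euclidean distance between the two recorded positions. A minimal sketch, where DistanceTraveled is a hypothetical helper and the two SkeletonPoint values are the joint positions captured at the two button presses:

private static double DistanceTraveled(SkeletonPoint start, SkeletonPoint end)
{
    // Straight-line distance in meters (joint positions are in meters).
    double dx = end.X - start.X;
    double dy = end.Y - start.Y;
    double dz = end.Z - start.Z;
    return Math.Sqrt(dx * dx + dy * dy + dz * dz);
}

Note that this measures displacement; if you need the length of the path actually walked, accumulate these frame-to-frame distances instead of comparing only the endpoints.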

Problem handling signals in SystemC simulation application

I am simulating a CPU using high-level simulation tools; SystemC is a good resource for these purposes. I'm using two modules:
DataPath
Memory
The CPU datapath is modeled as a single high-level entity; the following code will surely be better than any other explanation.
The following is datapath.hpp:
SC_MODULE(DataPath) {
    sc_in_clk clk;
    sc_in<bool> rst;

    /// Outgoing data from memory.
    sc_in<w32> mem_data;

    /// Memory read enable control signal.
    sc_out<sc_logic> mem_ctr_memreadenable;

    /// Memory write enable control signal.
    sc_out<sc_logic> mem_ctr_memwriteenable;

    /// Data to be written in memory.
    sc_out<w32> mem_dataw; // w32 is sc_lv<32>

    /// Address in mem to read and write.
    sc_out<memaddr> mem_addr;

    /// Program counter.
    sc_signal<w32> pc;

    /// State signal.
    sc_signal<int> cu_state;

    /// Other internal signals mapping registers' value.
    /// ...

    // Defining process functions

    /// Clock driven process to change state.
    void state_process();

    /// State driven process to apply control signals.
    void control_process();

    // Constructors
    SC_CTOR(DataPath) {
        // Defining first process
        SC_CTHREAD(state_process, clk.neg());
        reset_signal_is(this->rst, true);

        // Defining second process
        SC_METHOD(control_process);
        sensitive << (this->cu_state) << (this->rst);
    }

    // Defining general functions
    void reset_signals();
};
The following is datapath.cpp:
void DataPath::state_process() {
    // Useful variables
    w32 ir_value; /* Placing here IR register value */

    // Initialization phase
    this->cu_state.write(StateFetch); /* StateFetch is a constant */
    wait(); /* Wait next clock fall edge */

    // Cycling
    for (;;) {
        // Checking state
        switch (this->cu_state.read()) { // Basing on state, let's change the next one
            case StateFetch: /* FETCH */
                this->cu_state.write(StateDecode); /* Transition to DECODE */
                break;
            case StateDecode: /* DECODE */
                // Doing decode
                break;
            case StateExecR: /* EXEC R */
                // For every state, manage transition to the next state
                break;
            //...
            //...
            default: /* Possible not recognized state */
                this->cu_state.write(StateFetch); /* Come back to fetch */
        } /* switch */

        // After doing, wait for the next clock fall edge
        wait();
    } /* for */
} /* function */

// State driven process for managing signal assignment
// This is a method process
void DataPath::control_process() {
    // If the reset signal is up then the CU must be reset
    if (this->rst.read()) {
        // Reset
        this->reset_signals(); /* Initializing signals */
    } else {
        // No reset: switching on state
        switch (this->cu_state.read()) {
            case StateFetch: /* FETCH */
                // Managing memory address and instruction fetch to place in IR
                this->mem_ctr_memreadenable.write(logic_sgm_1); /* Enabling memory to be read */
                this->mem_ctr_memwriteenable.write(logic_sgm_0); /* Disabling memory from being written */
                std::cout << "Entering fetch, memread=" << this->mem_ctr_memreadenable.read() << " memwrite=" << this->mem_ctr_memreadenable.read() << std::endl;
                // Here I read from memory and get the instruction with some code that you do not need to worry about, because my problem occurs HERE ###
                break;
            case StateDecode: /* DECODE */
                // ...
                break;
            //...
            //...
            default: /* Unrecognized */
                newpc = "00000000000000000000000000000000";
        } /* state switch */
    } /* rst if */
} /* function */

// Resetting signals
void DataPath::reset_signals() {
    // Out signals
    this->mem_ctr_memreadenable.write(logic_sgm_1);
    this->mem_ctr_memwriteenable.write(logic_sgm_0);
}
As you can see, we have a clock-driven process that handles CPU state transitions and a state-driven process that sets the control signals for the CPU.
My problem is that when I arrive at ### I expect the instruction to be released by memory (you cannot see the instructions, but they are correct; the memory component is connected to the datapath using the in and out signals you can see in the hpp file).
Memory gives me "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" because mem_ctr_memreadenable and mem_ctr_memwriteenable are both set to '0'.
The memory module is written to be an instant component: it uses an SC_METHOD whose sensitivity list is defined on the input signals (read enable and write enable included). The memory component returns "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" when the mem_ctr_memreadenable signal is '0'.
Why is it '0'? I reset the signals and set that signal to '1'. I do not understand why I keep getting '0' for the read enable signal.
Can you help me? Thank you.
I'm no SystemC guru, but it looks like it might be similar to a common VHDL problem: signals do not update until at least a delta cycle has passed:
this->mem_ctr_memreadenable.write(logic_sgm_1); /* Enabling memory to be read */
this->mem_ctr_memwriteenable.write(logic_sgm_0); /* Disabling memory from being written */
My guess: No time passes between these two lines and this next line:
std::cout << "Entering fetch, memread=" << this->mem_ctr_memreadenable.read() << " memwrite=" << this->mem_ctr_memreadenable.read() << std::endl;
So the memory hasn't yet seen the read signal change. BTW, shouldn't one of the read() calls be attached to mem_ctr_memwriteenable? They both seem to be on readenable.
If you add:
wait(1, SC_NS);
between those two points, does it improve matters?
To get a zero-time synchronization with the memory module, you should use
wait(SC_ZERO_TIME); // wait one delta cycle
so as not to introduce an arbitrary consumption of time into your timed simulation.
This also requires upgrading your control_process to an SC_THREAD, because wait() may not be called from an SC_METHOD.
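A minimal sketch of that change, assuming the module from the question (StateFetch, logic_sgm_1, etc. as defined there). Depending on when the memory drives its output, a second delta-cycle wait may be needed before mem_data is valid:

// In the constructor: register control_process as a thread, keeping
// the same sensitivity list.
SC_CTOR(DataPath) {
    SC_CTHREAD(state_process, clk.neg());
    reset_signal_is(this->rst, true);

    SC_THREAD(control_process);
    sensitive << this->cu_state << this->rst;
}

// In control_process: loop forever, wait for a state or reset change,
// drive the control signals, then yield a delta cycle so the memory's
// SC_METHOD can react before the instruction is read back.
void DataPath::control_process() {
    for (;;) {
        wait(); // resume on cu_state or rst change (static sensitivity)
        if (this->rst.read()) {
            this->reset_signals();
            continue;
        }
        if (this->cu_state.read() == StateFetch) {
            this->mem_ctr_memreadenable.write(logic_sgm_1);
            this->mem_ctr_memwriteenable.write(logic_sgm_0);
            wait(SC_ZERO_TIME); // one delta cycle: let the signals propagate
            // mem_data can now be sampled.
        }
        // ... other states ...
    }
}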