How can time derivatives of angular orientation (attitude) be calculated from quaternion data?

I want to use the quaternion data representing the attitude of a device (e.g. an Android or iOS smartphone) during a movement task (e.g. an arm movement) to calculate time derivatives of this angular data.
Since attitude is angular, the first derivative will be angular velocity, the second will be angular acceleration, and the third will be angular jerk.
How should this be done? I'm assuming such analyses can't be performed on the fly but must be done after the data are captured; however, I'm not sure of the best approach.
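One common way to approach this offline (a rough sketch, not a verified implementation) is to numerically differentiate the quaternion series and use the kinematic relation ω ≈ 2·q̇⊗q*, where ⊗ is the quaternion product and q* is the conjugate (equal to the inverse for unit quaternions). The (w, x, y, z) ordering and the frame convention below are assumptions; check them against your device's convention.

```python
import numpy as np

def quat_conj(q):
    # Conjugate of a unit quaternion (w, x, y, z); equals its inverse.
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_mult(a, b):
    # Hamilton product of two quaternions in (w, x, y, z) order.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def angular_velocity(quats, dt):
    # quats: (N, 4) array of unit quaternions sampled every dt seconds,
    # sign-continuous (flip q[i] -> -q[i] if dot(q[i], q[i-1]) < 0 beforehand).
    omega = np.zeros((len(quats), 3))
    for i in range(1, len(quats) - 1):
        q_dot = (quats[i + 1] - quats[i - 1]) / (2.0 * dt)   # central difference
        w = 2.0 * quat_mult(q_dot, quat_conj(quats[i]))      # swap the order for body-frame rates
        omega[i] = w[1:]                                     # keep the vector part
    omega[0] = omega[1]           # pad the endpoints
    omega[-1] = omega[-2]
    return omega

# Angular acceleration and jerk are then plain numerical derivatives:
# alpha = np.gradient(omega, dt, axis=0)
# jerk  = np.gradient(alpha, dt, axis=0)
```

Since this is post-processing anyway, low-pass filtering the quaternions before differentiating (and again between derivative stages) usually matters more for clean acceleration and jerk traces than the particular difference scheme.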

Related

How can I get an approximate upper body volume from the Kinect skeleton?

I want to do a fitting room app using the Kinect. Since I need to estimate the player's clothing size (S, M, L, XL), I must get an approximation of the player's upper-body mass using only the skeleton (not the depth data). I don't need a very precise calculation.
Examine the length of the bones by calculating the distance between the relevant upper-body joints:
SHOULDER_LEFT
SHOULDER_CENTER
SHOULDER_RIGHT
SPINE
HIP_LEFT
HIP_CENTER
HIP_RIGHT
For example, two of the most relevant features to calculate are likely related to the user's height - the distance between SHOULDER_CENTER and SPINE joints, and the distance between SPINE and HIP_CENTER joints.
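As a sketch of those distance features (the joint-name keys and the dict-of-(x, y, z) input below are placeholders, not the actual Kinect SDK types):

```python
import math

def joint_distance(a, b):
    # Euclidean distance between two 3D joint positions (x, y, z), in metres.
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def upper_body_features(joints):
    # joints: dict mapping joint name -> (x, y, z) from the skeleton stream.
    return {
        "shoulder_width": joint_distance(joints["SHOULDER_LEFT"], joints["SHOULDER_RIGHT"]),
        "upper_torso":    joint_distance(joints["SHOULDER_CENTER"], joints["SPINE"]),
        "lower_torso":    joint_distance(joints["SPINE"], joints["HIP_CENTER"]),
        "hip_width":      joint_distance(joints["HIP_LEFT"], joints["HIP_RIGHT"]),
    }
```

These lengths could then be the inputs to whatever simple classifier you train on the labelled recordings suggested below.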
I suggest using Kinect Studio to store recordings of users and classify each recording according to the user's clothing size. With this data, you should be able to iterate on an algorithm (assuming it's feasible to approximate this accurately enough using only the skeleton data).
(As a side note, to do this more accurately, you'll probably need the depth data and 3D scanning. For example, there is an existing company called Styku that has a related Kinect product that does 3D body scanning.)

how to overcome tracking jitter

I'm working on an object tracking project.
Steps:
1. Preprocess the image and obtain some candidate regions of interest.
2. For each region, test whether it is the target using ORB features with brute-force matching.
3. Once the target region is determined, acquire the coordinates of some points on the target and their corresponding coordinates in the world coordinate system.
4. Use solvePnP (in OpenCV) to get the rotation vector and translation vector.
5. The translation vector is used in VR for localization and view control.
Tracking jitter means that, although the object is stationary, the measured position of the target changes slightly because of tracking errors such as noise. Looking at steps 4 and 5, this means the translation vector also changes slightly, and with the head-mounted device I feel the jitter all the time.
It seems to me that tracking jitter is unavoidable because of changes in the environment or noise, but a one-pixel change can lead to a change of a few centimeters in the z value of the translation vector. Is there any proper way to deal with it?
I have googled but there doesn't seem to be much information. Effects of Tracking Technology, Latency, and Spatial Jitter on Object Movement mentions the phenomenon but does not provide a solution. Another interesting paper is Motion Tracking Requirements and Technologies. Can anyone offer some useful information?
It occurs to me that a filter is needed to post-process the tracking data, but the idea is not fully formed. A Kalman filter can be used for tracking and to attenuate noise, but I don't know whether it can compensate well for this kind of jitter (very small fluctuations in the values). Investigating how to incorporate a Kalman filter into this project is another topic and needs extra time.
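As a rough sketch of that post-processing idea (the alpha and dead-band values are placeholders to tune, not values from any of the papers): an exponential smoother with a small dead-band applied to the solvePnP translation vector, which is often enough to suppress the sub-pixel wobble on a stationary target.

```python
import numpy as np

class TranslationSmoother:
    """Per-frame smoothing of the solvePnP translation vector."""

    def __init__(self, alpha=0.2, deadband=0.005):
        self.alpha = alpha        # 0 < alpha <= 1; smaller = smoother but laggier
        self.deadband = deadband  # ignore changes below this magnitude (scene units)
        self.state = None

    def update(self, tvec):
        t = np.asarray(tvec, dtype=float).reshape(3)
        if self.state is None:
            self.state = t
        elif np.linalg.norm(t - self.state) < self.deadband:
            pass                  # tiny change: treat it as jitter and hold the old value
        else:
            # Blend toward the new measurement.
            self.state = (1.0 - self.alpha) * self.state + self.alpha * t
        return self.state
```

For a moving target, a constant-velocity Kalman filter (e.g. OpenCV's cv2.KalmanFilter) on the pose would react with less lag, at the cost of the extra modelling and tuning you mention.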

how to reconstruct scene from different views' point clouds

I am facing a problem in 3D reconstruction since I am new to this field. I have depth maps (point clouds) from several different views, and I want to use them to reconstruct the scene and get an effect like KinectFusion. Is there any paper or source code that addresses this problem, or any ideas on it?
PS: the point cloud is stored as a file with (x, y, z) coordinates; you can check here to get the data.
Thank you very much.
As you have stated that you are new to this field, I shall attempt to keep this high level. Please do comment if there is something that is not clear.
The pipeline you refer to has three key stages:
Integration
Rendering
Pose Estimation
The Integration stage takes the unprojected points from a Depth Map (Kinect image) under the current pose and "integrates" them into a spatial data structure (a Voxel Volume such as a Signed Distance Function or a hierarchical structure like an Octree), often by maintaining per Voxel running averages.
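As a rough illustration of that running average for a single voxel (the truncation distance and weight cap are arbitrary placeholders; real pipelines run this over the whole volume on the GPU):

```python
def integrate_voxel(tsdf, weight, measured_sdf, trunc=0.05, max_weight=100.0):
    # measured_sdf: signed distance from this voxel to the surface implied by the
    # current depth map under the current pose; clamp it to the truncation band.
    d = max(-trunc, min(trunc, measured_sdf))
    # Weighted running average of the stored value with the new measurement.
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)
    new_weight = min(weight + 1.0, max_weight)
    return new_tsdf, new_weight
```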
The Rendering stage takes the inverse pose for the current frame and produces an image of the visible parts of the model currently in view. For the common volumetric representations this is achieved by Raycasting. The output of this stage provides the points of the model to which the next live frame is registered (the next stage).
The Pose Estimation stage registers the previously extracted model points to those of the live frame. This is commonly achieved by the Iterative Closest Point algorithm.
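If you want to experiment without implementing ICP yourself, a library such as Open3D (an assumption about your toolchain, not a requirement of the pipeline) exposes it directly; a minimal sketch for aligning two of your (x, y, z) files:

```python
import numpy as np
import open3d as o3d

# Hypothetical file names; each file holds one view's points as "x y z" per line.
source = o3d.io.read_point_cloud("view_0.xyz")
target = o3d.io.read_point_cloud("view_1.xyz")
target.estimate_normals()  # point-to-plane ICP needs normals on the target

result = o3d.pipelines.registration.registration_icp(
    source, target,
    0.02,        # max correspondence distance; tune to your scene scale
    np.eye(4),   # initial guess for the pose
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

print(result.transformation)             # 4x4 transform taking source into target's frame
source.transform(result.transformation)  # aligned cloud, ready to be integrated
```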
With regards to pertinent literature, I would advise the following papers as a starting point.
KinectFusion: Real-Time Dense Surface Mapping and Tracking
Real-time 3D Reconstruction at Scale using Voxel Hashing
Very High Frame Rate Volumetric Integration of Depth Images on Mobile Devices

Dual Kinect calibration using powerful IR LED illuminator

I am using multiple Kinects within the scene, so I need to calibrate them and find the extrinsic parameters (translation and rotation) relative to a world coordinate system. Once I have that information, I can reconstruct the scene with the highest level of accuracy. The important point is that I want sub-millimeter accuracy, and it might be nice if I could use a powerful IR projector in my system. But I do not have any background in IR sensors or calibration methods, so I need to know about two things: 1. Is it possible to add an IR LED illuminator to the Kinect and manage it? 2. If so, how do I calibrate my new system?
Calibration (determining relative transforms (rotation, scale, position)) is only part of the problem. You also need to consider whether each Kinect can handle the interference of the other Kinect's projected IR reference patterns.
"Shake n Sense" (by Microsoft Research) is a novel approach that you may be able to use that has been demonstrated to work.
https://www.youtube.com/watch?v=CSBDY0RuhS4
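For the extrinsic part, one common approach (not the only one) is to show both Kinects a calibration target such as a checkerboard, compute each camera's pose relative to the board, and chain the transforms. The sketch below assumes the intrinsics (K, dist) of each camera are already known; the pattern and square sizes are placeholders.

```python
import cv2
import numpy as np

def board_to_camera(gray, K, dist, pattern=(9, 6), square=0.025):
    # Find the checkerboard corners and solve the board->camera pose with PnP.
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        raise RuntimeError("checkerboard not found")
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    _, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T  # maps board coordinates into this camera's coordinates

# Relative pose of Kinect B expressed in Kinect A's frame:
# T_a = board_to_camera(gray_a, K_a, dist_a)
# T_b = board_to_camera(gray_b, K_b, dist_b)
# T_b_to_a = T_a @ np.linalg.inv(T_b)
```

For sub-millimeter accuracy you would need to go well beyond this single-view sketch (sub-pixel corner refinement, many board poses, averaging or bundle adjustment), but it yields the rotation and translation the answer above refers to.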

servo motor s curve motion

I am trying to control an industrial AC servo motor using my XE166 device.
The controller interfaces with the servo controller using the PULSE and DIRECTION control.
To achieve jerk-free motion I have been trying to create an S-curve motion profile (motor speed vs. time).
Calculating instantaneous speed is no problem as I know the distance moved by the motor per pulse, and the pulse duration.
I need to understand how to arrive at a mathematical equation that would tell me what the nth pulse's duration should be so that the speed profile follows an S-curve.
Since this must be a common requirement in any domain involving motion control (robotics, CNC, industrial), there must be some standard reference for doing it.
Step period is the time difference between two positions one step apart on the motion curve. If the position is defined by X(T), then the step time requires the inverse function T(X), and any given step period is P = T(X+1) - T(X). On a microcontroller with limited processing power, this is usually solved with an approximation - for 2nd order constant acceleration motion, Atmel has a fantastic example using a Taylor series approximation for inter-step time (Application note AVR446).
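If I recall the note correctly, the recurrence it derives for the constant-acceleration case is simple enough to show here (a sketch, with c0, the first step's delay derived from the acceleration in the note, left as an input):

```python
def constant_accel_delays(c0, n_steps):
    # AVR446-style approximation of the inter-step delay under constant acceleration:
    # c_n ~= c_{n-1} - 2*c_{n-1} / (4*n + 1), with c0 the delay before the first step.
    delays = [c0]
    for n in range(1, n_steps):
        delays.append(delays[-1] - (2.0 * delays[-1]) / (4 * n + 1))
    return delays
```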
Another solution which works for higher order curves involves root solving. To solve T(x0), let U(T) = X(T) - x0 and solve for U(T) = 0.
For a constant acceleration curve, the quadratic formula works great (but requires a square root operation, which is usually expensive on microcontrollers). For jerk-limited motion (at minimum a 3rd-degree polynomial) the roots can be found with an iterative root-solving algorithm.
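As an illustration of that root-solving approach for a hypothetical jerk-limited ramp X(T) = j·T³/6 (the initial jerk phase of an S-curve), using Newton's method warm-started from the previous step time:

```python
def step_periods_jerk_phase(jerk, n_steps):
    # Position (in steps) during the initial jerk-limited phase: X(T) = jerk * T^3 / 6.
    # For each step count x0 = 1..n_steps, solve X(T) - x0 = 0 with Newton's method;
    # the nth pulse period is then the difference of consecutive step times.
    x = lambda t: jerk * t**3 / 6.0
    v = lambda t: jerk * t**2 / 2.0          # dX/dT, the Newton derivative

    times = []
    t = (6.0 / jerk) ** (1.0 / 3.0)          # analytic time of the first step
    for x0 in range(1, n_steps + 1):
        for _ in range(30):                  # Newton iterations, warm-started
            step = (x(t) - x0) / v(t)
            t -= step
            if abs(step) < 1e-12:
                break
        times.append(t)

    periods = [times[0]] + [b - a for a, b in zip(times, times[1:])]
    return periods

# Example: periods = step_periods_jerk_phase(jerk=2000.0, n_steps=50)
```

A full S-curve has several such segments (jerk up, constant acceleration, jerk down, and their mirror images during deceleration); the same inversion applies piecewise to each segment's polynomial.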