I am working with quaternions and Euler angles computed from a gyroscope, together with de-drifting methods. I have calculated the orientation angles with both representations. However, I noticed that when I used Euler angles with the same de-drifting method, the result was more accurate (compared with the real rotation) than the quaternions converted to Euler angles. I converted the quaternions to Euler angles only for visualization purposes.
So, my questions are:
1) Do the quaternions show a higher error because they have been transformed to Euler angles? Do they accumulate errors due to the transformation?
2) Is it possible that the quaternions produce higher errors than Euler angles because of the integration of the quaternion derivative?
q_dot = 1/2 * q ⊗ Ω (where ⊗ denotes quaternion multiplication and Ω is the gyroscope angular rate expressed as a pure quaternion)
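For reference, here is a minimal sketch of how this derivative is typically integrated from gyroscope samples (a forward-Euler step followed by renormalization); the names omega and dt are placeholders for illustration, not taken from your code. Both the first-order step and the renormalization introduce errors that are independent of any later Euler-angle conversion, which may help separate the two effects you observe.

    import numpy as np

    def quat_mult(a, b):
        # Hamilton product of quaternions given as [w, x, y, z]
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def integrate_gyro(q, omega, dt):
        # one step of q_dot = 1/2 * q ⊗ Ω, with Ω = [0, wx, wy, wz]
        omega_q = np.array([0.0, omega[0], omega[1], omega[2]])
        q_dot = 0.5 * quat_mult(q, omega_q)
        q_new = q + q_dot * dt                 # discretization error enters here
        return q_new / np.linalg.norm(q_new)   # renormalize to unit length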
So I have a dataset that represents the x-y-z coordinates of a linear motion. The dataset is noisy, and I am trying to extract a smooth trajectory, as close as possible to the real trajectory, by fitting a Bézier curve with each data point as a control point. The results are only moderately satisfying, so I was wondering whether pre-filtering with a low-pass Butterworth filter would give better results.
In general, is it useful to combine a smoothing technique such as Bézier curves or cubic smoothing splines with low-pass filtering?
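For what it's worth, here is a minimal sketch of that combination, assuming SciPy is available and the samples are uniformly spaced; the cutoff frequency, filter order and smoothing factor are made-up illustration values, not recommendations:

    import numpy as np
    from scipy.signal import butter, filtfilt
    from scipy.interpolate import splprep, splev

    def smooth_trajectory(xyz, fs, cutoff_hz=2.0, order=4, smoothing=1.0):
        # xyz: noisy samples of shape (N, 3), fs: sampling rate in Hz
        # 1) zero-phase low-pass Butterworth pre-filter, applied per axis
        b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")
        filtered = filtfilt(b, a, xyz, axis=0)
        # 2) parametric smoothing spline through the pre-filtered points
        tck, u = splprep(filtered.T, s=smoothing)
        return np.array(splev(u, tck)).T       # smoothed trajectory, shape (N, 3)

In principle the pre-filter targets high-frequency measurement noise while the spline handles the geometric smoothing, so the two steps address different artifacts rather than duplicating each other.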
I use TensorFlow 1.12
I would like to take a batch of feature maps of shape [B, H, W, C] and convolve each channel with itself.
This is probably possible with tf.map_fn, but I would like to keep the operation as vectorized as possible.
What is the best vectorized way of achieving this?
Each channel is an image. Convolving an image with another image (itself, in this case) is most efficiently implemented in the Fourier domain using the convolution theorem, which states that the convolution of two images is the same as the inverse Fourier transform of the element-wise product of their Fourier transforms. Breaking that into steps:
Optionally, pad the images with zeros (so the circular FFT convolution matches a linear convolution).
Fourier transform both images.
Calculate the element-wise product of the Fourier transforms.
Inverse Fourier transform the product.
Both images being the same is a special case: the product is simply the square of a single Fourier transform.
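A rough sketch of those steps in TensorFlow 1.12 (tf.fft2d works on the two innermost dimensions, so the channels are moved next to the batch dimension first, and the spatial dimensions are zero-padded so the circular FFT convolution matches a linear one); treat this as an outline under those assumptions rather than a drop-in implementation:

    import tensorflow as tf

    def self_convolve_channels(x):
        # x: float tensor of shape [B, H, W, C]
        h = tf.shape(x)[1]
        w = tf.shape(x)[2]
        x = tf.pad(x, [[0, 0], [0, h], [0, w], [0, 0]])   # zero-pad spatially
        x = tf.transpose(x, [0, 3, 1, 2])                 # [B, C, 2H, 2W]
        spec = tf.fft2d(tf.cast(x, tf.complex64))
        conv = tf.ifft2d(spec * spec)                     # self-convolution: square the spectrum
        return tf.transpose(tf.real(conv), [0, 2, 3, 1])  # back to [B, 2H, 2W, C]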
I have a matrix and I want to decompose it into several matrices ranging from low- to high-frequency content. As far as I can tell, this can be done using the wavelet transform. I found something like the figure below for a 1D signal, and I want to do a similar procedure for my 2D matrix using MATLAB: decompose it into matrices containing low- to high-frequency components at different levels.
I used the Wavelet Toolbox; however, I have problems extracting the data.
How can I do this using MATLAB?
You are looking for the wavedec2 function.
There's a basic example in the function documentation here.
I have a classification function that classifies a data point into one of two classes. The problem is that I need a way to plot the decision boundary between the two classes. While this is easy for a linear function, it is cumbersome to derive the equation of the boundary in general. The ezplot function in MATLAB seems able to do it: it plots the result automatically, works for linear and quadratic functions, and does not require you to provide the coordinates. In matplotlib, you can only plot if you are given the coordinates. Does anyone know how to do this with matplotlib?
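One common matplotlib approach is to evaluate the classifier on a dense grid and let contour trace the zero level set; in this sketch, decision_function is a hypothetical stand-in for your classifier, assumed to return a signed score (positive for one class, negative for the other):

    import numpy as np
    import matplotlib.pyplot as plt

    def decision_function(x, y):
        # placeholder for the real classifier's signed score
        return x**2 + 0.5 * y**2 - 1.0      # example: a quadratic (elliptical) boundary

    # evaluate the score on a grid covering the region of interest
    xx, yy = np.meshgrid(np.linspace(-3, 3, 400), np.linspace(-3, 3, 400))
    zz = decision_function(xx, yy)

    # the decision boundary is the zero level set of the score
    plt.contour(xx, yy, zz, levels=[0.0], colors="k")
    plt.show()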
In a Java3D application I have two planes. How can I find out whether they intersect, and if they do, what is the angle between the planes? And how can I find the direction vector of their intersection line?
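In case it helps, the underlying vector math is compact; here is a language-agnostic sketch in Python/NumPy (the same dot and cross product operations exist on Java3D's Vector3f/Vector3d), assuming each plane is represented by its normal vector:

    import numpy as np

    def plane_relation(n1, n2):
        # n1, n2: plane normals (need not be unit length)
        n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
        direction = np.cross(n1, n2)            # direction of the intersection line
        if np.linalg.norm(direction) < 1e-9:    # parallel normals: planes are parallel or identical
            return None, None
        cos_a = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
        angle = np.arccos(np.clip(cos_a, -1.0, 1.0))   # dihedral angle between the planes
        return angle, direction / np.linalg.norm(direction)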