Can anyone please help me understand how to form an orientation quaternion from raw accelerometer, gyroscope, and magnetometer data? Also, as with Euler angles, does the order of rotation have an effect on the orientation quaternion, since it has an angle component to it?
TLDR: I want to rotate the camera but render sprites according to their world position, not the camera position.
Howdy,
I'm currently using LibGDX and have come across an issue regarding camera/object rotation.
Say I have my camera with a rotation of 0 and an object (sprite) to the left of the camera's center.
i.e.
Camera Normal (0 degrees rotation)
The sprite renders fine when given a standard world coordinate; however, once I rotate my camera, that world coordinate differs from the camera's new (x, y) values.
If I then rotate my camera smoothly 90 degrees to the right (clockwise, so that the up direction faces right, like the picture below), the object (sprite) that used to be on the left should appear to rotate left relative to the camera (the rotation happens via the camera; the sprite just needs its rendered position updated) and end up below the camera's center point.
i.e.
Camera Rotated (90 degrees clockwise)
I'm confused as to how I would calculate the sprite's new position during the smooth rotation.
Cheers,
Solist.
After looking everywhere for a solution to this problem for three weeks, it turned out I merely needed to call the method
batch.setProjectionMatrix(camera.combined);
in order to update the sprites' rendered positions as the camera rotation changes.
This link explains how the projection matrix works.
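For anyone hitting the same wall, here is a minimal sketch of how that call fits into a LibGDX render loop. The asset name, viewport size, sprite position, and rotation speed are made-up values for illustration:

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class RotatingCameraDemo extends ApplicationAdapter {
    private OrthographicCamera camera;
    private SpriteBatch batch;
    private Sprite sprite;

    @Override
    public void create() {
        camera = new OrthographicCamera(800, 480);
        batch = new SpriteBatch();
        sprite = new Sprite(new Texture("sprite.png")); // placeholder asset
        sprite.setPosition(-200, 0); // fixed world coordinates, left of the camera's center
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

        camera.rotate(90f * Gdx.graphics.getDeltaTime()); // turn the camera smoothly
        camera.update(); // recompute camera.combined from the new orientation

        // The key line: give the batch the fresh view-projection matrix so
        // world coordinates are mapped through the camera's current rotation.
        batch.setProjectionMatrix(camera.combined);

        batch.begin();
        sprite.draw(batch); // no manual position math needed
        batch.end();
    }
}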
Below you can see the result of the optical flow when a camera makes a translational movement. If the camera makes a roll rotation, the result looks like the second picture. Is it possible to retrieve the yaw angle of a camera if it only rotates around the yaw axis?
I think you can recognize from the optical flow whether the camera is rotating around the yaw axis (z-axis), but I don't know how to retrieve how much the camera has rotated.
I would be grateful for any hints. Thanks
Translation:
Roll rotation:
Orientation of camera:
If you have a pure rotation of your camera, you can use findHomography. You need four point correspondences between your pictures. For a pure rotation, the homography matrix is already a rotation matrix; otherwise you need to decompose the homography matrix. For a camera movement with 6 DoF, you can use findEssentialMat and decompose the result into translation and rotation.
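To make that concrete, here is a hedged sketch using OpenCV's Java bindings, assuming the rotation is purely about the optical axis (what the question calls the yaw/z-axis), so the image undergoes an in-plane rotation and the upper-left 2x2 of the homography is approximately a 2D rotation. The point correspondences below are placeholders for real tracked features:

import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;

public class YawFromHomography {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // At least four point correspondences between the two frames;
        // these coordinates stand in for real feature matches.
        MatOfPoint2f src = new MatOfPoint2f(
                new Point(100, 100), new Point(400, 100),
                new Point(400, 300), new Point(100, 300));
        MatOfPoint2f dst = new MatOfPoint2f(
                new Point(112, 95), new Point(410, 120),
                new Point(388, 318), new Point(90, 295));

        Mat H = Calib3d.findHomography(src, dst);

        // For an in-plane rotation by angle t, H is approximately
        // [[cos t, -sin t, tx], [sin t, cos t, ty], [0, 0, 1]], so:
        double yaw = Math.atan2(H.get(1, 0)[0], H.get(0, 0)[0]);
        System.out.printf("rotation about the optical axis: %.1f deg%n",
                Math.toDegrees(yaw));
    }
}

For full 6-DoF motion you would instead work from findEssentialMat and, for example, recoverPose to split the result into rotation and translation, as described above.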
Here's the task:
We have a mesh, drawn at position POS with rotation ROT.
We also have a camera whose position and rotation are relative to the mesh; for example, the camera position is CPOS and the camera rotation is CROT.
How do I calculate the resulting angle for the camera? I assumed it was something like:
camera.rotation.x = mesh.rotation.x + viewport.rotation.x
camera.rotation.y = mesh.rotation.y + viewport.rotation.y
camera.rotation.z = mesh.rotation.z + viewport.rotation.z
That behaved strangely and gave wrong results.
Then I decided to read about it in the docs and was completely disappointed.
There are several kinds of rotation structures (Euler, quaternion), but what I want is something different.
Imagine you are on a spaceship moving through space. You are sitting at the starboard turret, looking at objects; they seem to pass by...
Then you want to turn your head; the angle of your head is known to you (in raw OpenGL, I'd just multiply the head rotation matrix by the ship's rotation matrix and get my projection matrix).
In other words, I want only x- and y-axis camera rotations, combined into a matrix. Then I want to multiply it by the position-rotation matrix of an object, and this final matrix would be my projection matrix.
How could I do the same in THREE.js?
-----EDIT-----
Thank you for the answer.
Which coordinates should I give the camera? Should they be local, mesh-relative coordinates, or something absolute?
I understand that these questions seem obvious, but there isn't any description of relative objects in the THREE.js docs (besides the API reference), and the answer might be ambiguous.
Add the camera as a child of the mesh like so:
mesh.add( camera );
When the camera is a child of an object, the camera's position and orientation are specified relative to the parent object.
You can set the camera's orientation by setting either the camera's quaternion or Euler rotation -- your choice.
Please note that the renderer updates the object's matrix and matrixWorld for you. You do not need to do that manually.
three.js r.63
I'm doing some testing on gradients using NSBezierPath and have made some progress so far with radial gradients; see the first picture. I wonder, however, if it is possible to make angular gradients as in picture 2.
Anyone done this?
Tia, Ronald
Cocoa has no public API for angular gradients. You'll have to do it yourself by painting pixels in varying colors.
Good question!
There is no standard feature for this.
Try to approximate it by dividing the circle into sectors that function as clipping polygons.
Then draw a linear gradient at 90 degrees to the center line of each sector, as sketched below.
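The sector idea itself is framework-independent. Cocoa drawing aside, here is a hedged illustration of the same approximation in Java2D (which likewise has no built-in angular gradient): fill thin pie-slice sectors, each painted with a linear gradient running across the wedge, i.e. at 90 degrees to its center line. Image size, sector count, and the hue-wheel coloring are arbitrary choices:

import java.awt.*;
import java.awt.geom.*;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class AngularGradientDemo {
    public static void main(String[] args) throws Exception {
        int size = 400, sectors = 64;
        double cx = size / 2.0, cy = size / 2.0, r = size / 2.0;

        BufferedImage img = new BufferedImage(size, size, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);

        for (int i = 0; i < sectors; i++) {
            double a0 = 2 * Math.PI * i / sectors;
            double a1 = 2 * Math.PI * (i + 1) / sectors;

            // Pie-slice path for this sector (the "clipping polygon";
            // filling it directly has the same effect as clipping to it).
            Arc2D wedge = new Arc2D.Double(cx - r, cy - r, 2 * r, 2 * r,
                    Math.toDegrees(-a1), Math.toDegrees(a1 - a0), Arc2D.PIE);

            // Linear gradient across the wedge, perpendicular to its center
            // line, blending this sector's start color into its end color.
            Point2D p0 = new Point2D.Double(cx + r * Math.cos(a0), cy + r * Math.sin(a0));
            Point2D p1 = new Point2D.Double(cx + r * Math.cos(a1), cy + r * Math.sin(a1));
            Color c0 = Color.getHSBColor((float) (a0 / (2 * Math.PI)), 1f, 1f);
            Color c1 = Color.getHSBColor((float) (a1 / (2 * Math.PI)), 1f, 1f);
            g.setPaint(new LinearGradientPaint(p0, p1,
                    new float[]{0f, 1f}, new Color[]{c0, c1}));
            g.fill(wedge);
        }
        g.dispose();
        ImageIO.write(img, "png", new File("angular-gradient.png"));
    }
}

With enough sectors the per-sector gradients blend into a smooth sweep; the same loop translates directly to NSBezierPath wedges with NSGradient fills.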
I've found this beautifully quick way to remove gravity from accelerometer readings. However, I have a 6-DoF IMU (xyz gyro, xyz accel, no magnetometer), so I am not sure if I can use this code (I tried, and it doesn't work correctly).
How would someone remove the gravity component? It's a big obstacle because I can't proceed with my project.
EDIT:
What I have:
a quaternion describing the orientation of the aircraft (obtained using an extended Kalman filter)
acceleration sensor readings (unfiltered; axes aligned with the plane's body frame; gravity is also incorporated in these readings)
What I want:
remove the gravity
correct (rotate) the accelerometer readings so that their axes are aligned with the axes of the earth's frame of reference
read the acceleration toward earth (now the Z component of the accelerometer)
Basically, I want to read the acceleration toward earth no matter how the plane is oriented! But the first step is to remove gravity, I guess.
UPDATE: OK, so what you need is to rotate a vector by a quaternion. See here or here.
You rotate the measured acceleration vector by the quaternion (corresponding to the orientation), then you subtract gravity [0, 0, 9.81] (you may have -9.81 depending on your sign conventions) from the result. That's all.
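A minimal sketch of those two steps in plain Java. Assumptions to check against your own conventions: the quaternion is stored as (w, x, y, z) and rotates body-frame vectors into the earth frame (use its conjugate if yours goes the other way), and the earth z-axis points up:

public class GravityCompensation {

    /** Rotate vector v by unit quaternion q = (w, x, y, z): v' = q v q*. */
    static double[] rotate(double[] q, double[] v) {
        double w = q[0], x = q[1], y = q[2], z = q[3];
        // Expanded form of the sandwich product q * (0, v) * conjugate(q).
        return new double[] {
            (1 - 2*y*y - 2*z*z) * v[0] + (2*x*y - 2*w*z) * v[1] + (2*x*z + 2*w*y) * v[2],
            (2*x*y + 2*w*z) * v[0] + (1 - 2*x*x - 2*z*z) * v[1] + (2*y*z - 2*w*x) * v[2],
            (2*x*z - 2*w*y) * v[0] + (2*y*z + 2*w*x) * v[1] + (1 - 2*x*x - 2*y*y) * v[2]
        };
    }

    public static void main(String[] args) {
        // Example orientation from the EKF: a 45-degree roll about x.
        double[] q = { 0.9239, 0.3827, 0.0, 0.0 };
        // A stationary accelerometer in that attitude reads gravity split
        // between its y and z axes (placeholder numbers, body frame).
        double[] accelBody = { 0.0, 6.94, 6.94 };

        double[] accelEarth = rotate(q, accelBody); // now earth-aligned axes
        accelEarth[2] -= 9.81; // remove gravity; use +9.81 if your z points down

        // Z is now the linear acceleration toward/away from the earth;
        // for a stationary sensor it should come out near zero.
        System.out.printf("linear accel (earth frame): %.2f %.2f %.2f%n",
                accelEarth[0], accelEarth[1], accelEarth[2]);
    }
}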
I have implemented sensor fusion for Shimmer 2 devices based on this manuscript, which I highly recommend. It uses only accelerometers and gyroscopes (no magnetometer) and does exactly what you are looking for.
The resource you link to in your question is misleading. It relies on the quaternion that comes from sensor fusion. In other words, somebody has already done the heavy lifting for you and prepared the gravity compensation.