Working with Accelerometer - acceleration

I am working on gestures using acceleration values (x, y, z) from a device.
If I hold the device in my hand in a resting position, (x, y, z) = (0, 0, 0). But if I change the orientation of the device (still at rest), the values change to something like (766, 766, 821), since all of the x, y, z axes have changed relative to their original orientation.
Is there any way (trigonometric function OR other) to resolve this issue?

The acceleration due to gravity will always be present. It appears you are subtracting that value from one of the axes when the device is in a particular orientation.
What you will need to do to detect gestures is to detect the tiny difference from the acceleration due to gravity that momentarily appears as the device begins moving. You won't be able to tell whether the device is stationary or moving at a constant velocity, but you will be able to determine whether it is turning or being accelerated.
The (x, y, z) values give you a vector, which gives the direction of the acceleration. You can compute the (square of the) length of this vector as x^2 + y^2 + z^2. If this is the same as when the device is at rest, then you know the device is unaccelerated, but in a certain orientation. (Either at rest, or moving at a constant velocity.)
To detect movement, you need to notice the momentary change in the length of this vector as the device begins to move, and again when it is brought to a stop. This change will likely be tiny compared to gravity.
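As a concrete illustration, here is a minimal Python sketch of that check; the rest magnitude and threshold are hypothetical values you would calibrate for your own device:

    import math

    # Hypothetical calibration constant: the vector length measured while
    # the device is at rest (i.e. 1 g in your sensor's units).
    REST_MAGNITUDE = 9.81

    def is_accelerating(x, y, z, threshold=0.2):
        # x, y, z are readings with the zero offsets already subtracted.
        # The threshold is device-dependent and needs tuning, since the
        # change relative to gravity can be tiny.
        magnitude = math.sqrt(x * x + y * y + z * z)
        return abs(magnitude - REST_MAGNITUDE) > threshold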
You will need to compare the orientation of the acceleration vector during the movement to determine the direction of the motion. Note that you won't be able to distinguish every gesture. For example, moving the device forward (and stopping there) has the same effect as tilting the device slightly, and then bringing it back to the same orientation.
The easier gestures to detect are those which change the orientation of the device. Other gestures, such as a punching motion, will be harder to detect. They will show up as a change in the length of the acceleration vector, but the amount of change will likely be tiny.
EDIT:
The above discussion is for normalized values of x, y, and z. You will need to determine the values to subtract from the readings to get the vector. From a comment above, it looks like 766 is the "zero" value to subtract, but it might differ between the axes on your device. Measure the readings with the device oriented in all six directions; that is, get the maximum and minimum values for x, y, and z. The central values should be halfway between the extremes (and hopefully 766).
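A small Python sketch of that calibration, with made-up extreme readings standing in for measurements from your device:

    def zero_offsets(min_reading, max_reading):
        # min_reading / max_reading are (x, y, z) tuples collected by
        # holding the device in all six orientations, so each axis sees
        # both +1 g and -1 g. The midpoint of each pair is that axis's
        # zero value.
        return tuple((lo + hi) / 2.0 for lo, hi in zip(min_reading, max_reading))

    # Example with invented extremes; the offsets come out to (766, 766, 766).
    offsets = zero_offsets((711, 710, 712), (821, 822, 820))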
Certain gestures will have telltale signatures.
Dropping the device will reduce the acceleration vector momentarily, then increase it momentarily as the device is brought to a stop.
Raising the device will increase the vector momentarily, before decreasing it momentarily.
A forward motion will increase the vector momentarily, but tilt it slightly forward, then increase it again momentarily, but tilted backward, as the device is brought to a stop.
Most of the time the length of the vector will equal the acceleration due to gravity.

If the device is not automatically compensating for gravitational acceleration, you need to subtract the (0, 0, ~9.8 m/s²) vector from the output of the device.
However, you will also need the orientation of the device (Euler angles or a rotation matrix). If your device isn't providing that, it's basically impossible to tell whether the signaled acceleration is caused by actually moving the device (linear acceleration) or by simply rotating it (gravity changing direction).
Your compensated acceleration will become:
OutputAcc = InputAcc x RotMat - (0, 0, 9.8)
This way your OutputAcc vector will always be in an earth-fixed coordinate frame (i.e. Z is always up).
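A minimal numpy sketch of that compensation, written with the column-vector convention (rotation matrix applied on the left); the names and the sign of gravity are assumptions to adapt to your own conventions:

    import numpy as np

    GRAVITY = np.array([0.0, 0.0, 9.8])  # sign depends on your convention

    def compensate(input_acc, rot_mat):
        # input_acc: (3,) accelerometer reading in the device (body) frame.
        # rot_mat:   3x3 rotation matrix taking body coordinates to world
        #            coordinates (e.g. built from the device's Euler angles).
        return rot_mat @ np.asarray(input_acc) - GRAVITY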

I find your question unclear. What exactly do you measure and what do you expect?
In general, an accelerometer held in a fixed position will measure the gravity of the earth. This is reported as acceleration upwards, which might sound strange at first but is completely correct: gravity accelerates things "down", and since the device is held in a fixed position, a force in the opposite direction, i.e. "up", must be applied. The force you apply to hold the device in place corresponds to an acceleration in the "up" direction.
Depending on your device, this gravity acceleration might be subtracted before the values reach the PC. But if you turn the accelerometer, the gravity acceleration is still present and still points in the same "up" direction. If "up" corresponded to the x axis before turning the accelerometer, it will correspond to a different axis, say y, after a 90° turn. Thus the measured acceleration on both the x and y axes will change.
So to answer your question it's necessary to know how your accelerometer presents the values. I doubt that in a resting position the acceleration values measured are (0, 0, 0).

Your comment makes your question clearer. What you need to do is calibrate your accelerometer every time the orientation changes. There is no getting around this. You could make it a UI element in your application, or, if it fits your use case, recalibrate to zero whenever the acceleration has been roughly constant for some amount of time (this won't work if you need to measure long, sustained accelerations).
Calibration is either built into the device's API (check the documentation) or something you have to do manually. To do it manually, read the current acceleration while the device is at rest and store those 3 values. Then, whenever you take a reading from the device, subtract the stored values from the corresponding readings.
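Sketched in Python, with read_raw standing in for whatever read function your device's API actually provides:

    class CalibratedAccelerometer:
        def __init__(self, read_raw):
            self.read_raw = read_raw          # returns a raw (x, y, z) tuple
            self.offsets = (0.0, 0.0, 0.0)

        def calibrate(self):
            # Call while the device is at rest in its new orientation.
            self.offsets = self.read_raw()

        def read(self):
            return tuple(v - o for v, o in zip(self.read_raw(), self.offsets))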

Related

How to change gravity axis/direction in Blender

I am making a video game in the Blender Game Engine, and want to be able to adjust the direction in which gravity pulls objects. I can change the scene's Z gravity in a script, but that's one-dimensional, along the Z axis only.
I would be fine with the ability to set X, Y, and Z gravity, or an easy way to make everything rotate at the same time around the origin (or an arbitrary point).
I could also build such a system if I could have a plane exert gravity or a force field, and have it rotate around the center at a set distance (the entire game world is encased in a sphere).
Basically, I want to be able to, from a python script, cause a force on all dynamic objects, automatically and without . How can I do this?
I think you're looking for bge.constraints.setGravity(x, y, z) to set the gravity, and bge.logic.getCurrentScene().gravity to get the current gravity. From that you can calculate the appropriate rotation for your objects.
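A minimal sketch of pointing gravity along an arbitrary direction, assuming the script runs inside the game engine where the bge module is available:

    import math
    from bge import constraints

    def set_gravity_direction(direction, magnitude=9.81):
        # Normalize the (x, y, z) direction, then scale it to the
        # desired gravity strength.
        length = math.sqrt(sum(c * c for c in direction))
        gx, gy, gz = (c * magnitude / length for c in direction)
        constraints.setGravity(gx, gy, gz)

    # Example: gravity pulling along +X instead of -Z.
    set_gravity_direction((1.0, 0.0, 0.0))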

Remove gravity from IMU accelerometer

I've found this beautiful quick way to remove gravity from accelerometer readings. However, I have a 6dof IMU (xyz gyro, xyz accel, no magnetometer) so I am not sure if I can use this code (I tried and it doesn't work correctly).
How would someone remove the gravity component? It's a big obstacle because I can't proceed with my project.
EDIT:
What I have:
a quaternion describing the orientation of the aircraft (obtained with an Extended Kalman Filter)
acceleration sensor readings (unfiltered; axes aligned with the plane; gravity is also incorporated in these readings)
What I want:
remove the gravity component
correct (rotate) the accelerometer readings so that their axes are aligned with the earth frame's axes
read the acceleration towards the earth (now the Z component of the accelerometer)
Basically I want to read the acceleration towards the earth no matter how the plane is oriented! But the first step is to remove gravity, I guess.
UPDATE: OK, so what you need is to rotate a vector with a quaternion. See here or here.
You rotate the measured acceleration vector with the quaternion (corresponding to the orientation), then you subtract gravity [0, 0, 9.81] (you may have -9.81 depending on your sign conventions) from the result. That's all.
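A small numpy sketch of exactly that, with the rotation written out by hand; it assumes a unit quaternion in (w, x, y, z) order mapping body coordinates to the earth frame:

    import numpy as np

    def quat_rotate(q, v):
        # Rotate vector v by unit quaternion q = (w, x, y, z) using
        # v' = v + 2*u x (u x v + w*v), where u is the vector part of q.
        w, x, y, z = q
        u = np.array([x, y, z])
        v = np.asarray(v, dtype=float)
        return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

    def linear_acceleration(q_body_to_earth, acc_body):
        # Rotate the reading into the earth frame, then subtract gravity
        # (flip the sign if your conventions differ).
        return quat_rotate(q_body_to_earth, acc_body) - np.array([0.0, 0.0, 9.81])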
I have implemented sensor fusion for Shimmer 2 devices based on this manuscript, and I highly recommend it. It uses only accelerometers and gyroscopes, no magnetometer, and does exactly what you are looking for.
The resource you link to in your question is misleading. It relies on the quaternion that comes from sensor fusion; in other words, somebody has already done the heavy lifting and prepared the gravity compensation for you.

How to calibrate a camera and a robot

I have a robot and a camera. The robot is just a 3D printer where I replaced the extruder with a tool, so it doesn't print, but it moves every axis independently. The bed is transparent, and below the bed there is a camera that never moves. It is just a normal webcam (a PlayStation Eye).
I want to calibrate the robot and the camera, so that when I click on a pixel in an image provided by the camera, the robot will go there. I know I could measure the translation and rotation between the two frames, but that would probably introduce a lot of error.
So that's my question: how can I relate the camera and the robot? The camera is already calibrated using chessboards.
To make everything easier, the Z axis can be ignored, so the calibration is only over X and Y.
It depends on what error is acceptable for you.
We have a similar setup, with a camera looking at a plane on which an object can be moved.
We assume that the image and the plane are parallel.
First, let's calculate the rotation. Put the tool in a position where you can see it at the center of the image, move it along one axis, and select the point on the image that corresponds to the new tool position.
Those two points give you a vector in the image coordinate system.
The angle between this vector and the original image axis gives you the rotation.
The scale can be calculated in a similar way: knowing the vector's length (in pixels) and the distance between the tool positions (in mm or cm) gives you the scale factor between the image axes and the real-world axes.
If this method doesn't provide enough accuracy, you can calibrate the camera for distortion and for its position relative to the plane using computer vision techniques, which is more complicated.
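Here is a rough Python sketch of that procedure, treated as a 2D similarity transform fitted from the two corresponding points; all names are made up, and it inherits the assumption that the image and the bed plane are parallel:

    import math

    def fit_pixel_to_robot(img_a, img_b, tool_a, tool_b):
        # img_a/img_b:   pixel positions of the tool at two poses.
        # tool_a/tool_b: the corresponding robot X/Y positions.
        dxi, dyi = img_b[0] - img_a[0], img_b[1] - img_a[1]
        dxt, dyt = tool_b[0] - tool_a[0], tool_b[1] - tool_a[1]
        angle = math.atan2(dyt, dxt) - math.atan2(dyi, dxi)   # rotation
        scale = math.hypot(dxt, dyt) / math.hypot(dxi, dyi)   # mm per pixel
        c, s = math.cos(angle), math.sin(angle)

        def pixel_to_robot(px, py):
            # Rotate and scale the pixel offset from the first reference point.
            rx, ry = px - img_a[0], py - img_a[1]
            return (tool_a[0] + scale * (c * rx - s * ry),
                    tool_a[1] + scale * (s * rx + c * ry))

        return pixel_to_robot

One caveat: if the camera image happens to be mirrored relative to the bed, two points cannot detect the flip, so a third reference point would be needed.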
See the following links
http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
http://dasl.mem.drexel.edu/~noahKuntz/openCVTut10.html

Proper rotation with changing x/y coordinates

I'm trying to make a little archer game, and the problem I'm having has to do with 2 pixels in particular; I'll call them _arm and _arrow. When a real archer is pulling back an arrow, he doesn't immediately pull the arrow back as far as his strength allows; the arrow takes a little bit of time to be pulled back.
The _arm's angle is equal to the angle of the vector from a fixed point to where the user touched the screen. The rotation is perfect, so the _arm is good. The _arrow needs to be on the same line as the _arm; they are 1 pixel wide each, so it looks as though the _arrow is exactly on top of the _arm.
I tried to decrement the x/y coordinates based on a variable that changes with time, after setting the _arrow's location equal to the _arm's location, to make it look like the _arrow was being pulled back. However, once rotated, the x/y offsets are wrong because they are not proportional along the x and y axes, so the _arrow ends up slightly above or below the _arm depending on the angle of the touch vector.
How can I use the x/y position of the _arm and the touch vector to make the arrow appear to be pulled back by a small amount, while keeping the arrow on the _arm's line so that its position stays on top of the _arm pixel at all times? If you need any more info, just leave a comment.
I'm not sure I've fully understood, but I'll have a go at answering anyway:
To make the arrow move and rotate to the same place as the arm, consider adding the arrow as a child of the arm. You can still render it behind the arm if you like by giving it a z value less than zero: [arm addChild:arrow z:-1]
To then make the arrow move away from the arm as the bow is drawn, you just set the position of the arrow with respect to the arm.
The problem I do see with this solution, however, is that this grouping of the sprites may be a little unusual after the arrow leaves the bow. At that point you probably don't want the arrow to be a child of the arm, since the coordinate systems are no longer related.
Even though they're sure what I suggested "would have solved [the] problem", here is the
Poster's solution
I had to get the x and y coordinates of the arm based on its angle, then I got the sin/cos of a number based on that same angle and subtracted from the arm's coordinates.
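That approach, sketched in Python; the pull_distance parameter is the time-based variable from the question:

    import math

    def arrow_position(arm_x, arm_y, angle, pull_distance):
        # Offset the arrow backwards along the arm's own angle, so it
        # stays on the arm's line at every rotation.
        return (arm_x - pull_distance * math.cos(angle),
                arm_y - pull_distance * math.sin(angle))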

Corona SDK and moving objects

I have shapes (rectangles) in my game and want to implement something like this:
when a shape is pressed for a short time and pushed in some direction, it should move a small distance, but when it is pressed for a longer time, it should move a large distance (that is, the distance it travels when thrown should be relative to the "pressure" applied to the shape).
Regards
You can break the problem into two pieces:
While the object is being pressed, it accelerates (so the longer it is pressed the greater the speed it gets up to).
As it travels, it decelerates at a constant rate (so the faster it's going at the beginning, the longer it keeps moving, and the farther it moves before it stops).
Now all you have to do is implement velocity and acceleration, mapping the press to acceleration and the coasting to drag.
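Corona itself is scripted in Lua, but the per-frame logic is the same in any language; here is a minimal Python sketch with made-up constants:

    ACCELERATION = 400.0   # gain while pressed (units/s^2); tune to taste
    DRAG = 200.0           # constant deceleration while coasting

    velocity = 0.0
    pressed = False

    def on_frame(dt):
        # Call once per frame with the elapsed time dt in seconds;
        # returns the distance the shape should move this frame.
        global velocity
        if pressed:
            velocity += ACCELERATION * dt
        else:
            velocity = max(0.0, velocity - DRAG * dt)
        return velocity * dt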
If this approach doesn't give the appearance you want, there are ways to modify it.