MPU9150Lib and gimbal lock?

I'm currently using the MPU9150Lib on my Arduino project to read data from my IMU (an MPU9150, of course). It works well so far, but I sometimes notice small glitches. I think it's a gimbal lock problem. When I look at the library, it seems they fuse the data and afterwards simply convert the resulting Euler angles to a quaternion. Could this cause gimbal lock?
If so, could you help me rewrite the code? I don't really understand the math behind the data fusion, so I don't know how to correct the library.
You can find it here:
https://github.com/zarthcode/MPU9150Lib/tree/master/libraries/MPU9150Lib
The library contains the following function. I'm not sure whether the problem lies within this function, but it looks strange to me, since at the end it converts Euler angles to a quaternion.
void MPU9150Lib::dataFusion()
{
    float qMag[4];
    float deltaDMPYaw, deltaMagYaw;
    float newMagYaw, newYaw;
    float temp1[4], unFused[4];
    float unFusedConjugate[4];

    // *** NOTE *** pitch direction swapped here
    m_fusedEulerPose[VEC3_X] = m_dmpEulerPose[VEC3_X];
    m_fusedEulerPose[VEC3_Y] = -m_dmpEulerPose[VEC3_Y];
    m_fusedEulerPose[VEC3_Z] = 0;
    MPUQuaternionEulerToQuaternion(m_fusedEulerPose, unFused);  // create a new quaternion

    deltaDMPYaw = -m_dmpEulerPose[VEC3_Z] + m_lastDMPYaw;       // calculate change in yaw from dmp
    m_lastDMPYaw = m_dmpEulerPose[VEC3_Z];                      // update that

    qMag[QUAT_W] = 0;
    qMag[QUAT_X] = m_calMag[VEC3_X];
    qMag[QUAT_Y] = m_calMag[VEC3_Y];
    qMag[QUAT_Z] = m_calMag[VEC3_Z];

    // Tilt compensate mag with the unfused data (i.e. just roll and pitch with yaw 0)
    MPUQuaternionConjugate(unFused, unFusedConjugate);
    MPUQuaternionMultiply(qMag, unFusedConjugate, temp1);
    MPUQuaternionMultiply(unFused, temp1, qMag);

    // Now fuse this with the dmp yaw gyro information
    newMagYaw = -atan2(qMag[QUAT_Y], qMag[QUAT_X]);

    if (newMagYaw != newMagYaw) {                               // check for nAn
#ifdef MPULIB_DEBUG
        Serial.println("***nAn\n");
#endif
        return;                                                 // just ignore in this case
    }
    if (newMagYaw < 0)
        newMagYaw = 2.0f * (float)M_PI + newMagYaw;             // need 0 <= newMagYaw <= 2*PI

    newYaw = m_lastYaw + deltaDMPYaw;                           // compute new yaw from change

    if (newYaw > (2.0f * (float)M_PI))                          // need 0 <= newYaw <= 2*PI
        newYaw -= 2.0f * (float)M_PI;
    if (newYaw < 0)
        newYaw += 2.0f * (float)M_PI;

    deltaMagYaw = newMagYaw - newYaw;                           // compute difference

    if (deltaMagYaw >= (float)M_PI)
        deltaMagYaw = deltaMagYaw - 2.0f * (float)M_PI;
    if (deltaMagYaw <= -(float)M_PI)
        deltaMagYaw = (2.0f * (float)M_PI + deltaMagYaw);

    newYaw += deltaMagYaw / 4;                                  // apply some of the correction

    if (newYaw > (2.0f * (float)M_PI))                          // need 0 <= newYaw <= 2*PI
        newYaw -= 2.0f * (float)M_PI;
    if (newYaw < 0)
        newYaw += 2.0f * (float)M_PI;

    m_lastYaw = newYaw;

    if (newYaw > (float)M_PI)
        newYaw -= 2.0f * (float)M_PI;

    m_fusedEulerPose[VEC3_Z] = newYaw;                          // fill in output yaw value
    MPUQuaternionEulerToQuaternion(m_fusedEulerPose, m_fusedQuaternion);
}

I was able to fix the problem. I found another library that was uploaded for the SparkFun breakout board of the MPU9150. I couldn't get that one to work properly either, but for other reasons: the raw data was very noisy and I couldn't fix it by tweaking the options. But the example that comes with that library includes a data fusion function. I copied it into the MPU9150Lib and adjusted the variables. So basically I now use the MPU9150Lib with the data fusion algorithm from the SparkFun breakout board, and everything works like a charm.
You can find the SparkFun library here:
https://github.com/sparkfun/MPU-9150_Breakout/tree/master/firmware/MPU6050/Examples
The example in there is what I was looking for all along: a well-documented way to use the MPU9150. It took me some time to get it up and running the way I wanted.
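For anyone wondering where gimbal lock would actually bite: it is a property of the Euler representation itself, and it shows up in the quaternion-to-Euler direction when pitch approaches ±90°. Going from Euler angles back to a quaternion, as dataFusion() does at the end, is not singular by itself; the risk sits in the Euler angles that feed it. Below is a minimal sketch of the standard Z-Y-X (yaw-pitch-roll) conversion, not MPU9150Lib's actual code and possibly a different axis convention, that shows the singular spot: pitch comes out of an asin(), and when its argument gets close to ±1, roll and yaw stop being independent, so small sensor noise can turn into exactly the kind of glitches described above.

// Standard Z-Y-X quaternion-to-Euler conversion (Hamilton convention, q = {w, x, y, z}).
// Illustrative only; MPU9150Lib's own axis conventions may differ.
#include <math.h>

void quaternionToEuler(const float q[4], float *roll, float *pitch, float *yaw)
{
    float w = q[0], x = q[1], y = q[2], z = q[3];

    // The pitch term is an asin(); its argument reaches +/-1 when pitch is +/-90 degrees.
    // Near that point roll and yaw are no longer independent (gimbal lock).
    float sinPitch = 2.0f * (w * y - z * x);
    if (sinPitch > 1.0f)  sinPitch = 1.0f;    // clamp against rounding error
    if (sinPitch < -1.0f) sinPitch = -1.0f;

    *roll  = atan2f(2.0f * (w * x + y * z), 1.0f - 2.0f * (x * x + y * y));
    *pitch = asinf(sinPitch);
    *yaw   = atan2f(2.0f * (w * z + x * y), 1.0f - 2.0f * (y * y + z * z));
}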

Related

How do I randomize the starting direction of a ball in SpriteKit?

I've started trying a few things with SpriteKit for game development. I was creating a brick-breaking game, and I've run into an issue with how to randomize the starting direction of the ball.
My ball has the following properties
ball.physicsBody.friction = 0;
ball.physicsBody.linearDamping = 0;
ball.physicsBody.restitution = 1 ; //energy lost on impact or bounciness
To start in a different direction during gameplay, I've randomized the selection of 4 vectors, because I'm using the applyImpulse method to direct the ball in a particular direction and I need to make sure the ball doesn't go too slowly if the vector values are low.
int initialDirection = arc4random() % 10;
CGVector myVector;

if (initialDirection < 2)
{
    myVector = CGVectorMake(4, 7);
}
else if (initialDirection > 3 && initialDirection <= 6)
{
    myVector = CGVectorMake(-7, -5);
}
else if (initialDirection > 6 && initialDirection <= 8)
{
    myVector = CGVectorMake(-5, -8);
}
else
{
    myVector = CGVectorMake(8, 5);
}

// apply the vector
[ball.physicsBody applyImpulse:myVector];
Is this the right way to do it? I tried using the applyForce method, but then the ball slowed down after the force was applied.
Is there any way I can randomize the direction and still maintain a speed for my ball?
The basic steps
Randomly select an angle in [0, 2*PI)
Select the magnitude of the impulse
Form vector by converting magnitude/angle to vector components
Here's an example of how to do that
ObjC:
CGFloat angle = arc4random_uniform(1000)/1000.0 * 2.0 * M_PI; // random angle in [0, 2*PI)
CGFloat magnitude = 4;
CGVector vector = CGVectorMake(magnitude*cos(angle), magnitude*sin(angle));
[ball.physicsBody applyImpulse:vector];
Swift:
let angle: CGFloat = CGFloat(arc4random_uniform(1000)) / 1000.0 * 2.0 * CGFloat.pi // random angle in [0, 2*PI)
let magnitude:CGFloat = 4
let vector = CGVector(x:magnitude * cos(angle), y:magnitude * sin(angle))
ball.physicsBody?.applyImpulse(vector)

GluUnProject for iOS

To get 3D world coordinates from the 2D screen coordinates on iOS, is there any other possible way besides a gluUnProject port?
I've been fiddling around with this for days on end now, and I can't seem to get the hang of it.
-(void)receivePoint:(CGPoint)loke
{
    GLfloat projectionF[16];
    GLfloat modelViewF[16];
    GLint viewportI[4];

    glGetFloatv(GL_MODELVIEW_MATRIX, modelViewF);
    glGetFloatv(GL_PROJECTION_MATRIX, projectionF);
    glGetIntegerv(GL_VIEWPORT, viewportI);

    loke.y = (float) viewportI[3] - loke.y;

    float nearPlanex, nearPlaney, nearPlanez, farPlanex, farPlaney, farPlanez;

    gluUnProject(loke.x, loke.y, 0, modelViewF, projectionF, viewportI, &nearPlanex, &nearPlaney, &nearPlanez);
    gluUnProject(loke.x, loke.y, 1, modelViewF, projectionF, viewportI, &farPlanex, &farPlaney, &farPlanez);

    float rayx = farPlanex - nearPlanex;
    float rayy = farPlaney - nearPlaney;
    float rayz = farPlanez - nearPlanez;
    float rayLength = sqrtf((rayx*rayx) + (rayy*rayy) + (rayz*rayz));

    // normalizing rayVector
    rayx /= rayLength;
    rayy /= rayLength;
    rayz /= rayLength;

    float collisionPointx, collisionPointy, collisionPointz;
    for (int i = 0; i < 50; i++)
    {
        collisionPointx = rayx * rayLength/i*50;
        collisionPointy = rayy * rayLength/i*50;
        collisionPointz = rayz * rayLength/i*50;
    }
}
There's a good chunk of my code. Yeah, I could have easily used a struct, but I was too lazy to do it at the time. That's something I can go back and fix later.
Anyway, the point is that when I print the values with NSLog after calling gluUnProject, the near-plane and far-plane results aren't even close to accurate. In fact, they both return exactly the same results, and on top of that, the first click always produces x, y, and z all equal to "nan".
Am I skipping over something extraordinarily important here?
There is no gluUnProject function in ES 2.0; what is this port that you are using? Also, there is no GL_MODELVIEW_MATRIX or GL_PROJECTION_MATRIX in ES 2.0, which is most likely your problem.
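If you are targeting ES 2.0 and already keep your own matrices (for example with GLKit), you don't need a gluUnProject port at all: GLKit's GLKMathUnproject does the same job. Here is a minimal sketch of building a pick ray that way; the modelView, projection, and view-size parameters are placeholders for whatever your renderer already tracks.

#include <stdbool.h>
#include <GLKit/GLKMath.h>

// touchX/touchY are in view (UIKit) coordinates; viewW/viewH is the view size in the same units.
static void pickRayFromTouch(float touchX, float touchY, float viewW, float viewH,
                             GLKMatrix4 modelView, GLKMatrix4 projection,
                             GLKVector3 *rayOrigin, GLKVector3 *rayDir)
{
    int viewport[4] = { 0, 0, (int)viewW, (int)viewH };
    float winY = viewH - touchY;   // flip y: UIKit's origin is top-left, GL's is bottom-left

    bool success = false;
    // window z = 0 unprojects onto the near plane, z = 1 onto the far plane
    GLKVector3 nearPt = GLKMathUnproject(GLKVector3Make(touchX, winY, 0.0f),
                                         modelView, projection, viewport, &success);
    GLKVector3 farPt  = GLKMathUnproject(GLKVector3Make(touchX, winY, 1.0f),
                                         modelView, projection, viewport, &success);

    *rayOrigin = nearPt;
    *rayDir    = GLKVector3Normalize(GLKVector3Subtract(farPt, nearPt));
}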

Rendering painted lines as nodes in Cocos

I'm working on a drawing app for iPad using Cocos-iOS and I'm having performance issues with drawing lines as a type of CCNode. I understand that implementing draw in a node causes it to be called every time the canvas is repainted, and the current code is very heavy to run that often:
for (LineNodePoint *point in self.points) {
    start = end;
    end = point;
    if (start && end) {
        float distance = ccpDistance(start.point, end.point);
        if (distance > 1) {
            int d = (int)distance;
            float difx = end.point.x - start.point.x;
            float dify = end.point.y - start.point.y;
            for (int i = 0; i < d; i++) {
                float delta = i / distance;
                [[self.brush sprite] setPosition:ccp(start.point.x + (difx * delta), start.point.y + (dify * delta))];
                [[self.brush sprite] visit];
            }
        }
    }
}
Very heavy...
I either need a better way to draw the lines or to be able to cache the drawing as a raster.
Thanks in advance for any help.
How about ccDrawLine or CCMutableTexture? CCMutableTexture is for manipulating pixels and, as you said, it uses CCRenderTexture internally.
ccDrawLine
cocos2d for iPhone 1.0.0 API reference
CCMutableTexture
Fast set/getPixel for an opengl texture?
[render texture] pixel manipulation (integrated CCMutableTexture functionality)

Simulate "Newton's law of universal gravitation" using Box2D

I want to simulate Newton's law of universal gravitation using Box2D.
I went through the manual but couldn't find a way to do this.
Basically what I want to do is place several objects in space (zero gravity) and simulate the movement.
Any tips?
It's pretty easy to implement:
for (int i = 0; i < numBodies; i++) {
    b2Body* bi = bodies[i];
    b2Vec2 pi = bi->GetWorldCenter();
    float mi = bi->GetMass();

    // start at i + 1 so a body is never paired with itself (r would be 0)
    for (int k = i + 1; k < numBodies; k++) {
        b2Body* bk = bodies[k];
        b2Vec2 pk = bk->GetWorldCenter();
        float mk = bk->GetMass();

        b2Vec2 delta = pk - pi;
        float r = delta.Length();
        float force = G * mi * mk / (r * r);   // G is whatever gravitational constant you choose

        delta.Normalize();
        bi->ApplyForce( force * delta, pi);
        bk->ApplyForce(-force * delta, pk);
    }
}
Unfortunately, Box2D doesn't have native support for it, but you can implement it yourself: Box2D and radial gravity code
As said by others, Box2D has no built-in support for this, but you can add it to the library yourself in b2Island.cpp. Just replace
v += h * b->m_invMass * (b->m_gravityScale * b->m_mass * gravity + b->m_force);
with
int planet_x = 0;
int planet_y = 0;
b2Vec2 gravityVector = (b2Vec2(planet_x, planet_y) - b->GetPosition());
gravityVector.Normalize();
gravityVector.x = gravityVector.x * 10.0f;
gravityVector.y = gravityVector.y * 10.0f;
v += h * b->m_invMass * (b->m_gravityScale * b->m_mass * gravityVector + b->m_force);
That's a simple solution if you have only one planet.
If you want less force the further away you are, you could scale the vector by the inverse of its length (or squared length) instead of just normalizing it. That would also make it possible to add up the gravity of two planets. Then you could also iterate over a planet list and sum the gravity vectors up.
Additionally, implementing a function like b2World::CreatePlanet might be useful then.
The 10.0f is just an approximation of Earth's 9.81; you might need to adjust it. If the mass of the planet is relevant, you might need a constant to multiply it by to make it look more realistic, or just increase the density of the object to match the real weight of a planet.
Of course, you can also set the world gravity to (0, 0) and then calculate and apply the gravity yourself for every object before each step, but that might not perform as well.
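For reference, here is a minimal sketch of that last approach: world gravity set to (0, 0) and the planet forces applied manually before each step. It assumes the standard Box2D API; the Planet struct, G, and applyPlanetGravity() are just illustrative names, not part of Box2D.

#include <Box2D/Box2D.h>
#include <vector>

struct Planet {
    b2Vec2 center;   // world position of the planet
    float  mass;
};

void applyPlanetGravity(b2World& world, const std::vector<Planet>& planets, float G)
{
    for (b2Body* body = world.GetBodyList(); body; body = body->GetNext()) {
        if (body->GetType() != b2_dynamicBody)
            continue;
        for (const Planet& planet : planets) {
            b2Vec2 delta = planet.center - body->GetWorldCenter();
            float r = delta.Length();
            if (r < 0.01f)
                continue;                      // skip bodies sitting on the planet's center
            delta.Normalize();
            float magnitude = G * planet.mass * body->GetMass() / (r * r);   // inverse-square falloff
            body->ApplyForce(magnitude * delta, body->GetWorldCenter());
        }
    }
}

// Each frame, before stepping the zero-gravity world:
//   applyPlanetGravity(world, planets, G);
//   world.Step(timeStep, velocityIterations, positionIterations);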

How to program smooth movement with the accelerometer like a labyrinth game on iPhone OS?

I want to be able to make an image move realistically, controlled by the accelerometer, like in any labyrinth game. Below is what I have so far, but it seems very jittery and isn't realistic at all. The ball image never seems to be able to stop and makes lots of jittery movements all over the place.
- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
    deviceTilt.x = 0.01 * deviceTilt.x + (1.0 - 0.01) * acceleration.x;
    deviceTilt.y = 0.01 * deviceTilt.y + (1.0 - 0.01) * acceleration.y;
}

- (void)onTimer {
    ballImage.center = CGPointMake(ballImage.center.x + (deviceTilt.x * 50), ballImage.center.y + (deviceTilt.y * 50));
    if (ballImage.center.x > 279) {
        ballImage.center = CGPointMake(279, ballImage.center.y);
    }
    if (ballImage.center.x < 42) {
        ballImage.center = CGPointMake(42, ballImage.center.y);
    }
    if (ballImage.center.y > 419) {
        ballImage.center = CGPointMake(ballImage.center.x, 419);
    }
    if (ballImage.center.y < 181) {
        ballImage.center = CGPointMake(ballImage.center.x, 181);
    }
}
Is there some reason why you cannot use the smoothing filter provided in response to your previous question: How do you use a moving average to filter out accelerometer values in iPhone OS?
You need to calculate the running average of the values. To do this you need to store the last n values in an array, adding the newest reading and dropping the oldest whenever you read the accelerometer data. Here is some pseudocode:
const SIZE = 10;
float[] xVals = new float[SIZE];
float xAvg = 0;

function runAverage(float newX){
    xAvg += newX/SIZE;
    xVals.push(newX);                // append the newest reading
    if(xVals.length > SIZE){
        xAvg -= xVals.shift()/SIZE;  // remove the oldest reading and take it out of the average
    }
}
You need to do this for all three axes. Play around with the value of SIZE: the larger it is, the smoother the value, but the slower things will seem to respond. It really depends on how often you read the accelerometer value. If it is read 10 times per second, then SIZE = 10 might be too large.
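If it helps, here is a minimal, self-contained sketch of that sliding-window average as real code; the class name and the usage lines are illustrative, not part of any iOS API. Keep one instance per axis and feed it each raw reading before you move the ball.

#include <cstddef>

class MovingAverage {
public:
    explicit MovingAverage(std::size_t size)
        : m_size(size < 1 ? 1 : (size < kMaxSize ? size : kMaxSize)) {}

    // Add one reading and return the average of the readings currently in the window.
    float add(float newValue) {
        if (m_count < m_size) {
            m_values[m_count++] = newValue;   // window not full yet: just accumulate
        } else {
            m_sum -= m_values[m_next];        // drop the oldest reading...
            m_values[m_next] = newValue;      // ...and overwrite it with the newest
        }
        m_sum += newValue;
        m_next = (m_next + 1) % m_size;
        return m_sum / static_cast<float>(m_count);
    }

private:
    static const std::size_t kMaxSize = 32;
    float m_values[kMaxSize] = { 0 };
    std::size_t m_size;
    std::size_t m_count = 0;
    std::size_t m_next = 0;
    float m_sum = 0.0f;
};

// Usage (one filter per axis, fed from the accelerometer callback):
//   MovingAverage xFilter(10), yFilter(10);
//   float smoothedX = xFilter.add(acceleration.x);
//   float smoothedY = yFilter.add(acceleration.y);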