CMMotionManager changes from cached reference frame not working as expected - quaternions

When I call startDeviceMotionUpdatesUsingReferenceFrame, cache the first attitude as my reference, and then call multiplyByInverseOfAttitude on every motion update after that, I don't get the change from the reference attitude that I am expecting. Here is a really simple demonstration of what I'm not understanding.
self.motionQueue = [[NSOperationQueue alloc] init];
self.motionManager = [[CMMotionManager alloc] init];
self.motionManager.deviceMotionUpdateInterval = 1.0/20.0;
[self.motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXArbitraryZVertical toQueue:self.motionQueue withHandler:^(CMDeviceMotion *motion, NSError *error){
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        CMAttitude *att = motion.attitude;
        if(self.motionManagerAttitudeRef == nil){
            // Cache the first attitude as the reference
            self.motionManagerAttitudeRef = att;
            return;
        }
        [att multiplyByInverseOfAttitude:self.motionManagerAttitudeRef];
        NSLog(@"yaw:%+0.1f, pitch:%+0.1f, roll:%+0.1f", att.yaw, att.pitch, att.roll);
    }];
}];
First off, in my application I only really care about pitch and roll. But yaw is in there too to demonstrate my confusion.
Everything works as expected if I put the phone lying flat on my desk, launch the app, and look at the logs. All of the yaw, pitch, and roll values are 0.0; then, if I spin the phone 90 degrees without lifting it off the surface, only the yaw changes. So all good there.
To demonstrate what I think is the problem, now put the phone inside (for example) an empty coffee mug, so that all of the angles are slightly tilted and gravity has some fractional value on every axis. Launch the app, and with the code above you would think everything is working, because yaw, pitch, and roll again read 0.0. But now spin the coffee mug 90 degrees without lifting it from the table surface. Why do I see a significant change in all of yaw, pitch, and roll? Since I cached my initial attitude (which is now my reference attitude) and called multiplyByInverseOfAttitude, shouldn't I be getting a change in yaw only?
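(As an aside, a hedged way to see what is going on: the change reported by multiplyByInverseOfAttitude is the rotation between the two attitudes expressed relative to the cached, tilted attitude. Spinning the mug is a rotation about the world's vertical axis, and conjugating that rotation into a tilted frame does not give a rotation about the body Z axis, so its yaw/pitch/roll decomposition spreads across all three angles. A tiny self-contained simd sketch, with a made-up 20-degree tilt and independent of Core Motion, illustrates this:)

#import <Foundation/Foundation.h>
#import <simd/simd.h>

int main(void) {
    @autoreleasepool {
        // Reference attitude: device tilted 20 degrees about its X axis (the "mug").
        simd_quatf tilt = simd_quaternion(20.0f * (float)M_PI / 180.0f, simd_make_float3(1, 0, 0));
        // Spinning the mug: 90 degrees about the WORLD Z (gravity) axis.
        simd_quatf spin = simd_quaternion(90.0f * (float)M_PI / 180.0f, simd_make_float3(0, 0, 1));
        // Attitude after the spin, then the change relative to the cached reference
        // (conceptually what multiplyByInverseOfAttitude computes, up to multiplication order).
        simd_quatf current = simd_mul(spin, tilt);
        simd_quatf delta = simd_mul(simd_inverse(tilt), current);
        simd_float3 axis = simd_axis(delta);
        NSLog(@"delta: %.1f deg about (%.2f, %.2f, %.2f)",
              simd_angle(delta) * 180.0f / (float)M_PI, axis.x, axis.y, axis.z);
        // Prints roughly 90 deg about (0.00, 0.34, 0.94): the axis is NOT (0, 0, 1),
        // so yaw, pitch, and roll of the delta all change.
    }
    return 0;
}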

I still don't really understand why multiplying the attitude by a cached reference attitude doesn't work, and I don't think it is a gimbal lock problem. But here is what gets me exactly what I need. If you try the coffee mug experiment described above, this produces exactly the expected results: spinning the mug on a flat surface doesn't affect the pitch and roll values, and tilting the mug in any other direction only affects one axis at a time. Also, instead of saving a reference frame, I just save the reference pitch and roll, so when the app starts everything is zeroed out until there is some movement.
So all good now, but I still wish I understood why the other method did not work as expected.
self.motionQueue = [[NSOperationQueue alloc] init];
self.motionManager = [[CMMotionManager alloc] init];
self.motionManager.deviceMotionUpdateInterval = 1.0/20.0;
[self.motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXArbitraryZVertical toQueue:self.motionQueue withHandler:^(CMDeviceMotion *motion, NSError *error)
{
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        if(self.motionManagerAttitude == nil){
            // First update: derive the reference pitch and roll from gravity
            CGFloat x = motion.gravity.x;
            CGFloat y = motion.gravity.y;
            CGFloat z = motion.gravity.z;
            refRollF = atan2(y, x) + M_PI_2;
            CGFloat r = sqrtf(x*x + y*y + z*z);
            refPitchF = acosf(z/r);
            self.motionManagerAttitude = motion.attitude;
            return;
        }
        CGFloat x = motion.gravity.x;
        CGFloat y = motion.gravity.y;
        CGFloat z = motion.gravity.z;
        CGFloat rollF = refRollF - (atan2(y, x) + M_PI_2);
        CGFloat r = sqrtf(x*x + y*y + z*z);
        CGFloat pitchF = refPitchF - acosf(z/r);
        // I don't care about yaw, so just printing out whatever value is in the attitude
        NSLog(@"yaw: %+0.1f, pitch: %+0.1f, roll: %+0.1f", (180.0f/M_PI)*motion.attitude.yaw, (180.0f/M_PI)*pitchF, (180.0f/M_PI)*rollF);
    }];
}];

Related

CoreML and YOLOv3 performance issue

I am currently facing a performance issue with YOLOv3 implemented in an Objective-C++ Xcode project for macOS: the inference is too slow. I do not have much experience with macOS and Xcode, so I followed this tutorial. The execution time is around ~0.25 seconds.
Setup:
I run it on a MacBook Pro with an Intel Core i5 3.1 GHz and an Intel Iris Plus Graphics 650 (1536 MB), and the performance is around 4 fps. That's understandable: the GPU is not a powerful one, and the work falls mostly on the CPU. Actually, it is impressive, because it is faster than a PyTorch implementation running on the CPU. However, I ran the same example on a MacBook Pro with an Intel i7 2.7 GHz and an AMD Radeon Pro 460, and the performance is only 6 fps.
According to this website, the performance should be much better. Can you please let me know where I am making a mistake, or is this the best performance I can get with this setup? Please note that I've checked the system monitor, and the GPU is fully used in both cases.
This is my initialisation:
// Loading the model
MLModel *model_ml = [[[YOLOv3 alloc] init] model];
float confidenceThreshold = 0.8;
NSMutableArray<Prediction*> *predictions = [[NSMutableArray alloc] init];
VNCoreMLModel *model = [VNCoreMLModel modelForMLModel:model_ml error:nil];
VNCoreMLRequest *request = [[VNCoreMLRequest alloc] initWithModel:model completionHandler:^(VNRequest * _Nonnull request, NSError * _Nullable error){
    // Collect all sufficiently confident detections
    for(VNRecognizedObjectObservation *observation in request.results)
    {
        if(observation.confidence > confidenceThreshold){
            CGRect rect = observation.boundingBox;
            Prediction *prediction = [[Prediction alloc] initWithValues:0 Confidence:observation.confidence BBox:rect];
            [predictions addObject:prediction];
        }
    }
}];
request.imageCropAndScaleOption = VNImageCropAndScaleOptionScaleFill;
float ratio = height / CGFloat(width);
And here is my loop implementation:
cv::Mat frame;
int i = 0;
while(1){
    cap >> frame;
    if(frame.empty()){
        break;
    }
    CGImageRef image = CGImageFromCVMat(frame.clone());
    VNImageRequestHandler *imageHandler = [[VNImageRequestHandler alloc] initWithCGImage:image options:@{}];
    NSDate *methodStart = [NSDate date]; // Start measuring performance here
    NSError *error = nil;
    [imageHandler performRequests:@[request] error:&error]; // Run the request (synchronous)
    if(error){
        NSLog(@"%@", error.localizedDescription);
    }
    NSDate *methodFinish = [NSDate date];
    NSTimeInterval executionTime = [methodFinish timeIntervalSinceDate:methodStart]; // Execution time of the request
    // Draw the bounding boxes
    for(Prediction *prediction in predictions){
        CGRect rect = [prediction getBBox];
        cv::rectangle(frame,
                      cv::Point(rect.origin.x * width, (1 - rect.origin.y) * height),
                      cv::Point((rect.origin.x + rect.size.width) * width, (1 - (rect.origin.y + rect.size.height)) * height),
                      cv::Scalar(0, 255, 0), 1, 8, 0);
    }
    std::cout << "Execution time " << executionTime << " sec" << " Frame id: " << i << " with size " << frame.size() << std::endl;
    [predictions removeAllObjects];
    CGImageRelease(image); // assuming CGImageFromCVMat returns an owned (+1) image
    i++;
}
cap.release();
Thank you.
Set a breakpoint on the line that calls [imageHandler performRequests:error:] and run the app with optimizations disabled. Use the "Step Into" button in the debugger a number of times and look in the stack trace for "Espresso".
Does this show something like Espresso::BNNSEngine? Then the model runs on the CPU, not the GPU.
Does the stack trace show something like Espresso::MPSEngine? Then you're running on the GPU.
My guess is Core ML runs your model on the CPU, not on the GPU.
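If the stack trace does point at the CPU engine, one thing that may be worth trying is asking Core ML explicitly for every compute unit when loading the model. This is a sketch: it assumes the Xcode-generated YOLOv3 wrapper exposes the standard initWithConfiguration:error: initializer, which newer Xcode versions generate.

#import <CoreML/CoreML.h>

MLModelConfiguration *config = [[MLModelConfiguration alloc] init];
// Ask Core ML for all available compute units (CPU, GPU and, where present,
// the Neural Engine) instead of its default choice.
config.computeUnits = MLComputeUnitsAll;
NSError *configError = nil;
// Assumes the generated YOLOv3 wrapper exposes initWithConfiguration:error:
MLModel *model_ml = [[[YOLOv3 alloc] initWithConfiguration:config error:&configError] model];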

Changing Sine Wave frequencies in the same AVAudioPCMBuffer

I've been working on getting a clean sine wave sound that can change frequencies when different notes are played. From what I've understood, I need to resize the buffer's frameLength relative to the frequency to avoid those popping sounds caused when the frame ends on a sine's peak.
So on every iteration, I set the frameLength and then populate buffer with the signal.
AVAudioPlayerNode *audioPlayer = [[AVAudioPlayerNode alloc] init];
AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:[audioPlayer outputFormatForBus:0] frameCapacity:44100*10];
float * const *floatChannelData = buffer.floatChannelData;
AVAudioChannelCount channelCount = buffer.format.channelCount;
while(YES){
    // Size the buffer to exactly one period of the current frequency
    AVAudioFrameCount frameCount = ceil(44100.0/osc.frequency);
    [buffer setFrameLength:frameCount];
    [audioPlayer scheduleBuffer:buffer atTime:nil options:AVAudioPlayerNodeBufferLoops completionHandler:nil];
    for(int i = 0; i < [buffer frameLength]; i++){
        for (int channelNumber = 0; channelNumber < channelCount; channelNumber++) {
            float * const channelBuffer = floatChannelData[channelNumber];
            channelBuffer[i] = [self getSignalOnFrame:i];
        }
    }
}
where the signal is generated from:
- (float)getSignalOnFrame:(int)i {
    float sampleRate = 44100.0;
    return [osc amplitude] * sinf([osc frequency] * i * 2.0 * M_PI / sampleRate);
}
The starting tone sounds fine and there are no popping sounds when notes change but the notes themselves sound like they're being turned into sawtooth waves or something.
Any ideas on what I might be missing here?
Or should I just create a whole new audioPlayer with a fresh buffer for each note played?
Thanks for any advice!
If the buffers are contiguous, then a better method to not have discontinuities in sine wave generation is to remember the phase of the sinewave at the end of one buffer, and use that phase as the starting point (angle) to generate the next buffer.
If the buffers are not contiguous, then a common way to avoid clicks is to gradually taper the first and last few milliseconds of each buffer from full gain to zero. A linear gain taper will do, but a raised cosine taper is a slightly smoother taper.
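A minimal sketch of the phase-carrying approach from the first paragraph (hypothetical names; _phase would be a double instance variable initialized to 0, and the buffer is treated as mono for simplicity):

// Hedged sketch: carry the oscillator phase across buffers so each buffer
// starts exactly where the previous one ended, even when the frequency changes.
- (void)fillSineBuffer:(AVAudioPCMBuffer *)buffer frequency:(double)frequency amplitude:(double)amplitude {
    const double sampleRate = buffer.format.sampleRate;
    const double phaseIncrement = 2.0 * M_PI * frequency / sampleRate;
    float *samples = buffer.floatChannelData[0]; // mono for simplicity
    for (AVAudioFrameCount i = 0; i < buffer.frameLength; i++) {
        samples[i] = (float)(amplitude * sin(_phase));
        _phase += phaseIncrement;
        if (_phase > 2.0 * M_PI) { _phase -= 2.0 * M_PI; } // keep the phase bounded
    }
}

Because the phase is continuous across buffer boundaries, changing the frequency only changes the increment, so there is no jump in the waveform and no click.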

CMMotionManager and the SceneKit Coordinate System

I'm trying to build a rolling-marble type game. I've decided to convert from Cocos3D to SceneKit, so I probably have some primitive questions about code snippets.
Here is my CMMotionManager setup. The problem is that as I change my device orientation, the gravity direction also changes (it does not properly adjust to the device orientation). This code only works in the Landscape Left orientation.
- (void)setupMotionManager
{
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    motionManager = [[CMMotionManager alloc] init];
    [motionManager startAccelerometerUpdatesToQueue:queue withHandler:^(CMAccelerometerData *accelerometerData, NSError *error)
    {
        CMAcceleration acceleration = [accelerometerData acceleration];
        // Swap X/Y and scale by 9.8 to map device acceleration to scene gravity
        float accelX = 9.8 * acceleration.y;
        float accelY = -9.8 * acceleration.x;
        float accelZ = 9.8 * acceleration.z;
        scene.physicsWorld.gravity = SCNVector3Make(accelX, accelY, accelZ);
    }];
}
This code came from a marble demo from Apple; I translated it from Swift to Objective-C.
If I want it to work in Landscape Right I need to change last line to
scene.physicsWorld.gravity = SCNVector3Make(-accelX, -accelY, accelZ);
This brings up another question: if Y is up in SceneKit, why is it the accelZ variable that needs no change? So my question is, how do CMMotionManager coordinates relate to SceneKit coordinates?
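One common approach to the orientation problem is to remap the accelerometer axes according to the current interface orientation before assigning gravity, generalizing the sign flip above. This is a sketch under my own assumptions; the portrait case is a guess that should be verified on a device:

// Hedged sketch: remap accelerometer axes by interface orientation.
// (statusBarOrientation should be read on the main thread; in a real app,
// cache it there and update it on rotation.)
UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
CMAcceleration a = accelerometerData.acceleration;
SCNVector3 g;
switch (orientation) {
    case UIInterfaceOrientationLandscapeLeft:
        g = SCNVector3Make( 9.8 * a.y, -9.8 * a.x, 9.8 * a.z);
        break;
    case UIInterfaceOrientationLandscapeRight:
        g = SCNVector3Make(-9.8 * a.y,  9.8 * a.x, 9.8 * a.z);
        break;
    default: // portrait (a guess; verify on device)
        g = SCNVector3Make( 9.8 * a.x,  9.8 * a.y, 9.8 * a.z);
        break;
}
scene.physicsWorld.gravity = g;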

CMAttitudeReferenceFrameXTrueNorthZVertical doesn't point to north

I am using this code to determine the device position. Roll and pitch values look good (they are 0 when the device is on the table), but when yaw is 0, the compass points west. What's wrong?
[self.motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXTrueNorthZVertical
                                                         toQueue:[NSOperationQueue currentQueue]
                                                     withHandler:^(CMDeviceMotion *motion, NSError *error){
    [self performSelectorOnMainThread:@selector(handleDeviceMotion:) withObject:motion waitUntilDone:YES];
}];
- (void)handleDeviceMotion:(CMDeviceMotion *)motion {
    CMAttitude *attitude = motion.attitude;
    double yaw = attitude.yaw * 180 / M_PI;
    double pitch = attitude.pitch * 180 / M_PI;
    double roll = attitude.roll * 180 / M_PI;
    self.xLabel.text = [NSString stringWithFormat:@"%7.4f", yaw];
    self.yLabel.text = [NSString stringWithFormat:@"%7.4f", pitch];
    self.zLabel.text = [NSString stringWithFormat:@"%7.4f", roll];
    [self sendDeviceAttitudeLogWithYaw:yaw pitch:pitch roll:roll];
}
When yaw is 0, the phone must point north, no?
What do you mean by "phone"? The question is, what part of the phone. Let's read the docs:
CMAttitudeReferenceFrameXTrueNorthZVertical
Describes a reference frame in which the Z axis is vertical and the X axis points toward true north.
The X axis runs out the right side of the device. So you should expect yaw to be 0 when the right side of the device is north.
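If the goal is for the top edge of the device to read 0 when it faces north, one option (my own arithmetic, not from the docs) is to offset the reported yaw by 90 degrees:

// Hedged sketch: with this reference frame, yaw == 0 means the RIGHT side of
// the device faces north. Offsetting by 90 degrees makes 0 mean the TOP edge
// faces north instead.
double yawDegrees = motion.attitude.yaw * 180.0 / M_PI;
double topHeading = fmod(yawDegrees + 90.0 + 360.0, 360.0);
// Caveat: yaw increases counterclockwise, while compass headings increase
// clockwise, so a conventional heading would be fmod(360.0 - topHeading, 360.0).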

Animate UILabel with numbers

I am still learning about UI animations; I just got into it and have stumbled upon a problem that I am not sure how to solve. I've seen games where, when you get a new high score, it is added to the old high score and the numbers animate up or down. It looks very cool and visually appealing.
Can anyone explain to me how this is done? I apologize if this question is easily solved, like I said I am still trying to learn/perfect animations.
Thanks in advance
I took the code from the post sergio suggested you look at, but took note of Anshu's mention that you wanted a moving up-and-down animation rather than a fade-in/fade-out animation, so I changed the code to fit what you wanted. Here you go:
// Add transition (must be called after myLabel has been displayed)
CATransition *animation = [CATransition animation];
animation.duration = 1.0; //You can change this to any other duration
animation.type = kCATransitionMoveIn; //I would assume this is what you want because you want to "animate up or down"
animation.subtype = kCATransitionFromTop; //You can change this to kCATransitionFromBottom, kCATransitionFromLeft, or kCATransitionFromRight
animation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
[myLabel.layer addAnimation:animation forKey:@"changeTextTransition"];
// Change the text
myLabel.text = newText;
Hope this helps!
People can correct me if I'm wrong here, but I'm pretty sure you have to code this animation manually. You might be able to find an open source version somewhere online if you look hard enough.
It might be possible to take an image of a UILabel and use sizeWithFont: to determine how wide each character is, then cut the image up into sections based on each digit. Alternatively you could just have multiple UILabels for each digit.
Once you have an array of digit images, you'd have to calculate which digits are going to change during the transition and whether they're going to increase or decrease, then transition to the next digit by pushing it in from the top/bottom (I think there's a built in transition to do this, look around in the Core Animation docs).
You would probably want to determine by how much they increase/decrease and use that to figure out how long the animation will take. That way, if you're going from 5 to 900, the last digit would have to be animating very quickly, the second to last would animate 1/10 as quickly, the third would be 1/100, etc.
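For the simpler "count the number up over time" effect, without any per-digit image slicing, a minimal sketch using CADisplayLink (a hypothetical class with my own names, not from the answers above) could look like this:

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Hedged sketch: drive the label from a CADisplayLink and linearly
// interpolate the displayed value over a fixed duration.
@interface CountingLabel : UILabel
- (void)countFrom:(int)from to:(int)to duration:(NSTimeInterval)duration;
@end

@implementation CountingLabel {
    int _from, _to;
    CFTimeInterval _startTime;
    NSTimeInterval _duration;
    CADisplayLink *_link;
}

- (void)countFrom:(int)from to:(int)to duration:(NSTimeInterval)duration {
    _from = from;
    _to = to;
    _duration = MAX(duration, 0.01);
    _startTime = CACurrentMediaTime();
    [_link invalidate]; // cancel any in-flight count
    _link = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
    [_link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)tick:(CADisplayLink *)link {
    double t = MIN((CACurrentMediaTime() - _startTime) / _duration, 1.0);
    self.text = [NSString stringWithFormat:@"%d", (int)lround(_from + (_to - _from) * t)];
    if (t >= 1.0) {
        // The display link retains its target, so invalidate to break the cycle.
        [link invalidate];
        _link = nil;
    }
}
@end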
This does an OK job, using the reveal transition. It would be nice to have some vertical motion, but the subtype is either going to be kCATransitionFromBottom or kCATransitionFromTop; really we'd need kCATransitionFromBottom | kCATransitionToTop, but that's not a thing. Here's the code:
- (void)countUpLabel:(UILabel *)label fromValue:(int)fromValue toValue:(int)toValue withDelay:(float)delay {
    int distance = toValue - fromValue;
    int absDistance = abs(distance);
    float baseDuration = 1.0f;
    float totalDuration = absDistance / 100.0f * baseDuration;
    float incrementDuration = totalDuration / (float)absDistance;
    int direction = (fromValue < toValue) ? 1 : -1;
    NSString *subtype = (direction == 1) ? kCATransitionFromBottom : kCATransitionFromTop;
    for (int n = 0; n < absDistance; n++){
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delay * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            CATransition *fade = [CATransition animation];
            fade.removedOnCompletion = YES;
            fade.duration = incrementDuration;
            fade.type = kCATransitionReveal;
            fade.subtype = subtype;
            [label.layer addAnimation:fade forKey:@"changeTextTransition"];
            int value = fromValue + (n + 1) * direction;
            label.text = [NSString stringWithFormat:@"%i", value];
        });
        delay += incrementDuration;
    }
}
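Hypothetical usage, counting a score label up from 5 to 900 after a half-second delay:

// Each increment schedules its own transition, staggered by incrementDuration.
[self countUpLabel:self.scoreLabel fromValue:5 toValue:900 withDelay:0.5f];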