CMMotionManager and the SceneKit Coordinate System - objective-c

I'm trying to build a rolling-marble type game. I've decided to convert from Cocos3D to SceneKit, so I probably have some primitive questions about code snippets.
Here is my CMMotionManager setup. The problem is that as I change my device orientation, the gravity direction also changes (it does not adjust properly to the device orientation). This code only works in the Landscape Left orientation.
- (void)setupMotionManager
{
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    motionManager = [[CMMotionManager alloc] init];

    [motionManager startAccelerometerUpdatesToQueue:queue withHandler:^(CMAccelerometerData *accelerometerData, NSError *error)
    {
        CMAcceleration acceleration = [accelerometerData acceleration];
        float accelX = 9.8 * acceleration.y;
        float accelY = -9.8 * acceleration.x;
        float accelZ = 9.8 * acceleration.z;
        scene.physicsWorld.gravity = SCNVector3Make(accelX, accelY, accelZ);
    }];
}
This code came from a marble demo from Apple; I translated it from Swift to Objective-C.
If I want it to work in Landscape Right, I need to change the last line to
scene.physicsWorld.gravity = SCNVector3Make(-accelX, -accelY, accelZ);
This brings up another question: if Y is up in SceneKit, why is accelZ the variable that needs no change? So my question is: how do CMMotionManager coordinates relate to scene coordinates?
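For what it's worth, here is a minimal sketch of the kind of orientation-dependent remapping in question. The Landscape Left/Right cases mirror the code above; the two portrait cases and the gravityForAcceleration:orientation: helper name are assumptions that would need to be verified on a device.

// Sketch only: remap raw accelerometer axes to SceneKit gravity based on the
// current interface orientation. Reading the interface orientation is assumed
// to happen on the main thread.
- (SCNVector3)gravityForAcceleration:(CMAcceleration)a
                         orientation:(UIInterfaceOrientation)orientation
{
    const float g = 9.8f;
    switch (orientation) {
        case UIInterfaceOrientationLandscapeLeft:
            return SCNVector3Make( g * a.y, -g * a.x, g * a.z);   // matches the code above
        case UIInterfaceOrientationLandscapeRight:
            return SCNVector3Make(-g * a.y,  g * a.x, g * a.z);   // matches the Landscape Right fix
        case UIInterfaceOrientationPortraitUpsideDown:
            return SCNVector3Make(-g * a.x, -g * a.y, g * a.z);   // assumption
        case UIInterfaceOrientationPortrait:
        default:
            return SCNVector3Make( g * a.x,  g * a.y, g * a.z);   // assumption
    }
}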

Related

CoreML and YOLOv3 performance issue

Currently I am facing a performance issue with YOLOv3 implemented in an Objective-C/C++ Xcode project for macOS: the performance is too slow. I do not have much experience with macOS and Xcode, so I followed this tutorial. The execution time is around ~0.25 seconds per frame.
Setup:
I run it on a MacBook Pro with an Intel Core i5 3.1 GHz and an Intel Iris Plus Graphics 650 (1536 MB), and the performance is around 4 fps. That's understandable, since the GPU is not a powerful one and the work runs mostly on the CPU. Actually, it is impressive, because it is faster than a PyTorch implementation running on the CPU. However, I ran this example on a MacBook Pro with an Intel i7 2.7 GHz and an AMD Radeon Pro 460, and the performance is only 6 fps.
According to this website the performance should be much better. Can you please let me know where I am making a mistake, or is this the best performance I can get with this setup? Please note that I've checked the system monitor and the GPU is fully used in both cases.
This is my initialisation:
// Loading the model
MLModel *model_ml = [[[YOLOv3 alloc] init] model];
float confidenceThreshold = 0.8;
NSMutableArray<Prediction *> *predictions = [[NSMutableArray alloc] init];

VNCoreMLModel *model = [VNCoreMLModel modelForMLModel:model_ml error:nil];
VNCoreMLRequest *request = [[VNCoreMLRequest alloc] initWithModel:model completionHandler:^(VNRequest * _Nonnull request, NSError * _Nullable error) {
    // Keep only detections above the confidence threshold
    for (VNRecognizedObjectObservation *observation in request.results) {
        if (observation.confidence > confidenceThreshold) {
            CGRect rect = observation.boundingBox;
            Prediction *prediction = [[Prediction alloc] initWithValues:0 Confidence:observation.confidence BBox:rect];
            [predictions addObject:prediction];
        }
    }
}];
request.imageCropAndScaleOption = VNImageCropAndScaleOptionScaleFill;
CGFloat ratio = height / (CGFloat)width;
And here is my loop implementation:
cv::Mat frame;
int i = 0;
while (1) {
    cap >> frame;
    if (frame.empty()) {
        break;
    }
    image = CGImageFromCVMat(frame.clone());
    VNImageRequestHandler *imageHandler = [[VNImageRequestHandler alloc] initWithCGImage:image options:@{}];

    NSDate *methodStart = [NSDate date]; // Measuring performance here
    NSError *error = nil;
    [imageHandler performRequests:@[request] error:&error]; // Run the request
    if (error) {
        NSLog(@"%@", error.localizedDescription);
    }
    NSDate *methodFinish = [NSDate date];
    NSTimeInterval executionTime = [methodFinish timeIntervalSinceDate:methodStart]; // Execution time of the request

    // Draw bounding boxes (Vision returns normalized, bottom-left-origin rects)
    for (Prediction *prediction in predictions) {
        CGRect rect = [prediction getBBox];
        cv::rectangle(frame,
                      cv::Point(rect.origin.x * width, (1 - rect.origin.y) * height),
                      cv::Point((rect.origin.x + rect.size.width) * width, (1 - (rect.origin.y + rect.size.height)) * height),
                      cv::Scalar(0, 255, 0), 1, 8, 0);
    }
    std::cout << "Execution time " << executionTime << " sec" << " Frame id: " << i << " with size " << frame.size() << std::endl;
    [predictions removeAllObjects];
    i++; // advance the frame counter
}
cap.release();
Thank you.
Set a breakpoint on the line that calls [imageHandler performRequests:error:] and run the app with optimizations disabled. Use the "Step Into" button in the debugger a number of times and look in the stack trace for "Espresso".
Does it show something like Espresso::BNNSEngine? Then the model runs on the CPU, not the GPU.
Does the stack trace show something like Espresso::MPSEngine? Then you're running on the GPU.
My guess is that Core ML runs your model on the CPU, not on the GPU.
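Not part of the original answer, but if the stack trace does point at the CPU engine, one thing worth trying is requesting GPU execution explicitly. A minimal sketch, assuming the Xcode-generated YOLOv3 class exposes the initWithConfiguration:error: initializer that the Core ML code generator normally produces:

// Sketch: ask Core ML to use the GPU (and Neural Engine where available).
MLModelConfiguration *config = [[MLModelConfiguration alloc] init];
config.computeUnits = MLComputeUnitsAll;   // or MLComputeUnitsCPUAndGPU

NSError *modelError = nil;
YOLOv3 *yolo = [[YOLOv3 alloc] initWithConfiguration:config error:&modelError];
MLModel *model_ml = yolo.model;

// The Vision request can also be kept off the CPU-only path:
VNCoreMLModel *visionModel = [VNCoreMLModel modelForMLModel:model_ml error:&modelError];
VNCoreMLRequest *request = [[VNCoreMLRequest alloc] initWithModel:visionModel];
request.usesCPUOnly = NO;   // the default, but worth being explicit

Whether this actually changes anything depends on whether every layer in the converted model is supported on the GPU; unsupported layers fall back to the CPU.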

How can I use the object tracking API of the Vision framework on iOS 11?

// Init bounding box (normalized coordinates)
CGRect rect = CGRectMake(0, 0, 0.3, 0.3);
VNSequenceRequestHandler *reqImages = [[VNSequenceRequestHandler alloc] init];
VNRectangleObservation *observeRect = [VNRectangleObservation observationWithBoundingBox:rect];
VNTrackRectangleRequest *reqRect = [[VNTrackRectangleRequest alloc] initWithRectangleObservation:observeRect];
NSArray<VNRequest *> *requests = [NSArray arrayWithObjects:reqRect, nil];
BOOL bsucc = [reqImages performRequests:requests onCGImage:img.CGImage error:&error];

// Get the tracking bounding box
VNDetectRectanglesRequest *reqRectTrack = [VNDetectRectanglesRequest new];
NSArray<VNRequest *> *requestsTrack = [NSArray arrayWithObjects:reqRectTrack, nil];
[reqImages performRequests:requestsTrack onCGImage:img.CGImage error:&error];
VNRectangleObservation *observe = [reqRectTrack.results firstObject];
CGRect boundingBox = observe.boundingBox;
Why is the boundingBox value incorrect?
Where can I find a demo of the Vision framework on iOS 11?
Here is my simple example of using the Vision framework: https://github.com/artemnovichkov/iOS-11-by-Examples. I guess you have a problem with different coordinate systems. Pay attention to the rect conversion:
cameraLayer.metadataOutputRectConverted(fromLayerRect: originalRect)
and
cameraLayer.layerRectConverted(fromMetadataOutputRect: transformedRect)
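As a hedged aside (not from the linked project): Vision reports boundingBox in normalized coordinates with the origin in the lower-left corner, so getting a UIKit rect usually means scaling and flipping vertically. A minimal sketch, assuming the image fills the target view:

// Sketch: convert a Vision normalized bounding box (origin bottom-left, 0..1)
// into UIKit view coordinates (origin top-left, points).
static CGRect RectForVisionBoundingBox(CGRect boundingBox, CGSize viewSize)
{
    CGFloat w = boundingBox.size.width  * viewSize.width;
    CGFloat h = boundingBox.size.height * viewSize.height;
    CGFloat x = boundingBox.origin.x * viewSize.width;
    // Flip vertically: Vision's y grows upward, UIKit's y grows downward.
    CGFloat y = (1.0 - boundingBox.origin.y - boundingBox.size.height) * viewSize.height;
    return CGRectMake(x, y, w, h);
}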
A demo of the Vision framework tracking an object can be found at this link:
https://github.com/jeffreybergier/Blog-Getting-Started-with-Vision
The blogger goes into great detail about getting the demo working and has a GIF showing a working build.
Hope this is what you are after.

CMMotionManager changes from cached reference frame not working as expected

When I call startDeviceMotionUpdatesUsingReferenceFrame:, then cache a reference to my first attitude and call multiplyByInverseOfAttitude: on all of the motion updates after that, I don't get the change from the reference frame that I am expecting. Here is a really simple demonstration of what I'm not understanding.
self.motionQueue = [[NSOperationQueue alloc] init];
self.motionManager = [[CMMotionManager alloc] init];
self.motionManager.deviceMotionUpdateInterval = 1.0 / 20.0;

[self.motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXArbitraryZVertical toQueue:self.motionQueue withHandler:^(CMDeviceMotion *motion, NSError *error) {
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        CMAttitude *att = motion.attitude;
        if (self.motionManagerAttitudeRef == nil) {
            self.motionManagerAttitudeRef = att;
            return;
        }
        [att multiplyByInverseOfAttitude:self.motionManagerAttitudeRef];
        NSLog(@"yaw:%+0.1f, pitch:%+0.1f, roll:%+0.1f", att.yaw, att.pitch, att.roll);
    }];
}];
First off, in my application I only really care about pitch and roll, but yaw is in there too to demonstrate my confusion.
Everything works as expected if I put the phone flat on my desk, launch the app, and look at the logs. All of the yaw, pitch, and roll values are 0.0; then, if I spin the phone 90 degrees without lifting it off the surface, only the yaw changes. So all good there.
To demonstrate what I think is the problem: now put the phone inside (for example) an empty coffee mug, so that all of the angles are slightly tilted and the direction of gravity has some fractional value along every axis. Launch the app, and with the code above you would think everything is working, because there is again a 0.0 value for yaw, pitch, and roll. But now spin the coffee mug 90 degrees without lifting it from the table surface. Why do I see a significant change in attitude on all of yaw, pitch, and roll? Since I cached my initial attitude (which is now my new reference attitude) and called multiplyByInverseOfAttitude:, shouldn't I just be getting a change in yaw only?
I don't really understand why using the attitude multiplied by a cached reference attitude doesn't work, and I don't think it is a gimbal lock problem. But here is what gets me exactly what I need. If you try the coffee-mug experiment described above, this provides exactly the expected results (i.e. spinning the coffee mug on a flat surface doesn't affect the pitch and roll values, and tilting the coffee mug in any other direction only affects one axis at a time too). Plus, instead of saving a reference frame, I just save the reference pitch and roll; then, when the app starts, everything is zeroed out until there is some movement.
So all good now. But I still wish I understood why the other method did not work as expected.
self.motionQueue = [[NSOperationQueue alloc] init];
self.motionManager = [[CMMotionManager alloc] init];
self.motionManager.deviceMotionUpdateInterval = 1.0 / 20.0;

[self.motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXArbitraryZVertical toQueue:self.motionQueue withHandler:^(CMDeviceMotion *motion, NSError *error)
{
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        if (self.motionManagerAttitude == nil) {
            // Cache the reference pitch and roll derived from gravity
            CGFloat x = motion.gravity.x;
            CGFloat y = motion.gravity.y;
            CGFloat z = motion.gravity.z;
            refRollF = atan2(y, x) + M_PI_2;
            CGFloat r = sqrtf(x*x + y*y + z*z);
            refPitchF = acosf(z/r);
            self.motionManagerAttitude = motion.attitude;
            return;
        }
        CGFloat x = motion.gravity.x;
        CGFloat y = motion.gravity.y;
        CGFloat z = motion.gravity.z;
        CGFloat rollF = refRollF - (atan2(y, x) + M_PI_2);
        CGFloat r = sqrtf(x*x + y*y + z*z);
        CGFloat pitchF = refPitchF - acosf(z/r);
        // I don't care about yaw, so just printing out whatever the value is in the attitude
        NSLog(@"yaw: %+0.1f, pitch: %+0.1f, roll: %+0.1f", (180.0f/M_PI)*motion.attitude.yaw, (180.0f/M_PI)*pitchF, (180.0f/M_PI)*rollF);
    }];
}];
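For what it's worth, a diagnostic sketch (not from the original post; the interpretation is only my reading): multiplyByInverseOfAttitude: gives the delta rotation expressed in the reference device's own frame, so with a tilted reference a spin about the world vertical has a rotation axis with components along all three device axes, and the Euler yaw/pitch/roll of the delta move together. Logging the axis-angle of the delta quaternion makes that visible; qRef and qCur below are assumed to be cached from attitude.quaternion the same way the attitudes are cached above.

// Sketch: compute the delta rotation between the cached reference attitude and
// the current attitude as a quaternion, then print its rotation axis and angle.
static CMQuaternion QuatMultiply(CMQuaternion a, CMQuaternion b)
{
    CMQuaternion q;
    q.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    q.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    q.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    q.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    return q;
}

static void LogDeltaAxisAngle(CMQuaternion qRef, CMQuaternion qCur)
{
    // The inverse of a unit quaternion is its conjugate.
    CMQuaternion qRefInv = (CMQuaternion){ .x = -qRef.x, .y = -qRef.y, .z = -qRef.z, .w = qRef.w };
    CMQuaternion d = QuatMultiply(qRefInv, qCur);

    double angle = 2.0 * acos(fmax(-1.0, fmin(1.0, d.w)));
    double s = sqrt(fmax(0.0, 1.0 - d.w * d.w));
    if (s < 1e-6) {
        NSLog(@"delta rotation is (nearly) zero");
        return;
    }
    // For a tilted reference, a spin about the world vertical shows up here as
    // an axis with nonzero x, y and z components.
    NSLog(@"delta angle: %.1f deg, axis: (%.2f, %.2f, %.2f)",
          angle * 180.0 / M_PI, d.x / s, d.y / s, d.z / s);
}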

Simulate Low and High GPS Accuracy

I made a .gpx file to simulate a route on the iOS simulator; now I want to simulate the horizontal accuracy. How can I do this?
Here is an excerpt of my .gpx file:
<?xml version="1.0"?>
<gpx>
    <wpt lat="-23.772830" lon="-46.689820"/> <!-- how do I add a horizontal accuracy of 7 meters here, for example -->
    <wpt lat="-23.774450" lon="-46.692570"/> <!-- and here a horizontal accuracy of 2 meters -->
    <wpt lat="-23.773450" lon="-46.693530"/> <!-- and here 19 meters -->
</gpx>
If I run it, all GPS points return a horizontal accuracy of 5 meters. Can I change this some other way?
I did this with a method called from the view controller (mine is a button, but obviously you could use a gesture recognizer or whatever).
I have my view controller set as the CLLocationManagerDelegate, but you can put in whichever delegate you're using instead of "self".
- (IBAction)simulateAccuracy:(id)sender {
    CLLocationCoordinate2D newCoor = CLLocationCoordinate2DMake(someLat, someLng);
    CLLocation *newLoc = [[CLLocation alloc] initWithCoordinate:newCoor
                                                       altitude:someAlt
                                             horizontalAccuracy:TheAccuracyYouWantToTest
                                               verticalAccuracy:ditto
                                                      timestamp:[NSDate date]];
    NSArray *newLocation = [[NSArray alloc] initWithObjects:newLoc, nil];
    [self locationManager:myLocationManager didUpdateLocations:newLocation];
}
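Tying that back to the waypoints in the question, here is a minimal sketch that replays the three .gpx points with hand-picked accuracies (7 m, 2 m and 19 m); myLocationManager and the delegate wiring are assumed to exist as in the snippet above.

// Sketch: feed the question's three waypoints to the delegate with chosen
// horizontal accuracies instead of the simulator's fixed 5 m.
- (IBAction)simulateRouteAccuracy:(id)sender {
    CLLocationCoordinate2D coords[3] = {
        CLLocationCoordinate2DMake(-23.772830, -46.689820),
        CLLocationCoordinate2DMake(-23.774450, -46.692570),
        CLLocationCoordinate2DMake(-23.773450, -46.693530),
    };
    CLLocationAccuracy accuracies[3] = { 7.0, 2.0, 19.0 };

    for (int idx = 0; idx < 3; idx++) {
        CLLocation *loc = [[CLLocation alloc] initWithCoordinate:coords[idx]
                                                        altitude:0
                                              horizontalAccuracy:accuracies[idx]
                                                verticalAccuracy:10.0
                                                       timestamp:[NSDate date]];
        [self locationManager:myLocationManager didUpdateLocations:@[loc]];
    }
}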

cocos2d particle effects not appearing

Hi all,
I'm having an issue with particle effects not appearing all the time. I'm coding in Objective-C with cocos2d for the iPhone.
Below is the code in question.
CCParticleExplosion *emitter;
emitter = [[CCParticleExplosion alloc] initWithTotalParticles:30];
emitter.texture = [[CCTextureCache sharedTextureCache] addImage:@"particle_bubble.png"];
emitter.position = ccp(MidX, MidY);
emitter.life = 0.5;
emitter.duration = 0.5;
emitter.speed = 60;
[self addChild:emitter];
emitter.autoRemoveOnFinish = YES;

////////////////////////////////////////////////////

CCParticleMeteor *emitter2;
emitter2 = [[CCParticleMeteor alloc] initWithTotalParticles:150];
emitter2.texture = [[CCTextureCache sharedTextureCache] addImage:@"fire_particle.png"];
emitter2.position = ccp(MidX, MidY);
emitter2.life = 0.5;
emitter2.duration = 2;
emitter2.speed = 60;
id emitMove = [CCMoveTo actionWithDuration:0.5 position:HUD.moonSprite.position];
[self addChild:emitter2 z:1];
[emitter2 runAction:[CCSequence actions:emitMove, nil]];
emitter2.autoRemoveOnFinish = YES;
This code is within the same function, one effect right after the other as shown, but sometimes the second particle effect is not created and I can't figure out why. The first particle effect is always created with no problems, so I'm sure it is getting into the function correctly, but sometimes (almost 50% of the time) the second meteor emitter is not displayed. I have tried messing around with z values to make sure it is not hidden behind another object, and that doesn't appear to be the problem. Does anyone have any ideas on why this would be happening?
Thanks,
G
I suggest using the 71 Squared Particle Designer: http://particledesigner.71squared.com/
It did the trick for me.
Try this:
Define the emitters as instance variables (in your .h).
Call this before the code above:
if (emitter.parent == self) {
    NSLog(@"em1 released");
    [emitter release];
}
if (emitter2.parent == self) {
    NSLog(@"em2 released");
    [emitter2 release];
}
This checks whether the emitter is still a child and releases it, so you can remove the emitter.autoRemoveOnFinish line and your emitter will show every time. A sketch of that setup follows below.
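Here is a minimal sketch of that setup (the MyLayer name is an assumption, and this follows the pre-ARC, manual retain/release style of the snippets above; it also detaches the old emitters from the node before releasing them):

// MyLayer.h (sketch): keep the emitters as instance variables so they can be
// checked and cleaned up before the next run of the effect.
@interface MyLayer : CCLayer {
    CCParticleExplosion *emitter;
    CCParticleMeteor *emitter2;
}
@end

// Before re-creating the effects, detach and release any emitter that is still
// attached from the previous run.
if (emitter.parent == self) {
    [self removeChild:emitter cleanup:YES];
    [emitter release];
    emitter = nil;
}
if (emitter2.parent == self) {
    [self removeChild:emitter2 cleanup:YES];
    [emitter2 release];
    emitter2 = nil;
}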