iPhone/iPad Pan, Pinch and Rotate a view simultaneously - objective-c

I'm trying to recreate the behaviour of the photos app, where you can pan, pinch and rotate simultaneously. I have the basics working, but I'm stuck on something.
For the pan, I offset the centrepoint of the view by the translation amount. This is working well.
For the pinch and rotate I'm applying an affine transform to the view. This is also working well.
The problem is when I pan (i.e. move the subview) and then pinch or rotate: the affine transform seems to get applied using the old centrepoint of the view. I thought it should use the current centrepoint as the transform origin, and since I'm updating the centrepoint when I pan, I thought this would work. Instead of a rotation about the centrepoint of the subview, I get a rotational movement about the original centrepoint.
How do I correct this? It's clearly possible to combine these three gestures intuitively, as the photos app does it successfully.
I tried using an affine translation for the pan, but the effect was the same.
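For reference, here is a minimal sketch of the setup described above (handler names are placeholders; the delegate method is what lets the three recognisers work simultaneously):

// Allow pan, pinch and rotate to be recognised at the same time.
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
    shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)other
{
    return YES;
}

// Pan: offset the centrepoint by the translation, then reset it.
- (void)handlePan:(UIPanGestureRecognizer *)pan
{
    CGPoint t = [pan translationInView:self.view];
    pan.view.center = CGPointMake(pan.view.center.x + t.x, pan.view.center.y + t.y);
    [pan setTranslation:CGPointZero inView:self.view];
}

// Pinch: compose the incremental scale into the view's transform.
- (void)handlePinch:(UIPinchGestureRecognizer *)pinch
{
    pinch.view.transform = CGAffineTransformScale(pinch.view.transform, pinch.scale, pinch.scale);
    pinch.scale = 1.0;
}

// Rotate: compose the incremental rotation into the view's transform.
- (void)handleRotate:(UIRotationGestureRecognizer *)rotate
{
    rotate.view.transform = CGAffineTransformRotate(rotate.view.transform, rotate.rotation);
    rotate.rotation = 0.0;
}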

Apple have confirmed this appears to be a bug with the way that gesture recognisers are working in iPhone OS 3.2. I have filed a bug report.

Related

Replicating Camera View - DeviceOrientationControls to TrackballControls

I am trying to replicate a view from a phone (using DeviceOrientationControls) to a desktop (using TrackballControls). I am passing the view state (camera position & direction) through an intermediary server, and have that part mostly working.
I'm having trouble setting the camera rotation on the desktop. The cameras are synced to look at the same point, but the desktop view (receiving the view state from the phone) rolls around the viewing axis.
I definitely don't fully understand quaternions or rotation order. I've tried applying those, but clearly I'm out of my element. I guess I'm just looking for some hints on how to sync the camera rotation on the desktop.
Looks like I had a (trackball) controls.update() call in my animate() loop that was blowing away the rotation I was setting. Camera position and direction are not changed by this, but the rotation (the "roll" of the camera) was.
In TrackballControls, it would be nice to have a setting for programmatically updating the camera's rotation that wouldn't get squashed by a call to rotateCamera(). I'll have to think about that, because it doesn't seem like it would be easy to implement.

Rotating Camera Around Self, limiting rotation angle in ThreeJS

I am using Three.js to create a scene.
I want to be able to set my camera in the corner of a room and have the viewer rotate the camera around on the spot, without moving the camera's position.
Also, I want to limit the span of rotation (so that they cannot rotate the camera to look behind them).
I found FirstPersonControls, but I want the user to have to click and drag to rotate the view.
I know of the minecraft example, but it doesn't do the click and drag or angle restriction.
Does anyone know of any other existing examples that accomplish something similar? Thanks.

Seeking explanation for the difference in animation performance between iOS6 and iOS7

I have been working on an iPad app that performs animations on very large images (full screen images that can be zoomed at 2x and still be retina quality). I have spent a lot of time getting smooth transitions when zooming and panning. When running the app on iOS7, however, the animations become really jerky (low frame rate).
Further testing shows that it is the zoom animation that causes the problem (panning does not cause a problem). Interestingly, I have been able to fix it by setting the alpha of the image being scaled to 0.995 (instead of 1.0).
I have two questions:
What has changed in iOS7 to make this happen?
Why does changing the opacity of the view make a difference?
Further information for the above questions:
Animation Setup
The animations are all pre-defined and are played upon user interaction. The animations are all a mix of pan and zoom. The animations are really simple:
[UIView animateWithDuration:animationDuration
                      delay:animationDelay
                    options:UIViewAnimationOptionCurveEaseInOut
                 animations:^{
                     self.frame = nextFrame;
                     //...
                 } completion:^(BOOL finished) {
                     //...
                 }];
To fix the jerky animation, I set the alpha before the animation:
self.alpha = 0.99;
Some interesting points:
Setting the alpha inside of the animation works as well
Setting the alpha back to 1.0 after the animation and then doing the reverse animation with a 1.0 alpha does not give a smooth reverse animation.
Opacity fix
I have previously used the opacity fix to make animations smooth when scaling and panning multiple images. For example, I had two large images panning and scaling at different speeds with one on top of the other. When a previously un-rendered part of the lower image (the image on the bottom) became visible, the animation would become jerky (panning as well as scaling). My thought for why alpha helps in this case is, if the top image has a bit of transparency, the bottom image must always be rendered, which means it can be cached before the animation takes place. This thought is backed by doing the reverse animation and not seeing the jerky animation. (I guess I would be interested to know if anyone has different thoughts on this as well).
Having said the above, I don't know how this would have an effect when there is just one image (as in the situation I am describing in my question), particularly when, after getting the jerky animation, the reverse animation is still jerky. Another point of difference between the two situations is that only scaling causes the problem in the current issue, while in the double-image issue it was panning as well as scaling.
I hope the above is clear - any insights appreciated.
Look at Group Opacity. iOS 7 has that turned ON by default and this changes the way views/layers are composited:
When the UIViewGroupOpacity key is not present, the default value is now YES. The default was previously NO.
This means that subviews of a transparent view will first be composited onto that transparent view, then the precomposited subtree will be drawn as a whole onto the background. A NO setting results in less expensive, but also less accurate, compositing: each view in the transparent subtree is composited onto what's underneath it, according to the parent's opacity, in the normal painter's algorithm order.
(source: iOS7 Release Notes)
With this setting on, compositing, including during animations, is much more expensive.
Also, have a look at the CoreGraphics Instruments tool to check if you have lots of off-screen image compositing going on.
Do you have any sort of changes going on in the view being animated? That would trigger more discarding of the rendered layer image from the backing store.
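If group opacity turns out to be the cause, one possible workaround (an assumption on my part, not something from the release notes) is to opt the animated view's layer out of group opacity, restoring the cheaper compositing path for that view alone:

#import <QuartzCore/QuartzCore.h>

// Opt this one view out of group-opacity compositing (CALayer, iOS 7+).
// imageView is a placeholder for the view being animated.
imageView.layer.allowsGroupOpacity = NO;

Alternatively, setting the UIViewGroupOpacity key to NO in the app's Info.plist restores the previous behaviour app-wide.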

Implementing image gestures: UIImageView mode conflicts with pinch gesture

Pan works fine for me, but pinch with recognizer code like this does not:
- (void)pinchDetected:(UIPinchGestureRecognizer *)pinchRecognizer
{
    // Compose the incremental scale into the current transform, then reset
    // the recognizer so the next callback reports a relative scale again.
    CGFloat scale = pinchRecognizer.scale;
    self.imageView.transform = CGAffineTransformScale(self.imageView.transform, scale, scale);
    pinchRecognizer.scale = 1.0;
}
What happens is that the image view continuously resets the image according to its contentMode, whether it's center, aspect fit, etc.
I solved my problem: I'm making my first image viewer, and to learn how to pinch and zoom, I naively googled for how to support gestures, which are not enabled by simply adding an image view to a view controller.
Unfortunately, there are many "tutorials" on this, showing how to program with the gesture recognizers, etc., and I spent a few hours going down this route unnecessarily. I kept going because I felt tantalizingly close to getting things working: the pan gesture was flawless and it was "just" zoom that was broken.
(Side question: is there some awesome source for current, iOS 6 "best practices"?)
It turns out, this is the wrong path and needlessly complex for basic gesture recognition. All that's needed is to place the image view in a scroll view; 99% of the programming is taken care of. (I was convinced this had to be the case; I couldn't believe that such core functionality wouldn't be provided by Cocoa Touch.)
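For anyone following the same route, a minimal sketch of that scroll-view approach (scrollView and imageView are placeholder outlets, with the image view assumed to be a subview of the scroll view):

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.scrollView.delegate = self;
    self.scrollView.minimumZoomScale = 1.0;
    self.scrollView.maximumZoomScale = 4.0;
}

// UIScrollViewDelegate: return the subview the scroll view should zoom.
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return self.imageView;
}

With just the delegate method and a zoom-scale range, pinch-to-zoom and panning come for free.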

GLKit Rendering and iOS Device Orientation (Face Up / Down)

I have an app with some projection matrix set-up code based on Xcode 4.5.2's OpenGL Game template. In the update function I set appropriate z-translation values for baseModelViewMatrix by querying [[UIDevice currentDevice] userInterfaceIdiom] as well as UIDeviceOrientationIsLandscape: and UIDeviceOrientationIsPortrait:. This effectively lets me set the scale of the area rendered on screen on a per-orientation basis for each device. I also call update from willAnimateRotationToInterfaceOrientation:duration: to maintain the correct rendering proportions for each orientation of the device during runtime.
This all works fine, however I've noticed that when the device is oriented either face-up or face-down my scene is not displayed, and I only see what appears to be an empty GLKView. Rotating the device to any orientation perpendicular to the ground plane restores the scene to its expected behavior. I tried checking UIDeviceOrientationIsValidInterfaceOrientation:, which seems like it should handle what I need, but did not see any difference in behavior.
My guess is that GLKit does some automatic updating of the GLKView when a change in orientation is detected, but I didn't find any clear answers on what might be causing this particular behavior. Any thoughts on what's going on? Thanks in advance.
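A sketch of the per-orientation branch being described (all values and names here are illustrative, not the actual app's). Note that UIDeviceOrientationFaceUp and FaceDown satisfy neither UIDeviceOrientationIsLandscape nor UIDeviceOrientationIsPortrait, so code that only handles those two tests silently skips the face-up/down states:

#import <GLKit/GLKit.h>

- (void)update
{
    UIDeviceOrientation orientation = [[UIDevice currentDevice] orientation];
    BOOL isPad = [[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPad;

    // Illustrative z-translation values, chosen per device and orientation.
    float zTranslation = isPad ? -6.0f : -4.0f;
    if (UIDeviceOrientationIsLandscape(orientation)) {
        zTranslation *= 1.5f;
    }
    GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, zTranslation);
    // ... projection set-up continues as in the OpenGL Game template.
}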
If you are using a function like GLKMatrix4MakeLookAt, you need to make sure your look direction is not parallel with the up direction. When looking straight up or down, you'll need to adjust the camera's "up" vector to another value, such as (0, 0, -1) or (0, 0, 1).
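A sketch of that guard (eye and lookDir are placeholders for the camera position and look direction):

GLKVector3 dir = GLKVector3Normalize(lookDir);
GLKVector3 up = GLKVector3Make(0.0f, 1.0f, 0.0f);
// If the look direction is (nearly) parallel to +Y, swap in a z-axis up vector.
if (fabsf(GLKVector3DotProduct(dir, up)) > 0.99f) {
    up = GLKVector3Make(0.0f, 0.0f, dir.y > 0.0f ? -1.0f : 1.0f);
}
GLKMatrix4 viewMatrix = GLKMatrix4MakeLookAt(eye.x, eye.y, eye.z,
                                             eye.x + dir.x, eye.y + dir.y, eye.z + dir.z,
                                             up.x, up.y, up.z);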