Replicating Camera View - DeviceOrientationControls to TrackballControls

I am trying to replicate a view from a phone (using DeviceOrientationControls) to a desktop (using TrackballControls). I am passing the view state (camera position & direction) through an intermediary server, and have that part mostly working.
I'm having trouble setting the camera rotation on the desktop. The cameras are synced to look at the same point, but the view on the desktop (which receives the view state from the phone) ends up rolled around the viewing axis.
I definitely don't fully understand quaternions or rotation order. I've tried applying both, but clearly I'm out of my element. I guess I'm just looking for some hints on how to sync the camera rotation on the desktop.
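A minimal sketch of the phone side of this setup, assuming the stock DeviceOrientationControls from the three.js examples; sendToServer and the message shape are placeholders for whatever transport sits in between. Sending the full quaternion (rather than just a direction vector) is what carries the roll across:

    // Phone side (sketch): DeviceOrientationControls writes the device
    // orientation into camera.quaternion on update(), so position + quaternion
    // is the complete view state, including roll.
    // `sendToServer` is a placeholder for the actual transport.
    const controls = new THREE.DeviceOrientationControls(camera);

    function animate() {
      requestAnimationFrame(animate);
      controls.update();
      sendToServer({
        position: camera.position.toArray(),
        quaternion: camera.quaternion.toArray(),
      });
      renderer.render(scene, camera);
    }
    animate();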

Looks like I had a (trackball) controls.update() in my animate() that was blowing away the rotation I was setting. Camera position and direction were not changed by it, but the rotation (the "roll" of the camera) was.
In TrackballControls, it would be nice to have a setting for programmatically updating the camera's rotation that wouldn't get squashed by a call to rotateCamera(). I'll have to think about that, because it doesn't seem like it would be easy to implement.
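On the desktop side, a sketch of why controls.update() was the culprit: TrackballControls.update() ends with a lookAt() on the camera, which rebuilds the orientation from camera.up and discards any roll that was set directly, so the received quaternion has to be applied without that call running afterwards. onViewState is a placeholder for however messages arrive from the server:

    // Desktop side (sketch): apply the received state directly to the camera.
    // `onViewState` is a placeholder for the incoming message handler.
    onViewState(function (state) {
      camera.position.fromArray(state.position);
      camera.quaternion.fromArray(state.quaternion);
    });

    function animate() {
      requestAnimationFrame(animate);
      // While mirroring the phone, skip trackballControls.update():
      // update() calls camera.lookAt(target), which rebuilds the orientation
      // from camera.up and throws away the roll that was just applied.
      renderer.render(scene, camera);
    }
    animate();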

Related

Is there a way I can let the camera zoom but not rotate?

Is there a way I can let SceneKit's camera zoom but not rotate? And how can I limit the maximum and minimum zoom the user can do with the camera?
It depends on what you mean by zoom – if you mean the same thing as 'zooming' a camera lens, you want to modify the yFov and xFov (field of view) attributes of the SCNCamera object. The camera stays in the exact same location, but changes its field of view like a zoom lens.
I can't see how the camera could rotate while zooming it this way – I'd need to see more context about where you're using the camera. If you don't touch the SCNNode the camera is attached to, it can't possibly rotate.
You're talking about user camera movement with allowsCameraControl, right? I don't think that's really meant to be the basis for a sophisticated user camera movement scheme, more of a simple debugging aid. If you really want fine control over how the user can move the camera, you're best served by creating your own camera node and moving it / changing its properties in response to whatever user input you want to handle (gesture recognizers, game controllers, etc).
I suppose you might be able to constrain the automatic user camera by implementing a scene renderer delegate willRenderScene method. You'd have to get the current pointOfView node, check its position and camera parameters, and change them if they're outside whatever bounds you want. But A) I'm not sure this would work, and B) it's probably not a great idea — it's sort of like messing with the internal view hierarchy of a system control class.

Camera constraints on Verold

There is a problem that multiple users of my model have noticed: when you right-click the model (here), the movements are hypersensitive. Orbit and zoom are fine and steady, but panning now more often than not results in the model rapidly shooting off into the distance. I've been playing with the camera controls to no avail, and I don't want to simply remove the pan option for the client.
Also, is there any way to transition between cameras without a fade, just a movement of the camera?
Also, Verold not working on Internet Explorer 11... any news?
Thanks
Solved: the problem was the focal point (the white-lined sphere). It had accidentally been set far off into the distance (easily done without noticing, and there is no undo). I just brought it back to the object.
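For anyone hitting the same symptom outside the Verold editor: orbit-style controls typically scale their pan step by the camera-to-target distance, which is why a focal point left far from the model makes panning look hypersensitive. An illustrative three.js analog of "bringing the focal point back to the object" follows; this is an assumption about how Verold's controls behave, and model/controls are placeholders:

    // Illustrative three.js analog (not Verold-specific): snap the controls
    // target back onto the model so pan steps, which scale with the
    // camera-to-target distance, stop overshooting.
    // `model` and `controls` are placeholders for the scene object and an
    // OrbitControls-style instance.
    const box = new THREE.Box3().setFromObject(model);
    const center = box.getCenter(new THREE.Vector3());

    controls.target.copy(center);
    controls.update();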

Rotating Camera Around Self, limiting rotation angle in ThreeJS

I am using Three.js to create a scene.
I want to be able to set my camera in the corner of a room and have the viewer rotate the camera around on the spot, without moving the camera's position.
Also, I want to limit the span of rotation (so that they cannot rotate the camera to look behind them).
I found FirstPersonControls, but I want the user to have to click and drag to rotate the view.
I know of the Minecraft example, but it doesn't do the click-and-drag or the angle restriction.
Does anyone know of any other existing examples that accomplish something similar? Thanks.
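There may not be a stock control that does exactly this, but a hedged sketch of the usual approach: keep camera.position fixed, accumulate yaw and pitch from click-and-drag, clamp both, and write them back as a 'YXZ' Euler so the camera never rolls. The limits, sensitivity, and element names below are illustrative rather than an existing three.js control:

    // Sketch: click-and-drag look-around from a fixed position, with clamped
    // angles. Assumes an existing `camera` and `renderer`; the limits and
    // sensitivity are illustrative.
    let yaw = 0, pitch = 0, dragging = false;
    const YAW_LIMIT = Math.PI / 3;   // +/- 60 degrees left/right
    const PITCH_LIMIT = Math.PI / 4; // +/- 45 degrees up/down

    camera.rotation.order = 'YXZ'; // yaw first, then pitch, no roll

    renderer.domElement.addEventListener('mousedown', () => { dragging = true; });
    window.addEventListener('mouseup', () => { dragging = false; });
    window.addEventListener('mousemove', (event) => {
      if (!dragging) return;
      yaw   = Math.max(-YAW_LIMIT,   Math.min(YAW_LIMIT,   yaw   - event.movementX * 0.005));
      pitch = Math.max(-PITCH_LIMIT, Math.min(PITCH_LIMIT, pitch - event.movementY * 0.005));
      camera.rotation.set(pitch, yaw, 0); // camera.position is never touched
    });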

GLKit Rendering and iOS Device Orientation (Face Up / Down)

I have an app with some projection matrix set-up code based on Xcode 4.5.2's OpenGL Game template. In the update function I set appropriate z-translation values for baseModelViewMatrix by querying [[UIDevice currentDevice] userInterfaceIdiom] as well as UIDeviceOrientationIsLandscape: and UIDeviceOrientationIsPortrait:. This effectively lets me set the scale of the area rendered on screen on a per-orientation basis for each device. I also call update from willAnimateRotationToInterfaceOrientation:duration: to maintain the correct rendering proportions for each orientation of the device during runtime.
This all works fine; however, I've noticed that when the device is oriented either face up or face down, my scene is not displayed and I only see what appears to be an empty GLKView. Rotating the device to any orientation perpendicular to the ground plane restores the scene to its expected behavior. I tried checking UIDeviceOrientationIsValidInterfaceOrientation:, which seems like it should handle what I need, but did not see any difference in behavior.
My guess is that GLKit does some automatic updating of the GLKView when a change in orientation is detected, but I didn't find any clear answers on what might be causing this particular behavior. Any thoughts on what's going on? Thanks in advance.
If you are using a function like GLKMatrix4MakeLookAt, you need to make sure your look direction is not parallel with the up direction. In the case of looking straight up or down, you'll need to adjust the camera's "up" vector to another value such as (0, 0, -1) or (0, 0, 1).

iPhone/iPad Pan, Pinch and Rotate a view simultaneously

I'm trying to recreate the behaviour of the photos app, where you can pan, pinch and rotate simultaneously. I have the basics working, but I'm stuck on something.
For the pan, I offset the centrepoint of the view by the translation amount. This is working well.
For the pinch and rotate I'm applying an affine transform to the view. This is also working well.
The problem is when I pan (i.e. move the subview) and then pinch or rotate: the affine transform seems to get applied using the old centre point of the view. I thought it should use the current centre point as the transform origin; since I'm updating the centrepoint when I pan, I thought this would work. Instead of a rotation about the centrepoint of the subview, I get a rotational movement about the original centrepoint.
How do I correct this? It's clearly possible to combine these three gestures intuitively, as the photos app does it successfully.
I tried using an affine translation for the pan, but the effect was the same.
Apple have confirmed this appears to be a bug with the way that gesture recognisers are working in iPhone OS 3.2. I have filed a bug report.