Three.js: way to change the up axis? - camera

I see that the ColladaLoader has a way to set the upAxis to 'Z' ... is there a way to do that in Three.js proper so that the camera's up axis is Z?
Thanks!

You can set the camera up vector like so:
camera.up.set( 0, 0, 1 );
Then when you call camera.lookAt( point ), it will work as you expect.
Edit: Updated to three.js r.68
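For context, camera.up is only consulted when lookAt rebuilds the camera's orientation from the forward direction, the up hint, and their cross products. Here is a plain-JavaScript sketch of that basis computation (hand-rolled vectors, no three.js dependency; just to illustrate why setting up changes the result):

```javascript
// Minimal vector helpers (stand-ins for THREE.Vector3 operations).
const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const cross = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];
const norm = (a) => {
  const l = Math.hypot(a[0], a[1], a[2]);
  return [a[0] / l, a[1] / l, a[2] / l];
};

// Build the camera basis the way lookAt conceptually does:
// forward toward the target, right = forward x up, trueUp = right x forward.
function cameraBasis(eye, target, up) {
  const forward = norm(sub(target, eye));
  const right = norm(cross(forward, up));
  const trueUp = cross(right, forward);
  return { forward, right, trueUp };
}

// With a Z-up hint, looking along +X keeps +Z as the camera's true up.
const basis = cameraBasis([0, 0, 0], [1, 0, 0], [0, 0, 1]);
```

With the default Y-up hint (0, 1, 0) the same call would yield (0, 1, 0) as the true up, which is why changing camera.up before lookAt is all that's needed.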

Related

Method ViroARScene.getCameraOrientationAsync() returns strange values in rotation array

I'm developing a PoC with the ViroReact library, but I'm getting strange values for the camera rotation.
Environment:
Device: Android 10. Xiaomi Mi 9
ViroReact 2.20.2
The ViroARScene.getCameraOrientationAsync() returns unexpected values in the rotation array when I rotate the device around the y-axis, trying to keep the x and z axes fixed.
Specifically, when the y rotation reaches -90°, the x/z values jump to ±180°, and from that point on the y values move back toward 0: for instance, instead of -135° the y value is -45°, with the x/z values at ±180°. In other words, the absolute value of y NEVER exceeds 90°.
Some examples (values have an error margin of about 6 degrees):
Rotation expected: [0, -90, 0]. Returned rotation: [+/-180, -90, +/-180]
Rotation expected: [0, -135, 0]. Returned rotation: [+/-180, -45, +/-180]
Rotation expected: [0, -180, 0]. Returned rotation: [+/-180, 0, +/-180]
Questions:
Why is the absolute value of the y rotation never greater than 90°?
Why do the x/z values jump to ±180° past a certain point (±90° on the y-axis) when I'm only rotating the device around the y-axis?
Is this the expected behavior? If so, could anyone explain these values, please?
The code to retrieve the values:
<ViroARScene onTrackingUpdated={this._onInitialized} anchorDetectionTypes={"PlanesVertical"}>
  ...
</ViroARScene>

_onInitialized(state, reason) {
  if (state === ViroConstants.TRACKING_NORMAL && reason === ViroConstants.TRACKING_REASON_NONE) {
    console.log('Tracking initiated');
    this._scene.getCameraOrientationAsync().then((orientation) => {
      console.log('Cam rot:', round(orientation.rotation));
    });
  }
}
I've also created a GitHub issue with some mockups to show the rotation values expected and returned: https://github.com/ViroCommunity/viro/issues/13
I think what you're running into is gimbal lock, which is the reason many 3D rotations are expressed as quaternions instead of the x/y/z (a.k.a. Euler, pronounced "oiler") angles you are using now. It's probably expected behaviour for your system.
I'm not familiar with your platform, but it may have built-in helpers or alternative methods that let you work with quaternions instead. If not, a solution might be to install a library (or write some code) that translates between Euler angles and quaternions so that your calculations make more sense, if you're going to be spending time around that part of the rotation range.
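To see that the reported triples really are the same rotation, you can convert both to quaternions and compare. A small sketch in plain JavaScript, assuming an intrinsic X-Y-Z rotation order (Viro's actual convention may differ):

```javascript
// Multiply two quaternions given as [w, x, y, z] (Hamilton product).
function qmul(a, b) {
  const [aw, ax, ay, az] = a;
  const [bw, bx, by, bz] = b;
  return [
    aw * bw - ax * bx - ay * by - az * bz,
    aw * bx + ax * bw + ay * bz - az * by,
    aw * by - ax * bz + ay * bw + az * bx,
    aw * bz + ax * by - ay * bx + az * bw,
  ];
}

// Quaternion for intrinsic X-Y-Z Euler angles given in degrees.
function eulerToQuat(xDeg, yDeg, zDeg) {
  const h = (d) => (d * Math.PI) / 360; // half-angle in radians
  const qx = [Math.cos(h(xDeg)), Math.sin(h(xDeg)), 0, 0];
  const qy = [Math.cos(h(yDeg)), 0, Math.sin(h(yDeg)), 0];
  const qz = [Math.cos(h(zDeg)), 0, 0, Math.sin(h(zDeg))];
  return qmul(qmul(qx, qy), qz);
}

// [0, -135, 0] and [180, -45, 180] produce the same rotation:
// their quaternions agree up to sign (q and -q are the same rotation),
// so the absolute value of their dot product is 1.
const a = eulerToQuat(0, -135, 0);
const b = eulerToQuat(180, -45, 180);
const dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3];
```

This is exactly why the library can "legally" report either triple; in quaternion form the ambiguity disappears.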

Serializing Camera State in ThreeJS

What is the best way to serialize (or save or marshal) the state of the camera in a ThreeJS scene, and then un-serialize (or load or unmarshal) the camera later?
Right now I am saving the x, y, z coordinates of the camera's position, up, and (Euler angle) rotation fields. Later I try to restore this camera with calls to position.set(), up.set(), and rotation.set(), and then follow-up with a call to updateProjectionMatrix(). I assume the default Euler angle rotation order is the same when serializing and un-serializing.
Is this correct?
I would suggest instead storing the camera's matrix. This encompasses the entire transformation of the camera, including position, rotation and scale.
Serializing:
const cameraState = JSON.stringify(camera.matrix.toArray());
// ... store cameraState somehow ...
Deserializing:
// ... read cameraState somehow ...
camera.matrix.fromArray(JSON.parse(cameraState));
// Get back position/rotation/scale attributes
camera.matrix.decompose(camera.position, camera.quaternion, camera.scale);
The accepted answer did not work for me; instead I came up with the following solution:
To save: serialize the x, y and z properties of camera.position and camera.rotation.
To reload: deserialize those six values and reassign them, for example:
camera.position.x = saved.position.x;
Then call camera.updateProjectionMatrix() to recalculate the projection matrix.
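The save/reload cycle described above can be sketched with plain objects standing in for THREE.Vector3 / THREE.Euler (illustrative only, not the three.js API itself):

```javascript
// Plain-object stand-ins for camera.position and camera.rotation
// (in real code these would be THREE.Vector3 / THREE.Euler instances).
const camera = {
  position: { x: 1, y: 2, z: 3 },
  rotation: { x: 0.1, y: 0.2, z: 0.3 },
};

// Save: keep only the six numbers (plus the Euler order if you change it).
function saveCamera(cam) {
  const { x, y, z } = cam.position;
  const r = cam.rotation;
  return JSON.stringify({
    position: { x, y, z },
    rotation: { x: r.x, y: r.y, z: r.z },
  });
}

// Load: reassign each component; the caller should then invoke
// camera.updateProjectionMatrix() as the answer describes.
function loadCamera(cam, json) {
  const saved = JSON.parse(json);
  Object.assign(cam.position, saved.position);
  Object.assign(cam.rotation, saved.rotation);
}

const state = saveCamera(camera);
camera.position.x = 99;     // mutate...
loadCamera(camera, state);  // ...and restore
console.log(camera.position.x); // 1
```

If you change the camera's Euler rotation order from the default, store that order alongside the six numbers so the restore is unambiguous.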

Not getting how the property rotation works in SceneKit

When you specify a rotation for an object, you do something like this :
_earthNode.rotation = SCNVector4Make(1, 0, 0, M_PI/2);
What I am not getting is how to specify a specific rotation for each axis ? Because let's say that I wanted to rotate my node from PI on x, PI/2 on y, and PI/4 on z, how would I do that ? I thought that I could do something like this :
_earthNode.rotation = SCNVector4Make(1, 0.5, 0.25, M_PI);
But it doesn't change anything....
How does this property work?
The rotation vector in SceneKit is specified as the axis of rotation (first 3 components) followed by the angle (4th component); this is called the axis-angle representation.
The format you are trying to use (a separate angle for each axis) is called Euler angles (unless I'm remembering wrong).
Translating between the two representations is just a bit of trigonometry. A quick online search for "Euler angles to axis angle" led to this page, which shows how to do it in Java.
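For illustration, here is that trigonometry (shown in JavaScript, but the math is language-agnostic): build a quaternion from the Euler angles, then read off the axis and angle. The intrinsic X-Y-Z order is an assumption; SceneKit's convention may differ:

```javascript
// Convert intrinsic X-Y-Z Euler angles (radians) to the axis-angle
// form that SCNNode.rotation expects, going through a quaternion.
function eulerToAxisAngle(x, y, z) {
  // Half-angle sines/cosines for each axis.
  const cx = Math.cos(x / 2), sx = Math.sin(x / 2);
  const cy = Math.cos(y / 2), sy = Math.sin(y / 2);
  const cz = Math.cos(z / 2), sz = Math.sin(z / 2);
  // Quaternion components (w, qx, qy, qz) for Rx * Ry * Rz.
  const w  = cx * cy * cz - sx * sy * sz;
  const qx = sx * cy * cz + cx * sy * sz;
  const qy = cx * sy * cz - sx * cy * sz;
  const qz = cx * cy * sz + sx * sy * cz;
  const angle = 2 * Math.acos(Math.min(1, Math.max(-1, w)));
  const s = Math.sqrt(1 - w * w);
  // For a near-zero angle the axis is arbitrary; pick X.
  if (s < 1e-9) return { axis: [1, 0, 0], angle: 0 };
  return { axis: [qx / s, qy / s, qz / s], angle };
}

// A pure quarter-turn about X comes back as axis (1, 0, 0), angle PI/2,
// which is exactly SCNVector4Make(1, 0, 0, M_PI/2) from the question.
const r = eulerToAxisAngle(Math.PI / 2, 0, 0);
```

This also shows why SCNVector4Make(1, 0.5, 0.25, M_PI) doesn't do what the question hoped: the first three components are a single rotation axis, not per-axis angles.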
SCNNode has an eulerAngles property that allows you to do just that

Draw rotated text to parent coordinate system

I have a UIView, which I'm drawing manually in the 'drawRect'-Function.
It is basically a coordinate system, which has 'Values' on the Y-axis and 'Time' on the X-axis.
Due to space issues, I want the Timestamps to be vertical, instead of horizontal.
For this purpose, I use:
CGContextSaveGState(ctx); //Saves the current graphic context state
CGContextRotateCTM(ctx, M_PI_2); //Rotates the context by 90° clockwise
strPos = CGContextConvertPointToUserSpace(ctx, strPos); //SHOULD convert the point to user-space coordinates
[str drawAtPoint:strPos withFont:fnt]; //Draws the text to the rotated CTM
CGContextRestoreGState(ctx); //Restores the CTM to the previous state.
ctx (CGContextRef), strPos (CGPoint) and str (NSString) are variables that have been initialized properly and correctly for 'horizontal' text, with a width equal to the text height.
While this code works flawlessly on the iPhone 3, it gives me a complete mess on the iPhone 4 (Retina), because the CGContextConvertPointToUserSpace function produces completely different results, even though the coordinate system of the iPhone is supposed to remain the same.
I also tried using CGAffineTransform, but only with the same results.
To summarize my question: How do I draw a text to a calculated position in the parent coordinate system (0, 0 being top left)?
After studying the Apple docs regarding Quartz 2D once more, I came to realize that the rotation by Pi/2 moves all my writing off screen to the left.
I can make the writing appear in a vertical line by translating the CTM by +height.
I'll keep trying, but would still be happy to get an answer.
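The off-screen effect is plain affine geometry. A quick sketch (in JavaScript rather than Quartz, using the same row-vector convention as CGAffineTransform; the view height of 480 is an assumed example value):

```javascript
// Apply a 2D affine transform using CGAffineTransform's convention:
// (x, y) -> (a*x + c*y + tx, b*x + d*y + ty)
const apply = (t, [x, y]) => [t.a * x + t.c * y + t.tx, t.b * x + t.d * y + t.ty];

// Rotation by +PI/2, i.e. what CGContextRotateCTM(ctx, M_PI_2) applies:
// a = cos, b = sin, c = -sin, d = cos.
const rot = { a: 0, b: 1, c: -1, d: 0, tx: 0, ty: 0 };

// Any point with positive y lands at negative x, i.e. off screen to the left:
console.log(apply(rot, [10, 50])); // [-50, 10]

// Translating by +height first brings the rotated content back into view:
const height = 480; // assumed view height, for illustration
const rotThenShift = { ...rot, tx: height };
console.log(apply(rotThenShift, [10, 50])); // [430, 10]
```

Since every y in the view satisfies 0 ≤ y ≤ height, the shifted x coordinate (height − y) is always on screen, which matches the "translate the CTM by +height" observation above.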
Edit: Thanks to lawicko's heads-up I was able to fix the problem. See Answer for details.
I would like to thank lawicko for pointing this out.
During my tests I made two mistakes...but he is of course correct. Using CGContextShowTextAtPoint is the simplest solution, since it doesn't require rotating the entire CTM.
Again, THANK you.
Now, for the actual answer to my question.
To draw a rotated text at position x/y, the following code works for me.
CGAffineTransform rot = CGAffineTransformMakeRotation(M_PI_2); //Creates the rotation
CGContextSelectFont(ctx, "TrebuchetMS", 10, kCGEncodingMacRoman); //Selects the font
CGContextSetTextMatrix(ctx, CGAffineTransformScale(rot, 1, -1)); //Mirrors the rotated text, so it will be displayed correctly.
CGContextShowTextAtPoint(ctx, strPos.x, strPos.y, TS, 5); //Draws the text
ctx is the CGContext, strPos the desired position in the parent coordinate system, TS a char array.
Again, thank you lawicko.
I probably would've searched forever if not for your suggestion.
Maybe this answer will help someone else, who comes across the same problem.

Objective C - Detect a "path" drawing, inside a map image

I have a physical map (real world), for example, a little town map.
A "path" line is painted over the map, think about it like "you are here. here's how to reach the train station" :)
Now, let's suppose I can get an image of that scenario (e.g., coming from a photo).
An image that looks like:
My goal is not an easy one: I want to GET the path OUT of the image, i.e., separate the two layers.
Is there a way to extract those red marks from the image?
Maybe using CoreGraphics? Maybe an external library?
This is not an Objective-C-specific question, but I am working on Apple iOS.
I have already worked with something similar: face recognition.
Now, the question I expect is: "What do you mean by PATH?"
Well, I really don't know; maybe a line (see the image above) of a completely different color from the 'major' colors in the background.
Let's talk about it.
If you can use OpenCV then it becomes simpler. Here's a general method:
Separate the image into Hue, Saturation and Value channels (HSV colorspace)
Here's the OpenCV code:
// Compute HSV image and separate into colors
IplImage* hsv = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 3 );
cvCvtColor( img, hsv, CV_BGR2HSV );
IplImage* h_plane = cvCreateImage( cvGetSize( img ), 8, 1 );
IplImage* s_plane = cvCreateImage( cvGetSize( img ), 8, 1 );
IplImage* v_plane = cvCreateImage( cvGetSize( img ), 8, 1 );
cvCvtPixToPlane( hsv, h_plane, s_plane, v_plane, 0 );
Deal with the Hue (h_plane) image only as it gives just the hue without any change in value for a lighter or darker shade of the same color
Check which pixels have a red hue (red sits at 0° in HSV, but note that OpenCV stores 8-bit hue in the 0-179 range, so check the OpenCV values)
Copy these pixels into a separate image
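If OpenCV isn't available, the same hue-threshold idea can be sketched in plain JavaScript (illustrative only; the saturation and hue thresholds are arbitrary assumptions you would tune for a real photo):

```javascript
// Compute the HSV hue (0-360 degrees) of an [r, g, b] pixel (0-255 each).
function hue([r, g, b]) {
  const max = Math.max(r, g, b);
  const min = Math.min(r, g, b);
  const d = max - min;
  if (d === 0) return 0; // grey: hue is undefined, treat as 0
  let h;
  if (max === r) h = ((g - b) / d) % 6;
  else if (max === g) h = (b - r) / d + 2;
  else h = (r - g) / d + 4;
  return (h * 60 + 360) % 360;
}

// Keep only "red-ish" pixels: hue near 0 (or near 360) with enough
// saturation spread so grey/white background pixels are rejected.
function isRed(px, tol = 20) {
  const [r, g, b] = px;
  const spread = Math.max(r, g, b) - Math.min(r, g, b);
  const h = hue(px);
  return spread > 50 && (h < tol || h > 360 - tol);
}

// Red survives; green and grey are filtered out.
const pixels = [[200, 30, 30], [30, 200, 30], [120, 120, 120]];
console.log(pixels.map((p) => isRed(p))); // [true, false, false]
```

Working on hue rather than raw RGB is what makes the filter robust to lighter and darker shades of the same red, which is the point of step 2 above.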
I'd strongly suggest using the OpenCV library if possible; it is basically made for such tasks.
You could filter by color: define a threshold for what counts as red, set everything else to transparent (alpha), and what is left over is your "path".