Are the "rotation" and "translation" values in .gltf files the extrinsic camera parameters? - Blender

I have exported a 3D scene in Blender as .gltf and I am reading the data in my program.
For the camera I have the following values in the .gltf file:
{
  "camera" : 0,
  "name" : "Camera",
  "rotation" : [
    0.331510511487034,
    -0.018635762442412376,
    0.0052512469701468945,
    0.9450923238951721
  ],
  "translation" : [
    0.25607955169677734,
    1.6810789010681152,
    0.129119189865864
  ]
},
I think the values here for "rotation" and "translation" are the extrinsic camera parameters. The translation vector (x, y, z) makes sense to me, but I don't understand why there are only 4 floats for the camera rotation. If this were a rotation matrix there should be more values, or am I missing something here? Thanks in advance!

When rotation is specified by itself, it's a quaternion, not a matrix; that's why you're seeing only 4 values there. In glTF the components are stored in (x, y, z, w) order, with the scalar last.
For reference, see: https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#transformations
The glTF camera object looks along -Z in local (node transformed) space, with +Y up.
See: https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#cameras
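For completeness, a minimal NumPy sketch of expanding that quaternion into a rotation matrix and combining it with the translation (assuming the glTF (x, y, z, w) component order and a unit-length quaternion; the variable names are just illustrative):

import numpy as np

def quat_to_matrix(q):
    # standard unit-quaternion to 3x3 rotation matrix conversion
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

rotation = [0.331510511487034, -0.018635762442412376,
            0.0052512469701468945, 0.9450923238951721]
translation = [0.25607955169677734, 1.6810789010681152, 0.129119189865864]

M = np.eye(4)
M[:3, :3] = quat_to_matrix(rotation)
M[:3, 3] = translation
print(M)  # the camera node's local-to-world transform

Note that this is the node's local-to-world transform; the extrinsic (view) matrix would be its inverse, assuming the camera node has no parent transforms.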

Related

Method ViroARScene.getCameraOrientationAsync() returns strange values in rotation array

I'm developing a PoC with the ViroReact lib, but I'm getting strange values for the camera rotation.
Environment:
Device: Android 10. Xiaomi Mi 9
ViroReact 2.20.2
The ViroARScene.getCameraOrientationAsync() method returns unexpected values in the rotation array when I rotate the device around the y-axis, trying to keep the x and z axes fixed.
Specifically, when the y-axis value reaches -90º, the x/z values change to +/-180º, and from this point the y-axis values move back towards 0; for instance, instead of -135º the y-axis value is -45º, with the x/z values at +/-180º. In other words, the y-axis value NEVER has an absolute value over 90.
Some examples (values have an error margin of about 6 degrees):
Rotation expected: [0, -90, 0]. Returned rotation: [+/-180, -90, +/-180]
Rotation expected: [0, -135, 0]. Returned rotation: [+/-180, -45, +/-180]
Rotation expected: [0, -180, 0]. Returned rotation: [+/-180, 0, +/-180]
Questions:
Why is the absolute value of the y-axis rotation never greater than 90?
Why do the x/z values change to +/-180º when I reach some point (+/-90º on the y-axis), if I'm just rotating the device around the y-axis?
Is this the expected behavior? If so, could anyone explain these values (please)?
The code to retrieve the values:
<ViroARScene onTrackingUpdated={this._onInitialized} anchorDetectionTypes={"PlanesVertical"}>
  ...
</ViroARScene>

_onInitialized(state, reason) {
  if (state === ViroConstants.TRACKING_NORMAL && reason === ViroConstants.TRACKING_REASON_NONE) {
    console.log('Tracking initiated');
    this._scene.getCameraOrientationAsync().then(
      (orientation) => {
        console.log('Cam rot:', round(orientation.rotation));
      });
  }
}
I've also created a GitHub issue with some mockups to show the rotation values expected and returned: https://github.com/ViroCommunity/viro/issues/13
I think what you're coming up against might be gimbal lock, which is the reason a lot of 3D rotations are expressed as quaternions instead of the xyz (aka Euler, pronounced "oiler") angles you are using now. It's probably expected behaviour for your system.
I'm not familiar with your platform, but it might have built-in helpers or alternative methods you can use to work with quaternions instead. If not, a solution might be to install a library (or write some code) that translates between Euler angles and quaternions, so that your calculations make more sense if you are going to be spending time around those angles.
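For illustration, a rough SciPy-based sketch of converting between Euler angles and a quaternion (the 'xyz' axis order and degree units here are assumptions; they would have to match whatever convention Viro uses internally):

# Convert an Euler rotation to a quaternion and back. The round trip is where
# the "equivalent but different-looking" triples appear, because the middle
# angle is restricted to [-90, 90].
from scipy.spatial.transform import Rotation as R

q = R.from_euler('xyz', [0, -135, 0], degrees=True).as_quat()  # -> [x, y, z, w]
eul = R.from_quat(q).as_euler('xyz', degrees=True)
print(q)
print(eul)  # comes back as an equivalent triple such as [180, -45, 180]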

Vega Edge bundling (directed) - vary thickness of each edge to show strength of connection

I am looking to use an edge bundle visualisation as per:
https://vega.github.io/editor/#/examples/vega/edge-bundling
However, in this example, all the directed edges are of uniform thickness.
Instead, I need to generate edges of varying thickness to illustrate the strength of the relationship between nodes.
I envisaged passing that thickness into the model such that the edges defined by the JSON at
https://github.com/vega/vega-datasets/blob/master/data/flare-dependencies.json
would be adjusted so an edge currently defined as:
{
  "source": 190,
  "target": 4
},
would instead be defined as say:
{
  "source": 190,
  "target": 4,
  "edgeWeight": 23
},
Is this possible? I did try experimenting by passing two simplified JSON datasets using value but couldn't figure out how to feed in that "edgeWeight" variable to the line definition in 'marks'.
Do you know how I might do that?
Regards,
Simon
I was provided an answer to this as follows:
First add a formula to append the size value from the flare.json dataset as a field 'strokeWidth' by writing this at line 90 in the example:
{
  "type": "formula",
  "expr": "datum.size/10000",
  "as": "strokeWidth"
},
Next, in the marks, set the strokeWidth value for each edge in the edge bundle to the associated 'strokeWidth' field now created, by writing this at what becomes line 176 after the above change:
"strokeWidth": {"field": "strokeWidth"}
After this, the diagram should render with edges of a thickness defined by that 'size' variable.
Note that in this example, I had to scale the 'size' value from the original dataset down by a factor of 10,000 to keep the lines at a reasonable thickness.
In practice, I would scale the data prior to presenting it to Vega.
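For what it's worth, a minimal sketch of that pre-scaling step outside Vega (the file names, the edgeWeight field from the question, and the divisor are all illustrative assumptions):

import json

# Attach a pre-scaled stroke width to each edge before handing the data to
# Vega, instead of dividing inside the spec with a formula transform.
with open("flare-dependencies.json") as f:
    edges = json.load(f)

for edge in edges:
    # assumed weight field; pick a divisor that yields sensible pixel widths
    edge["strokeWidth"] = edge.get("edgeWeight", 1) / 10

with open("flare-dependencies-weighted.json", "w") as f:
    json.dump(edges, f, indent=2)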

Detection of multiple objects (using OpenCV)

I want to find multiple objects in a scene (the objects look the same, but may differ in scale and rotation, and I don't know in advance what the object to be detected will be). I have implemented the following idea, based on the feature detectors in OpenCV, which works:
detect and compute keypoints from the object
for i < max_objects_todetect; i++
  1. detect and compute keypoints from the whole scene
  2. match scene and object keypoints with the FLANN matcher
  3. use findHomography/RANSAC to compute the bounding box of the first object (the object which has the most keypoints in the scene with multiple objects)
  4. set the pixels in the scene which are within the computed bounding box to 0 -> in the next loop cycle there are no keypoints left to detect for this object
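Here is a rough Python/OpenCV sketch of that loop (the file names are illustrative, and SIFT merely stands in for whichever detector is used; the question's own tests use SURF and ORB):

import cv2
import numpy as np

obj = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

detector = cv2.SIFT_create()
matcher = cv2.FlannBasedMatcher()

obj_kp, obj_desc = detector.detectAndCompute(obj, None)

max_objects_to_detect = 5
for _ in range(max_objects_to_detect):
    # 1. recompute scene keypoints every pass (the expensive step)
    scene_kp, scene_desc = detector.detectAndCompute(scene, None)
    # 2. match object descriptors against the scene
    matches = matcher.knnMatch(obj_desc, scene_desc, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe ratio test
    if len(good) < 4:
        break
    src = np.float32([obj_kp[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([scene_kp[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # 3. homography with RANSAC gives the bounding box of the strongest instance
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        break
    h, w = obj.shape
    box = cv2.perspectiveTransform(
        np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2), H)
    # 4. blank out this instance so its keypoints vanish in the next pass
    cv2.fillConvexPoly(scene, box.astype(np.int32), 0)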
The problem with this implementation is that I need to compute the keypoints for the whole scene multiple times, which takes a lot of computing time (250 ms). Does anyone have a better idea for detecting multiple objects?
Thanks Drian
Hello everyone, I tried ORB, which is indeed faster, and I will try AKAZE.
While testing ORB I have encountered the following problem:
While changing the size of my picture doesn't affect the keypoints detected by SURF (it finds the same keypoints in the small and in the big picture, on the right in the linked image), it does affect the keypoints detected by ORB. In the small picture I'm not able to find these keypoints. I tried to experiment with the ORB parameters but couldn't make it work.
Picture: http://www.fotos-hochladen.net/view/bildermaf6d3zt.png
SURF:
cv::Ptr<cv::xfeatures2d::SURF> detector = cv::xfeatures2d::SURF::create(100);
ORB:
// args: nfeatures, scaleFactor, nlevels, edgeThreshold, firstLevel, WTA_K, scoreType, patchSize, fastThreshold
cv::Ptr<cv::ORB> detector = cv::ORB::create(1500, 1.05f, 16, 31, 0, 2, cv::ORB::HARRIS_SCORE, 2, 10);
Do you know if, and how, it's possible to detect the same keypoints independently of the size of the pictures?
Greetings, Drian
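For reference, a quick back-of-the-envelope on the pyramid parameters in that ORB::create call (this only illustrates what the parameters mean and assumes the usual reading that the pyramid spans roughly scaleFactor^(nlevels - 1) in scale; it is not meant as a definitive explanation of the behaviour):

# Illustrative arithmetic only: approximate scale range covered by an ORB
# pyramid with the parameters from the snippet above versus OpenCV's defaults.
print(1.05 ** 15)  # ~2.08x with scaleFactor=1.05, nlevels=16
print(1.2 ** 7)    # ~3.58x with the default scaleFactor=1.2, nlevels=8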

Cytoscape cyPosition() vs zoom fit

Using Cytoscape.js v2.1, I noticed something that may be a bug (in this version, or maybe in my code =p).
When inserting a node, I'm using this to get node position from the tap event e:
position = {
  x: e.cyPosition.x,
  y: e.cyPosition.y
};
Also, my Cytoscape initializer is setting layout fit as true:
$cy.cytoscape({
  minZoom: 0.1,
  maxZoom: 2.0,
  layout: {
    fit: true
  },
  (...)
And so the problems begin. Using this, on Windows 7, Chrome version 32.0.1700.107 or Firefox 27.0.1, the node is being positioned with a big offset (as shown here).
On the other hand, when I set layout fit as false, the node is correctly positioned (as you can see in this link).
As it's happening only when the initial zoom fit is true, I suppose this is a bug specific to this option.
Please read the documentation regarding rendered versus model position. I think you've confused the two: http://cytoscape.github.io/cytoscape.js/#notation/position
Model position must stay constant despite pan and zoom. Otherwise, positions would be inconsistent.
On the other hand, rendered position is derived from the model position, pan, and zoom. Naturally, model position and rendered position differ when zoom differs from identity (1) or pan differs from the origin (0, 0).
It doesn't look like you're using rendered position for on-screen placement.
Edit:
Don't mix and match rendered position with model position. If you get model position in your handler (e.cyPosition), then continue to use model position to add nodes et cetera. If you get rendered position (e.cyRenderedPosition), then use rendered position to add nodes et cetera.
Mixing the two will never give desired behaviour unless you do some math to translate them.
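To make the relationship concrete (this is just the usual viewport math, not Cytoscape.js API code; the names are illustrative):

# Illustrative only: how a model position and its rendered (on-screen)
# position typically relate under a pan/zoom viewport.
def model_to_rendered(model, zoom, pan):
    return {"x": model["x"] * zoom + pan["x"],
            "y": model["y"] * zoom + pan["y"]}

def rendered_to_model(rendered, zoom, pan):
    return {"x": (rendered["x"] - pan["x"]) / zoom,
            "y": (rendered["y"] - pan["y"]) / zoom}

# With zoom = 1 and pan = (0, 0) the two coincide; after a "fit" layout the
# zoom and pan generally differ from that, so the two positions diverge.
print(model_to_rendered({"x": 100, "y": 50}, zoom=0.5, pan={"x": 20, "y": 10}))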

Photoshop making isometric?

Hi everyone. I'm trying to learn how to make isometric pictures in Photoshop.
I've found an example on the web, just as an image. In this image, the left square is my tile. I should transform it into the diamond on the right.
But I'm wondering: how can I do this most practically?
I've rotated it by 45°.
What should I do now?
I've used Distort to do the same, but I don't know which values I should enter.
X: ? Y: ? W: ? H: ? ANGLE: ?
My square is 200px × 200px. Which values must I enter to make it a diamond like in the picture (is there any math formula?)
So:
Rotate by 45°
Resize the image: W = 100%, H = 58%
or use this script
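To see where those numbers come from, here is a small worked sketch of the rotate-then-squash recipe applied to a 200 px tile (assuming 58% was chosen as roughly 1/sqrt(3), which gives the classic 30° isometric edges; 2:1 game tiles would use 50% instead):

# Worked example: rotate a 200 x 200 square by 45 degrees, then scale the
# height to 58%, and print the resulting diamond corners.
import math

corners = [(0, 0), (200, 0), (200, 200), (0, 200)]
angle = math.radians(45)
squash = 0.58

def to_iso(x, y):
    rx = x * math.cos(angle) - y * math.sin(angle)  # rotate about the origin
    ry = x * math.sin(angle) + y * math.cos(angle)
    return (rx, ry * squash)                        # then squash vertically

for corner in corners:
    print(to_iso(*corner))
# The square becomes a diamond roughly 283 px wide and 164 px tall.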
Maybe you can use the transform x' = x and y' = x + y/2. Have a look at the spiral search: Optimizing search through large list of lat/long coords to find match.