SCNNode getBoundingBoxMin - objective-c

I have a node that I'm extracting from an SCNScene. I've got some information about it, but I'm confused about one thing: how the bounding box is calculated. Once the node is loaded, I position it at the vector (0, 0, 0) using:
[myNode setPosition: SCNVector3Make(0, 0, 0)];
However, the bounding box still reports a min.x of -1.
How can this be if I've just positioned it at 0? Additionally, it doesn't matter what vector values I give, it always has a min.x of -1, despite the node actually moving around the screen as expected.

It's hard to answer definitively without more information about what's in your scene and what you're doing with it, but it's probably one or both of these issues:
The bounding box tells you the extent of the node's content, not its position — e.g. if you have a cube 2 units wide positioned at (0, 0, 0), its bounding box min and max corners will be (-1, -1, -1) and (1, 1, 1).
Depending on what tricks you're using to move your node around, the node itself may not be changing position — SCNActions, CAAnimations, and physics all work on a separate copy of the node hierarchy. (Partly this is to support implicit animation — e.g. set the position of a node, and it'll appear to animate from the current position to the position you set, but you can still read back position and have it be the value you set it to.)
You get to this separate hierarchy with the node's presentationNode property — ask the presentation node for its bounding box and you'll see its position/extent as affected by actions/animations/physics and currently rendered.
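For illustration, a minimal sketch (reusing myNode from your snippet) contrasting the two bounding boxes:

SCNVector3 min, max;
// The model node's bounding box describes the extent of its geometry
// in the node's own coordinate space; for a 2-unit cube this is
// (-1, -1, -1) to (1, 1, 1) no matter where you position the node.
[myNode getBoundingBoxMin:&min max:&max];
// The presentation node reflects what is currently being rendered,
// with actions/animations/physics applied.
[myNode.presentationNode getBoundingBoxMin:&min max:&max];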

Line Profile Diagonal

When you make a line profile of all x-values or all y-values the extraction from each pixel is clear. But when you take a line profile along a diagonal, how does DM choose which pixels to use in the one dimensional readout?
Not really a scripting question, but I'm rather certain that it uses bilinear interpolation between the grid points along the drawn line. (And if perpendicular integration is enabled, the interpolated values are integrated across the width.) It's the same interpolation you would get for a "rotate" image.
In fact, you can think of it as a rotate-image (bi-linearly interpolated) with a 'cut-out' afterwards, potentially summed/projected onto the new X-axis.
Here is an example
Assume we have a 5 x 4 image, which gives the grid as shown below.
I'm drawing top-left corners to indicate the coordinate-system pixel convention used in DigitalMicrograph, where
(x/y)=(0/0) is the top-left corner of the image
Now extract a LineProfile from (1/1) to (4/3). I have highlighted the pixels for those coordinates.
Note that a line drawn from the corners seems to be shifted by half a pixel from what feels 'natural', but that is a consequence of the top-left-corner convention. I think this is why a LineProfile marker is shown shifted compared to, e.g., LineAnnotations.
In general, this top-left-corner convention makes schematics with 'pixels' seem counter-intuitive. It is easier to think of the image simply as a grid of values at points at the given coordinates than as square pixels.
Now the maths.
The exact profile has a length of sqrt(dX² + dY²) = sqrt(3² + 2²) = sqrt(13) ≈ 3.61.
As we can only have profiles with an integer number of channels, we actually extract a LineProfile of length = 4, i.e. we round up.
The angle of the profile is given by the arc-tangent of dY and dX: θ = atan(2/3) ≈ 33.7°.
So to extract the profile, we 'rotate' the grid by that angle - done by bilinear interpolation - and then extract the profile as a grid of size 4 x 1:
This means the 'values' in the profile are taken from four sample points spaced evenly along the drawn line, each of which is a bilinearly interpolated value from the four closest grid points of the original image.
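As a reminder, the standard bilinear interpolation formula (not DM-specific): for a sample at fractional position (x, y), with x0 = floor(x), y0 = floor(y), fx = x - x0 and fy = y - y0, the interpolated value is

v = (1-fx)(1-fy)·I(x0, y0) + fx(1-fy)·I(x0+1, y0) + (1-fx)fy·I(x0, y0+1) + fx·fy·I(x0+1, y0+1)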
In case the LineProfile is averaged over a certain width W, you do the same thing but:
extract a 2D grid of size L x W centered symmetrically over the line, i.e. the grid is shifted by (W-1)/2 perpendicular to the profile direction
sum the values along W
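Putting the width-1 extraction into code, here is a small Python/NumPy sketch of the procedure as I understand it (my own illustration, not DM's actual implementation; for a width W you would sample an L x W grid and sum along W):

import numpy as np

def bilinear(img, x, y):
    # Interpolate img at the fractional coordinate (x, y), using the
    # top-left-corner convention: img[row, column] = img[y, x].
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x1]
            + (1 - fx) * fy * img[y1, x0] + fx * fy * img[y1, x1])

def line_profile(img, x0, y0, x1, y1):
    dx, dy = x1 - x0, y1 - y0
    length = int(np.ceil(np.hypot(dx, dy)))  # e.g. ceil(sqrt(13)) = 4
    ux, uy = dx / length, dy / length        # one channel's step along the line
    return np.array([bilinear(img, x0 + i * ux, y0 + i * uy)
                     for i in range(length)])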

Cytoscape cyPosition() vs zoom fit

Using Cytoscape.js v2.1, I noticed something that may be a bug (in this version, or maybe in my code =p).
When inserting a node, I'm using this to get the node position from the tap event e:
position = {
  x: e.cyPosition.x,
  y: e.cyPosition.y
};
Also, my cytoscape initializer is setting layout fit as true:
$cy.cytoscape({
  minZoom: 0.1,
  maxZoom: 2.0,
  layout: {
    fit: true
  },
  (...)
And so the problems begin. Using this, on Windows 7, Chrome version 32.0.1700.107 or Firefox 27.0.1, the node is being positioned with a big offset (as shown here).
On the other hand, when I set layout fit as false, the node is correctly positioned. (as you can see in this link).
As it happens only when the initial zoom fit is true, I suppose this is a bug specific to this option.
Please read the documentation regarding rendered versus model position. I think you've confused the two: http://cytoscape.github.io/cytoscape.js/#notation/position
Model position must stay constant despite pan and zoom. Otherwise, positions would be inconsistent.
On the other hand, rendered position is derived from the model position, pan, and zoom. Naturally, model position and rendered position differ when zoom differs from identity (1) or pan differs from the origin (0, 0).
It doesn't look like you're using rendered position for on-screen placement.
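Concretely, per the docs linked above, the two are related (per coordinate) by renderedPosition = modelPosition × zoom + pan, so with fit: true the layout changes zoom and pan away from their defaults, and any confusion between the two spaces shows up as exactly the kind of offset you're seeing.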
Edit:
Don't mix and match rendered position with model position. If you get model position in your handler (e.cyPosition), then continue to use model position to add nodes et cetera. If you get rendered position (e.cyRenderedPosition), then use rendered position to add nodes et cetera.
Mixing the two will never give the desired behaviour unless you do some math to translate between them.
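For illustration, a minimal handler that stays entirely in model coordinates (a sketch; e.cyPosition and cy.add() are the 2.x APIs mentioned above):

cy.on('tap', function (e) {
  // e.cyPosition is a model position; positions passed to cy.add()
  // are also model positions, so no conversion is needed.
  cy.add({
    group: 'nodes',
    position: { x: e.cyPosition.x, y: e.cyPosition.y }
  });
});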

model space and homogeneous clip space

I am reading the book "Real-Time Rendering". In the third chapter it says: "vertex shader program transforms vertices from model space to homogeneous clip space". What is the meaning of homogeneous clip space, and what is the difference between the two spaces?
By now, you might have already figured this out. But here it goes anyway.
Model space is the space inhabited (and even defined) by your object. If you have a unit cube, and its coordinate system is aligned with its sides, then the point (0, 0, 0) corresponds to one of the cube's vertices in the model space. This might not be true in world space, where your entire scene is contained, and this cube can be anywhere in there.
A brief explanation can be found here.
So basically, different coordinate systems mean different spaces.
Now, your clip space is the volume that contains everything that will be visible upon rendering. Coordinates there are the homogeneous four-component vectors (x, y, z, w) of projective geometry (read this!) rather than plain affine points, which is why the space is called homogeneous; clipping is performed before the division by w. After that divide, everything visible falls inside a normalized cube, e.g. with the item closest to the camera at z = 0 and the farthest at z = 1 in Direct3D conventions (OpenGL uses -1 to 1).
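For concreteness, the usual transform chain looks like this (generic matrix names, not from the book): a vertex p_model is taken to clip space by

p_clip = Projection × View × Model × p_model

and the hardware then performs the perspective divide, p_ndc = p_clip.xyz / p_clip.w, to land in normalized device coordinates.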

Draw rotated text to parent coordinate system

I have a UIView which I'm drawing manually in its drawRect: method.
It is basically a coordinate system with 'Values' on the Y axis and 'Time' on the X axis.
Due to space issues, I want the timestamps to be vertical instead of horizontal.
For this purpose, I use:
CGContextSaveGState(ctx); //Saves the current graphic context state
CGContextRotateCTM(ctx, M_PI_2); //Rotates the context by 90° clockwise
strPos = CGContextConvertPointToUserSpace(ctx, strPos); //SHOULD convert to user-space coordinates
[str drawAtPoint:strPos withFont:fnt]; //Draws the text to the rotated CTM
CGContextRestoreGState(ctx); //Restores the CTM to the previous state.
ctx (CGContextRef), strPos (CGPoint) and str (NSString) are variables that have been initialized properly and correctly for 'horizontal text', with a width equal to the text height.
While this code works flawlessly on the iPhone 3, it gives me a complete mess on the iPhone 4 (Retina), because the CGContextConvertPointToUserSpace function produces completely different results, even though the coordinate system of the iPhone is supposed to remain the same.
I also tried using CGAffineTransform, but with the same results.
To summarize my question: How do I draw a text to a calculated position in the parent coordinate system (0, 0 being top left)?
After studying the Apple docs regarding Quartz 2D once more, I came to realize that the rotation by Pi/2 moves all my writing off screen to the left.
I can make the writing appear in a vertical line by translating the CTM by +height.
I'll keep trying, but would still be happy to get an answer.
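For the record, here is the translate-then-rotate variant that follows from that observation (a sketch using the same variables as above):

CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, strPos.x, strPos.y); //Moves the origin to the target point first...
CGContextRotateCTM(ctx, M_PI_2);                //...then rotates, so the text stays on screen
[str drawAtPoint:CGPointZero withFont:fnt];     //Draws at the translated, rotated origin
CGContextRestoreGState(ctx);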
Edit: Thanks to lawicko's heads-up I was able to fix the problem. See Answer for details.
I would like to thank lawicko for pointing this out.
During my tests I made two mistakes...but he is of course correct. Using CGContextShowTextAtPoint is the simplest solution, since it doesn't require rotating the entire CTM.
Again, THANK you.
Now, for the actual answer to my question.
To draw a rotated text at position x/y, the following code works for me.
CGAffineTransform rot = CGAffineTransformMakeRotation(M_PI_2); //Creates the rotation
CGContextSelectFont(ctx, "TrebuchetMS", 10, kCGEncodingMacRoman); //Selects the font
CGContextSetTextMatrix(ctx, CGAffineTransformScale(rot, 1, -1)); //Mirrors the rotated text, so it will be displayed correctly.
CGContextShowTextAtPoint(ctx, strPos.x, strPos.y, TS, 5); //Draws the text
ctx is the CGContext, strPos the desired position in the parent coordinate system, TS a char array.
Again, thank you lawicko.
I probably would've searched forever if not for your suggestion.
Maybe this answer will help someone else, who comes across the same problem.

Calculating collision for a moving circle, without overlapping the boundaries

Let's say I have a circle bouncing around inside a rectangular area. At some point this circle will collide with one of the surfaces of the rectangle and reflect back. The usual way I'd do this would be to let the circle overlap that boundary and then reflect the velocity vector. The fact that the circle actually overlaps the boundary isn't usually a problem, nor really noticeable at low velocity. At high velocity it becomes quite clear that the circle is doing something it shouldn't.
What I'd like to do is to programmatically take reflection into account and place the circle at its proper position before displaying it on the screen. This means that I have to calculate the point where it hits the boundary between its current position and its future position, rather than calculating its new position and then checking if it has hit the boundary.
This is a little bit more complicated than the usual circle/rectangle collision problem. I have a vague idea of how I should do it: basically create a bounding rectangle between the current position and the new position, which brings up a slew of problems of its own (since the rectangle is rotated according to the direction of the circle's velocity). However, I'm thinking that this is a common problem, and that a common solution already exists.
Is there a common solution to this kind of problem? Perhaps some basic theories which I should look into?
Since you just have a circle and a rectangle, it's actually pretty simple. A circle of radius r bouncing around inside a rectangle of dimensions w, h can be treated the same as a point p at the circle's center, bouncing inside a smaller rectangle inset by r on every side, i.e. with x confined to [r, w-r] and y to [r, h-r].
Now position update becomes simple. Given your point at position x, y and a per-frame velocity of dx, dy, the updated position is x+dx, y+dy - except when you cross a boundary. If, say, you end up with x+dx > W (letting W = w-r), then you do the following:
crossover = (x+dx) - W // this is how far "past" the edge your ball went
x = W - crossover // so you bring it back the same amount on the correct side
dx = -dx // and flip the velocity to the opposite direction
And similarly for y. You'll have to set up a similar (reflected) check for the opposite boundaries in each dimension.
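A compact sketch of that per-axis update (a hypothetical helper, Python for brevity; lo and hi are the inset bounds, r and w - r for the x axis):

def step_axis(p, v, lo, hi):
    # Advance one coordinate by one frame's velocity, reflecting off [lo, hi].
    p += v
    if p > hi:                # crossed the far edge
        p = hi - (p - hi)     # come back by the overshoot
        v = -v                # flip the velocity
    elif p < lo:              # the mirrored check for the near edge
        p = lo + (lo - p)
        v = -v
    return p, v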
At each step, you can calculate the projected/expected position of the circle for the next frame.
If this lies outside the rectangle, then you can use the distance from the old circle position to the rectangle's edge, together with the amount "past" the edge that the next position lies (the interpenetration), to linearly interpolate and determine the precise time when the circle "hits" the rectangle edge.
For example, if the circle is 10 pixels away from the rectangle's edge and is predicted to move to 5 pixels beyond it, you know that for 2/3rds of the timestep (10/15ths) it moves on its original path, then is reflected and continues on its new path for the remaining 1/3rd of the timestep (5/15ths). By calculating these two parts of the motion and "adding" the translations together, you can find the correct new position.
(Of course, it gets more complicated if you hit near a corner, as there may be several collisions during the timestep, off different edges. And if you have more than one circle moving, things get a lot more complex. But that's where you can start for the case you've asked about)
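To make that interpolation explicit, a one-axis sketch (hypothetical names; W is the reduced bound from the previous answer):

def reflect_step(x, dx, W):
    # Split the frame at the exact moment of impact with the wall at x = W.
    if x + dx <= W:
        return x + dx, dx        # no collision this frame
    t_hit = (W - x) / dx         # fraction of the frame before impact (10/15 above)
    rest = (1 - t_hit) * dx      # distance still to travel after the bounce
    return W - rest, -dx         # travel back from the wall with flipped velocity

Note that x + dx > W together with x <= W implies dx > 0, so the division is safe.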
Reflection across a rectangular boundary is incredibly simple. Just take the amount by which the object passed the boundary and subtract it from the boundary position. If, for example, the position without reflecting would be (-0.8, -0.2) and the upper-left corner is at (0, 0), the reflected position would be (0.8, 0.2).