Rotation in a Red Black tree

I am trying to figure out how rotation is done in a Red Black tree while it is being rebalanced. I understand why a rotation occurs, but I don't get how it is carried out, or which intermediate rotations (LL, RR, LR, RL) are performed to reach the result. I would also appreciate a rule of thumb for when to apply each of these rotations. Here is the rotation:
Rr(2) is the case where the black-node deficiency is in the right child of "py", i.e. "y",
and the grandchildren of "v", i.e. "b" and "x", are two red nodes.

You can get a better understanding of rotations in Red Black Trees by breaking the operation down into different rotations. There are only 3 basic operations for a Left Leaning Red Black BST. The operations are performed in the order listed in this slide.
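For reference, here is a minimal sketch of those three operations (rotate left, rotate right, flip colors) in Python; the node layout and names are my own illustration, not taken from the slide:

RED, BLACK = True, False

class Node:
    def __init__(self, key, color=RED):
        self.key = key
        self.color = color   # color of the link coming from the parent
        self.left = None
        self.right = None

def rotate_left(h):
    # Turn a right-leaning red link into a left-leaning one.
    x = h.right
    h.right = x.left
    x.left = h
    x.color = h.color
    h.color = RED
    return x

def rotate_right(h):
    # Turn a left-leaning red link into a right-leaning one.
    x = h.left
    h.left = x.right
    x.right = h
    x.color = h.color
    h.color = RED
    return x

def flip_colors(h):
    # Split a temporary 4-node: both children are red, so push the red link up.
    h.color = RED
    h.left.color = BLACK
    h.right.color = BLACK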
Moreover, the Red Black Tree that you have shown is not correct, as it violates a condition for a Red Black Tree, i.e. every path from the root to a leaf must have an equal number of black edges. But in your final tree, the path from x to c has 2 black edges, while the path from x to a has 1 black edge. I recommend reading more about self-balancing BSTs and Red-Black BSTs from here.
PS. I do not own the slide. It has been taken from Robert Sedgewick's course on Algorithms on Coursera.

Related

How to get non-intersection region of CGRect with multiple intersects?

I have one main (red) rectangle and several other rectangles, which intersect main rectangle randomly.
How can I get non-intersection area of main rectangle (red area)?
This depends very much on what you mean by "have" and "get". What are the input and output formats? Do you want a sequence of points, or just the area? Is this for a general solution, or just this simplified case?
For a fast, general solution, I highly recommend the BOOST polygon library (disclosure: I was one reviewer for the BOOST conference presentation). This handles arbitrary polygons, including holes, and does a lovely job of all the basic polygon operations.
A simple polygon is a sequence of points. You can make sets of polygons. For this case, declare all of your polygons; put the red rectangle into set A, the gray ones into set B. Then A-B returns the desired displayed polygon.
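To illustrate the same A - B set-difference idea in code, here is a small sketch using Python's shapely library (my substitution; the answer above recommends Boost, and the coordinates below are made up):

from shapely.geometry import box
from shapely.ops import unary_union

red = box(0, 0, 100, 60)                                # the main (red) rectangle
grays = [box(-10, 20, 30, 50), box(70, -10, 120, 25)]   # rectangles intersecting it
remaining = red.difference(unary_union(grays))          # red area minus all overlaps
print(remaining.area)                                   # just the area...
print(list(remaining.exterior.coords))                  # ...or the outline points (a single polygon here)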

Netlogo - I'd like to set the size of a turtle to the size of the patch it's standing on

I'd like to set the size of a turtle to the size of the patch it's standing on.
Even better, I need turtles which are as big as 4 or 16 patches.
If, for example, I have a square world of 16x16 patches, I'd like to have turtles that can be 1x1, 2x2, 4x4, etc.
and the turtle should overlap the patches perfectly: it might cover 1 patch (the 1x1 case), 4 (the 2x2 case), etc.
About setting the size of the turtle equal to the size of the patch for perfect overlapping, I am trying this code:
hatch-turtle 1 [set size [size] of patch-here ]
but it gives me the error:
A patch can't access a turtle variable without specifying which turtle.
Maybe try some variation of:
ask turtles [ set size patch-size ]
perhaps scaling by a multiplier as needed. Note that size is a per-turtle variable, but patch-size is a global reporter, because all patches are always the same size in pixels.
Note that size is measured in patches, while patch-size is measured in pixels.
I really don't understand at all what you're trying to do here, but the above is legal NetLogo code, anyway.
A turtle's size is measured in units of patches, so if you want your turtles to be the same size as the patches they are standing on, that's:
ask turtles [ set size 1 ]
but 1 is the default size, so in order to get this behavior, you actually don't need to do anything at all.
This answer comes years after the question was asked, but I leave it here hoping that it helps others who may encounter the same problem (as I did). Below I first clarify the problem and then offer a solution.
Clarification: It is implied by the problem that OP has defined a square shape for the turtles. The default size of square turtles in NetLogo is 1, which means that by default a square turtle should completely fill a patch. However, OP still observed blank space between square turtles that are placed next to each other. The aim of this answer is to remove that blank space for square turtles of size 1.
Solution: To solve this problem, note that the default square shape of turtles in NetLogo is made up of a colored inner area and a thick colorless border. The blank space that the OP observed between the turtles was in fact composed of the colorless borders of square shapes. In order to produce a figure with colored squares placed immediately adjacent to each other (that is, without any apparent space between them), it suffices to define a new square shape with no border. This new square shape should be defined such that the inner area of the square fills the entire patch. This can be done using the Turtle Shapes Editor from the Tools menu: find the square shape, create a duplicate of it, and modify the new shape in the graphical editor. To modify the shape, click on its top-left corner and drag that corner to the top-left corner of the graphical editor window. Then do the same with the bottom-right corner.

Detect if a quad is actually visible 2D in OpenGL

I currently have 16 tiles, with individual images that make up 1 big map. I pan by transforming right at the beginning before any actual drawing with this:
GL.Translate(G_.Pan(0), G_.Pan(1), 0)
Then I zoom by doing this:
GL.Ortho(-G_.Size * 1.5 ^ G_.ZoomFactor, G_.Size * 1.5 ^ G_.ZoomFactor, G_.Size * 1.5 ^ G_.ZoomFactor, -G_.Size * 1.5 ^ G_.ZoomFactor, -1, 1)
G_.Size is a constant that is set once at startup depending on parameters; the zoom factor ranges from -1 to -13.
What I want to be able to do is check whether one of the 16 tiles is within the visible area, so I can stop drawing tiles that are not on screen. I had found some quite complex methods for doing it, but they were for 3D and seemed like a lot of work for something that should be simple. I would have thought it would be something like just checking if a point is within the bounds of the visible area, but I have no idea how to get the visible area.
Andon M Coleman already suggested that you implement projection volume culling (a generalized form of frustum culling). This is, however, outside the scope of OpenGL. You must understand that OpenGL is not a "magical" scene graph that does scene management and the like. It's merely a drawing API; what it does is put shaded, textured points, lines or triangles on the screen, and that's it. The rest is up to you, or the libraries you choose to implement it.
In the case of projection volume culling you're testing whether a given piece of geometry intersects the volume defined by the planes that form its borders. Your projection matrix defines those planes; specifically, it transforms view-space vertex positions into the [-1;1]×[-1;1]×[-1;1] range of perspective-divided clip space. So by inverting the projection matrix and unprojecting the corners of the [-1;1]×[-1;1]×[-1;1] cube through it, you determine the limiting planes of the projection volume in view space.
You then use that information to intersect your quads with the volume to see if they cross it, i.e. are in any way visible.
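For the simple 2D orthographic setup in the question this reduces to a rectangle-overlap test, since the visible region is an axis-aligned rectangle in world space. A rough Python sketch (half_extent stands in for G_.Size * 1.5 ^ ZoomFactor and pan for G_.Pan; the sign convention is an assumption):

def visible_rect(pan_x, pan_y, half_extent):
    # GL.Translate moves the world by +pan, so the camera effectively looks
    # at a window centred on -pan (flip the signs if your convention differs).
    cx, cy = -pan_x, -pan_y
    return (cx - half_extent, cy - half_extent,
            cx + half_extent, cy + half_extent)

def tile_visible(tile_min_x, tile_min_y, tile_max_x, tile_max_y, view):
    vx0, vy0, vx1, vy1 = view
    # Two axis-aligned rectangles overlap unless one lies entirely to one side of the other.
    return not (tile_max_x < vx0 or tile_min_x > vx1 or
                tile_max_y < vy0 or tile_min_y > vy1)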

Maya Mel project lookAt target into place after motion capture import

I have a facial animation rig which I am driving in two different manners: it has an artist UI in the Maya viewports as is common for interactive animating, and I've connected it up with the FaceShift markerless motion capture system.
I envision a workflow where a performance is captured, imported into Maya, sample data is smoothed and reduced, and then an animator takes over for finishing.
Our face rig has the eye gaze controlled by a mini-hierarchy of three objects (global lookAtTarget and a left and right eye offset).
Because the eye gazes are controlled by this LookAt setup, they need to be disabled when eye-gaze-including motion capture data is imported.
After the motion capture data is imported, the eye gazes are now set with motion capture rotations.
I am seeking a short Mel routine that does the following: it marches through the motion capture eye rotation samples, back-calculates and sets each eye's LookAt target position, and averages the two to get the global LookAt target's position.
After that Mel routine is run, I can turn the eyes' LookAt constraints back on, the eye gaze control returns to the rig, nothing has changed visually, and the animator will have their eye UI working in the Maya viewport again.
I'm thinking this should be common logic for anyone doing facial mocap. Anyone got anything like this already?
How good is the eye tracking in the mocap? There may be issues if the targets are far away: depending on the sampling of the data, you may get 'crazy eyes' which seem not to converge, or jumpy data. If that's the case you may need to junk the eye data altogether, or smooth it heavily before retargeting.
To find the convergence of the two eyes, you can try this (like @julian I'm using locators, etc., since doing all the math in MEL would be irritating).
1) constrain a locator to one eye so that one axis is oriented along the look vector and the other is in the plane of the second eye. Let's say the eye aims down Z and the second eye is in the XZ plane
2) make a second locator, parented to the first, and constrained to the second eye in the same way: pointing down Z, with the first eye in the XZ plane
3) the local Y rotation of the second locator is the angle of convergence between the two eyes.
4) Figure out the focal distance using the law of sines and a cheat for the offset of the second eye relative to the first. The local X distance of the second eye is one leg of a right triangle. The angles of the triangle are the convergence angle from #3 and 90 minus the convergence angle. In other words:
focal distance / sin(90 - eye_locator2.ry) = eye_locator2.tx / sin(eye_locator2.ry)
so algebraically:
focal distance = eye_locator2.tx * sin(90 - eye_locator2.ry) / sin( eye_locator2.ry)
You'll have to subtract the local Z of eye2, since the triangle we're solving is shifted backwards or forwards by that much:
focal distance = (eye_locator2.tx * sin(90 - eye_locator2.ry) / sin( eye_locator2.ry)) - eye_locator2.tz
5) Position the target along the local Z direction of the eye locator at the distance derived above. It sounds like the actual control uses two look targets that can be moved apart to avoid cross-eyes - it's kind of a judgement call to know how much to use that vs the actual convergence distance. For lots of real-world data the convergence may be way too far away for animator convenience: a target 30 meters away is pretty impractical to work with, but might be simulated with a target 10 meters away with a big spread. Unfortunately there's no empirical answer for that one - it's a judgement call.
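As a quick numeric check of the law-of-sines step, here is the same formula in plain Python (the attribute values are stand-ins for eye_locator2.tx, .ry and .tz queried from the scene):

import math

def focal_distance(tx, ry_degrees, tz):
    ry = math.radians(ry_degrees)
    return tx * math.sin(math.radians(90) - ry) / math.sin(ry) - tz

# e.g. a 2-unit offset between the eyes converging at 3 degrees, no Z shift:
print(focal_distance(2.0, 3.0, 0.0))   # roughly 38.2 units in front of the eyes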
I don't have this script but it would be fairly simple. Can you provide an example maya scene? You don't need any math. Here's how you could go about it:
Assume the axis pointing through the pupil is positive X, and focal length is 10 units.
Create 2 locators. Parent one to each eye. Set their translations to (10, 0, 0).
Create 2 more locators in worldspace. Point constrain them to the others.
Create a plusMinusAverage node.
Connect the worldspace locators' translations to plusMinusAverage1 inputs 1 and 2
Create another locator (the lookAt)
Connect the output of plusMinusAverage1 to the translation of the lookAt locator.
Bake the translation of the lookAt locator.
Delete the other 4 locators.
Aim constrain the eyes' X axes to the lookAt.
This can all be done in a script using commands: spaceLocator, createNode, connectAttr, setAttr, bakeSimulation, pointConstraint, aimConstraint, delete.
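Roughly, in Python (maya.cmds) those steps might look like the sketch below; the eye transform names, the bake range, and the aim axis are placeholders for whatever your rig actually uses:

import maya.cmds as cmds

eyes = ['eye_L', 'eye_R']        # hypothetical eye transforms
start, end = 1, 100              # hypothetical mocap frame range

# Locators parented to each eye, pushed 10 units down the look axis (+X here).
offsets = []
for eye in eyes:
    loc = cmds.spaceLocator(name=eye + '_offset')[0]
    cmds.parent(loc, eye)
    cmds.setAttr(loc + '.translate', 10, 0, 0, type='double3')
    offsets.append(loc)

# World-space locators point-constrained to the parented ones.
world_locs = []
for loc in offsets:
    wloc = cmds.spaceLocator(name=loc + '_ws')[0]
    cmds.pointConstraint(loc, wloc)
    world_locs.append(wloc)

# Average the two world positions into a lookAt locator.
avg = cmds.createNode('plusMinusAverage', name='eyeGaze_avg')
cmds.setAttr(avg + '.operation', 3)   # 3 = average
cmds.connectAttr(world_locs[0] + '.translate', avg + '.input3D[0]')
cmds.connectAttr(world_locs[1] + '.translate', avg + '.input3D[1]')
look_at = cmds.spaceLocator(name='lookAt_target')[0]
cmds.connectAttr(avg + '.output3D', look_at + '.translate')

# Bake the lookAt motion, clean up the helpers, then aim the eyes at it.
cmds.bakeResults(look_at, time=(start, end), simulation=True)
cmds.delete(offsets + world_locs + [avg])
for eye in eyes:
    cmds.aimConstraint(look_at, eye, aimVector=(1, 0, 0))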
The solution ended up being quite simple. The situation is motion capture data on the rotation nodes of the eyes, while simultaneously wanting (non-technical) animator over-ride control for the eye gaze. Within Maya, constraints have a weight factor: a parametric 0-1 value controlling the influence of the constraint. The solution is for the animator to simply key the eyes' lookAt constraint weight to 1 when they want control over the eye gaze, key those same weights to 0 when they want the motion captured eye gaze, and use a smooth transition of those constraint weights to mask the transition. This is better than my original idea described above, because the original motion capture data remains in place, available as reference, allowing the animator to switch back and forth if need be.
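If it helps, keying those constraint weights can itself be scripted; a small maya.cmds sketch (the constraint node name and frame numbers are hypothetical, and the weight attribute is queried rather than hard-coded since its name depends on the target):

import maya.cmds as cmds

constraint = 'eye_L_aimConstraint1'   # hypothetical constraint node
weight_attr = cmds.aimConstraint(constraint, query=True, weightAliasList=True)[0]
# Mocap drives the eye up to frame 100; the animator takes over by frame 110.
cmds.setKeyframe(constraint, attribute=weight_attr, time=100, value=0)
cmds.setKeyframe(constraint, attribute=weight_attr, time=110, value=1)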

Calculating collision for a moving circle, without overlapping the boundaries

Let's say I have a circle bouncing around inside a rectangular area. At some point this circle will collide with one of the surfaces of the rectangle and reflect back. The usual way I'd do this would be to let the circle overlap that boundary and then reflect the velocity vector. The fact that the circle actually overlaps the boundary isn't usually a problem, nor really noticeable at low velocity. At high velocity it becomes quite clear that the circle is doing something it shouldn't.
What I'd like to do is to programmatically take reflection into account and place the circle at its proper position before displaying it on the screen. This means that I have to calculate the point where it hits the boundary between its current position and its future position -- rather than calculating its new position and then checking if it has hit the boundary.
This is a little bit more complicated than the usual circle/rectangle collision problem. I have a vague idea of how I should do it -- basically create a bounding rectangle between the current position and the new position, which brings up a slew of problems of its own (since the rectangle is rotated according to the direction of the circle's velocity). However, I'm thinking that this is a common problem, and that a common solution already exists.
Is there a common solution to this kind of problem? Perhaps some basic theories which I should look into?
Since you just have a circle and a rectangle, it's actually pretty simple. A circle of radius r bouncing around inside a rectangle of dimensions w, h can be treated as a point p at the circle's center, constrained to stay at least r away from each wall (between r and w-r horizontally, and between r and h-r vertically).
Now position update becomes simple. Given your point at position x, y and a per-frame velocity of dx, dy, the updated position is x+dx, y+dy - except when you cross a boundary. If, say, you end up with x+dx > W (letting W = w-r), then you do the following:
crossover = (x+dx) - W // this is how far "past" the edge your ball went
x = W - crossover // so you bring it back the same amount on the correct side
dx = -dx // and flip the velocity to the opposite direction
And similarly for y. You'll have to set up a similar (reflected) check for the opposite boundaries in each dimension.
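A compact, runnable version of that per-axis update in Python (names follow the answer's W = w - r convention; the lower bound r on the other side is the symmetric case mentioned above):

def step_axis(pos, vel, low, high):
    pos += vel
    if pos > high:                  # crossed the far edge
        pos = high - (pos - high)   # bring it back by the overshoot
        vel = -vel
    elif pos < low:                 # crossed the near edge
        pos = low + (low - pos)
        vel = -vel
    return pos, vel

# One frame of motion for the circle's centre:
# x, dx = step_axis(x, dx, r, w - r)
# y, dy = step_axis(y, dy, r, h - r)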
At each step, you can calculate the projected/expected position of the circle for the next frame.
If this lies outside the rectangle, you can then use the distance from the old circle position to the rectangle's edge, and the amount "past" the rectangle's edge that the next position lies at (the interpenetration), to linearly interpolate and determine the precise time when the circle "hits" the rectangle edge.
For example, if the circle is 10 pixels away from the rectangle's edge, then is predicted to move to 5 pixels beyond it, you know that for 2/3rds of the timestep (10/15ths) it moves on its original path, then is reflected and continues on its new path for the remaining 1/3rd of the timestep (5/15ths). By calculating these two parts of the motion and "adding" the translations together, you can find the correct new position.
(Of course, it gets more complicated if you hit near a corner, as there may be several collisions during the timestep, off different edges. And if you have more than one circle moving, things get a lot more complex. But that's where you can start for the case you've asked about)
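For a single edge, the interpolation described above might be sketched like this in Python (the time-of-impact fraction t_hit is what you would use to sub-step if a second collision can occur within the same frame):

def reflect_with_toi(x_old, dx, edge):
    # Returns (new position, new velocity, fraction of the frame before impact).
    x_pred = x_old + dx
    if x_pred <= edge:
        return x_pred, dx, 1.0                 # no collision this frame
    to_edge = edge - x_old                     # e.g. 10 px in the example above
    past_edge = x_pred - edge                  # e.g. 5 px beyond the edge
    t_hit = to_edge / (to_edge + past_edge)    # 10/15 = 2/3 of the timestep
    return edge - past_edge, -dx, t_hit        # reflected remainder of the motion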
Reflection across a rectangular boundary is incredibly simple. Just take the amount by which the object passed the boundary and move it back inside by that same amount, i.e. mirror it across the boundary. If the position without reflecting would be (-0.8,-0.2), for example, and the upper left corner is at (0,0), the reflected position would be (0.8,0.2).