Deletion algorithm for a Red-Black tree

Guys, I'm trying to implement the deletion algorithm for a Red-Black tree and I'm having a problem understanding line three of this algorithm (from the book "Introduction to Algorithms", second edition):
1 if left[z] = nil[T] or right[z] = nil[T]
2 then y ← z
3 else y ← TREE-SUCCESSOR(z)
4 if left[y] ≠ nil[T]
5 then x ← left[y]
6 else x ← right[y]
7 p[x] ← p[y]
8 if p[y] = nil[T]
9 then root[T] ← x
10 else if y = left[p[y]]
11 then left[p[y]] ← x
12 else right[p[y]] ← x
13 if y 3≠ z
14 then key[z] ← key[y]
15 copy y's satellite data into z
16 if color[y] = BLACK
17 then RB-DELETE-FIXUP(T, x)
18 return y
First of all, nowhere in this book is it explained what TREE-SUCCESSOR is supposed to look like (no algorithm for it), but I found a page describing it. If I feed this algorithm with 11,2,1,7,5,8,14,15,4 and then try to delete 7, it finds the predecessor, but if I try to delete 11, it finds the successor. That is what I can't understand. Why does it sometimes take the predecessor and sometimes the successor? What criteria are taken into consideration while making this decision? A node's color?
Thank you.
P.S. I also do not quite understand what is written in line number 13. Does it mean that y has three colors (neither black nor red), or something else?

TREE-SUCCESSOR (as the opposite of TREE-PREDECESSOR, which is in that book I believe) is generally defined for binary search trees as the node with the next highest key. How it determines it is dependent on the type (red-black in this case), and I'm almost positive your book leaves the successor method as an exercise. (I remember the problem :P)

I think you're reading CLRS 2nd edition.
TREE-SUCCESSOR is introduced in Chapter 12, section 2 - "12.2 Querying a binary search tree". And contrary to what Jesse Naugher says, it is not dependent on the type of binary search tree.
Line 13 you quoted is a typo. It should be "if y != z".
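For reference, TREE-SUCCESSOR from section 12.2 translates to roughly the following Python (a minimal sketch, assuming nodes carry left, right and parent references, with None standing in for nil[T]). Note also that line 3 of the deletion code only calls TREE-SUCCESSOR when z has two children; when z has at most one child, y is simply z and no successor search happens at all.

def tree_minimum(x):
    # Follow left pointers down to the smallest key in x's subtree.
    while x.left is not None:
        x = x.left
    return x

def tree_successor(x):
    # If x has a right subtree, the successor is that subtree's minimum.
    if x.right is not None:
        return tree_minimum(x.right)
    # Otherwise climb until we come up out of a left child; that parent
    # is the lowest ancestor with a key greater than x's.
    y = x.parent
    while y is not None and x is y.right:
        x = y
        y = y.parent
    return y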

You can refer to the following code:
http://code.google.com/p/cstl/source/browse/src/c_rb.c

Related

Prolog: how to optimize this code (solving the 123456789 = 100 puzzle)

So there was a puzzle:
This equation is incomplete: 1 2 3 4 5 6 7 8 9 = 100. One way to make
it accurate is by adding seven plus and minus signs, like so: 1 + 2 +
3 – 4 + 5 + 6 + 78 + 9 = 100.
How can you do it using only 3 plus or minus signs?
I'm quite new to Prolog. I solved the puzzle, but I wonder how to optimize it:
makeInt(S, F, FinInt) :-
    getInt(S, F, 0, FinInt).

getInt(Start, Finish, Acc, FinInt) :-
    0 =< Finish - Start,
    NewAcc is Acc*10 + Start,
    NewStart is Start + 1,
    getInt(NewStart, Finish, NewAcc, FinInt).
getInt(Start, Finish, A, A) :-
    0 > Finish - Start.

itCounts(X, Y, Z, Q) :-
    member(XLastDigit, [1,2,3,4,5,6]),
    FromY is XLastDigit + 1,
    numlist(FromY, 7, ListYLastDigit),
    member(YLastDigit, ListYLastDigit),
    FromZ is YLastDigit + 1,
    numlist(FromZ, 8, ListZLastDigit),
    member(ZLastDigit, ListZLastDigit),
    FromQ is ZLastDigit + 1,
    member(YSign, [-1,1]),
    member(ZSign, [-1,1]),
    member(QSign, [-1,1]),
    0 is XLastDigit + YSign*YLastDigit + ZSign*ZLastDigit + QSign*9,
    makeInt(1, XLastDigit, FirstNumber),
    makeInt(FromY, YLastDigit, SecondNumber),
    makeInt(FromZ, ZLastDigit, ThirdNumber),
    makeInt(FromQ, 9, FourthNumber),
    X is FirstNumber,
    Y is YSign*SecondNumber,
    Z is ZSign*ThirdNumber,
    Q is QSign*FourthNumber,
    100 =:= X + Y + Z + Q.
Not sure this counts as an optimization; the code is just shorter:
sum_123456789_eq_100_with_3_sum_or_sub(L) :-
    append([G1,G2,G3,G4], [0'1,0'2,0'3,0'4,0'5,0'6,0'7,0'8,0'9]),
    maplist([X]>>(length(X,N), N>0), [G1,G2,G3,G4]),
    maplist([G,F]>>(member(Op, [0'+,0'-]), F=[Op|G]), [G2,G3,G4], [F2,F3,F4]),
    append([G1,F2,F3,F4], L),
    read_term_from_codes(L, T, []),
    100 is T.
It took me a while, but I got what your code is doing. It's something like this:
itCounts(X,Y,Z,Q) :-   % generate X, Y, Z, and Q s.t. X+Y+Z+Q=100, etc.
    generate X as a list of digits
    do the same for Y, Z, and Q
    pick the signs for Y, Z, and Q
    convert all those lists of digits into numbers
    verify that, with the signs, they add to 100.
The inefficiency here is that the testing is all done at the last minute. You can improve the efficiency if you can throw out some possible solutions as soon as you pick one of your numbers, that is, testing earlier.
itCounts(X,Y,Z,Q) :-   % generate X, Y, Z, and Q s.t. X+Y+Z+Q=100, etc.
    generate X as a list of digits, and convert it to a number
    if it's so big or small the rest can't possibly bring the sum back to 100, fail
    generate Y as a list of digits, convert to a number, and pick its sign
    if it's so big or so small the rest can't possibly bring the sum to 100, fail
    do the same for Z
    do the same for Q
Your function is running pretty fast already, even if I search all possible solutions. It only picks 6 X's; 42 Y's; 224 Z's; and 15 Q's. I don't think optimizing will be worth your while.
But if you really wanted to: I tested this by putting a testing function immediately after selecting an X. It reduced the 6 X's to 3 (all before finding the solution); 42 Y's to 30; 224 Z's to 184; and 15 Q's to 11. I believe we could reduce it further by testing immediately after a Y is picked, to see whether X + YSign*Y is already so large or small that there can be no solution.
In Prolog programs that are more computationally intensive, putting parts of the 'test' earlier in 'generate and test' algorithms can help a lot.
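To see the "move the test earlier" idea outside Prolog, here is a rough Python sketch of the same generate-and-test search with one early bound check (the prune mirrors the "so big or small the rest can't possibly bring the sum back to 100" test above):

from itertools import combinations, product

def solve():
    digits = "123456789"
    solutions = []
    # Choose 3 cut positions, giving 4 numbers; the first gets no sign.
    for cuts in combinations(range(1, 9), 3):
        bounds = (0,) + cuts + (9,)
        parts = [int(digits[i:j]) for i, j in zip(bounds, bounds[1:])]
        first, rest = parts[0], parts[1:]
        # Early test: every later number gets a +/- sign, so the total lies
        # in [first - sum(rest), first + sum(rest)]; fail now if 100 is outside.
        if first - sum(rest) > 100 or first + sum(rest) < 100:
            continue
        for signs in product([1, -1], repeat=3):
            if first + sum(s * p for s, p in zip(signs, rest)) == 100:
                solutions.append((parts, signs))
    return solutions

print(solve())   # finds 123 - 45 - 67 + 89 = 100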

Linear programming and event occurrence

Suppose we have N (in this example N = 3) events that can happen depending on some variables. Each of them can generate a certain profit or loss (event1 = 300, event2 = -100, event3 = 200), and rules constrain when they happen:
event 1 happens only when x > 5,
event 2 happens only when x = 2 and y = 3
event 3 happens only when x is odd.
The problem is to know the maximum profit.
Assume x, y are integer numbers >= 0
In the real problem there are many events and many dimensions.
(the solution should not be specific)
My question is:
Is this a linear programming problem? If yes, please provide a solution to the example problem using this approach. If no, please suggest some algorithms to optimize such a problem.
This can be formulated as a mixed integer linear program: a linear program in which some of the variables are constrained to be integers. Unlike linear programs, the general integer program is NP-hard to solve. However, there are many commercial and open-source solvers that can efficiently solve large-scale problems. For up to 300 variables and constraints, you can use Excel's solver.
Here is a way to formulate the above constraints:
If you go down this route, you might find this document useful.
The last constraint is an interesting one. I am assuming that x has to be an integer; if x can be either integer or continuous, I will edit the answer accordingly.
I hope this helps!
Edit: L and U above should be interpreted as L1 and U1.
Edit 2: z2 needs to changed to (1-z2) on the 3rd and 4th constraint.
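Since the formulation above was posted as an image that is no longer available, here is a sketch of one standard indicator/big-M encoding of the three events, written in Python with the PuLP library. The bound U on x and y is an assumption (any valid upper bound works), and this is one possible encoding, not necessarily the original answer's exact one:

from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, LpInteger, value

U = 1000   # assumed upper bound on x and y

prob = LpProblem("events", LpMaximize)
x = LpVariable("x", lowBound=0, upBound=U, cat=LpInteger)
y = LpVariable("y", lowBound=0, upBound=U, cat=LpInteger)
z1 = LpVariable("z1", cat=LpBinary)   # 1 iff event 1 happens
z2 = LpVariable("z2", cat=LpBinary)   # 1 iff event 2 happens
z3 = LpVariable("z3", cat=LpBinary)   # 1 iff event 3 happens
k = LpVariable("k", lowBound=0, cat=LpInteger)   # helper for the parity of x

prob += 300*z1 - 100*z2 + 200*z3      # objective: total profit

# Event 1: x > 5, i.e. x >= 6 since x is integer (both directions hold):
prob += x >= 6*z1                     # z1 = 1  ->  x >= 6
prob += x <= 5 + (U - 5)*z1           # z1 = 0  ->  x <= 5

# Event 3: x odd. Writing x = 2k + z3 makes z3 exactly the parity of x.
prob += x == 2*k + z3

# Event 2: x = 2 and y = 3. Forward direction (z2 = 1 forces the values):
prob += x <= 2 + U*(1 - z2)
prob += x >= 2 - 2*(1 - z2)
prob += y <= 3 + U*(1 - z2)
prob += y >= 3 - 3*(1 - z2)
# The reverse direction is needed because event 2 is a loss (otherwise the
# solver would just leave z2 = 0). Encode "x <= 1 or x >= 3 or y <= 2 or
# y >= 4 or z2 = 1" with one binary per case:
a = [LpVariable("a%d" % i, cat=LpBinary) for i in range(4)]
prob += x <= 1 + U*(1 - a[0])
prob += x >= 3 - 3*(1 - a[1])
prob += y <= 2 + U*(1 - a[2])
prob += y >= 4 - 4*(1 - a[3])
prob += a[0] + a[1] + a[2] + a[3] + z2 >= 1

prob.solve()
print(value(x), value(y), value(prob.objective))   # e.g. x = 7: 300 + 200 = 500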
A specific answer:
Seems more like a mathematical calculation than a programming problem; can't you just run a loop for x = 1 to 1000 to see what results occur?
for the example:
if x = 2 (with y = 3) you take the loss, so you want to avoid that; and if x <= 5 you don't get the 300; so all that is really happening is that x > 5 and x odd gives the maximum result.
x = 7 gives 300 + 200 = 500, the maximum profit for x.
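A literal version of that loop, sketched in Python (the 1000 bound is arbitrary; for these rules nothing beyond x = 7 improves the profit):

best = None
for x in range(1001):
    for y in range(1001):
        profit = 0
        if x > 5:                  # event 1
            profit += 300
        if x == 2 and y == 3:      # event 2
            profit -= 100
        if x % 2 == 1:             # event 3
            profit += 200
        if best is None or profit > best[0]:
            best = (profit, x, y)
print(best)   # (500, 7, 0)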
A general answer:
I don't see how to answer the question without seeing what the events are and how the events affect x. Whether it's a linear or functional (mathematical) answer seems rather beside the point of finding the desired solution.

BFS (Breadth First Search) Time complexity at every step

BFS(G,s)
1 for each vertex u ∈ G.V-{s}
2 u.color = WHITE
3 u.d = ∞
4 u.π = NIL
5 s.color = GRAY
6 s.d = 0
7 s.π = NIL
8 Q = Ø
9 ENQUEUE(Q, s)
10 while Q ≠ Ø
11 u = DEQUEUE(Q)
12 for each v ∈ G.Adj[u]
13 if v.color == WHITE
14 v.color = GRAY
15 v.d = u.d + 1
16 v.π = u
17 ENQUEUE(Q, v)
18 u.color = BLACK
In the above Breadth-First Search code, the graph G is represented using adjacency lists.
Notations -
G : Graph
s : source vertex
u.color : stores the color of each vertex u ∈ V
u.π : stores the predecessor of u
u.d : stores the distance from the source s to vertex u computed by the algorithm
Understanding of the code (help me if I'm wrong) -
1. As far as I can understand, the ENQUEUE(Q, s) and DEQUEUE(Q) operations take O(1) time.
2. Since the enqueuing operation occurs exactly once per vertex, it takes O(V) time in total.
3. Since the sum of lengths of all adjacency lists is |E|, total time spent on scanning adjacency lists is O(E).
4. Why is the running time of BFS O(V+E)?
Please do not refer me to some website; I've gone through many articles, but I'm still finding it difficult to understand.
Can anyone please reply to this code by writing the time complexity of each of the 18 lines?
Lines 1-4: O(V) in total
Lines 5-9: O(1) or O(constant)
Line 11: O(V) for all executions of line 11 within the loop (each vertex can only be dequeued once)
Lines 12-13: O(E) in total as you will check through every possible edge once. O(2E) if the edges are bi-directional.
Lines 14-17: O(V) in total as out of the E edges you check, only V vertices will be white.
Line 18: O(V) in total
Summing the complexities gives you
O(4V + E + 1) which simplifies to O(V+E)
New:
It is not O(VE) because at each iteration of the loop starting at line 10, lines 12-13 will only loop through the edges the current node is linked to, not all the edges in the entire graph. Thus, looking from the point of view of the edges, they will only be looped at most twice in a bi-directional graph, once by each node it connects with.
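For concreteness, here is a direct Python transcription of the pseudocode (a sketch assuming adj is a dict mapping each vertex to a list of its neighbours), with the cost of each part noted:

from collections import deque

WHITE, GRAY, BLACK = 0, 1, 2

def bfs(adj, s):
    color = {u: WHITE for u in adj}    # lines 1-4: O(V)
    d = {u: float("inf") for u in adj}
    pi = {u: None for u in adj}
    color[s] = GRAY                    # lines 5-7: O(1)
    d[s] = 0
    Q = deque([s])                     # lines 8-9: O(1)
    while Q:                           # line 10
        u = Q.popleft()                # line 11: O(1) each, O(V) over the whole run
        for v in adj[u]:               # lines 12-13: O(E) summed over all vertices
            if color[v] == WHITE:
                color[v] = GRAY        # lines 14-17: run at most once per vertex
                d[v] = d[u] + 1
                pi[v] = u
                Q.append(v)
        color[u] = BLACK               # line 18: O(V) in total
    return d, pi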

J: Why does `f^:proposition^:_ y` stand for a while loop?

As the title says, I don't understand why f^:proposition^:_ y is a while loop. I have actually used it a couple of times, but I don't understand how it works. I get that ^: repeats functions, but I'm confused by its double use in that statement.
I also can't understand why f^:proposition^:a: y works. It is the same as the previous one, but returns the values from all the iterations instead of only the last one.
a: is an empty box, and I get that it has a special meaning when used with ^:, but even after having looked into the dictionary I couldn't understand it.
Thanks.
Excerpted and adapted from a longer writeup I posted to the J forums in 2009:
while =: ^:break_clause^:_
Here's an adverb you can apply to any code (which would be the equivalent of the loop body) to create a while loop. In case you haven't seen it before, ^: is the power conjunction. More specifically, the phrase f^:n y applies the function f to the argument y exactly n times. The count n may be an integer or a function which, applied to y, produces an integer¹.
In the adverb above, we see the power conjunction twice, once in ^:break_clause and again in ^:_ . Let's first discuss the latter. That _ is J's notation for infinity. So, read literally, ^:_ is "apply the function an infinite number of times" or "keep reapplying forever". This is related to a while-loop's function, but it's not very useful if applied literally.
So, instead, ^:_ and its kin were defined to mean "apply a function to its limit", that is, "keep applying the function until its output matches its input". In that case, applying the function again would have no effect, because the next iteration would have the same input as the previous (remember that J is a functional language). So there's
no point in applying the function even once more: it has reached its limit.
For example:
cos=: 2&o. NB. Cosine function
pi =: 1p1 NB. J's notation for 1*pi^1 analogous to scientific notation 1e1
cos pi
_1
cos cos cos pi
0.857553
cos^:3 pi
0.857553
cos^:10 pi
0.731404
cos^:_ pi NB. Fixed point of cosine
0.739085
Here, we keep applying cosine until the answer stops changing: cosine has reached its fixed point, and more applications are superfluous. We can visualize this by showing the
intermediate steps:
cos^:a: pi
3.1415926535897 _1 0.54030230586813 ...73 more... 0.73908513321512 0.73908513321
So ^:_ applies a function to its limit. OK, what about ^:break_condition? Again, it's the same concept: apply the function on the left the number of times specified by the function on the right. In the case of _ (or its function-equivalent, _: ) the output is "infinity", in the case of break_condition the output will be 0 or 1 depending on the input (a break condition is boolean).
So if the input is "right" (i.e. processing is done), then the break_condition will be 0, whence loop_body^:break_condition^:_ will become loop_body^:0^:_ . Obviously, loop_body^:0 applies the loop_body zero times, which has no effect.
To "have no effect" is to leave the input untouched; put another way, it copies the input to the output ... but if the input matches the output, then the function has reached its limit! Obviously ^:_: detects this fact and terminates. Voila, a while loop!
¹ Yes, including zero and negative integers, and "an integer" should be more properly read as "an arbitrary array of integers" (so the function can be applied at more than one power simultaneously).
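The "keep applying until the output matches the input" rule is easy to model outside J. A rough Python analogue of cos^:_ pi (note that J's convergence test actually uses its comparison tolerance; this sketch uses exact equality, with an iteration cap as a guard):

from math import pi, cos

x = pi
for _ in range(1000):     # guard in case floating point never settles exactly
    nxt = cos(x)
    if nxt == x:          # limit reached: applying cos again changes nothing
        break
    x = nxt
print(x)                  # ~0.739085, the fixed point of cosine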
f^:proposition^:_ is not a while loop. It's (almost) a while loop when proposition returns 1 or 0. It's some strange kind of while loop when proposition returns other results.
Let's take a simple monadic case.
f =: +: NB. Double
v =: 20 > ] NB. y less than 20
(f^:v^:_) 0 NB. steady case
0
(f^:v^:_) 1 NB. (f^:1) y, until (v y) = 0
32
(f^:v^:_) 2
32
(f^:v^:_) 5
20
(f^:v^:_) 21 NB. (f^:0) y
21
This is what's happening: every time that v y is 1, (f^:1) y is executed. The result of (f^:1) y is the new y and so on.
If y stays the same two times in a row → output y and stop.
If v y is 0 → output y and stop.
So f^:v^:_ here works like "double while less than 20" (or until the result stops changing).
Let's see what happens when v returns 2/0 instead of 1/0.
v =: 2 * 20 > ]
(f^:v^:_) 0 NB. steady state
0
(f^:v^:_) 1 NB. (f^:2) 1 = 4 -> (f^:2) 4 = 16 -> (f^:2) 16 = 64 [ -> (f^:0) 64 ]
64
(f^:v^:_) 2 NB. (f^:2) 2 = 8 -> (f^:2) 8 = 32 [ -> (f^:0) 32 ]
32
(f^:v^:_) 5 NB. (f^:2) 5 = 20 [ -> (f^:0) 20 ]
20
(f^:v^:_) 21 NB. [ (f^:0) 21 ]
21
You can have many kinds of "strange" loops by playing with v. (It can even return negative integers, to use the inverse of f).
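The stepping behaviour described above can also be modelled in Python (a sketch covering non-negative v y only; real ^: additionally accepts negative counts, which apply the inverse of f):

def power_while(f, v, y):
    # Model of f^:v^:_ : apply f, (v y) times, then repeat with the new y
    # until the result stops changing. When v(y) == 0, y is left unchanged,
    # so the fixed-point test also handles the "break condition" case.
    while True:
        new_y = y
        for _ in range(v(y)):          # f^:(v y)
            new_y = f(new_y)
        if new_y == y:                 # same twice in a row -> stop
            return y
        y = new_y

double = lambda y: 2 * y                      # f =: +:
less_than_20 = lambda y: 1 if y < 20 else 0   # v =: 20 > ]

print(power_while(double, less_than_20, 1))   # 32, matching (f^:v^:_) 1
print(power_while(double, less_than_20, 21))  # 21, matching (f^:v^:_) 21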

How would I split a large set of tabular data into smaller relevant tables? (Not a DB Question)

I'm really hoping I can describe this question in an understandable way. This is a puzzle that I have not been able to begin to solve even though I (mostly) understand it. I'm just not sure where to start, and I'm really hoping someone out there can get me headed in the right direction.
I have a LARGE table of data. It describes relationships between objects. Let's say the Y-axis has items numbered 1-1000, and the X-axis also has items numbered 1-1000. If item #234 on the Y-axis is related to item #791 on X, there will be a mark in the table where the row and column cross. In some industries this is referred to as a Truth Table. One can, at a glance, see how many items in a system relate to each other. The marks in the table can help to identify trends and patterns.
Here's some other helpful stuff about the nature of the table:
The full range of the number of relationships (r) for each item on either axis can be 1 <= r <= axisTotal.
The X and Y axis will share common items, but each axis will also have items that the other axis does not.
Each item will only exist once per axis. It can be on X and Y, but it would only be on each one 1 time.
The total number of items on each axis will most likely NOT be equal. Each axis could have from 50 to 1000's of items.
The end result is that this is going to be a report that needs to be printed. We have successfully printed a table that had about 100-150 items on each axis on an 11in X 17in piece of paper. Any more than that and it begins to be so small it's unreadable.
What I am trying to do is split the super large tables into smaller tables, but related points need to stay together. If I grab item 1-100 on X then I would need each item they relate to from Y.
I've generated a number of these tables and, while the number of relationships CAN be arbitrary, I have never seen an item relate to all other items. So in real practice the range is more like 1 <= r <= (10% * axisTotal). If an item's relationships exceed this range, it can be split up into multiple tables, but that is not optimal at all.
At the end of the day I think we, and our clients, would be happy if a 1000x1000 item table was split into 8 to 10 printed pages of smaller, related tables.
Any guidance would be a great help! Thanks.
---EDIT---
One other thing worth noting, there will be no empty rows or columns in the table. Every item on both the x and y axis will relate to at least 1 item on the opposite axis.
---EDIT---
Here is an example of a small truth table of the kind I'm describing (image no longer available). Every row and column has at least one relationship.
---EDIT---
May 18th, 2011
For what it's worth, I was moving along pretty well on this project, but I got pulled off for a couple of weeks, so it's going to be a little while before I get back to this problem. But it is one that I will have to solve soon.
---EDIT---
July 11th, 2011
Bummer. Well, it looks like I'm not going to be able to solve this problem right now. I was really hoping to be able to figure this out. Through discussion we decided to present the truth table in an Excel spreadsheet as an add-on resource to the main report. Excel 2007 and later will handle thousands of columns, which will more than suffice. Plus, we added some VBA which allows the viewer to double-click on a column title. This action reduces the rows to only the ones where there are interactions, and then removes empty columns. In this way they can see a small sub-table based on the item they want to view, and can print it if they want.
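The Excel behaviour described in that last edit is straightforward to express in code; a minimal Python sketch, assuming the table is held as a set of (row, col) pairs for the marks (a hypothetical representation):

def subtable_for(col, rows, cols, marks):
    # Mimic the double-click add-on: keep only the rows that relate to the
    # chosen column, then drop any columns left with no marks at all.
    keep_rows = [r for r in rows if (r, col) in marks]
    keep_cols = [c for c in cols
                 if any((r, c) in marks for r in keep_rows)]
    return keep_rows, keep_cols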
This isn't an answer, I just want to try to visualize your data a little better. Does it kind of look like this?
Alice Bob Charlie ... Zelda
Shoes X X
Hats X X
Gloves X
...
Pants X
EDIT
Is it a requirement to show the data in tabular format? Or could you just list each out? Something like:
Alice
Shoes
Bob
Hats
Pants
Charlie
Shoes
Gloves
Zelda
Hats
Or the other way:
Shoes
Alice
Charlie
Hats
Bob
Zelda
Gloves
Charlie
Pants
Bob
EDIT 2
Okay, I've made another larger truth table to hopefully get a better understanding of how you want to split things up:
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
1 x x x x
2 x x x x x x
3 x x x x
4 x x x
5 x x x
6 x x x
7 x x x
8 x x x
For argument's sake let's just say that you can only fit 4 rows on a page (because I don't feel like typing out a giant table this early in the morning), so we're going to split this into two pages. First, it is important to show every row, right? Second, do you need to show columns that never have a value? For instance, Y and Z never have a value for rows 1 through 8 in this table; can they be excluded from the report, or do they still need to be there? Third, is the order of the rows important?
If it's not important to show completely empty columns, then we could remove 10 columns from the table above and compress it down to:
A B C E F H I L M O P Q R U V W
1 x x x x
2 x x x x x x
3 x x x x
4 x x x
5 x x x
6 x x x
7 x x x
8 x x x
Then, if row order isn't important, you can compress it further by choosing an optimal row arrangement (not necessarily shown here). The two tables below have been further compressed, to 11 and 10 columns:
A B C F H I M P Q R U
1 x x x x
2 x x x x x x
5 x x x
7 x x x
A E H I L M O P U W
3 x x x x
4 x x x
6 x x x
8 x x x
Am I going down a completely wrong path here? These are all just questions to help me better understand your data and output requirements.
Also, in all seriousness, is it an option to get larger printers/plotters? And is it an option to just generate a PDF and use Acrobat's tiled-printing option?
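The page-splitting sketched in the tables above (chunk the rows into pages, then keep only the columns actually used on each page) is mechanical once a row order has been chosen; in the same set-of-(row, col) representation as before:

def paginate(rows, cols, marks, rows_per_page):
    # Split the truth table into page-sized sub-tables, each keeping only
    # the columns that carry a mark for its rows (like the 11- and
    # 10-column tables above). Choosing the row order that makes these
    # per-page column sets small is the genuinely hard part of the problem.
    pages = []
    for i in range(0, len(rows), rows_per_page):
        page_rows = rows[i:i + rows_per_page]
        page_cols = [c for c in cols
                     if any((r, c) in marks for r in page_rows)]
        pages.append((page_rows, page_cols))
    return pages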
Last year I read an article in the journal PLoS Computational Biology (www.ploscompbiol.org) that seems related to your problem.
In short, it describes a new approach for when we already have a set of proteins and tabular data about their one-to-one interactions, and we want to group them so that interaction inside a group and interaction between two groups is either maximized or (this is the innovative idea) minimized.
If we plot the starting data table with black for high and white for low interaction, it looks randomly gray. The result table, after the calculations and rearranging are done (so grouped items are placed near one another), looks more like orthogonal areas of black and white.
The article: Protein Interaction Networks—More Than Mere Modules,
where there are also references to other older techniques for grouping this kind of data.