Robson tree traversal algorithm - binary-search-tree

Can anyone explain the Robson algorithm for tree traversal? I'm having trouble understanding what the steps of the algorithm are.

Do you happen to have an assignment due on 5/12 that you are trying to complete?
Robson tree traversal is just a way to traverse a binary tree using a few extra pointers rather than an explicit stack or recursion. The Steps Outlined Here do a very good job of outlining the procedure.
I would recommend creating a tree with pen and paper and following the steps. It's the easiest way to wrap your head around all of the pointers and what they are doing.
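If it helps to see the general idea in code, here is a minimal sketch (not Robson's algorithm itself; the `Node` and `link_inversion_inorder` names are made up for illustration) of traversal by pointer reversal, i.e. link inversion: on the way down, each child link is temporarily redirected to point back at the parent and is restored on the way back up, so no stack or recursion is needed. This version keeps a tag bit per node to remember which child it descended into; as I understand it, Robson's refinement is avoiding even the tag bits by threading the bookkeeping through the unused pointers of leaf nodes.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.tag = 0   # 0: currently descended via the left link, 1: via the right link

def link_inversion_inorder(root, visit):
    cur, par = root, None
    while True:
        # descend: reverse left links until we run out of left children
        while cur is not None:
            nxt = cur.left
            cur.left = par          # left link now points back at the parent
            cur.tag = 0
            par, cur = cur, nxt
        # ascend past nodes whose right subtrees are already finished
        while par is not None and par.tag == 1:
            nxt = par.right
            par.right = cur         # restore the right link
            cur, par = par, nxt
        if par is None:
            return                  # climbed back above the root: done
        # par's left subtree is finished: visit it, restore the left link,
        # then descend into the right subtree, reversing the right link
        visit(par)
        grand = par.left            # the reversed pointer to par's parent
        par.left = cur              # restore the left link
        nxt = par.right
        par.right = grand           # right link now points back at the parent
        par.tag = 1
        cur = nxt

# tiny usage example
root = Node(2, Node(1), Node(3))
link_inversion_inorder(root, lambda n: print(n.key))   # prints 1 2 3
```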

Related

Limitations of optimisation software such as CPLEX

Which of the following optimisation methods can't be done in an optimisation software such as CPLEX? Why not?
Dynamic programming
Integer programming
Combinatorial optimisation
Nonlinear programming
Graph theory
Precedence diagram method
Simulation
Queueing theory
Can anyone point me in the right direction? I didn't find too much information regarding the limitations of CPLEX on the IBM website.
Thank you!
That's kind of a big shopping list, and most of the things on it are not optimisation methods.
CPLEX certainly does integer programming, non-linear programming (just quadratic, SOCP, and similar, but not general non-linear) and combinatorial optimisation out of the box.
It is usually possible to re-cast things like DP as MILP models, but that will obviously require a bit of work. Lots of MILP models are also based on graphs, so yes, it is certainly possible to solve a lot of graph problems using a MILP solver such as CPLEX.
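For instance, here is a rough sketch (assuming the PuLP modelling library and a made-up toy instance, neither of which is mentioned above) of the 0/1 knapsack problem, a textbook DP problem, recast as a small integer program that a MILP solver such as CPLEX can handle; PuLP's bundled CBC solver can be swapped for CPLEX if it is installed.

```python
import pulp

# toy data: values and weights of four items, capacity 6 (all made up)
values = [10, 13, 7, 8]
weights = [3, 4, 2, 3]

prob = pulp.LpProblem("tiny_knapsack", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(values))]

prob += pulp.lpSum(v * xi for v, xi in zip(values, x))          # objective: total value
prob += pulp.lpSum(w * xi for w, xi in zip(weights, x)) <= 6    # capacity constraint

prob.solve()   # CBC by default; use prob.solve(pulp.CPLEX_CMD()) if CPLEX is available
print([int(xi.value()) for xi in x], pulp.value(prob.objective))
```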
Looking more widely at topics like simulation: that is quite a different approach. Simulation really is NOT an optimisation method, but it can be used alongside optimisation to get extra insights which may be useful in a business context. It might be used, for example, to discover some empirical relationships that could then be used in an optimisation model solved by CPLEX.
The same can probably also be said for things like queueing theory, precedence diagrams, etc. Basically, use CPLEX as an optimisation tool to solve part or all of your problem once you have structured and analysed it via one of these other approaches.
Hope that helps.

Best multi-objective 3D path optimization algorithm?

I would like to calculate balanced 3D paths to building exits all at once, for a huge number of people (around 2000). Since the problem is related to evacuation, the solutions for the 3D paths (the fastest and others) can be precalculated, and I am going to store the 3D paths in a database to accelerate the process. As I see it, there are two approaches so far:
Calculating the number of paths passing through each node in a graph representation of the environment, but the calculation time will probably be intolerable.
Using a GA. However, I cannot find a well-described optimization example that uses a genetic algorithm.
Can you tell me a way of using a GA for multi-objective optimization? I have only found GA implementations for finding the shortest path. Also, which algorithm is best for multi-objective optimization?
A genetic algorithm as such cannot easily be used for multi-objective optimisation directly. If you want to use a pure GA, you have to combine the objectives into a single objective, e.g. by summing weighted values of each objective. But this often does not work very well, especially when there is a strong trade-off between the objectives.
However, there are genetic (evolutionary) algorithms designed specifically for multi-objective optimisation. Probably the most famous, and one of the best, is NSGA-II, which stands for Nondominated Sorting Genetic Algorithm II. It's also relatively easy to implement (I did it once). But there are other MOEAs (Multi-Objective Evolutionary Algorithms) too; just google it. Some of them also use the nondomination idea, others do not.
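To make the nondomination idea concrete, here is a small sketch (the function names are illustrative, and it assumes all objectives are to be minimised) of brute-force nondominated sorting, the building block NSGA-II is named after: it splits a set of candidate solutions into successive Pareto fronts.

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(points):
    """Split objective vectors into fronts; front 0 is the Pareto-optimal set."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# e.g. (path length, congestion) pairs for a few candidate evacuation routes (made up)
print(nondominated_sort([(10, 4), (12, 2), (11, 5), (9, 6), (13, 3)]))
# -> [[0, 1, 3], [2, 4]]
```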

Using the Matrix Chain with Greedy method

I am reading CLRS by myself and I am finding it difficult to understand a few concepts.
Compared to greedy, in dynamic programming we make choices globally and end up with an optimal solution. I understood these concepts well with the examples of the shortest path in a multistage graph and also the knapsack problem.
I am unable to understand how we are making choices dynamically in matrix chain multiplication. I have understood the recurrence relation, but I am not able to see where the dynamic decisions are made. (I understand that it has the optimal substructure property.)
How would the matrix chain algorithm work if it were solved by a greedy method?
Thank you!
This problem can't be solved by a greedy method.
For example, take the matrix chain [3x2]•[2x3]•[3x4].
A greedy method (e.g. always performing the cheapest available multiplication first) gives (([3x2]•[2x3])•[3x4]) at a cost of 18 + 36 = 54 scalar multiplications, but the optimal answer is ([3x2]•([2x3]•[3x4])) at a cost of 24 + 24 = 48.
More details: https://www.cs.washington.edu/education/courses/421/04su/slides/matrixchain.pdf
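For completeness, here is a short sketch of the standard dynamic-programming solution from CLRS (the code and dimension encoding are illustrative, not taken from the answer above): `dims[k]` and `dims[k+1]` are the dimensions of the k-th matrix, and `m[i][j]` is the minimum number of scalar multiplications needed for the sub-chain from matrix i to matrix j.

```python
def matrix_chain_order(dims):
    """dims[k] x dims[k+1] are the dimensions of matrix k; returns (min cost, split table)."""
    n = len(dims) - 1                                  # number of matrices
    m = [[0] * n for _ in range(n)]                    # m[i][j]: min scalar multiplications
    split = [[0] * n for _ in range(n)]                # best split point, for reconstruction
    for length in range(2, n + 1):                     # sub-chain length
        for i in range(n - length + 1):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):                      # try every split point
                cost = m[i][k] + m[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                if cost < m[i][j]:
                    m[i][j], split[i][j] = cost, k
    return m[0][n - 1], split

# the chain from the answer: [3x2] [2x3] [3x4]  ->  dims = [3, 2, 3, 4]
cost, _ = matrix_chain_order([3, 2, 3, 4])
print(cost)   # 48 (the right-associated parenthesisation), versus 54 for the greedy order
```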

What is the intuition behind an optimal substructure?

This question is related to dynamic programming, and specifically the rod cutting problem from CLRS, p. 362:
The overall optimal solution incorporates optimal solutions to two related subproblems.
The overall optimal solution is obtained by finding optimal solutions to individual subproblems and then somehow combining them. I can't understand the intuition and the concept. Any links or examples?
Thanks!
You can compare the dynamic programming and greedy approaches.
Optimal substructure means that an optimal solution can be found by combining optimal solutions to subproblems. If this is not the case, then combining optimal solutions to the subproblems doesn't give us an optimal global solution.
For example, consider Dijkstra's algorithm. If we know the shortest path from node A to node C, then we can use this information to find shortest paths to other nodes as well.
If this is not the case, i.e. we can't compose optimal solutions to subproblems into a globally optimal solution, then we can still use a greedy approach. Look at the change-making problem, for example. A greedy algorithm makes locally optimal decisions and finds some solution, but this solution is not guaranteed to be optimal.
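Since the question refers to the CLRS rod-cutting problem, here is a minimal sketch (bottom-up, using the CLRS price table) that shows the optimal substructure directly: the best revenue `r[n]` for a rod of length n reuses the already-computed best revenues of shorter rods.

```python
def cut_rod(prices, n):
    """Bottom-up rod cutting: prices[i] is the price of a piece of length i (prices[0] unused)."""
    r = [0] * (n + 1)                       # r[k]: best revenue for a rod of length k
    for length in range(1, n + 1):
        best = 0
        for i in range(1, length + 1):      # cut off a first piece of length i ...
            best = max(best, prices[i] + r[length - i])   # ... the rest is a solved subproblem
        r[length] = best
    return r[n]

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]   # CLRS price table, lengths 1..10
print(cut_rod(prices, 4))                          # 10: two pieces of length 2 (5 + 5)
```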

Optimal optimization order

I am working on a system of optimisation problems. These tasks can be solved by a generic optimisation across the whole state space. But some of my equations are independent of the remaining system (imagine a Jacobian matrix with some blocks full of zeros), and I would like to use this fact to optimise the joint equations first and then, taking the previous solution as an input, finish by solving the independent components.
The rules describing the relations between the tasks can be represented as a directed graph, but this graph contains cycles because of the joint equations, which means that I can't use a topological sort on it.
Does anyone have an idea of how to solve this kind of problem?
Thx
There are a couple of types of frameworks you can look into (instead of inventing it yourself) which might solve your problem. The question is a bit too abstract to tell which one suits your needs, so take a look at these:
Use a solver framework to model the optimisation and search through the solution space. Take a look at Drools Planner, Gurobi, JGap, OpenTS, ...
Use a rules engine to apply the optimization changes. Take a look at Drools Expert, JESS, ...
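Neither of the options above addresses the topological-sort issue from the question directly, so purely as an additional (and possibly off-target) suggestion: a common workaround for cycles is to collapse each group of mutually dependent equations, i.e. each strongly connected component, into one block; the resulting condensation graph is a DAG and can be topologically sorted. A sketch assuming the networkx library and a made-up dependency graph:

```python
import networkx as nx

# made-up dependency graph: an edge u -> v means equation v needs u's solution first
G = nx.DiGraph([("a", "b"), ("b", "a"),             # a and b are jointly coupled
                ("b", "c"), ("c", "d"), ("d", "c"),  # c and d are jointly coupled
                ("a", "e")])                         # e only depends on a

C = nx.condensation(G)                               # one node per strongly connected component
for block in nx.topological_sort(C):                 # the condensation is a DAG
    members = sorted(C.nodes[block]["members"])      # the original equations in this block
    print("solve jointly:" if len(members) > 1 else "solve independently:", members)
```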