I'm having some trouble solving this exercise about schemata in genetic algorithms. Suppose three parents {1101, 0101, 0001} have fitness values {0.7, 4.3, 3.5}, respectively, with respect to an unknown fitness function. The question is: which schema will have the highest survival probability in the case of a maximization problem? The possible answers I was given are: {**01}, {0***}, {***1} and {*101}.
Thank you in advance!
In the general case, the schema theorem states that schemata with above-average fitness, short defining length and low order are more likely to survive.
For a schema H:
the order o(H) = the number of fixed bits (e.g. o({01*0*}) = 3)
the defining length δ(H) = distance between the first and the last fixed bits (e.g. δ({*0*10}) = 3)
the probability of a single gene not being changed is (1 - p), where p is the mutation probability, so the probability that a schema H survives mutation is S(H) = (1 - p)^o(H)
...but this isn't the general case.
Every individual matches the two schemas {**01} and {***1}.
No matter what parent is selected for crossover / copy (these operations are fitness-dependent), children will match (at least before mutation) both schemas (with 100% probability).
Assuming mutation is applied gene by gene, for a schema H to survive, all fixed bits must remain unchanged. So {***1} is more likely to survive (has lower order).
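To make the bookkeeping concrete, here is a small Python sketch (not part of the original exercise; the mutation rate is just an assumed illustrative value) that computes the order, defining length and mutation-survival probability of each candidate schema and lists which parents match it:

parents = {"1101": 0.7, "0101": 4.3, "0001": 3.5}
schemata = ["**01", "0***", "***1", "*101"]
p_mut = 0.01  # assumed per-gene mutation probability, for illustration only

def matches(individual, schema):
    # an individual matches a schema if it agrees on every fixed bit
    return all(s == "*" or s == c for s, c in zip(schema, individual))

def order(schema):
    # o(H): number of fixed bits
    return sum(1 for s in schema if s != "*")

def defining_length(schema):
    # delta(H): distance between the first and the last fixed bit
    fixed = [i for i, s in enumerate(schema) if s != "*"]
    return fixed[-1] - fixed[0]

for h in schemata:
    survival = (1 - p_mut) ** order(h)  # S(H) = (1 - p)^o(H)
    matched = [ind for ind in parents if matches(ind, h)]
    print(h, order(h), defining_length(h), round(survival, 4), matched)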
In Gelman's book, the effective number of simulation draws is defined in terms of the following:
R-hat
the between- and within-sequence variances, B and W
the number of MCMC samples, denoted by n
the number of chains, denoted by m
I do not know how sampling() calculates the between-sequence variance in the case chains = 1, so I cannot compute these terms (B, W, m). I want to implement an algorithm from this paper: https://arxiv.org/abs/1804.06788.
Roughly speaking, the paper constructs a test statistic that is uniformly distributed under the null hypothesis that the MCMC sampling is correct. If the sampling is not correct, the histogram of the test statistic becomes skewed, and this deviation from uniformity tells us that the MCMC contains bias. I want to implement this, but it requires the quantities above.
In rstan, is there a function to extract these quantities? I think that in the process of calculating the R-hat statistic, the quantities B, W and m are retained somewhere in the stanfit S4 object.
Sorry, I found n_eff, but I do not know the choice of m in the case chains = 1.
If only one chain is estimated (which should not be happening anyway), then m = 2, because the post-warmup draws from the single chain are split into a first half and a second half. This splitting is discussed in the documentation.
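I am not sure whether rstan exposes B and W directly, but as a rough sketch (in Python/NumPy rather than R, and assuming draws is a 1-D array of post-warmup draws of one parameter from a single chain), the split-chain quantities can be computed by hand from the definitions in Gelman et al.:

import numpy as np

def split_rhat(draws):
    # Split a single chain into m = 2 halves and compute B, W and split-R-hat.
    draws = np.asarray(draws, dtype=float)
    draws = draws[: 2 * (len(draws) // 2)]    # drop the last draw if odd length
    halves = draws.reshape(2, -1)             # m = 2 pseudo-chains of n draws each
    m, n = halves.shape
    W = halves.var(axis=1, ddof=1).mean()     # within-sequence variance
    B = n * halves.mean(axis=1).var(ddof=1)   # between-sequence variance
    var_hat = (n - 1) / n * W + B / n         # marginal posterior variance estimate
    return float(np.sqrt(var_hat / W)), B, W, m

rhat, B, W, m = split_rhat(np.random.normal(size=1000))
print(rhat, B, W, m)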
Thanks for your willingness to help.
Straight to the point: I'm confused by the use of Big O notation when analyzing the worst-case time complexity of search algorithms.
For example, the worst-case time complexity of Alpha-Beta Pruning is O(b^d), where ^ denotes exponentiation, b is the average branching factor and d is the depth of the search tree.
I do get that the worst-case time complexity would be less than or equal to a positive constant multiplied by b^d, but why is the use of Big O notation permitted here? Where did the variable n, the input size, go? I know that inputs of the same size can produce significantly different running times for an algorithm.
All of the research I've done only explains "the use of Big O notation in the analysis of worst-case time complexity" in terms of a growth function, a function whose y variable is the running time and whose x variable is the input size. There are also formal definitions of Big O notation, which make me even more confused about the question above.
Any attempts to answer my question would be greatly appreciated.
The input size you refer to as n is in this case d. If n is the number of entries in your tree, d can be computed as log2(n), assuming the tree is a balanced binary tree.
Big O notation describes what the runtime would be for a very large input. In the bound you noted, O(b^d), the role of n is played by the variable that grows with the input size; in this case, d would be your n. As you've found, some bounds make use of several variables.
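As a quick numeric sanity check of that relationship (this assumes a complete binary tree, so it is purely illustrative): since n = 2^(d+1) - 1, the depth d is about log2(n), and a bound written as O(b^d) with b = 2 describes the same growth as O(n), just re-parameterised in terms of depth.

import math

for n in (15, 1023, 1_048_575):        # complete binary trees: n = 2^(d+1) - 1
    d = int(math.log2(n + 1)) - 1      # depth of the complete tree
    print(n, d, 2 ** d)                # 2^d stays within a constant factor of n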
n is just a general term for the number of elements, but the runtime can depend on several quantities: the depth of a tree, or the lengths of two different lists. For example, to traverse nested lists like this:
for a in first_list:          # n = len(first_list) iterations
    for b in second_list:     # k = len(second_list) iterations each
        pass                  # do stuff with a and b
the cost would be O(n*k), where n and k are the lengths of the two lists.
I'm new to genetic algorithms and am writing code for the Traveling Salesman problem. I'm using cycle crossover to generate new offspring and I've found that this leads to some of the offspring retaining the same exact phenotype as one parent even when the two parents are different. Would translating the chromosomes avoid this?
By translate I mean a chromosome with phenotype ABCDE shifting over two to DEABC. They would be equivalent answers and have equal fitness, but might make more diverse offspring.
Is this worth it in the long run, or is it just wasting computing time?
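(For concreteness, the "translation" described in the question is just a rotation of the permutation; a minimal sketch of the operation, e.g. applied to one parent with a random offset before crossover:)

import random

def rotate(tour, k):
    # rotate(list("ABCDE"), 2) -> ['D', 'E', 'A', 'B', 'C']; the rotated tour
    # visits the same cycle of cities, so its fitness is unchanged.
    k %= len(tour)
    return tour[-k:] + tour[:-k] if k else list(tour)

parent = list("ABCDE")
print(rotate(parent, 2))
print(rotate(parent, random.randrange(len(parent))))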
Cycle crossover (CX) is based on the assumption that it's important to preserve the absolute position of cities (a city preferably inherits its position from either parent) and the preventive "translation" is against the spirit of CX.
Anyway, multiple studies (e.g. [1]) have shown that for TSP the key is to preserve the relative position of cities and the edges.
So it could work, but you have to experiment. Some form of mutation is another possibility.
If the characteristics of CX aren't satisfactory, a different crossover operator is probably a better choice: staying with simple operators, one of the most successful is Order Crossover (e.g. [2]).
[1] L. Darrell Whitley, Timothy Starkweather, D'Ann Fuquay - "Scheduling problems and traveling salesmen: The genetic edge recombination operator" - 1989.
[2] Pablo Moscato - "On Genetic Crossover Operators for Relative Order Preservation".
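For reference, a minimal sketch of Order Crossover for permutation-encoded tours (tours are assumed to be Python lists; not tuned for performance):

import random

def order_crossover(p1, p2):
    # Copy a random slice from p1, then fill the remaining positions with the
    # missing cities in the order they appear in p2, starting after the second
    # cut point and wrapping around.
    size = len(p1)
    a, b = sorted(random.sample(range(size), 2))
    child = [None] * size
    child[a:b + 1] = p1[a:b + 1]
    remaining = [c for c in (p2[(b + 1 + i) % size] for i in range(size))
                 if c not in child]
    for offset, city in enumerate(remaining):
        child[(b + 1 + offset) % size] = city
    return child

print(order_crossover(list("ABCDE"), list("CEDBA")))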
I have implemented an algorithm that uses two other algorithms for calculating the shortest path in a graph: Dijkstra and Bellman-Ford. Based on the time complexity of these algorithms, I can calculate the running time of my implementation, which is easy given the code.
Now, I want to experimentally verify my calculation. Specifically, I want to plot the running time as a function of the size of the input (I am following the method described here). The problem is that I have two parameters - number of edges and number of vertices.
I have tried to fix one parameter and change the other, but this approach results in two plots - one for varying number of edges and the other for varying number of vertices.
This leads me to my question - how can I determine the order of growth based on two plots? In general, how can one experimentally determine the running time complexity of an algorithm that has more than one parameter?
It's very difficult in general.
The usual way to experimentally gauge the running time in the single-variable case is to insert a counter that increments whenever your data structure performs a fundamental (putatively O(1)) operation, take data for many different input sizes, and plot it on a log-log plot, that is, log T vs. log n. If the running time is of the form n^k, you should see a straight line of slope k, or something approaching it. If the running time is like T(n) = n^{k log n}, you should see a parabola. And if T is exponential in n, you should still see exponential growth.
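For the fitting step, something like this sketch works (the measurements below are made up purely to illustrate the slope estimate; with real data you would use your recorded operation counts):

import numpy as np

sizes = np.array([100, 200, 400, 800, 1600])
counts = np.array([1.1e4, 4.3e4, 1.8e5, 7.0e5, 2.9e6])   # illustrative made-up data

# Fit a straight line to (log n, log T); the slope estimates the exponent k.
slope, intercept = np.polyfit(np.log(sizes), np.log(counts), 1)
print(f"estimated exponent k ~= {slope:.2f}")             # close to 2 => roughly O(n^2)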
You can only hope to get information about the highest order term when you do this -- the low order terms get filtered out, in the sense of having less and less impact as n gets larger.
In the two variable case, you could try to do a similar approach -- essentially, take 3 dimensional data, do a log-log-log plot, and try to fit a plane to that.
However, this will only work well if there is a single leading term that dominates in most regimes.
Suppose my actual function is T(n, m) = n^4 + n^3 * m^3 + m^4.
When m = O(1), then T(n, m) = O(n^4).
When n = O(1), then T(n, m) = O(m^4).
When n = m, then T(n, m) = O(n^6).
In each of these regimes ("slices" through the plane of possible (n, m) values), a different one of the terms is dominant.
So there's no way to determine the function just from taking some points with fixed m, and some points with fixed n. If you did that, you wouldn't get the right answer for n = m -- you wouldn't be able to discover "middle" leading terms like that.
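A tiny numeric illustration of those three regimes:

def T(n, m):
    return n**4 + n**3 * m**3 + m**4

print(T(1000, 2))        # m fixed: the n^4 term dominates
print(T(2, 1000))        # n fixed: the m^4 term dominates
print(T(1000, 1000))     # n = m: the middle n^3 * m^3 = n^6 term dominates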
I would recommend that the best way to predict asymptotic growth when you have lots of variables / complicated data structures, is with a pencil and piece of paper, and do traditional algorithmic analysis. Or possibly, a hybrid approach. Try to break the question of efficiency into different parts -- if you can split the question up into a sum or product of a few different functions, maybe some of them you can determine in the abstract, and some you can estimate experimentally.
Luckily, two input parameters are still easy to visualize in a 3D scatter plot (the 3rd dimension is the measured running time), and you can check whether it looks like a plane (in log-log-log scale) or is curved. Naturally, random variation in the measurements plays a role here as well.
In Matlab I typically calculate a least-squares fit to a two-variable function like this (it just concatenates different powers and combinations of x and y horizontally; .* is an element-wise product):
% parameter_x, parameter_y and time are column vectors of equal length
x = log(parameter_x);
y = log(parameter_y);
% Find a least-squares fit of a quadratic surface to the log-transformed data
p = [x.^2, x.*y, y.^2, x, y, ones(length(x),1)] \ log(time)
Then this can be used to estimate running times for larger problem instances, ideally those would be confirmed experimentally to know that the fitted model works.
This approach also works for higher dimensions, but it gets tedious to generate; maybe there is a more general way to achieve this, and this is just a workaround for my lack of knowledge.
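For what it's worth, a rough NumPy equivalent of the fit above (assuming parameter_x, parameter_y and time are equal-length 1-D arrays):

import numpy as np

def fit_loglog_surface(parameter_x, parameter_y, time):
    # Same design matrix as the Matlab version; ordinary least squares via lstsq.
    x = np.log(parameter_x)
    y = np.log(parameter_y)
    A = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    p, *_ = np.linalg.lstsq(A, np.log(time), rcond=None)
    return p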
I was going to write my own explanation but it wouldn't be any better than this.
I am using simulated annealing to solve an NP-complete resource scheduling problem. For each candidate ordering of the tasks I compute several different costs (or energy values). Some examples are (though the specifics are probably irrelevant to the question):
global_finish_time: The total number of days that the schedule spans.
split_cost: The number of days by which each task is delayed due to interruptions by other tasks (this is meant to discourage interruption of a task once it has started).
deadline_cost: The sum of the squared number of days by which each missed deadline is overdue.
The traditional acceptance probability function looks like this (in Python):
import math

def acceptance_probability(old_cost, new_cost, temperature):
    if new_cost < old_cost:
        return 1.0
    else:
        return math.exp((old_cost - new_cost) / temperature)
So far I have combined my first two costs into one by simply adding them, so that I can feed the result into acceptance_probability. But what I would really want is for deadline_cost to always take precedence over global_finish_time, and for global_finish_time to take precedence over split_cost.
So my question to Stack Overflow is: how can I design an acceptance probability function that takes multiple energies into account but always considers the first energy to be more important than the second energy, and so on? In other words, I would like to pass in old_cost and new_cost as tuples of several costs and return a sensible value.
Edit: After a few days of experimenting with the proposed solutions I have concluded that the only way that works well enough for me is Mike Dunlavey's suggestion, even though this creates many other difficulties with cost components that have different units. I am practically forced to compare apples with oranges.
So, I put some effort into "normalizing" the values. First, deadline_cost is a sum of squares, so it grows quadratically while the other components grow linearly. To address this I use the square root to get a similar growth rate. Second, I developed a function that computes a linear combination of the costs, but auto-adjusts the coefficients according to the highest cost component seen so far.
For example, if the tuple of highest costs is (A, B, C) and the input cost vector is (x, y, z), the linear combination is BCx + Cy + z. That way, no matter how high z gets it will never be more important than an x value of 1.
This creates "jaggies" in the cost function as new maximum costs are discovered. For example, if C goes up then BCx and Cy will both be higher for a given (x, y, z) input and so will differences between costs. A higher cost difference means that the acceptance probability will drop, as if the temperature was suddenly lowered an extra step. In practice though this is not a problem because the maximum costs are updated only a few times in the beginning and do not change later. I believe this could even be theoretically proven to converge to a correct result since we know that the cost will converge toward a lower value.
One thing that still has me somewhat confused is what happens when the maximum costs are 1.0 or lower, say 0.5. With a maximum vector of (0.5, 0.5, 0.5) this would give the linear combination 0.5*0.5*x + 0.5*y + z, i.e. the order of precedence is suddenly reversed. I suppose the best way to deal with it is to use the maximum vector to scale all values to given ranges, so that the coefficients can always be the same (say, 100x + 10y + z). But I haven't tried that yet.
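A rough sketch of that scaled variant (rescaling each component by the maximum seen so far and then applying fixed precedence coefficients such as 100x + 10y + z; the names and numbers are only illustrative):

# Track the highest value seen for each cost component, rescale each component
# to roughly [0, 1], then apply fixed coefficients so that the precedence
# (deadline > global finish > split) never flips.
max_costs = [1e-9, 1e-9, 1e-9]      # small floor avoids division by zero
WEIGHTS = (100.0, 10.0, 1.0)

def combined_cost(costs):
    for i, c in enumerate(costs):
        max_costs[i] = max(max_costs[i], c)
    return sum(w * c / m for w, c, m in zip(WEIGHTS, costs, max_costs))

The resulting scalar can then be fed straight into the existing acceptance_probability function.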
mbeckish is right.
Could you make a linear combination of the different energies, and adjust the coefficients?
Possibly log-transforming them in and out?
I've done some MCMC using Metropolis-Hastings. In that case I define the (non-normalized) log-likelihood of a particular state (given its priors), and I find that to be a good way to clarify my thinking about what I want.
I would take a hint from multi-objective evolutionary algorithms (MOEAs) and have it transition only if all of the objectives simultaneously pass the acceptance_probability test you gave. This will have the effect of exploring the Pareto front much like the standard simulated annealing explores plateaus of same-energy solutions.
However, this does give up on the idea of having the first one take priority.
You will probably have to tweak your parameters, such as giving it a higher initial temperature.
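A sketch of that rule, reusing the acceptance_probability function from the question (one random draw per objective; the move is taken only if every objective accepts it):

import math
import random

def acceptance_probability(old_cost, new_cost, temperature):
    if new_cost < old_cost:
        return 1.0
    return math.exp((old_cost - new_cost) / temperature)

def accept_all_objectives(old_costs, new_costs, temperature):
    # Transition only if every objective passes its own acceptance test.
    return all(random.random() < acceptance_probability(o, n, temperature)
               for o, n in zip(old_costs, new_costs))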
I would consider something along the lines of:
def lexicographic_acceptance(old, new, temperature):
    # old and new are (deadline_cost, global_finish_time, split_cost) tuples
    if new[0] > old[0]:
        return acceptance_probability(old[0], new[0], temperature)
    elif new[1] > old[1]:
        return acceptance_probability(old[1], new[1], temperature)
    elif new[2] > old[2]:
        return acceptance_probability(old[2], new[2], temperature)
    else:
        return 1.0
Of course each of the three places you calculate the probability could use a different function.
It depends on what you mean by "takes precedence".
For example, what if the deadline_cost goes down by 0.001, but the global_finish_time cost goes up by 10000? Do you return 1.0, because the deadline_cost decreased, and that takes precedence over anything else?
This seems like it is a judgment call that only you can make, unless you can provide enough background information on the project so that others can suggest their own informed judgment call.