I am using the stochastic optimization algorithm CMA-ES. Although it finds the global minimum in the first cycles (I know this because it is a made-up benchmark test), after some cycles the algorithm converges to another minimum (a local one, since it has a larger cost function value).
Does anyone have experience in the matter?
Do I have to care that it converges to a local minimum, given that it has already found the global one? Is it wrong to simply use the global minimum it found and not care about where the algorithm eventually converges?
My reading of the results is that this happens because of the normal sampling distribution: only a few sampled solutions fall near the global minimum, while a large percentage fall near the local one. (I have tried many different population sizes, but the result is the same.)
Thank you in advance for your help!
It is common to keep a global "best" solution when running evolutionary algorithms, especially the kinds that are allowed to move from a better result to a worse one.
If you are running the algorithm with an approximate fitness function and getting a good-enough result is okay, you can go with what it converges to. Depending on the problem you are solving, it might be very good or very bad to overfit a solution.
If your fitness function is not an approximation and is the correct metric to optimize, just keep the best performer and use it when you finish running the algorithm.
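For example, here is a minimal sketch of that "best so far" bookkeeping (not CMA-ES itself; the benchmark fitness and the crude sampling/update steps are placeholders of my own):

import math
import random

def fitness(x):
    # Illustrative multimodal benchmark (Rastrigin-style); to be minimized.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def sample_population(mean, sigma, size):
    # Stand-in for the optimizer's own sampling step (CMA-ES adapts a full covariance).
    return [[random.gauss(m, sigma) for m in mean] for _ in range(size)]

mean = [random.uniform(-5, 5) for _ in range(3)]
best_x, best_f = None, float("inf")

for generation in range(200):
    population = sample_population(mean, sigma=0.5, size=20)
    for x in population:
        f = fitness(x)
        if f < best_f:                    # record the best solution ever seen,
            best_x, best_f = x, f         # regardless of where the search drifts later
    mean = min(population, key=fitness)   # crude mean update; a real optimizer does far more

print(best_f, best_x)

Whatever the population converges to, best_x is what you report at the end.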
Related
I want to optimize KNN. There is a lot of material about SVM, RF and XGBoost, but very little for KNN.
As far as I know, the number of neighbors is one parameter to tune.
But what other parameters should I test? Is there any good article?
Thank you
KNN is such a simple method that there is pretty much nothing to tune besides K. The whole method is literally:
for a given test sample x:
- find K most similar samples from training set, according to similarity measure s
- return the majority vote of the class from the above set
Consequently, the only thing that defines KNN besides K is the similarity measure s, and that's all; there is literally nothing else in this algorithm (it is three lines of pseudocode). On the other hand, finding "the best similarity measure" is as hard a problem as learning a classifier itself, so there is no general method for doing it; people usually end up either using something simple (Euclidean distance) or using their domain knowledge to adapt s to the problem at hand.
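Here is that pseudocode as a minimal Python sketch (the Euclidean distance and the toy data are just illustrative choices):

from collections import Counter
import math

def distance(a, b):
    # Euclidean distance; swap this for any measure suited to your data.
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def knn_predict(x, train_X, train_y, k=3):
    # Find the K most similar training samples and return the majority class.
    neighbors = sorted(zip(train_X, train_y), key=lambda pair: distance(x, pair[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

train_X = [(0, 0), (0, 1), (5, 5), (6, 5)]
train_y = ["a", "a", "b", "b"]
print(knn_predict((1, 0), train_X, train_y, k=3))   # -> "a"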
lejlot pretty much summed it all up. K-NN is a simple, instance-based, nonparametric algorithm; that is what makes it so elegant, and it works really well for certain specific problems. Most K-NN research is not about K-NN itself but about the computation and hardware that go into it. If you'd like some reading on K-NN and machine learning algorithms, try Christopher Bishop, Pattern Recognition and Machine Learning. Warning: it is heavy on the mathematics, but machine learning and real computer science are all math.
If by "optimizing" you also mean reducing prediction time (you should), there are other things you can implement to make the algorithm more efficient, although these are not parameter tuning. The major drawback of KNN is that as the number of training examples increases, the prediction time increases as well, so performance drops.
To optimize, you can look into KNN with k-d trees, KNN with inverted lists (indexes), and KNN with locality-sensitive hashing (LSH).
These reduce the search space at prediction time and thus speed up the algorithm.
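As a hedged illustration (assuming scikit-learn is available and using a toy dataset), switching the neighbour-search backend is a one-argument change:

from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

# Brute-force search scans every training sample at prediction time.
brute = KNeighborsClassifier(n_neighbors=5, algorithm="brute").fit(X, y)

# A k-d tree (or ball tree) prunes the search space instead.
tree = KNeighborsClassifier(n_neighbors=5, algorithm="kd_tree").fit(X, y)

print(brute.predict(X[:3]), tree.predict(X[:3]))  # same predictions, different search cost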
Initially I modeled my objective function as follows:
argmin var(f(x),g(x))+var(c(x),d(x))
where f, g, c, d are linear functions.
In order to be able to use linear solvers, I remodeled the problem as follows:
argmin abs(f(x),g(x))+abs(c(x),d(x))
Is it correct to change variance to absolute value in this context? I'm fairly sure they imply the same meaning, namely having the least difference between the two functions.
You haven't given enough context to answer the question. Even though your question doesn't seem to be about regression, in many ways it is similar to the question of choosing between least squares and least absolute deviations approaches to regression. If that term in your objective function is in any sense an error term then the most appropriate way to model the error depends on the nature of the error distribution. Least squares is better if there is normally distributed noise. Least absolute deviations is better in the nonparametric setting and is less sensitive to outliers. If the problem has nothing to do with probability at all then other criteria need to be brought in to decide between the two options.
Having said all this, the two ways of measuring distance are broadly similar. One will be fairly small if and only if the other is -- though they won't be equally small. If they are similar enough for your purposes then the fact that absolute values can be linearized could be a good motivation to use it. On the other hand -- if the variance-based one is really a better expression of what you are interested in then the fact that you can't use LP isn't sufficient justification to adopt absolute values. After all -- quadratic programming is not all that much harder than LP, at least below a certain scale.
To sum up: they don't imply the same meaning, but they do imply similar meanings, and whether or not they are similar enough depends on your purposes.
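For completeness, the standard linearization trick mentioned above (reading abs(f(x), g(x)) as |f(x) - g(x)|, and using auxiliary variables t1, t2 of my own naming) looks like this:

minimize    t1 + t2
subject to  t1 >=   f(x) - g(x)
            t1 >= -(f(x) - g(x))
            t2 >=   c(x) - d(x)
            t2 >= -(c(x) - d(x))

At the optimum t1 = |f(x) - g(x)| and t2 = |c(x) - d(x)|, and since f, g, c, d are linear, every constraint stays linear.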
I have a directed graph which is strongly connected, and every node has some price (positive or negative). I would like to find the best (highest-score) path from node A to node B. My current solution is a kind of brute force, so it takes ages to find that path. Is there an algorithm for this, or any idea how I can do it?
Have you tried the A* algorithm?
It's a fairly popular pathfinding algorithm.
The algorithm itself is not too difficult to implement, and there are plenty of implementations available online.
Dijkstra's algorithm is a special case of A* (the one in which the heuristic function h(x) = 0).
There are other algorithms that can outperform it, but they usually require graph pre-processing. If the problem is not too complex and you're looking for a quick solution, give it a try.
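For concreteness, a minimal Dijkstra sketch over a dict-of-dicts graph (toy data of my own; note it assumes non-negative edge costs, which matters here given the negative node prices in the question):

import heapq

def dijkstra(graph, start, goal):
    # graph: {node: {neighbor: non-negative edge cost}}; returns the cheapest path cost.
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return float("inf")

graph = {"A": {"B": 2, "C": 5}, "B": {"C": 1}, "C": {"A": 3}}
print(dijkstra(graph, "A", "C"))  # -> 3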
EDIT:
For graphs containing negative edges, there's the Bellman-Ford algorithm. Detecting negative cycles comes at a cost in performance, though (worse than A*). But it may still be better than what you're currently using.
EDIT 2:
User templatetypedef is right when he says the Bellman-Ford algorithm may not work here.
Bellman-Ford handles graphs with negative-weight edges, but the algorithm stops upon finding a negative cycle, and I believe that is useful behavior: optimizing the shortest path in a graph that contains a negative-weight cycle is like going down a Penrose staircase.
What should happen when there is the possibility of a path with "minus infinity cost" depends on the problem.
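A minimal Bellman-Ford sketch with that negative-cycle check (the edge list is a toy example; in the original problem the node prices would first have to be folded into edge weights, e.g. making the weight of u -> v the negated price of v if you are maximizing):

def bellman_ford(nodes, edges, source):
    # edges: list of (u, v, weight) tuples; returns distances, or None on a negative cycle.
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None  # negative cycle reachable: the "minus infinity cost" case
    return dist

nodes = ["A", "B", "C"]
edges = [("A", "B", 4), ("B", "C", -2), ("A", "C", 5)]
print(bellman_ford(nodes, edges, "A"))  # -> {'A': 0, 'B': 4, 'C': 2}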
When looking at genetic programming papers, it seems to me that the number of test cases is always fixed. However, most mutations should (?) be very deleterious at every stage of the execution, i.e. make it obvious after a single test case that the mutated program performs much worse than the previous one. What happens if, at first, you try only very few (one?) test cases and check whether the mutation makes any sense?
Could it be that different test cases test for different features of the solutions, and that one mutation will probably improve only one of those features?
I don't know if I agree with your assumption that most mutations should be very deleterious, but you shouldn't care even if they were. Your goal is not to optimize the individuals, but to optimize the population. So trying to determine if a "mutation makes any sense" is exactly what genetic programming is supposed to do: i.e. eliminate mutations that "don't make sense." Your only "guidance" for the algorithm should come through the fitness function.
I'm also not sure what you mean by "test case", but it sounds to me like you are looking for something related to multi-objective optimization (MOO). That means you try to optimize a solution with respect to different aspects of the problem; you therefore do not need to mutate/evaluate a population for a specific test case, but rather to find a multi-objective fitness function.
"The main idea in MOO is the notion of Pareto dominance" (http://www.gp-field-guide.org.uk)
I think this is a good idea in theory but tricky to put into practice. I can't remember seeing this approach actually used before, but I wouldn't be surprised if it has been.
I presume your motivation for doing this is to improve the efficiency of applying the fitness function: you can stop evaluation early and discard the individual (or set its fitness to 0) if the tests look like they're going to be terrible.
One challenge is deciding how many test cases to apply; discarding an individual after one random test case is surely not a good idea, as that test case could be a real outlier. Terminating evaluation after 50% of the test cases if the individual's fitness is below 10% of the best would probably not discard any very good individuals; on the other hand, since many individuals will be of middle-of-the-road fitness, it might only save a small proportion of the computation. You can adjust the thresholds to save more effort, but the more effort you try to save, the greater the chance of genuinely good individuals being discarded by accident.
Factor in the extra time taken to code this, possible bugs, etc., and I shouldn't think the benefit would be worthwhile (unless this is a research project, in which case it might be interesting to try it and see).
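A sketch of that early-termination idea (the 50%/10% thresholds and the run_program helper are illustrative assumptions, not a standard API):

def evaluate(individual, test_cases, best_so_far, run_program,
             check_after=0.5, cutoff=0.1):
    # Count passed test cases, but bail out at the halfway checkpoint if the
    # individual is clearly far below the best fitness seen so far.
    passed = 0
    checkpoint = int(len(test_cases) * check_after)
    for i, (inputs, expected) in enumerate(test_cases, start=1):
        if run_program(individual, inputs) == expected:
            passed += 1
        if i == checkpoint and best_so_far > 0 and passed < cutoff * best_so_far:
            return 0  # discard early: not worth running the remaining test cases
    return passed

# Hypothetical usage, assuming run_program(individual, inputs) returns the program's output:
# fitness = evaluate(ind, test_cases, best_so_far=best_fitness, run_program=run_program)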
I think it's a good idea. Fitness evaluation is the most computationally intensive process in GP, so estimating the fitness of individuals in order to reduce the expense of actually calculating it could be an important optimization.
Your idea is a form of fitness approximation, sometimes called lazy evaluation (try searching for these terms; there are some research papers).
There are also distinct but somewhat overlapping schemes, for instance:
Dynamic Subset Selection (Chris Gathercole, Peter Ross) is a method to select a small subset of the training data set on which to actually carry out the GP algorithm;
Segment-Based Genetic Programming (Nailah Al-Madi, Simone Ludwig) is a technique that reduces the execution time of GP by partitioning the dataset into segments and using the segments in the fitness evaluation process.
PS: in Brood Recombination Crossover (Tackett), child programs are also usually evaluated on a restricted number of test cases to speed up the crossover.
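In the spirit of those subset-based schemes, a simplified illustration (not Gathercole and Ross's exact selection rule; run_program is again a hypothetical helper) would score each individual on a random subset of the test cases:

import random

def subset_fitness(individual, test_cases, run_program, subset_size=20, rng=random):
    # Evaluate on a random subset of the test cases instead of all of them;
    # resampling the subset each generation keeps any single subset from dominating selection.
    subset = rng.sample(test_cases, min(subset_size, len(test_cases)))
    return sum(run_program(individual, inputs) == expected for inputs, expected in subset)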
I have run across several posts and articles that suggest using things like simulated annealing to avoid the local minima/maxima problem.
I don't understand why this would be necessary if you started out with a sufficiently large random population.
Is it just another check to ensure that the initial population was, in fact, sufficiently large and random? Or are those techniques just an alternative to producing a "good" initial population?
Simulated annealing is a probabilistic optimization technique -- it's not supposed to give you more precise answers, it's supposed to give you approximations faster.
Simulated annealing is a probabilistic technique in which the chance of getting trapped in a local minimum/maximum depends on the temperature schedule, and a good schedule differs from problem to problem. Evolutionary algorithms are generally more robust and less likely to get trapped in a local optimum: their mutation operator introduces a random walk through the search space, which is why an EA has a higher probability of reaching the global optimum.
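For reference, the temperature-dependent acceptance rule at the core of simulated annealing fits in a few lines (a sketch; the geometric cooling schedule and the toy cost/neighbour functions are my own illustrative choices):

import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=10000):
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        candidate = neighbor(x)
        delta = cost(candidate) - cost(x)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature decreases.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if cost(x) < cost(best):
            best = x
        t *= cooling
    return best

# Toy usage: minimize a 1-D function with many local minima.
cost = lambda x: x * x + 10 * math.sin(3 * x)
neighbor = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(cost, neighbor, x0=5.0))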
First of all, simulated annealing is a last-resort method; there are more efficient and more effective methods of discovering where the local minima lie.
A better check would be to use a statistical method to uncover information about your data set, such as its variance or standard deviation.