Here is what I want to do:
keep a reference curve unchanged (only shift and stretch a query curve)
constrain how many elements are duplicated
keep both start and end open
I tried:
dtw(ref_curve,query_curve,step_pattern=asymmetric,open_end=True,open_begin=True)
but I cannot constrain how the query curve is stretched
dtw(ref_curve,query_curve,step_pattern=mvmStepPattern(10))
it didn’t do anything to the curves!
dtw(ref_curve,query_curve,step_pattern=rabinerJuangStepPattern(4, "c"),open_end=True, open_begin=True)
I liked this one the most but in some cases it shifts the query curve more than needed...
I read the paper (https://www.jstatsoft.org/article/view/v031i07) and the API, but I still don't quite understand how to achieve what I want. Are there any other options to constrain the number of elements that are duplicated? I would appreciate your help!
To clarify: we are talking about the functions provided by the DTW suite packages at dynamictimewarping.github.io. The question is in fact language-independent (and may be better suited to the Cross Validated Stack Exchange).
The pattern rabinerJuangStepPattern(4, "c") you have found does in fact satisfy your requirements:
it's asymmetric, and each step advances the reference by exactly one step
it's slope-limited between 1/2 and 2
it's type "c", so can be normalized in a way that allows open-begin and open-end
If you haven't already, check out dtw.rabinerJuangStepPattern(4, "c").plot().
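For instance, a minimal sketch with the dtw-python package (the toy curves below are mine; substitute your own data):

import numpy as np
from dtw import dtw, rabinerJuangStepPattern

# Toy data: the query is a shifted, stretched portion of the reference.
ref_curve = np.sin(np.linspace(0, 4 * np.pi, 200))
query_curve = np.sin(np.linspace(1, 3 * np.pi, 120))

# RJ-4c is slope-limited and normalizable, so both ends may be left open.
alignment = dtw(ref_curve, query_curve,
                step_pattern=rabinerJuangStepPattern(4, "c"),
                open_end=True, open_begin=True,
                keep_internals=True)

alignment.plot(type="twoway")           # inspect the resulting match
rabinerJuangStepPattern(4, "c").plot()  # inspect the recursion itself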
It goes without saying that in all cases what you are getting is the optimal alignment, i.e. the one with the least accumulated distance among all allowed paths.
As an alternative, you may consider the simpler asymmetric recursion -- as your first attempt above -- constrained with a global warping window: see dtw.window and the window_type argument. This provides constraints of a different shape (and flexible size), which might suit your specific case.
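A minimal sketch of that windowed variant, reusing the toy curves from the sketch above (the band half-width of 20 samples is an arbitrary choice of mine):

from dtw import dtw

# asymmetric recursion constrained to a Sakoe-Chiba band of half-width 20
alignment = dtw(ref_curve, query_curve,
                step_pattern="asymmetric",
                window_type="sakoechiba",
                window_args={"window_size": 20},
                open_end=True, open_begin=True)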
PS: edited to add that the asymmetricP2 recursion is also similar to RJ-4c, but with a more constrained slope.
My teacher made this argument and asked us what could be wrong with it.
Consider sorting an array A of n distinct numbers. Since there are n! permutations of A,
we cannot check for each permutation whether it is sorted in a total time which
is polynomial in n. Therefore, sorting A cannot be in P.
My friend thought that the conclusion should just be: therefore sorting A cannot be in NP.
Is that it, or are we thinking about it too simply?
The problem with this argument is that it fails to adequately specify the exact problem.
Sorting can be linear-time (O(n)) in the number of elements to sort, if you're sorting a large list of integers drawn from a small pool (counting sort, radix sort); see the sketch after this list.
Sorting can be linearithmic-time (O(n log n)) in the number of elements to sort, if you're sorting a list of arbitrary things which are all totally ordered according to some ordering relation (e.g., less-than-or-equal on the integers).
Sorting based on a partial order (e.g. topological sorting) must be analyzed in yet another way.
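To make the first case concrete, here is a minimal counting-sort sketch (the pool bound k is part of the method's assumptions):

def counting_sort(xs, k):
    # xs holds integers in the range 0..k-1; total time is O(n + k)
    counts = [0] * k
    for x in xs:
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)
    return out

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], k=10))  # [1, 1, 2, 3, 4, 5, 6, 9]

No comparisons between elements take place, which is how this sidesteps the O(n log n) lower bound that applies only to comparison sorts.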
We can imagine a problem like sorting whereby the sortedness of a list cannot be determined by comparing adjacent entries only. In the extreme case, sortedness (according to what we are considering to be sorting) might only be verifiable by checking the entire list. If our kind of sorting is designed so as to guarantee there is exactly one sorted permutation of any given list, the time complexity is factorial-time (O(n!)) and the problem is not in P.
That's the real problem with this argument. If your professor is assuming that "sorting" refers to sorting integers not in any particular small range, the problem with the argument then is that we do not need to consider all permutations in order to construct the sorted one. If I have a bag with 100 marbles and I ask you to remove three marbles, the time complexity is constant-time; it doesn't matter that there are n(n-1)(n-2)/6 = 161700, or O(n^3), ways in which you can accomplish this task.
The argument is a non sequitur: the conclusion does not logically follow from the previous steps. Why doesn't it follow? Giving a satisfying answer to that question requires knowing why the person who wrote the argument thinks it is correct, and then addressing their misconception. In this case, the person who wrote the argument is your teacher, who doesn't think the argument is correct, so there is no misconception to address and hence no completely satisfying answer to the question.
That said, my explanation would be that the argument is wrong because it proposes a specific algorithm for sorting - namely, iterating through all n! permutations and choosing the one which is sorted - and then assumes that the complexity of the problem is the same as the complexity of that algorithm. This is a mistake because the complexity of a problem is defined as the lowest complexity out of all algorithms which solve it. The argument only considered one algorithm, and didn't show anything about other algorithms which solve the problem, so it cannot reach a conclusion about the complexity of the problem itself.
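To see the gap concretely, here is the algorithm the argument implicitly proposes, next to an obviously better one (a toy sketch):

from itertools import permutations

def permutation_sort(xs):
    # the algorithm the argument considers: try all n! orderings
    for p in permutations(xs):
        if all(p[i] <= p[i + 1] for i in range(len(p) - 1)):
            return list(p)

xs = [3, 1, 2]
print(permutation_sort(xs))  # works, but takes O(n * n!) time
print(sorted(xs))            # same problem solved in O(n log n) time

The existence of the slow algorithm above says nothing about the complexity of the problem, which is determined by the best algorithm that solves it.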
I'm trying to understand continuous optimization algorithms applied to some test functions.
Here are the results obtained by some of these algorithms on some of the test functions:
[table omitted: numbers of function evaluations per algorithm, one row per test function, with normalized values in bold and unnormalized counts in brackets]
I didn't understand the difference between the two underlined phrases. Would you please help me with this?
P.S. Sometimes they use the term "median number" instead of "mean number". What's the difference between the two?
This question lacks some context. It would have been better to link to the text too, to get a grasp of what is going on.
But I read it as follows (and I think that's how someone with some experience in optimization algorithms would read it; you have to check it, though, against your knowledge of the context):
The bold 1.0s are the normalized numbers of function evaluations on the different functions to optimize (each row is a different function).
The values in brackets are the unnormalized numbers of evaluations.
So when ACO used 820 evaluations (unnormalized), normalized to 1.0, CACO used 8.3 × 820 ≈ 6800 evaluations.
The mean and the median are two different measures of central tendency. Check Wikipedia to understand the differences.
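A quick made-up example of why the two can differ a lot:

from statistics import mean, median

evaluations = [790, 805, 820, 840, 5000]   # one run went badly wrong
print(mean(evaluations))    # 1651.0 -- dragged upwards by the outlier
print(median(evaluations))  # 820    -- the middle value, robust to the outlier

Papers often report the median precisely because a single diverging run would otherwise dominate the mean.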
Question after a big edit:
I need to build a ranking using a genetic algorithm. I have data like this:
P(a>b)=0.9
P(b>c)=0.7
P(c>d)=0.8
P(b>d)=0.3
Now let's interpret a, b, c, d as names of football teams, and P(x>y) as the probability that x beats y. We want to build a ranking of the teams; some observations are lacking: P(a>d) and P(a>c) are missing, because a never played against d or c.
The goal is to find the ordering of the team names which best describes the current situation in that four-team league.
If we have only 4 teams then the solution is straightforward: we compute the probabilities of all 4! = 24 orderings of the four teams, ignoring the missing values:
P(abcd)=P(a>b)P(b>c)P(c>d)P(b>d)
P(abdc)=P(a>b)P(b>c)(1-P(c>d))P(b>d)
...
P(dcba)=(1-P(a>b))(1-P(b>c))(1-P(c>d))(1-P(b>d))
and we choose the ranking with the highest probability. I don't want to use any other fitness function. A sketch of this brute-force step follows below.
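For small n this enumeration can be written directly (team names and probabilities taken from the example above):

from itertools import permutations

# observed pairwise probabilities P(x beats y); missing pairs are ignored
p_beats = {("a", "b"): 0.9, ("b", "c"): 0.7, ("c", "d"): 0.8, ("b", "d"): 0.3}

def ordering_probability(order):
    # multiply P(x>y) if x is ranked above y, else 1 - P(x>y)
    rank = {team: i for i, team in enumerate(order)}
    prob = 1.0
    for (x, y), p in p_beats.items():
        prob *= p if rank[x] < rank[y] else 1.0 - p
    return prob

best = max(permutations("abcd"), key=ordering_probability)
print(best, ordering_probability(best))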
My question:
As the number of permutations of n elements is n!, computing the probabilities of all orderings is infeasible for large n (my n is about 40). I want to use a genetic algorithm for this problem.
The mutation operator is a simple swap of the positions of two (or more) elements of the ranking.
But how do I make a crossover of two orderings?
Could P(abcd) be interpreted as the cost function of the path 'abcd' in an asymmetric TSP, where the cost of travelling from x to y differs from the cost of travelling from y to x, with P(x>y) = 1 - P(y>x)? There are many crossover operators for the TSP, but I think I have to design my own crossover operator, because my problem is slightly different from the TSP. Do you have any ideas for a solution, or a frame for conceptual analysis?
The easiest way, on both the conceptual and the implementation level, is to use a crossover operator which exchanges suborderings between two solutions:
CrossOver(ABcD, AcDB) = AcBD
For a random subset of the elements (in this case a, b, d, written in capital letters) we copy the first parent's subordering of those elements (the sequence a, b, d) onto the positions those elements occupy in the second ordering; a sketch follows below.
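A minimal sketch of this operator (the function name and the explicit subset parameter are mine, added so the example is reproducible):

import random

def subordering_crossover(p1, p2, subset=None, rng=random):
    # impose parent 1's relative order of a random subset onto parent 2
    if subset is None:
        subset = {x for x in p1 if rng.random() < 0.5}
    sub_order = (x for x in p1 if x in subset)  # the subset in parent-1 order
    return [next(sub_order) if x in subset else x for x in p2]

print(subordering_crossover(list("abcd"), list("acdb"), subset={"a", "b", "d"}))
# -> ['a', 'c', 'b', 'd'], i.e. AcBD as in the example above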
Edit: an asymmetric TSP can be turned into a symmetric TSP, but with forbidden suborderings, which makes the GA approach unsuitable.
It's definitely an interesting problem, and it seems most of the answers and comments have focused on the semantic aspects of the problem (i.e., the meaning of the fitness function, etc.).
I'll chip in some information about the syntactic elements -- how you do crossover and/or mutation in ways that make sense. Obviously, as you noted with the parallel to the TSP, you have a permutation problem. So if you want to use a GA, the natural representation of candidate solutions is simply an ordered list of your points, taking care to avoid repetition -- that is, a permutation.
TSP is one such permutation problem, and there are a number of crossover operators (e.g., Edge Assembly Crossover) that you can take from TSP algorithms and use directly. However, I think you'll have problems with that approach. Basically, the problem is this: in TSP, the important quality of solutions is adjacency. That is, abcd has the same fitness as cdab, because it's the same tour, just starting and ending at a different city. In your example, absolute position is much more important than this notion of relative position. abcd means in a sense that a is the best point -- it's important that it came first in the list.
The key thing you have to do to get an effective crossover operator is to account for what the properties are in the parents that make them good, and try to extract and combine exactly those properties. Nick Radcliffe called this "respectful recombination" (note that paper is quite old, and the theory is now understood a bit differently, but the principle is sound). Taking a TSP-designed operator and applying it to your problem will end up producing offspring that try to conserve irrelevant information from the parents.
You ideally need an operator that attempts to preserve absolute position in the string. The best one I know of offhand is known as Cycle Crossover (CX). I'm missing a good reference off the top of my head, but I can point you to some code where I implemented it as part of my graduate work. The basic idea of CX is fairly complicated to describe, and much easier to see in action. Take the following two parents:
abcdefgh
cfhgedba
Pick a starting point in parent 1 at random. For simplicity, I'll just start at position 0 with the "a".
Now drop straight down into parent 2, and observe the value there (in this case, "c").
Now search for "c" in parent 1. We find it at position 2.
Now drop straight down again, and observe the "h" in parent 2, position 2.
Again, search for this "h" in parent 1, found at position 7.
Drop straight down and observe the "a" in parent 2.
At this point note that if we search for "a" in parent one, we reach a position where we've already been. Continuing past that will just cycle. In fact, we call the sequence of positions we visited (0, 2, 7) a "cycle". Note that we can simply exchange the values at these positions between the parents as a group and both parents will retain the permutation property, because we have the same three values at each position in the cycle for both parents, just in different orders.
Make the swap of the positions included in the cycle.
Note that this is only one cycle. You then repeat this process, starting from a new (unvisited) position each time, until all positions have been included in a cycle. After the one iteration described in the above steps, you get the following strings (where an "X" marks a position in the cycle where the values were swapped between the parents):
cbhdefga
afcgedbh
X X    X
Just keep finding and swapping cycles until you're done.
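A compact sketch of CX along those lines (my own code, not from any library; it alternates which cycles are swapped, since swapping every cycle would simply exchange the two parents):

def cycle_crossover(p1, p2):
    # trace cycles of positions, then swap the values of every other cycle,
    # so each child keeps each element's absolute position from one parent
    # or the other
    c1, c2 = list(p1), list(p2)
    visited = [False] * len(p1)
    swap = True
    for start in range(len(p1)):
        if visited[start]:
            continue
        cycle, pos = [], start
        while not visited[pos]:
            visited[pos] = True
            cycle.append(pos)
            pos = p1.index(p2[pos])  # "drop down" and search back in parent 1
        if swap:
            for i in cycle:
                c1[i], c2[i] = c2[i], c1[i]
        swap = not swap
    return "".join(c1), "".join(c2)

print(cycle_crossover("abcdefgh", "cfhgedba"))
# -> ('cbhdefga', 'afcgedbh'), the strings shown above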
The code I linked from my GitHub account is going to be tightly bound to my own metaheuristics framework, but I think it's a reasonably easy task to pull the basic algorithm out of the code and adapt it to your own system.
Note that you can potentially gain quite a lot from doing something more customized to your particular domain. I think something like CX will make a better black box algorithm than something based on a TSP operator, but black boxes are usually a last resort. Other people's suggestions might lead you to a better overall algorithm.
I've worked on a somewhat similar ranking problem and followed a technique similar to what I describe below. Does this work for you?
Assume the unknown value of an object diverges from your estimate via some distribution, say, the normal distribution. Interpret your ranking statements such as "a > b, 0.9" as the statement "the value of a lies at the 90th percentile of the distribution centered on b".
For every statement:
def realArrival = calculate a's location on a distribution centered on b
def arrivalGap = | realArrival - expectedArrival |
def fitness = Σ arrivalGap
Fitness function is MIN(fitness)
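A minimal Python rendering of that pseudocode, assuming a unit-normal divergence and one latent score per team (all of the names here are mine):

from statistics import NormalDist

def fitness(scores, statements):
    # statements: (a, b, p) means "a should sit at the p-quantile
    # of a unit normal centered on b's score"
    gap = 0.0
    for a, b, p in statements:
        real_quantile = NormalDist().cdf(scores[a] - scores[b])
        gap += abs(real_quantile - p)  # arrivalGap in the pseudocode
    return gap  # the GA minimizes this

statements = [("a", "b", 0.9), ("b", "c", 0.7), ("c", "d", 0.8), ("b", "d", 0.3)]
print(fitness({"a": 2.0, "b": 1.0, "c": 0.5, "d": 1.5}, statements))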
FWIW, my problem was actually a bin-packing problem, where the equivalent of your "rank" statements were user-provided rankings (1, 2, 3, etc.). So not quite TSP, but NP-Hard. OTOH, bin-packing has a pseudo-polynomial solution proportional to accepted error, which is what I eventually used. I'm not quite sure that would work with your probabilistic ranking statements.
What an interesting problem! If I understand it, what you're really asking is:
"Given a weighted, directed graph, with each edge-weight in the graph representing the probability that the arc is drawn in the correct direction, return the complete sequence of nodes with maximum probability of being a topological sort of the graph."
So if your graph has N edges, there are 2^N graphs of varying likelihood, with some orderings appearing in more than one graph.
I don't know if this will help (very brief Google searches did not enlighten me, but maybe you'll have more success with more perseverance), but my thoughts are that looking for "topological sort" in conjunction with any of "probabilistic", "random", "noise", or "error" (because the edge weights can be considered as a reliability factor) might be helpful.
I strongly question your assertion, in your example, that P(a>c) is not needed, though. You know your application space best, but it seems to me that specifying P(a>c) = 0.99 will give a different fitness for f(abc) than specifying P(a>c) = 0.01.
You might want to throw in "Bayesian" as well, since you might be able to start to infer values for (in your example) P(a>c) given your conditions and hypothetical solutions. The problem is that "topological sort" plus "Bayesian" is going to give you a whole bunch of hits related to Markov chains and Markov decision processes, which may or may not be helpful.
What is the best way to represent a series of item price ranges to reduce noise for the end user?
Typically, when an item is displayed on e-commerce sites, a histogram of its price ranges is shown. Are there standard algorithms that these sites use for this display?
Well, it seems to me that you would first and foremost need a way to aggregate this data. That having been said, if you have the data and need to create a histogram, it can be fairly simple in the programming language R (here is some documentation: http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/base/html/hist.html). There is also an R extension I've read about that allows you to post/run R code in wiki-like pages (http://mars.wiwi.hu-berlin.de/mediawiki/slides/index.php/R_extension_-_Mediawiki).
If you already have this data (in this case prices), I don't think you need an algorithm so much as a way to display it in some type of graph. I think R should be useful for that. I hope this helps!
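If you are in Python rather than R, the same binning idea is one call to numpy (the prices below are made up):

import numpy as np

prices = np.array([9.99, 12.50, 11.00, 10.25, 13.75, 12.00, 55.00, 11.50])
counts, edges = np.histogram(prices, bins="auto")  # same idea as R's hist()
for lo, hi, c in zip(edges, edges[1:], counts):
    print(f"{lo:7.2f} - {hi:7.2f}: {'#' * c}")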