I'm currently learning about the TSP and want to combine two simple heuristics into one algorithm. It works by using the nearest neighbour algorithm to create a tour and then improving it with a 2-opt swap over every combination of positions. I believe the number of steps for the 2-opt technique is n(n-1), i.e. O(n^2). However, I don't know how to calculate the complexity of the nearest neighbour algorithm. I think it will also come out to O(n^2), but I am not certain about the process to get there.
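For what it's worth, here is a minimal Python sketch of the two steps I have in mind (the function names and the distance-matrix representation are my own choices, not a reference implementation): the nearest neighbour construction scans all remaining cities once per tour position, which is where the O(n^2) would come from, and a single full 2-opt sweep tries roughly n(n-1)/2 pairs, which matches the n(n-1) estimate.

def nearest_neighbour_tour(dist, start=0):
    # Greedy construction: for each of the n tour positions we scan the
    # remaining unvisited cities, i.e. O(n) work repeated n times -> O(n^2).
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])  # O(n) scan
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def two_opt_pass(tour, dist):
    # One full sweep over every pair (i, j): roughly n(n-1)/2 combinations,
    # so a single pass is O(n^2).
    n = len(tour)
    improved = False
    for i in range(1, n - 1):
        for j in range(i + 1, n):
            a, b = tour[i - 1], tour[i]
            c, d = tour[j], tour[(j + 1) % n]
            if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                tour[i:j + 1] = reversed(tour[i:j + 1])  # reverse the segment
                improved = True
    return improved

Usage would be something like: build the tour with nearest_neighbour_tour(dist) and then repeat two_opt_pass(tour, dist) until it returns False; the number of passes needed is a separate question from the cost of one pass.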
The "Traveling Salesman Problem" is a problem where a person has to travel between "n" cities - but choose the itinerary such that:
Each city is visited only once
The total distance traveled is minimized
I have heard that if a modern computer were to solve this problem using "brute force" (i.e. an exact solution) and there are more than 15 cities, the time taken by the computer would exceed a hundred years!
I am interested in understanding how we can estimate the amount of time it will take a computer to solve the Traveling Salesman Problem (using "brute force") as the number of cities increases. For instance, see the following reference: https://www.sciencedirect.com/topics/earth-and-planetary-sciences/traveling-salesman-problem
My Question: Is there some formula we can use to estimate the amount of time it will take a computer to solve the Traveling Salesman Problem using "brute force"? For example:
N cities = N! paths
Each of these N! paths will require "N" calculations
Thus, N * N! calculations would be required for the computer to check all paths and be certain that the shortest path has been found. If we know the time each calculation takes, perhaps we could estimate the total run time as "time per calculation * N * N!".
But I am not sure if this factors in the time to "store and compare" calculations.
Can someone please explain this?
I have heard that if a modern computer were to solve this problem using "brute force" (i.e. an exact solution) and there are more than 15 cities, the time taken by the computer would exceed a hundred years!
This is not completely true. The naive brute-force algorithm runs with n! complexity, but a much better algorithm using dynamic programming runs in O(n^2 2^n). Just to give you an idea, with n=25, n! ≃ 1.6e25 while n^2 2^n ≃ 2.1e10. The former is far too huge to be practicable, while the second is manageable, although it can still take a noticeable time on a PC (one should keep in mind that both complexities hide a constant factor that plays an important role in computing a realistic execution time). I used an optimized dynamic programming solution based on the Held–Karp algorithm to compute the TSP of 20 cities on my machine in a relatively reasonable time (i.e. no more than a few minutes of computation).
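As a rough illustration (not the exact code I used), here is a minimal Python sketch of the Held–Karp dynamic programme with the O(n^2 2^n) behaviour mentioned above; the dense distance-matrix input and the function name are assumptions for the example.

from itertools import combinations

def held_karp(dist):
    # dp[(S, j)] = length of the shortest path that starts at city 0,
    # visits every city in the frozenset S, and ends at city j.
    # There are O(n * 2^n) states and each takes O(n) to compute.
    n = len(dist)
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                dp[(S, j)] = min(dp[(S - {j}, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = frozenset(range(1, n))
    # close the tour by returning to city 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

Note that the dp table also needs O(n 2^n) memory, which is usually what limits this approach on a PC before the running time does.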
Note that in practice heuristics are used to speed up the computation drastically, often at the expense of a sub-optimal solution. Some algorithms can provide a good result in a very short time compared to the exact algorithms above (polynomial algorithms with a relatively small exponent), with a fixed bound on the quality of the result (for example, the distance found cannot be bigger than 2 times the optimal one). In the end, heuristics can often find very good results in a reasonable time. One simple heuristic is to avoid crossing segments, assuming a Euclidean distance is used (AFAIK a solution with crossing segments is always sub-optimal).
My Question: Is there some formula we can use to estimate the amount of time it will take a computer to solve the Travelling Salesman Problem using "brute force"?
Since the naive algorithm is compute-bound and quite simple, you can make such an approximation based on the running-time complexity. But to get a relatively precise approximation of the execution time, you need a calibration, since not all processors or implementations behave the same way. You can assume that the running time is C n! and find the value of C experimentally by measuring the computation time of a practical brute-force implementation. Another approach is to derive the value of C theoretically from low-level architectural properties (e.g. frequency, number of cores used, etc.) of the target processor. The former is much more precise, assuming the benchmark is properly done and the number of data points is big enough; moreover, the second method requires a pretty good understanding of the way modern processors work.
Numerically, assuming a running time t ≃ C n!, we have ln t ≃ ln(C n!) = ln C + ln(n!). By Stirling's approximation, ln(n!) = n ln n - n + O(ln n), so ln C ≃ ln t - n ln n + n + O(ln n), and finally C ≃ exp(ln t - n ln n + n), up to a factor that is polynomial in n. That being said, Stirling's approximation may not be precise enough; numerically inverting the gamma function (a generalization of the factorial) with a binary search should give a much better approximation of C.
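As a rough sketch of the calibration idea (skipping the Stirling detour and simply dividing measured times by n! on a few small instances), something like the following could be used; the brute_force_tsp helper, the instance sizes and the random distances are illustrative choices rather than a recommended benchmark.

import math, random, time
from itertools import permutations

def brute_force_tsp(dist):
    # Naive O(n!) reference: try every tour that starts and ends at city 0.
    n = len(dist)
    best = math.inf
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        best = min(best, sum(dist[a][b] for a, b in zip(tour, tour[1:])))
    return best

def estimate_constant(min_n=7, max_n=10):
    # Fit t ~ C * n! on small instances; the average C then predicts larger ones.
    estimates = []
    for n in range(min_n, max_n + 1):
        dist = [[random.random() for _ in range(n)] for _ in range(n)]
        t0 = time.perf_counter()
        brute_force_tsp(dist)
        estimates.append((time.perf_counter() - t0) / math.factorial(n))
    return sum(estimates) / len(estimates)

C = estimate_constant()
print("predicted brute-force time for n=15:", C * math.factorial(15), "seconds")

The per-n estimates of C will not be perfectly constant (caches, interpreter overhead, etc.), which is exactly the hidden-constant caveat mentioned above; averaging over several sizes smooths this out a bit.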
Each of these N! paths will require "N" calculations
Well, a slightly optimized brute-force algorithm does not need to perform N calculations per path, as partial path lengths can be precomputed. The last loops just need to read the precomputed sums from a small array that should fit in the L1 cache (so it takes no more than a few cycles of latency to read/store).
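Here is a hedged sketch of what carrying the partial path length along can look like (an incremental variant of the naive enumeration above; the function name is mine): the running sum is extended by one edge per recursion step, so a completed tour costs O(1) extra work instead of a fresh O(N) summation, and the running sum also allows pruning when distances are non-negative.

import math

def brute_force_incremental(dist):
    # Enumerate tours recursively while carrying the partial length along.
    n = len(dist)
    best = math.inf

    def extend(last, visited, partial):
        nonlocal best
        if partial >= best:          # pruning, valid for non-negative distances
            return
        if len(visited) == n:
            best = min(best, partial + dist[last][0])  # close the tour
            return
        for city in range(1, n):
            if city not in visited:
                extend(city, visited | {city}, partial + dist[last][city])

    extend(0, {0}, 0.0)
    return best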
I found from various online sources that the time complexity for DTW is quadratic. On the other hand, I also found that standard kNN has linear time complexity. However, when pairing them together, does kNN-DTW have quadratic or cubic time?
In essence, does the time complexity of kNN solely depend on the metric used? I have not found any clear answer for this.
You need to be careful here. Let's say you have n time series in your 'training' set (let's call it that, even though you are not really training with kNN), each of length l. Computing the DTW between a pair of time series has an asymptotic complexity of O(l * m), where m is your maximum warping window. Since m <= l, O(l^2) also holds (although there might be more efficient implementations, I don't think they are actually faster in practice in most cases). Classifying a time series using kNN requires you to compute the distance between that time series and all time series in the training set, which means n comparisons, i.e. linear with respect to n.
So your final complexity would be in O(l * m * n) or O(l^2 * n). In words: the complexity is quadratic with respect to time series length and linear with respect to the number of training examples.
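To make the composition concrete, here is a small Python sketch (assuming equal-length, univariate series and k = 1; all names are illustrative): the windowed DTW fills O(l * m) cells, and the kNN step repeats that n times, giving O(n * l * m) overall.

import math

def dtw(a, b, window):
    # Dynamic time warping restricted to a band of width `window`:
    # only O(len(a) * window) cells of the table are filled.
    la, lb = len(a), len(b)
    INF = math.inf
    D = [[INF] * (lb + 1) for _ in range(la + 1)]
    D[0][0] = 0.0
    for i in range(1, la + 1):
        for j in range(max(1, i - window), min(lb, i + window) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[la][lb]

def knn_dtw_classify(query, train_series, train_labels, window):
    # 1-NN: n DTW computations, so O(n * l * m) overall.
    dists = [dtw(query, s, window) for s in train_series]   # linear in n
    return train_labels[min(range(len(dists)), key=dists.__getitem__)]

For k > 1 you would keep the k smallest distances and vote on the labels, which does not change the asymptotic cost.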
Imagine T1(n) and T2(n) are the running times of programs P1 and P2, and
T1(n) ∈ O(f(n))
T2(n) ∈ O(g(n))
What is T1(n) + T2(n) when P1 is running alongside P2?
The Answer is O(max{f(n), g(n)}) but why?
When we think about Big-O notation, we generally think about what the algorithm does as the size of the input n gets really big. A lot of the time, we can fall back on some mathematical intuition. Consider two functions, one that is O(n^2) and one that is O(n). As n gets really large, both grow without bound. The difference is that the O(n^2) algorithm grows much, MUCH faster than the O(n) one. So much faster, in fact, that if you combine the algorithms into something that would be O(n^2 + n), the n term by itself is so small in comparison that it can be ignored, and the combination is still in the class O(n^2).
That's why when you add together two algorithms, the combined running time is in O(max{f(n), g(n)}). There's always one that 'dominates' the runtime, making the effect of the other negligible.
The Answer is O(max{f(n), g(n)})
This is only correct if the programs run independently of each other. Anyhow, let's assume this is the case.
In order to answer the why, we need to take a closer look at what Big-O notation represents. Contrary to the way you stated it, it does not represent time but an upper bound on the complexity.
So while running both programs might take more time, the upper bound on the complexity won't increase.
Let's consider an example: P_1 calculates the product of all pairs of the n numbers in a vector; it is implemented using nested loops and therefore has a complexity of O(n*n). P_2 just prints the numbers in a single loop and therefore has a complexity of O(n).
Now if we run both programs at the same time, the nested loops of P_1 are the most 'complex' part, leaving the combination with a complexity of O(n*n).
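A tiny sketch of the two programs just described (the names p1 and p2 are mine):

def p1(numbers):
    # All pairwise products: nested loops, O(n^2).
    products = []
    for a in numbers:
        for b in numbers:
            products.append(a * b)
    return products

def p2(numbers):
    # Print every number once: single loop, O(n).
    for x in numbers:
        print(x)

# Running both: O(n^2) + O(n) = O(n^2) = O(max{n^2, n}).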
I am trying to explore genetic algorithms (GA) for the bin packing problem and compare them to classical Any-Fit algorithms. However, the time complexity of a GA is never mentioned in any of the scholarly articles. Is this because the time complexity is very high, and the main goal of a GA is to find the best solution without considering the time? What is the time complexity of a basic GA?
Assuming that the termination condition is a fixed number of iterations, then in general it would look something like this:
O(p * Cp * O(Crossover) * Mp * O(Mutation) * O(Fitness))
p - population size
Cp - crossover probability
Mp - mutation probability
As you can see, it depends not only on parameters like population size but also on the implementation of the crossover and mutation operations and of the fitness function. In practice there would be more parameters, for example chromosome size, etc.
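To illustrate where those factors come from, here is a generic GA skeleton in Python (fixed number of generations, simple truncation selection, and placeholder crossover/mutate/fitness callables; these choices are illustrative, not a standard implementation):

import random

def genetic_algorithm(pop_size, generations, cp, mp,
                      random_individual, crossover, mutate, fitness):
    # Per generation: about pop_size fitness evaluations, plus pop_size
    # crossover attempts (applied with probability cp) and mutation attempts
    # (applied with probability mp). The total work therefore scales with
    # generations * pop_size times the cost of those three implementations.
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):                    # fixed iteration count
        scored = sorted(population, key=fitness)    # assumes minimisation
        parents = scored[:max(2, pop_size // 2)]    # simple truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = crossover(a, b) if random.random() < cp else a
            if random.random() < mp:
                child = mutate(child)
            children.append(child)
        population = children
    return min(population, key=fitness)

The sketch assumes crossover and mutate return new individuals and that fitness is to be minimised; with a different selection scheme the selection cost itself would add another term.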
You don't see much about time complexity in publications because researchers most of the time compare GAs using convergence time.
Edit: Convergence Time
Every GA has some kind of termination condition, and usually it's a convergence criterion. Let's assume that we want to find the minimum of a mathematical function, so our convergence criterion will be the function's value. In short, we reach convergence during optimization when it's no longer worth continuing because our best individual doesn't get significantly better. Take a look at this chart:
You can see that after around 10000 iterations the fitness doesn't improve much and the line flattens out. The "Best case" scenario reaches convergence at around 9500 iterations; after that point we don't observe any improvement, or it's insignificantly small. Assuming that each line shows a different GA, the "Best case" one has the best convergence time because it reaches the convergence criterion first.
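A convergence criterion of this kind can be sketched as a small check on the history of best fitness values (the patience and tolerance values below are arbitrary examples, not standard settings):

def has_converged(best_history, patience=500, tol=1e-6):
    # True when the best fitness value improved by less than `tol`
    # over the last `patience` generations.
    if len(best_history) <= patience:
        return False
    return abs(best_history[-patience - 1] - best_history[-1]) < tol

Inside the GA loop you would append the current best fitness to best_history each generation and stop as soon as has_converged(best_history) returns True.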
I have implemented an algorithm that uses two other algorithms for calculating the shortest path in a graph: Dijkstra and Bellman-Ford. Based on the time complexity of these algorithms, I can calculate the running time of my implementation, which is easy given the code.
Now, I want to experimentally verify my calculation. Specifically, I want to plot the running time as a function of the size of the input (I am following the method described here). The problem is that I have two parameters - number of edges and number of vertices.
I have tried to fix one parameter and change the other, but this approach results in two plots - one for varying number of edges and the other for varying number of vertices.
This leads me to my question - how can I determine the order of growth based on two plots? In general, how can one experimentally determine the running time complexity of an algorithm that has more than one parameter?
It's very difficult in general.
The usual way you would experimentally gauge the running time in the single-variable case is to insert a counter that increments when your data structure does a fundamental (putatively O(1)) operation, take data for many different input sizes, and plot it on a log-log plot, that is, log T vs. log N. If the running time is of the form n^k you should see a straight line of slope k, or something approaching it. If the running time is like T(n) = n^{k log n} or something similar, then you should see a parabola. And if T is exponential in n you should still see exponential growth.
You can only hope to get information about the highest order term when you do this -- the low order terms get filtered out, in the sense of having less and less impact as n gets larger.
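A minimal sketch of that procedure, assuming NumPy is available (the synthetic "measurements" below are only there to show that the fitted slope recovers the exponent):

import numpy as np

def estimate_exponent(sizes, times):
    # Fit log(T) = k * log(N) + c; the slope k estimates the leading exponent.
    k, c = np.polyfit(np.log(sizes), np.log(times), 1)
    return k

# Pretend measurements that actually grow like N^2: the fitted slope is ~2.
sizes = np.array([1000, 2000, 4000, 8000, 16000], dtype=float)
times = 3e-9 * sizes ** 2
print(estimate_exponent(sizes, times))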
In the two variable case, you could try to do a similar approach -- essentially, take 3 dimensional data, do a log-log-log plot, and try to fit a plane to that.
However this will only really work if there's really only one leading term that dominates in most regimes.
Suppose my actual function is T(n, m) = n^4 + n^3 * m^3 + m^4.
When m = O(1), then T(n) = O(n^4).
When n = O(1), then T(m) = O(m^4).
When n = m, then T(n) = O(n^6).
In each of these regimes, "slices" along the plane of possible n,m values, a different one of the terms is the dominant term.
So there's no way to determine the function just from taking some points with fixed m, and some points with fixed n. If you did that, you wouldn't get the right answer for n = m -- you wouldn't be able to discover "middle" leading terms like that.
I would recommend that the best way to predict asymptotic growth when you have lots of variables / complicated data structures is with a pencil and a piece of paper, doing traditional algorithmic analysis. Or possibly, a hybrid approach: try to break the question of efficiency into different parts; if you can split it up into a sum or product of a few different functions, maybe some of them you can determine in the abstract, and some you can estimate experimentally.
Luckily, two input parameters are still easy to visualize in a 3D scatter plot (the 3rd dimension is the measured running time), and you can check whether it looks like a plane (in log-log-log scale) or whether it is curved. Naturally, random variations in measurements play a role here as well.
In Matlab I typically calculate a least-squares solution for a two-variable function like this (it just concatenates different powers and combinations of x and y horizontally; .* is an element-wise product):
x = log(parameter_x);
y = log(parameter_y);
% Find a least-squares fit
p = [x.^2, x.*y, y.^2, x, y, ones(length(x),1)] \ log(time)
Then this can be used to estimate running times for larger problem instances, ideally those would be confirmed experimentally to know that the fitted model works.
This approach also works for higher dimensions but gets tedious to set up; maybe there is a more general way to achieve it and this is just a workaround for my lack of knowledge.