Time Complexity on a TSP

I am trying to analyze the time complexity of each step of a TSP heuristic, both for a Greedy algorithm and for a Look-Ahead algorithm.
It would be great if someone could check whether the logic I have applied at each step to build up the time complexity makes sense or not.
For the Greedy algorithm I have:
Iterations: at most n iterations (n = number of locations)
1. Select the available arc (i,j) with the lowest cost: n log n operations
2. Label this arc as unavailable: 1 operation
3. Label all arcs from location i or to location j as unavailable: 1 operation
4. Label all arcs which could result in subcycles as unavailable, if more than one arc is still available: 1 operation
5. If there are still available arcs (checking of at most n locations), go back to step 1.
Time Complexity
n * ( n log n + 1 + 1 + 1 + n) --> n^2 log n + n^2 + 3n --> O(n^2 log n)
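For reference, here is a minimal Python sketch of one way to implement the greedy selection loop described above; the cost-matrix layout and helper names are my own assumptions, not part of the original pseudocode. Sorting all n^2 arcs once up front is one common way to realize the "select the cheapest available arc" step; whether that matches the per-iteration n log n count you assumed depends on the data structure used.

import math

def greedy_arcs(cost):
    # Greedy arc selection for a TSP cost matrix (cost[i][j] = cost of arc i -> j).
    # Hypothetical sketch: all arcs are sorted once by cost, then each arc is
    # accepted only if node i has no outgoing arc yet, node j has no incoming
    # arc yet, and the arc does not close a premature subcycle.
    n = len(cost)
    arcs = sorted((cost[i][j], i, j)
                  for i in range(n) for j in range(n) if i != j)
    succ, pred, chosen = {}, {}, []
    for c, i, j in arcs:
        if i in succ or j in pred:
            continue                      # an arc from i or to j is already taken
        # walk forward from j along chosen arcs; reaching i would close a cycle
        k, steps = j, 0
        while k in succ:
            k, steps = succ[k], steps + 1
        if k == i and steps + 1 < n:
            continue                      # would create a premature subcycle
        succ[i], pred[j] = j, i
        chosen.append((i, j))
        if len(chosen) == n:
            break
    return chosen

# toy instance with 4 locations (math.inf blocks the diagonal)
cost = [[math.inf, 2, 9, 10],
        [1, math.inf, 6, 4],
        [15, 7, math.inf, 8],
        [6, 3, 12, math.inf]]
print(greedy_arcs(cost))                  # [(1, 0), (3, 1), (2, 3), (0, 2)]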
And for the Look-Ahead algorithm I have tried:
Iterations: n iterations (n = number of locations)
1. For each row and column (n * n locations), determine the available arcs with the smallest and second smallest cost (1 operation) and the corresponding cost difference, respectively (1 operation).
2. Select the row or column with the largest cost difference and select the available arc (i,j) in that row or column with the lowest cost: 1 operation
3. Label this arc as unavailable: 1 operation
4. Label all arcs from location i or to location j as unavailable: 1 operation
5. Label all arcs which could result in subcycles as unavailable: 1 operation
6. If there are still more arcs available than arcs still to be selected, go back to step 1: check n locations
7. Else: select the remaining arcs: 1 operation
Time complexity
n * ( n*n + 1 + 1 + 1 + 1 + 1 + 1 + n + 1) --> n^3 + n^2 + 7n --> O(n^3)
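Likewise, a rough Python sketch of one look-ahead selection step (the names and the available matrix are illustrative assumptions, not the original algorithm's data structures): for every row and column it computes the difference between the smallest and second smallest available cost, then picks the cheapest arc in the line with the largest difference. Scanning every row and column is where the n*n term per iteration comes from.

import math

def select_arc_by_regret(cost, available):
    # One look-ahead step: for each row and column, find the smallest and
    # second smallest available cost and their difference, then take the
    # cheapest available arc in the row/column with the largest difference.
    n = len(cost)
    best_line, best_diff = None, -math.inf
    for is_row in (True, False):
        for k in range(n):
            costs = sorted(cost[k][j] if is_row else cost[j][k]
                           for j in range(n)
                           if (available[k][j] if is_row else available[j][k]))
            if not costs:
                continue
            diff = costs[1] - costs[0] if len(costs) > 1 else costs[0]
            if diff > best_diff:
                best_diff, best_line = diff, (is_row, k)
    if best_line is None:
        return None                        # no arcs left
    is_row, k = best_line
    candidates = ([(k, j) for j in range(n) if available[k][j]] if is_row
                  else [(j, k) for j in range(n) if available[j][k]])
    return min(candidates, key=lambda ij: cost[ij[0]][ij[1]])

cost = [[math.inf, 2, 9], [1, math.inf, 6], [15, 7, math.inf]]
available = [[i != j for j in range(3)] for i in range(3)]
print(select_arc_by_regret(cost, available))   # (1, 0)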

Related

How to prove a lower bound of n * log(n) for this simple algorithm

The question is how many times does this algorithm produce a meow:
KITTYCAT(n):
    for i from 0 to n - 1:
        for j from 2^i to n - 1:
            meow
So the inner loop has a worst case of n iterations, but it only actually does work for about log(n) values of i, because even though the outer loop runs n times, whenever i > log(n) the inner loop never runs; so you have O(n * log(n)).
However, since I can't assume a best case of n for the inner loop, how do I prove that the algorithm still has a best case of n * log(n)?
When i > log2(n) the start value of the inner loop is higher than its end value. Depending on how you interpret it, this either means that the inner loop counts down, or that it does not run at all. If you interpret it as counting down, then it gets very big indeed and ends up dominating, and you have Ω(2^n), which is not what you seem to be looking for.
If instead you assume the inner loop goes away, then this code is really
for i from 0 to log2(n):
     for j from 2^i to n - 1:
          meow
giving you Ω(n log n)
If you're asking how to prove that last step, you can calculate the exact number of iterations -- the inner loop iterates n times, then n-1 times, then n-2, then n-4, etc all the way down to 0. So the exact complexity (at least when n is a power of 2) is
    n + n-1 + n-2 + n-4 + ... + n-n/4 + n-n/2 + n-n
or
    n*log2(n) - 1 - 2 - 4 - ... - n/4 - n/2
which converges to
    n*log2(n) - n
which is asymptotically equivalent to n log n as n -> ∞
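A quick empirical check of that last step (a throwaway Python sketch, assuming the empty-range interpretation of the inner loop): the exact meow count stays within a small constant of n*log2(n) - n.

from math import log2

def meows(n):
    # exact meow count, treating an empty range (2^i > n - 1) as zero iterations
    return sum(max(0, n - 2 ** i) for i in range(n))

for n in [16, 256, 4096]:
    print(n, meows(n), int(n * log2(n) - n))   # e.g. 16: 49 vs 48, 256: 1793 vs 1792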

Computational complexity of DFS over the worst case DAG

I got confused by calculating the worst-case DAG complexity when you do a Naive DFS from one node to another node.
For example in the following DAG,
Node A acts as the start node, and if we always pick the last node, in this case Node D, as the end node, we use DFS to enumerate all paths from A to D.
In this case, we have DFS going through Paths:
path 1st: A -> B -> C -> D
path 2nd: A -> B -> D
path 3rd: A -> C -> D
path 4th: A -> D
The computational complexity was 8 because it takes 3 iterations in the first path, 2 in the second, 2 in the third, and 1 in the last one.
Now suppose we expand this graph by adding more nodes in the same manner.
Assuming we have N nodes, what is the complexity in terms of N then?
The way I calculate the total number of paths is: every time we add a new node to an N-node DAG to act as the new beginning node (replacing Node A), we add N new edges, because we need edges from the new start node to all the nodes that already existed.
If we assume P as total paths, we have
P(N) = P(N-1) + P(N-2) + P(N-3) + .... + P(1)
then, since P(N-1) = P(N-2) + P(N-3) + ... + P(1), you have
P(N) = 2 * P(N-1)
then
P(N) = O(2^N)
However, if we consider that not all paths cost O(1) (for example, the longest path goes through all the nodes, so we take O(N) for that single path), the actual cost is higher than O(2^N).
So what could that be then?
Current Algorithm and Time Complexity
As far as I understand, your current algorithm follows the steps below:
Start from a start node specified by us.
In each node, store the adjacent vertices in a stack. Then pop the top element from the stack and repeat step 2.
Terminate execution when the stack is empty.
Our graph is a DAG, therefore there won't be any cycles in the graph and the algorithm is guaranteed to terminate eventually.
For further examination, you mentioned expanding the graph in the same manner. This means that, whenever we add the i-th node to the graph, we have to create edges from each existing node to that node, which means we have to insert i edges into the graph.
Let's say you start from the first node. In this case, time complexity will be:
T(1) = [1+T(2)] + [1+T(3)] + ... + [1+T(n-3)] + [1+T(n-2)] + [1+T(n-1)] + [1+T(n)]
T(1) = [1+T(2)] + [1+T(3)] + ... + [1+(T(n-2)+T(n-1)+T(n))] + [1+(T(n-1)+T(n))] + [1+T(n)] + 0 (assuming T(n) = 0)
T(1) = [1+T(2)] + [1+T(3)] + ... + (2+1+1) + (1+1) + 1 + 0
T(1) = [1+T(2)] + [1+T(3)] + ... + 4 + 2 + 1
with this manner
(observe the pattern, for T(n-1), we get 2^0 - so for T(2), we'll get 2^(n-3))
T(1) = 2^(n-3) + 2^(n-4) + ... + 2^(0)
T(1) = 2^(n-2) - 1
Eventually it turns out that the time complexity is O(2^N) for this algorithm. That is pretty bad, because this approach is extraordinarily brute force. Let's come up with an optimized algorithm that stores information about visited vertices.
Note:
I spotted that there are two edges from A to B and from B to C, but not from C to D. I couldn't understand the pattern here, but if that's the case then it means it requires more operations than 2^N. Well, naive DFS is a bad algorithm anyway; I strongly recommend implementing the one below.
Optimized Algorithm and Time Complexity
To make your algorithm optimized, you can follow the steps below:
0) Create a boolean array to mark each state as visited when you visit it. Assign each index to false initially.
Start from a start node specified by us.
In each node, store the adjacent vertices (that are not marked as visited) in a stack and mark them as visited. Then pop the top element from the stack and repeat step 2.
Terminate execution when the stack is empty.
This time, by storing an array that marks visited nodes, we avoid visiting the same node more than once. This algorithm traditionally has a time complexity of O(N + E), N being the number of nodes and E being the number of edges. In your case, the graph is essentially complete (viewed as undirected), therefore E ~ N^2, meaning that your time complexity will be O(N^2).
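A minimal iterative sketch of that optimized traversal in Python (the adjacency-list representation and names are assumptions for illustration):

def dfs_visited(graph, start):
    # Iterative DFS that marks nodes as visited, so each node and each
    # edge is processed at most once: O(N + E) overall.
    visited = {start}
    stack = [start]
    order = []
    while stack:
        node = stack.pop()
        order.append(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                stack.append(neighbour)
    return order

# the small DAG from the question: A -> B, C, D; B -> C, D; C -> D
graph = {"A": ["B", "C", "D"], "B": ["C", "D"], "C": ["D"]}
print(dfs_visited(graph, "A"))   # each node appears exactly once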
I think I found it out.
So if we consider that each new node adds a new edge from the new node to each node already in the DAG, traversing each of those new edges costs 1.
Then we could have (using C for the computational complexity):
C(N) = 1 + C(N-1) + 1 + C(N-2) + 1+ C(N-3) + .... + 1 + C(1)
then you have
C(N) = 2 * C(N-1) + N - 1
= 2^2 * C(N-2) + 2 * (N-2) + N - 1
= 2^3 * C(N-3) + 2^2 * (N-3) + 2 * (N -2) + N - 1
= 2^(N-1) * C(1) + 2^(N-2) * 1 + ...... + N - 1
At this point this becomes the sum of a geometric progression with ratio 2, so the highest-order term is on the order of 2^N.
Thus O(2^N) is the answer.
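To sanity-check that estimate, here is a throwaway Python sketch (hypothetical helper names) that sums the lengths of all paths from the start node to the last node in the DAG described above (an arc i -> j for every i < j), i.e. the total work a naive path enumeration does. It reproduces the 8 from the 4-node example and grows a bit faster than 2^(N-1), reflecting the path-length factor raised in the question.

def total_path_length(n):
    # Sum of the lengths of all paths from node 0 to node n-1 in the
    # "complete" DAG (edge i -> j for every i < j).
    def explore(u):
        # returns (number of paths from u to n-1, total edges over those paths)
        if u == n - 1:
            return 1, 0
        paths, edges = 0, 0
        for v in range(u + 1, n):
            p, e = explore(v)
            paths += p
            edges += e + p          # each path through v gains the edge u -> v
        return paths, edges
    return explore(0)[1]

for n in range(2, 10):
    print(n, total_path_length(n), 2 ** (n - 1))   # n = 4 gives 8, matching the example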

How to Understand Time Complexity of Happy Number Problem Solution from Leetcode

I have some difficulty understanding the time complexity analysis for one solution to the Happy Number question from LeetCode. I marked my doubts about the complexity analysis in bold and would really appreciate your advice.
Here is the question:
Link: https://leetcode.com/problems/happy-number/
Question:
Write an algorithm to determine if a number is "happy".
A happy number is a number defined by the following process: Starting with any positive integer, replace the number by the sum of the squares of its digits, and repeat the process until the number equals 1 (where it will stay), or it loops endlessly in a cycle which does not include 1. Those numbers for which this process ends in 1 are happy numbers.
Example:
Input: 19
Output: true
Explanation:
1^2(square of 1) + 9^2 = 82
8^2 + 2^2 = 68
6^2 + 8^2 = 100
1^2 + 0^2 + 0^2 = 1
Here is the code:
class Solution(object):
    def isHappy(self, n):
        # getnext computes the sum of the square of each digit of n
        def getnext(n):
            totalsum = 0
            while n > 0:
                n, v = divmod(n, 10)
                totalsum += v**2
            return totalsum

        # we declare seen as a set to track the numbers we already visited
        seen = set()
        # we stop checking if either the number reaches one or the number
        # was visited already (i.e. a cycle)
        while n != 1 and (n not in seen):
            seen.add(n)
            n = getnext(n)
        return n == 1
Note: feel free to let me know if I need to explain how the code works
Time Complexity Analysis:
Time complexity: O(243 * 3 + log N + log log N + log log log N + ...) = O(log N).
Finding the next value for a given number has a cost of O(log n), because we are processing each digit in the number, and the number of digits in a number is given by log N.
My doubt: why is the number of digits in a number given by log N? What is N here: the value of a specific number, or something else?
To work out the total time complexity, we'll need to think carefully about how many numbers are in the chain, and how big they are.
We determined above that once a number is below 243, it is impossible for it to go back up above 243. Therefore, based on our very shallow analysis, we know for sure that once a number is below 243, it is impossible for it to take more than another 243 steps to terminate.
Each of these numbers has at most 3 digits. With a little more analysis, we could replace the 243 with the length of the longest number chain below 243, however because the constant doesn't matter anyway, we won't worry about it.
My doubt: I think the above paragraph is related to the time complexity component of 243*3, but I cannot understand why we multiply 243 by 3
For an n above 243, we need to consider the cost of each number in the chain that is above 243. With a little math, we can show that in the worst case, these costs will be O(log n) + O(log log n) + O(log log log n) + ... Luckily for us, the O(log n) is the dominating part, and the others are all tiny in comparison (collectively, they add up to less than log n), so we can ignore them.
My doubt: what is the reasoning behind the O(log log n), O(log log log n), ... terms for an n above 243?
Well, my guess for the first doubt is that the number of digits of a base-10 number is given by its value (N) taken to the logarithm base 10, rounded down, plus one. So for example, 1023 would have floor(log10(1023)) + 1 digits, which is 4. So yes, the N is the value of the number. The log in the time complexity indicates a logarithm, not specifically one of base 2 or base e.
As for the second doubt, it probably has to do with the work required to reduce a number to below 243, but I am not sure. I'll edit this answer once I work that bit out.
Let's say N has M digits. Then getnext(N) <= 81*M. The equality happens when N only has 9's.
When N < 1000, i.e. at most 3 digits, getnext(N) <= 3*81 = 243. Now, you will have to call getnext(.) at most O(243) times to figure out if N is indeed happy.
If M > 3, the number of digits of getnext(N) must be less than M. Try getnext(9999), getnext(99999), and so on [1].
Notes:
[1] Adding a digit to N can make it at most 10*N + 9, i.e. adding a 9 at the end. But the number of digits increases to M+1 only. It's a logarithmic relationship between N and M. Hence, the same relationship holds between N and 81*M.
Using the Leetcode solution
import java.util.HashSet;
import java.util.Set;

class Solution {
    private int getNext(int n) {
        int totalSum = 0;
        while (n > 0) {
            int d = n % 10;
            n = n / 10;
            totalSum += d * d;
        }
        return totalSum;
    }

    public boolean isHappy(int n) {
        Set<Integer> seen = new HashSet<>();
        while (n != 1 && !seen.contains(n)) {
            seen.add(n);
            n = getNext(n);
        }
        return n == 1;
    }
}
O(243*3) for n < 243
3 is the max number of digits in n
e.g. For n = 243
getNext() will take a maximum of 3 iterations because there are 3 digits for us to loop over.
isHappy() can take a maximum of 243 iterations to find a cycle or terminate, because we can store a max of 243 numbers in our hash set.
O(log n) + O(log log n) + O(log log log n) + ... for n > 243
1st iteration + 2nd iteration + 3rd iteration ...
getNext() will be called a maximum of O(log n) times. Because log10 n is the number of digits.
isHappy() will be called a maximum of 9^2 per digit. This is the max we can store in the hash set before we find a cycle or terminate.
First Iteration
9^2 * number of digits
O(81*(log n)) drop the constant
O(log n)
+
Second Iteration
O(log (81*(log n))) drop the constant
O(log log n)
+
Third Iteration
O(log log log n)
+
etc ...
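As a quick numeric check of the 243 bound, here is a standalone Python sketch (reusing the getnext logic from the solution above):

def getnext(n):
    # sum of the squares of the digits of n
    total = 0
    while n > 0:
        n, d = divmod(n, 10)
        total += d * d
    return total

# any number with at most 3 digits maps to at most 3 * 81 = 243 ...
print(max(getnext(n) for n in range(1, 1000)))   # 243, reached at n = 999
# ... and numbers at or below 243 never climb back above 243
print(max(getnext(n) for n in range(1, 244)))    # 163 (for n = 199), well below 243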

Time complexity for the loop

The outer loop executes n times, while the inner loop executes how many times? So the total time is n * something.
Do I need to learn summation? If yes, is there any book to refer to?
for (int i = 1; i <= n; i++)
    for (int j = 1; j <= n; j += i)
        printf("*");
This question can be approached by inspection:
n = 16
i  | j values          | # terms
1  | 1, 2, ..., 16     | n
2  | 1, 3, 5, ..., 15  | n / 2
3  | 1, 4, 7, ..., 16  | n / 3
.. | ..                | ..
16 | 1                 | n / n
In the above table, i is the outer loop value, and j values show the iterations of the inner loop. By inspection, we can see that the loops will take n * (1 + 1/2 + 1/3 + ... + 1/n) steps. This is a bounded harmonic series. As this Math Stack Exchange article shows, there is no closed form for the above expression in terms of n. However, as this SO article shows, there is an upper bound of O(n*ln(n)).
So, the running time for your two loops is O(n*ln(n)).
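As a quick sanity check, a throwaway Python sketch of the same two loops shows the exact step count growing with the harmonic sum n*(1 + 1/2 + ... + 1/n), i.e. Θ(n log n):

def loop_steps(n):
    # exact number of times the inner body ("printf") runs
    steps = 0
    for i in range(1, n + 1):
        j = 1
        while j <= n:
            steps += 1
            j += i
    return steps

for n in [10, 100, 1000, 10000]:
    harmonic = sum(1.0 / i for i in range(1, n + 1))
    print(n, loop_steps(n), round(n * harmonic))   # the two counts stay within lower-order terms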
I believe the time complexity of that is O(n*log(n)). Here is why:
Let us pick some arbitrary natural number i and see how many steps the inner loop takes for this given i. Well, for this i, you are going from j = 1 to j <= n with a jump of i in between. So basically the values of j you hit are:
j = 1, (1+i), (1+2i), ..., (1+ki)
where k is the largest integer such that 1+ki <= n. That is, k is the number of steps and this is what we want to solve for. Well, we can solve for k in that inequality, resulting in k <= (n-1)/i and thus k = ⌊(n-1)/i⌋. That is, k is the floor function/integer division of (n-1)/i. Since we are dealing with time complexities, this floor function doesn't matter, so we will just say k = n/i for simplicity. This is the number of steps that the inner loop will take for a given i. So we basically need to add all these up for i = 1 to i <= n.
So numsteps will be this addition:
numsteps = n/1 + n/2 + n/3 + ... + n/n
         = n(1 + 1/2 + 1/3 + ... + 1/n)
So we need to find the sum of 1 + 1/2 + ... 1/n to finish this. There is actually no good closed form for this sum but it is on the order of ln(n). You can read more about this here. You can also guess this since the integral from 1 to n of 1/x is ln(n). Again, since we are dealing with time complexity, we can just use ln(n) to represent its complexity. Thus we have:
numsteps = n(ln(n))
And so the time complexity is O(n*log(n)).
Edit: My bad, I was calculating the sum :P

Computational complexity of Fibonacci Sequence

I understand Big-O notation, but I don't know how to calculate it for many functions. In particular, I've been trying to figure out the computational complexity of the naive version of the Fibonacci sequence:
int Fibonacci(int n)
{
    if (n <= 1)
        return n;
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2);
}
What is the computational complexity of the Fibonacci sequence and how is it calculated?
You model the time function to calculate Fib(n) as sum of time to calculate Fib(n-1) plus the time to calculate Fib(n-2) plus the time to add them together (O(1)). This is assuming that repeated evaluations of the same Fib(n) take the same time - i.e. no memoization is used.
T(n<=1) = O(1)
T(n) = T(n-1) + T(n-2) + O(1)
You solve this recurrence relation (using generating functions, for instance) and you'll end up with the answer.
Alternatively, you can draw the recursion tree, which will have depth n, and intuitively figure out that this function is asymptotically O(2^n). You can then prove your conjecture by induction.
Base: n = 1 is obvious
Assume T(n-1) = O(2^(n-1)), therefore
T(n) = T(n-1) + T(n-2) + O(1) which is equal to
T(n) = O(2^(n-1)) + O(2^(n-2)) + O(1) = O(2^n)
However, as noted in a comment, this is not the tight bound. An interesting fact about this function is that the T(n) is asymptotically the same as the value of Fib(n) since both are defined as
f(n) = f(n-1) + f(n-2).
The leaves of the recursion tree will always return 1. The value of Fib(n) is the sum of all values returned by the leaves in the recursion tree, which is equal to the count of leaves. Since each leaf takes O(1) to compute, T(n) is equal to Fib(n) x O(1). Consequently, the tight bound for this function is the Fibonacci sequence itself (~Θ(1.6^n)). You can find this tight bound by using generating functions as I'd mentioned above.
Just ask yourself how many statements need to execute for F(n) to complete.
For F(1), the answer is 1 (the first part of the conditional).
For F(n), the answer is F(n-1) + F(n-2).
So what function satisfies these rules? Try a^n (a > 1):
a^n == a^(n-1) + a^(n-2)
Divide through by a^(n-2):
a^2 == a + 1
Solve for a and you get (1+sqrt(5))/2 = 1.6180339887, otherwise known as the golden ratio.
So it takes exponential time.
I agree with pgaur and rickerbh, recursive-fibonacci's complexity is O(2^n).
I came to the same conclusion by a rather simplistic but I believe still valid reasoning.
First, it's all about figuring out how many times recursive fibonacci function ( F() from now on ) gets called when calculating the Nth fibonacci number. If it gets called once per number in the sequence 0 to n, then we have O(n), if it gets called n times for each number, then we get O(n*n), or O(n^2), and so on.
So, when F() is called for a number n, the number of times F() is called for a given number between 0 and n-1 grows as we approach 0.
As a first impression, it seems to me that if we put it in a visual way, drawing a unit per time F() is called for a given number, we get a sort of pyramid shape (that is, if we center the units horizontally). Something like this:
n *
n-1 **
n-2 ****
...
2 ***********
1 ******************
0 ***************************
Now, the question is, how fast is the base of this pyramid enlarging as n grows?
Let's take a real case, for instance F(6)
F(6) * <-- only once
F(5) * <-- only once too
F(4) **
F(3) ****
F(2) ********
F(1) **************** <-- 16
F(0) ******************************** <-- 32
We see F(0) gets called 32 times, which is 2^5, which for this sample case is 2^(n-1).
Now, we want to know how many times F(x) gets called at all, and we can see the number of times F(0) is called is only a part of that.
If we mentally move all the *'s from F(6) to F(2) lines into F(1) line, we see that F(1) and F(0) lines are now equal in length. Which means, total times F() gets called when n=6 is 2x32=64=2^6.
Now, in terms of complexity:
O( F(6) ) = O(2^6)
O( F(n) ) = O(2^n)
There's a very nice discussion of this specific problem over at MIT. On page 5, they make the point that, if you assume that an addition takes one computational unit, the time required to compute Fib(N) is very closely related to the result of Fib(N).
As a result, you can skip directly to the very close approximation of the Fibonacci series:
Fib(N) = (1/sqrt(5)) * 1.618^(N+1) (approximately)
and say, therefore, that the worst case performance of the naive algorithm is
O((1/sqrt(5)) * 1.618^(N+1)) = O(1.618^(N+1))
PS: There is a discussion of the closed form expression of the Nth Fibonacci number over at Wikipedia if you'd like more information.
You can expand it and have a visualization:
T(n) = T(n-1) + T(n-2) <
T(n-1) + T(n-1)
= 2*T(n-1)
= 2*2*T(n-2)
= 2*2*2*T(n-3)
....
= 2^i*T(n-i)
...
==> O(2^n)
A recursive algorithm's time complexity can be better estimated by drawing the recursion tree. In this case the recurrence relation for drawing the recursion tree would be T(n) = T(n-1) + T(n-2) + O(1).
Note that each step takes O(1), meaning constant time, since it does only one comparison to check the value of n in the if block. The recursion tree would look like:
n
(n-1) (n-2)
(n-2)(n-3) (n-3)(n-4) ...so on
Here, let's say each level of the above tree is denoted by i.
Hence,
i
0 n
1 (n-1) (n-2)
2 (n-2) (n-3) (n-3) (n-4)
3 (n-3)(n-4) (n-4)(n-5) (n-4)(n-5) (n-5)(n-6)
Let's say that at a particular value of i the tree ends; that would be the case when n-i=1, hence i=n-1, meaning that the height of the tree is n-1.
Now let's see how much work is done at each of the n layers in the tree. Note that each step takes O(1) time, as stated in the recurrence relation.
2^0=1 n
2^1=2 (n-1) (n-2)
2^2=4 (n-2) (n-3) (n-3) (n-4)
2^3=8 (n-3)(n-4) (n-4)(n-5) (n-4)(n-5) (n-5)(n-6) ..so on
2^i for ith level
Since i=n-1 is the height of the tree, the work done at each level will be:
i work
1 2^1
2 2^2
3 2^3..so on
Hence the total work done will be the sum of the work done at each level, i.e. 2^0 + 2^1 + 2^2 + 2^3 + ... + 2^(n-1), since i=n-1.
By the geometric series formula this sum is 2^n - 1, hence the total time complexity here is O(2^n).
The proof answers are good, but I always have to do a few iterations by hand to really convince myself. So I drew out a small calling tree on my whiteboard, and started counting the nodes. I split my counts out into total nodes, leaf nodes, and interior nodes. Here's what I got:
IN | OUT | TOT | LEAF | INT
1 | 1 | 1 | 1 | 0
2 | 1 | 1 | 1 | 0
3 | 2 | 3 | 2 | 1
4 | 3 | 5 | 3 | 2
5 | 5 | 9 | 5 | 4
6 | 8 | 15 | 8 | 7
7 | 13 | 25 | 13 | 12
8 | 21 | 41 | 21 | 20
9 | 34 | 67 | 34 | 33
10 | 55 | 109 | 55 | 54
What immediately leaps out is that the number of leaf nodes is fib(n). What took a few more iterations to notice is that the number of interior nodes is fib(n) - 1. Therefore the total number of nodes is 2 * fib(n) - 1.
Since you drop the coefficients when classifying computational complexity, the final answer is Θ(fib(n)).
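That observation is easy to reproduce with a short Python sketch (hypothetical helper names, using the same convention as the table, i.e. F(1) and F(2) are the leaves): the total node count comes out as 2*fib(n) - 1.

def fib(n):
    # iterative Fibonacci with fib(1) = fib(2) = 1
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def call_tree_nodes(n):
    # total nodes in the naive recursion tree, with F(1) and F(2) as leaves
    if n <= 2:
        return 1
    return 1 + call_tree_nodes(n - 1) + call_tree_nodes(n - 2)

for n in range(1, 11):
    print(n, fib(n), call_tree_nodes(n), 2 * fib(n) - 1)   # last two columns agree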
It is bounded on the lower end by 2^(n/2) and on the upper end by 2^n (as noted in other comments). And an interesting fact of that recursive implementation is that it has a tight asymptotic bound of Fib(n) itself. These facts can be summarized:
T(n) = Ω(2^(n/2)) (lower bound)
T(n) = O(2^n) (upper bound)
T(n) = Θ(Fib(n)) (tight bound)
The tight bound can be reduced further using its closed form if you like.
It is simple to calculate by diagramming function calls. Simply add the function calls for each value of n and look at how the number grows.
The Big O is O(Z^n) where Z is the golden ratio or about 1.62.
Both the Leonardo numbers and the Fibonacci numbers approach this ratio as we increase n.
Unlike other Big O questions there is no variability in the input and both the algorithm and implementation of the algorithm are clearly defined.
There is no need for a bunch of complex math. Simply diagram out the function calls below and fit a function to the numbers.
Or if you are familiar with the golden ratio you will recognize it as such.
This answer is more correct than the accepted answer which claims that it will approach f(n) = 2^n. It never will. It will approach f(n) = golden_ratio^n.
2 (2 -> 1, 0)
4 (3 -> 2, 1) (2 -> 1, 0)
8 (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
14 (5 -> 4, 3) (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
(3 -> 2, 1) (2 -> 1, 0)
22 (6 -> 5, 4)
(5 -> 4, 3) (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
(3 -> 2, 1) (2 -> 1, 0)
(4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
The naive recursion version of Fibonacci is exponential by design due to repetition in the computation:
At the root you are computing:
F(n) depends on F(n-1) and F(n-2)
F(n-1) depends on F(n-2) again and F(n-3)
F(n-2) depends on F(n-3) again and F(n-4)
then at each level you have 2 recursive calls that waste a lot of work by recomputing the same values; the time function will look like this:
T(n) = T(n-1) + T(n-2) + C, with C constant
T(n-1) = T(n-2) + T(n-3) > T(n-2) then
T(n) > 2*T(n-2)
...
T(n) > 2^(n/2) * T(1) = O(2^(n/2))
This is just a lower bound that for the purpose of your analysis should be enough but the real time function is a factor of a constant by the same Fibonacci formula and the closed form is known to be exponential of the golden ratio.
In addition, you can find optimized versions of Fibonacci using dynamic programming like this:
static int fib(int n)
{
    /* memory */
    int f[] = new int[n+1];
    int i;

    /* Init */
    f[0] = 0;
    f[1] = 1;

    /* Fill */
    for (i = 2; i <= n; i++)
    {
        f[i] = f[i-1] + f[i-2];
    }
    return f[n];
}
That is optimized and does only n steps, but it is also exponential, for the reason explained below.
Cost functions are defined from the input size to the number of steps to solve the problem. When you see the dynamic version of Fibonacci (n steps to compute the table) or the easiest algorithm to check whether a number is prime (sqrt(n) to analyze the valid divisors of the number), you may think that these algorithms are O(n) or O(sqrt(n)), but this is simply not true for the following reason:
The input to your algorithm is a number n. Using binary notation, the input size for an integer n is log2(n). Then, doing a variable change of
m = log2(n) // your real input size
let's find out the number of steps as a function of the input size:
m = log2(n)
2^m = 2^log2(n) = n
then the cost of your algorithm as a function of the input size is:
T(m) = n steps = 2^m steps
and this is why the cost is exponential.
Well, according to me it is O(2^n), as in this function only the recursion takes considerable time (divide and conquer). We see that the above function keeps branching in a tree until the leaves are reached, when we get to the level F(n-(n-1)), i.e. F(1). So, when we jot down the time complexity encountered at each depth of the tree, the summation series is:
1 + 2 + 4 + ... + 2^(n-1)
= 1*((2^n)-1)/(2-1)
= 2^n - 1
that is order of 2^n [ O(2^n) ].
No answer emphasizes probably the fastest and most memory efficient way to calculate the sequence. There is a closed form exact expression for the Fibonacci sequence. It can be found by using generating functions or by using linear algebra as I will now do.
Let f_1,f_2, ... be the Fibonacci sequence with f_1 = f_2 = 1. Now consider a sequence of two dimensional vectors
v_1 = (f_1, f_2), v_2 = (f_2, f_3), v_3 = (f_3, f_4), ...
Observe that the next element v_{n+1} in the vector sequence is M.v_{n} where M is a 2x2 matrix given by
M = [0 1]
[1 1]
due to f_{n+1} = f_{n+1} and f_{n+2} = f_{n} + f_{n+1}
M is diagonalizable over complex numbers (in fact diagonalizable over the reals as well, but this is not usually the case). There are two distinct eigenvectors of M given by
(1, x_1) and (1, x_2) (written as column vectors)
where x_1 = (1+sqrt(5))/2 and x_2 = (1-sqrt(5))/2 are the distinct solutions to the polynomial equation x*x-x-1 = 0. The corresponding eigenvalues are x_1 and x_2. Think of M as a linear transformation and change your basis to see that it is equivalent to
D = [x_1 0]
[0 x_2]
In order to find f_n find v_n and look at the first coordinate. To find v_n apply M n-1 times to v_1. But applying M n-1 times is easy, just think of it as D. Then using linearity one can find
f_n = 1/sqrt(5)*(x_1^n-x_2^n)
Since the norm of x_2 is smaller than 1, the corresponding term vanishes as n tends to infinity; therefore, obtaining the greatest integer smaller than (x_1^n)/sqrt(5) is enough to find the answer exactly. By making use of the trick of repeated squaring, this can be done using only O(log_2(n)) multiplication (and addition) operations. Memory complexity is even more impressive, because it can be implemented in a way that you always need to hold at most one number in memory, whose value is smaller than the answer. However, since this number is not a natural number, the memory complexity here changes depending on whether you use fixed bits to represent each number (and hence do calculations with error), which gives O(1) memory complexity, or use a better model like Turing machines, in which case some more analysis is needed.
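A small Python sketch of the repeated-squaring idea (using exact integer arithmetic on the matrix M itself rather than the floating-point closed form, to keep the example self-contained): raising M to the (n-1)-th power takes O(log2(n)) matrix multiplications, and the first coordinate of M^(n-1) applied to v_1 = (f_1, f_2) = (1, 1) is f_n.

def mat_mult(a, b):
    # multiply two 2x2 matrices
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def mat_pow(m, p):
    # raise a 2x2 matrix to the p-th power by repeated squaring: O(log p) multiplications
    result = [[1, 0], [0, 1]]            # identity matrix
    while p:
        if p & 1:
            result = mat_mult(result, m)
        m = mat_mult(m, m)
        p >>= 1
    return result

def fib(n):
    # f_n with f_1 = f_2 = 1, computed from M^(n-1) applied to v_1 = (f_1, f_2)
    if n <= 2:
        return 1
    m = mat_pow([[0, 1], [1, 1]], n - 1)
    return m[0][0] * 1 + m[0][1] * 1      # first coordinate of M^(n-1) . (1, 1)

print([fib(n) for n in range(1, 11)])     # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]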