BFS (Breadth First Search) time complexity at every step

BFS(G,s)
1 for each vertex u ∈ G.V-{s}
2 u.color = WHITE
3 u.d = ∞
4 u.π = NIL
5 s.color = GRAY
6 s.d = 0
7 s.π = NIL
8 Q = Ø
9 ENQUEUE(Q, s)
10 while Q ≠ Ø
11 u = DEQUEUE(Q)
12 for each v ∈ G.Adj[u]
13 if v.color == WHITE
14 v.color = GRAY
15 v.d = u.d + 1
16 v.π = u
17 ENQUEUE(Q, v)
18 u.color = BLACK
The above Breadth First Search code is represented using adjacency lists.
Notations -
G : Graph
s : source vertex
u.color : stores the color of each vertex u ∈ V
u.π : stores predecessor of u
u.d : stores the distance from the source s to vertex u computed by the algorithm
Understanding of the code (help me if I'm wrong) -
1. As far as I could understand, the ENQUEUE(Q, s) and DEQUEUE(Q) operations take O(1) time.
2. Since each vertex is enqueued exactly once, the enqueue operations take O(V) time in total.
3. Since the sum of lengths of all adjacency lists is |E|, total time spent on scanning adjacency lists is O(E).
4. Why is the running time of BFS O(V+E)?
Please do not refer me to some website; I've gone through many articles but I'm still finding it difficult to understand.
Can anyone please reply to this code by writing the time complexity of each of the 18 lines?

Lines 1-4: O(V) in total
Lines 5-9: O(1) or O(constant)
Line 11: O(V) for all executions of line 11 within the loop (each vertex can only be dequeued once)
Lines 12-13: O(E) in total, as you will check every possible edge once (O(2E) if the graph is undirected, since each edge is seen from both endpoints).
Lines 14-17: O(V) in total, as out of the E edges you check, at most V vertices will be white.
Line 18: O(V) in total
Summing the complexities gives you
O(4V + E + 1) which simplifies to O(V+E)
Edit:
It is not O(VE) because at each iteration of the loop starting at line 10, lines 12-13 only loop through the edges the current node is linked to, not all the edges in the entire graph. Thus, looking from the point of view of the edges, each edge will be looped over at most twice in a bi-directional (undirected) graph, once from each node it connects.
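For reference, here is a minimal runnable sketch of the same BFS in Java, assuming the graph is stored as a List of adjacency lists (the class and variable names are just for illustration, not from the question). Each vertex is enqueued and dequeued at most once, which gives the O(V) part, and every adjacency list is scanned exactly once when its vertex is dequeued, which gives the O(E) part; together that is O(V+E).
import java.util.*;

class Bfs {
    // dist[v] plays the role of v.d and parent[v] the role of v.pi;
    // a vertex is "WHITE" exactly while dist[v] == -1.
    static int[] bfs(List<List<Integer>> adj, int s) {
        int n = adj.size();
        int[] dist = new int[n];
        int[] parent = new int[n];
        Arrays.fill(dist, -1);                 // lines 1-4: initialization, O(V)
        Arrays.fill(parent, -1);
        Deque<Integer> q = new ArrayDeque<>(); // the FIFO queue Q
        dist[s] = 0;                           // lines 5-9: O(1)
        q.add(s);
        while (!q.isEmpty()) {                 // each vertex dequeued at most once: O(V)
            int u = q.poll();
            for (int v : adj.get(u)) {         // all adjacency lists together: O(E) scans
                if (dist[v] == -1) {           // the "WHITE" test of line 13
                    dist[v] = dist[u] + 1;
                    parent[v] = u;
                    q.add(v);                  // ENQUEUE, O(1)
                }
            }
        }
        return dist;
    }
}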

Why is code like i=i*2 considered O(logN) when in a loop?

Why because of the i=i*2 is the runtime of the loop below considered O(logN)?
for (int i = 1; i <= N;) {
code with O(1);
i = i * 2;
}
Look at 1024 = 2^10. How many times do you have to double the number 1 to get 1024?
Times:  1 2 3 4  5  6  7   8   9   10
Result: 2 4 8 16 32 64 128 256 512 1024
So you would have to run your doubling loop ten times to get 2^10, and in general you have to run your doubling loop n times to get 2^n. But what is n? It's log_2(2^n), so in general, if N is some power of 2, the loop has to run log_2(N) times to reach it.
To have the algorithm in O(logN), for any N it would need (around) log N steps. In the example of N=32 (where log 32 = 5) this can be seen:
i = 1 (start)
i = 2
i = 4
i = 8
i = 16
i = 32 (5 iterations)
i = 64 (abort after 6 iterations)
In general, after x iterations, i=2^x holds. To reach i>N you need x = log N + 1.
PS: When talking about complexities, the log base (2, 10, e, ...) is not relevant. Furthermore, it is not relevant if you have i <= N or i < N as this only changes the number of iterations by one.
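As a quick sanity check, here is a small illustrative Java snippet that counts the iterations of the doubling loop and prints log2(N) next to it; the count comes out as floor(log2(N)) + 1, i.e. Theta(log N):
class DoublingLoop {
    public static void main(String[] args) {
        int[] samples = {16, 32, 1000, 1 << 20};
        for (int n : samples) {
            int iterations = 0;
            for (long i = 1; i <= n; i = i * 2) {  // the loop from the question
                iterations++;                      // O(1) body
            }
            // iterations equals floor(log2(n)) + 1
            System.out.println("N = " + n + " -> " + iterations
                    + " iterations, log2(N) = " + (Math.log(n) / Math.log(2)));
        }
    }
}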
You can prove it pretty simply.
Claim:
For the tth (0 base) iteration, i=2^t
Proof, by induction.
Base: 2^0 = 1, and indeed in the first iteration, i=1.
Step: For some t+1, the value of i is 2*i(t) (where i(t) is the value of i in the t iteration). From Induction Hypothesis we know that i(t)=2^t, and thus i(t+1) = 2*i(t) = 2*2^t = 2^(t+1), and the claim holds.
Now, let's examine our stop criterion. We iterate the loop while i <= N, and from the above claim, that means we iterate while 2^t <= N. By taking log_2 of both sides, we get log_2(2^t) <= log_2(N), and since log_2(2^t) = t, we get that we iterate while t <= log_2(N) - so we iterate Theta(log_2(N)) times. (And that concludes the proof.)
i starts at 1. In each iteration you multiply i by 2, so in the K-th iteration i will be 2^(K-1).
After some number K of iterations, 2^(K-1) will be bigger than (or equal to) N.
This means N ≤ 2^(K-1),
which means log2(N) ≤ K-1.
K-1 is the number of iterations your loop will run, and since K-1 is greater than or equal to log(N), your algorithm is logarithmic.

How can I find (generate) data points from a shape in 2D in MATLAB? For example, the letters A, B, and C.

How can I find or generate data points from a shape in 2D in MATLAB? For example, the letters A, B, and C.
You can use fill()
An example for an octagon (see https://www.mathworks.com/help/matlab/ref/fill.html):
% Generate the points required for the fill.
t = (1/16:1/8:1)'*2*pi; % using 1/8 steps we get an 8 sided object.
x = cos(t);
y = sin(t);
% fill the data
fill(x,y,'r')
axis square % prevent skewing the result.
An example of generating the x y coordinates of a rectangle with an offset of (5,5):
x=[5 5 25 25 5]
y=[5 15 15 5 5]
You have 5 points because you need to include the final point to complete the path (I believe). Follow the path when collecting the x coordinates and the y coordinates: we start at (5,5), then move to (5,15), so the first part of the path is
x=[5 5 ...
y=[5 15 ...
If you want to generate the coordinates automatically, you could use a program like InkScape (vector program) to help you convert a character to paths, but here is a simple example drawn with the pen tool:
The points are given by
m 0,1052.3622 5,-10 5,0 5,10 z
in which 1052.3622 is very large, but that is ultimately because I placed my shape at the bottom of the page; if we set this to 0,0 it would go to the top of the page.

Segment tree - query complexity

I am having problems with understanding segment tree complexity. It is clear that if you have an update function which has to change only one node, its complexity will be O(log n).
But I have no idea why the complexity of query(a,b), where (a,b) is the interval that needs to be checked, is O(log n).
Can anyone provide me with intuitive / formal proof to understand this?
There are four cases when query the interval (x,y)
FIND(R,x,y) //R is the node
% Case 1
if R.first = x and R.last = y
return {R}
% Case 2
if y <= R.middle
return FIND(R.leftChild, x, y)
% Case 3
if x >= R.middle + 1
return FIND(R.rightChild, x, y)
% Case 4
P = FIND(R.leftChild, x, R.middle)
Q = FIND(R.rightChild, R.middle + 1, y)
return P union Q.
Intuitively, first three cases reduce the level of tree height by 1, since the tree has height log n, if only first three cases happen, the running time is O(log n).
For the last case, FIND() divides the problem into two subproblems. However, we assert that this can happen at most once. After we call FIND(R.leftChild, x, R.middle), we are querying R.leftChild for the interval [x, R.middle]. R.middle is the same as R.leftChild.last. If x > R.leftChild.middle, then it is Case 3 (a single recursion into the right child); if x <= R.leftChild.middle, then we will call
FIND ( R.leftChild.leftChild, x, R.leftChild.middle );
FIND ( R.leftChild.rightChild, R.leftChild.middle + 1, R.leftChild.last );
However, the second FIND() returns R.leftChild.rightChild.sum and therefore takes constant time, and the problem will not be separated into two subproblems (strictly speaking, the problem is separated, though one subproblem takes O(1) time to solve).
Since the same analysis holds on the rightChild of R, we conclude that after case4 happens the first time, the running time T(h) (h is the remaining level of the tree) would be
T(h) <= T(h-1) + c (c is a constant)
T(1) = c
which yields:
T(h) <= c * h = O(h) = O(log n) (since h is the height of the tree)
Hence we end the proof.
This is my first time to contribute, hence if there are any problems, please kindly point them out and I would edit my answer.
A range query using a segment tree basically involves recursing from the root node. You can think of the entire recursion process as a traversal on the segment tree: any time a recursion is needed on a child node, you are visiting that child node in your traversal. So analyzing the complexity of a range query is equivalent to finding the upper bound for the total number of nodes that are visited.
It turns out that at any arbitrary level, there are at most 4 nodes that can be visited. Since the segment tree has a height of log(n) and that at any level there are at most 4 nodes that can be visited, the upper bound is actually 4*log(n). The time complexity is therefore O(log(n)).
Now we can prove this with induction. The base case is at the first level where the root node lies. Since the root node has at most two child nodes, we can only visit at most those two child nodes, which is at most 4 nodes.
Now suppose it is true that at an arbitrary level (say level i) we visit at most 4 nodes. We want to show that we will visit at most 4 nodes at the next level (level i+1) as well. If we had visited only 1 or 2 nodes at level i, it's trivial to show that at level i+1 we will visit at most 4 nodes because each node can have at most 2 child nodes.
So let's focus on the assumption that 3 or 4 nodes were visited at level i, and try to show that at level i+1 we can also have at most 4 visited nodes. Now since the range query is asking for a contiguous range, we know that the 3 or 4 nodes visited at level i can be categorized into 3 partitions of nodes: a leftmost single node whose segment range is only partially covered by the query range, a rightmost single node whose segment range is only partially covered by the query range, and 1 or 2 middle nodes whose segment range is fully covered by the query range. Since the middle nodes have their segment range(s) fully covered by the query range, there would be no recursion at the next level; we just use their precomputed sums. We are left with possible recursions on the leftmost node and the rightmost node at the next level, which is obviously at most 4.
This completes the proof by induction. We have proven that at any level at most 4 nodes are visited. The time complexity for a range query is therefore O(log(n)).
An interval of length n can be represented by k nodes where k <= log(n)
We can prove it based on how the binary system works.
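To make the recursion concrete, here is a minimal Java sketch of a sum segment tree whose query method mirrors the four cases of FIND above (array-based storage; the class and method names are made up for illustration):
class SegmentTree {
    private final long[] tree;
    private final int n;

    SegmentTree(int[] a) {
        n = a.length;
        tree = new long[4 * n];
        build(1, 0, n - 1, a);
    }

    private void build(int node, int lo, int hi, int[] a) {
        if (lo == hi) { tree[node] = a[lo]; return; }
        int mid = (lo + hi) / 2;
        build(2 * node, lo, mid, a);
        build(2 * node + 1, mid + 1, hi, a);
        tree[node] = tree[2 * node] + tree[2 * node + 1];
    }

    // Sum of a[x..y]; mirrors FIND(R, x, y) above.
    long query(int x, int y) { return query(1, 0, n - 1, x, y); }

    private long query(int node, int lo, int hi, int x, int y) {
        if (x == lo && y == hi) return tree[node];                       // Case 1
        int mid = (lo + hi) / 2;                                         // R.middle
        if (y <= mid) return query(2 * node, lo, mid, x, y);             // Case 2
        if (x >= mid + 1) return query(2 * node + 1, mid + 1, hi, x, y); // Case 3
        return query(2 * node, lo, mid, x, mid)                          // Case 4
             + query(2 * node + 1, mid + 1, hi, mid + 1, y);
    }
}
For example, new SegmentTree(new int[]{1, 2, 3, 4, 5}).query(1, 3) returns 9, and each query visits only O(log n) nodes as argued above.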

Time Complexity of Prims Algorithm?

I found the time complexity of Prim's algorithm everywhere given as O((V + E) log V) = O(E log V). But looking at the algorithm below, it seems like the time complexity is O(V(log V + E log V)), since the inner for loop is nested inside the while loop. For the complexity to be O((V + E) log V), the loops would have to not be nested this way, which seems wrong.
MST-PRIM(G, w, r)
1 for each u ∈ G.V
2 u.key ← ∞
3 u.π ← NIL
4 r.key ← 0
5 Q ← G.V
6 while Q ≠ Ø
7 u ← EXTRACT-MIN(Q)
8 for each v ∈ G.Adjacent[u]
9 if v ∈ Q and w(u, v) < v.key
10 v.π ← u
11 v.key ← w(u, v)
Using a Binary Heap
The time complexity required for one call to EXTRACT-MIN(Q) is O(log V) using a min-priority queue. The while loop at line 6 executes a total of V times, so EXTRACT-MIN(Q) is called V times. So the total cost of the EXTRACT-MIN(Q) calls is O(V log V).
The for loop at line 8 executes a total of 2E times, since the sum of the lengths of all adjacency lists is 2E for an undirected graph. The time required to execute line 11 is O(log V), using the DECREASE-KEY operation on the min-heap. Line 11 also executes a total of 2E times, so the total time required to execute line 11 is O(2E log V) = O(E log V).
The for loop at line 1 is executed V times, so performing lines 1 to 5 requires O(V) time.
The total time complexity of MST-PRIM is the sum of the three parts above: O(V log V + E log V + V) = O(E log V), since |E| >= |V| - 1 for a connected graph.
Using a Fibonacci Heap
Same as above.
Executing line 11 requires O(1) amortized time. Line 11 executes a total of 2E times. So the total time complexity is O(E).
Same as above
So the total time complexity of MST-PRIM is the sum of executing steps 1 through 3, for a total complexity of O(V log V + E + V) = O(E + V log V).
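For comparison, here is a rough Java sketch of Prim's algorithm using java.util.PriorityQueue as the binary heap. PriorityQueue has no DECREASE-KEY, so the sketch uses the common "lazy deletion" variant instead: every edge may push one entry, so the heap holds O(E) entries and the total cost is O(E log E) = O(E log V). The names and the int[]{vertex, weight} edge representation are assumptions for illustration (a connected graph is also assumed), not from the question.
import java.util.*;

class PrimSketch {
    // adj.get(u) holds int[]{v, w} pairs: an edge (u, v) of weight w.
    static long mstWeight(List<List<int[]>> adj) {
        int n = adj.size();
        boolean[] inMst = new boolean[n];
        // Heap entries are int[]{key, vertex}, smallest key first.
        PriorityQueue<int[]> pq =
            new PriorityQueue<int[]>((a, b) -> Integer.compare(a[0], b[0]));
        pq.add(new int[]{0, 0});           // start from vertex 0 with key 0
        long total = 0;
        while (!pq.isEmpty()) {
            int[] top = pq.poll();         // EXTRACT-MIN: O(log E) = O(log V)
            int u = top[1];
            if (inMst[u]) continue;        // stale entry, skipped (lazy deletion)
            inMst[u] = true;
            total += top[0];
            for (int[] e : adj.get(u)) {   // adjacency lists scanned 2E times overall
                int v = e[0], w = e[1];
                if (!inMst[v]) pq.add(new int[]{w, v});  // stands in for DECREASE-KEY
            }
        }
        return total;
    }
}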
Your idea seems correct. Let's take the complexity as
V(lg V + E lg V)
Then notice that in the inner for loop, we are actually going through all the vertices, not the edges, so let's modify it a little to
V(lg V + V lg V)
which means
V lg V + V*V lg V
But for worst-case analysis (dense graphs), V*V is roughly equal to the number of edges, E:
V lg V + E lg V
= (V + E) lg V
but since V << E, this is
E lg V
The time complexity of Prim's algorithm is O(VlogV + ElogV). It seems like you understand how the VlogV came to be, so let's skip over that. So where does ElogV come from? Let's start by looking at Prim's algorithm's source code:
| MST-PRIM(Graph, weights, r)
1 | for each u ∈ Graph.V
2 | u.key ← ∞
3 | u.π ← NIL
4 | r.key ← 0
5 | Q ← Graph.V
6 | while Q ≠ Ø
7 | u ← EXTRACT-MIN(Q)
8 | for each v ∈ Graph.Adj[u]
9 | if v ∈ Q and weights(u, v) < v.key
10| v.π ← u
11| v.key ← weights(u, v)
Lines 8-11 are executed for every element in Q, and we know that there are V elements in Q (representing the set of all vertices). Line 8's loop iterates through all the neighbors of the currently extracted vertex; we will do the same for the next extracted vertex, and for the one after that. Prim's algorithm (like Dijkstra's) does not repeat vertices (because it is a greedy, optimal algorithm), and will have us go through each of the connected vertices eventually, exploring all of their neighbors. In other words, this loop will end up going through all the edges of the graph twice at some point (2E).
Why twice? Because at some point we come back to a previously explored edge from the other direction, and we can't rule it out until we've actually checked it. Fortunately, that constant 2 is dropped during our time complexity analysis, so the loop is really just doing E amounts of work.
Why wasn't it V*V? You might reach that term if you just consider that we have to check each Vertex and its neighbors, and in the worst case graph the number of neighbors approaches V. Indeed, in a dense graph V*V = E. But the more accurate description of the work of these two loops is "going through all the edges twice", so we refer to E instead. It's up to the reader to connect how sparse their graph is with this term's time complexity.
Let's look at a small example graph with 4 vertices:
1--2
|\ |
| \|
3--4
Assume that Q will give us the nodes in the order 1, 2, 3, and then 4.
In the first iteration of the outer loop, the inner loop will run 3 times (for 2, 3, and 4).
In the second iteration of the outer loop, the inner loop runs 2 times (for 1 and 4).
In the third iteration of the outer loop, the inner loop runs 2 times (for 1 and 4).
In the last iteration of the outer loop, the inner loop runs 3 times (for 1, 2, and 3).
The total iterations was 10, which is twice the number of edges (2*5).
Extracting the minimum and tracking the updated minimum edges (usually done with a binary or Fibonacci heap, giving log(V) time complexity per heap operation) occurs inside the loop iterations - the exact mechanisms involved end up occurring inside the inner loop often enough that they are governed by the time complexity of both loops. Therefore, the complete time complexity for this phase of the algorithm is O(2*E*log(V)). Dropping the constant yields O(E*log(V)).
Given that the total time complexity of the algorithm is O(VlogV + ElogV), we can simplify to O((V+E)logV). In a dense graph E > V, so we can conclude O(ElogV).
Actually, as you say, since the for loop is nested inside the while loop, the time complexity looks like it should be V·E·lg V under a naive analysis. But in Cormen (CLRS) an aggregate/amortized analysis is used, which is why it comes out to O(E log V).

Computational complexity of Fibonacci Sequence

I understand Big-O notation, but I don't know how to calculate it for many functions. In particular, I've been trying to figure out the computational complexity of the naive version of the Fibonacci sequence:
int Fibonacci(int n)
{
if (n <= 1)
return n;
else
return Fibonacci(n - 1) + Fibonacci(n - 2);
}
What is the computational complexity of the Fibonacci sequence and how is it calculated?
You model the time function to calculate Fib(n) as sum of time to calculate Fib(n-1) plus the time to calculate Fib(n-2) plus the time to add them together (O(1)). This is assuming that repeated evaluations of the same Fib(n) take the same time - i.e. no memoization is used.
T(n<=1) = O(1)
T(n) = T(n-1) + T(n-2) + O(1)
You solve this recurrence relation (using generating functions, for instance) and you'll end up with the answer.
Alternatively, you can draw the recursion tree, which will have depth n, and intuitively figure out that this function is asymptotically O(2^n). You can then prove your conjecture by induction.
Base: n = 1 is obvious.
Assume T(n-1) = O(2^(n-1)), therefore
T(n) = T(n-1) + T(n-2) + O(1), which is equal to
T(n) = O(2^(n-1)) + O(2^(n-2)) + O(1) = O(2^n)
However, as noted in a comment, this is not the tight bound. An interesting fact about this function is that the T(n) is asymptotically the same as the value of Fib(n) since both are defined as
f(n) = f(n-1) + f(n-2).
The leaves of the recursion tree will always return 1. The value of Fib(n) is the sum of all values returned by the leaves in the recursion tree, which is equal to the count of leaves. Since each leaf takes O(1) to compute, T(n) is equal to Fib(n) x O(1). Consequently, the tight bound for this function is the Fibonacci sequence itself (~Θ(1.6^n)). You can find out this tight bound by using generating functions as I'd mentioned above.
Just ask yourself how many statements need to execute for F(n) to complete.
For F(1), the answer is 1 (the first part of the conditional).
For F(n), the answer is F(n-1) + F(n-2).
So what function satisfies these rules? Try a^n (a > 1):
a^n == a^(n-1) + a^(n-2)
Divide through by a^(n-2):
a^2 == a + 1
Solve for a and you get (1+sqrt(5))/2 = 1.6180339887, otherwise known as the golden ratio.
So it takes exponential time.
I agree with pgaur and rickerbh, recursive-fibonacci's complexity is O(2^n).
I came to the same conclusion by a rather simplistic but I believe still valid reasoning.
First, it's all about figuring out how many times recursive fibonacci function ( F() from now on ) gets called when calculating the Nth fibonacci number. If it gets called once per number in the sequence 0 to n, then we have O(n), if it gets called n times for each number, then we get O(n*n), or O(n^2), and so on.
So, when F() is called for a number n, the number of times F() is called for a given number between 0 and n-1 grows as we approach 0.
As a first impression, it seems to me that if we put it in a visual way, drawing a unit per time F() is called for a given number, we get a sort of pyramid shape (that is, if we center the units horizontally). Something like this:
n *
n-1 **
n-2 ****
...
2 ***********
1 ******************
0 ***************************
Now, the question is, how fast is the base of this pyramid enlarging as n grows?
Let's take a real case, for instance F(6)
F(6) * <-- only once
F(5) * <-- only once too
F(4) **
F(3) ****
F(2) ********
F(1) **************** <-- 16
F(0) ******************************** <-- 32
We see F(0) gets called 32 times, which is 2^5, which for this sample case is 2^(n-1).
Now, we want to know how many times F(x) gets called at all, and we can see the number of times F(0) is called is only a part of that.
If we mentally move all the *'s from F(6) to F(2) lines into F(1) line, we see that F(1) and F(0) lines are now equal in length. Which means, total times F() gets called when n=6 is 2x32=64=2^6.
Now, in terms of complexity:
O( F(6) ) = O(2^6)
O( F(n) ) = O(2^n)
There's a very nice discussion of this specific problem over at MIT. On page 5, they make the point that, if you assume that an addition takes one computational unit, the time required to compute Fib(N) is very closely related to the result of Fib(N).
As a result, you can skip directly to the very close approximation of the Fibonacci series:
Fib(N) = (1/sqrt(5)) * 1.618^(N+1) (approximately)
and say, therefore, that the worst case performance of the naive algorithm is
O((1/sqrt(5)) * 1.618^(N+1)) = O(1.618^(N+1))
PS: There is a discussion of the closed form expression of the Nth Fibonacci number over at Wikipedia if you'd like more information.
You can expand it and have a visualization:
T(n) = T(n-1) + T(n-2) <
T(n-1) + T(n-1)
= 2*T(n-1)
< 2*2*T(n-2)
< 2*2*2*T(n-3)
....
< 2^i*T(n-i)
...
==> O(2^n)
A recursive algorithm's time complexity can be better estimated by drawing the recursion tree. In this case the recurrence relation for drawing the recursion tree would be T(n) = T(n-1) + T(n-2) + O(1).
Note that each step takes O(1), meaning constant time, since it does only one comparison to check the value of n in the if block. The recursion tree would look like:
n
(n-1) (n-2)
(n-2)(n-3) (n-3)(n-4) ...so on
Here let's say each level of the above tree is denoted by i,
hence,
i
0 n
1 (n-1) (n-2)
2 (n-2) (n-3) (n-3) (n-4)
3 (n-3)(n-4) (n-4)(n-5) (n-4)(n-5) (n-5)(n-6)
Let's say that at a particular value of i the tree ends; that will be the case when n-i = 1, hence i = n-1, meaning that the height of the tree is n-1.
Now let's see how much work is done at each of the n levels of the tree. Note that each step takes O(1) time, as stated in the recurrence relation.
2^0=1 n
2^1=2 (n-1) (n-2)
2^2=4 (n-2) (n-3) (n-3) (n-4)
2^3=8 (n-3)(n-4) (n-4)(n-5) (n-4)(n-5) (n-5)(n-6) ..so on
2^i for ith level
since i=n-1 is height of the tree work done at each level will be
i work
1 2^1
2 2^2
3 2^3..so on
Hence the total work done will be the sum of the work done at each level; it will be 2^0 + 2^1 + 2^2 + 2^3 + ... + 2^(n-1), since i = n-1.
By the geometric series formula this sum is 2^n - 1, hence the total time complexity here is O(2^n).
The proof answers are good, but I always have to do a few iterations by hand to really convince myself. So I drew out a small calling tree on my whiteboard, and started counting the nodes. I split my counts out into total nodes, leaf nodes, and interior nodes. Here's what I got:
IN | OUT | TOT | LEAF | INT
1 | 1 | 1 | 1 | 0
2 | 1 | 1 | 1 | 0
3 | 2 | 3 | 2 | 1
4 | 3 | 5 | 3 | 2
5 | 5 | 9 | 5 | 4
6 | 8 | 15 | 8 | 7
7 | 13 | 25 | 13 | 12
8 | 21 | 41 | 21 | 20
9 | 34 | 67 | 34 | 33
10 | 55 | 109 | 55 | 54
What immediately leaps out is that the number of leaf nodes is fib(n). What took a few more iterations to notice is that the number of interior nodes is fib(n) - 1. Therefore the total number of nodes is 2 * fib(n) - 1.
Since you drop the coefficients when classifying computational complexity, the final answer is θ(fib(n)).
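A small illustrative Java program that instruments the naive recursion from the question and counts the calls; the exact leaf/interior split depends on which base case you treat as a leaf, but the total grows as roughly 2 * fib(n), i.e. Theta(fib(n)), matching the table:
class FibCount {
    static long calls;                       // total nodes in the call tree

    static int fib(int n) {
        calls++;
        if (n <= 1) return n;                // same base case as the question's code
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 10; n++) {
            calls = 0;
            int value = fib(n);
            // calls grows like fib(n) itself, i.e. Theta(golden_ratio^n)
            System.out.println("n=" + n + "  fib=" + value + "  calls=" + calls);
        }
    }
}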
It is bounded on the lower end by 2^(n/2) and on the upper end by 2^n (as noted in other comments). And an interesting fact of that recursive implementation is that it has a tight asymptotic bound of Fib(n) itself. These facts can be summarized:
T(n) = Ω(2^(n/2)) (lower bound)
T(n) = O(2^n) (upper bound)
T(n) = Θ(Fib(n)) (tight bound)
The tight bound can be reduced further using its closed form if you like.
It is simple to calculate by diagramming function calls. Simply add the function calls for each value of n and look at how the number grows.
The Big O is O(Z^n) where Z is the golden ratio or about 1.62.
Both the Leonardo numbers and the Fibonacci numbers approach this ratio as we increase n.
Unlike other Big O questions there is no variability in the input and both the algorithm and implementation of the algorithm are clearly defined.
There is no need for a bunch of complex math. Simply diagram out the function calls below and fit a function to the numbers.
Or if you are familiar with the golden ratio you will recognize it as such.
This answer is more correct than the accepted answer which claims that it will approach f(n) = 2^n. It never will. It will approach f(n) = golden_ratio^n.
2 (2 -> 1, 0)
4 (3 -> 2, 1) (2 -> 1, 0)
8 (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
14 (5 -> 4, 3) (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
(3 -> 2, 1) (2 -> 1, 0)
24 (6 -> 5, 4)
(5 -> 4, 3) (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
(3 -> 2, 1) (2 -> 1, 0)
(4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
The naive recursion version of Fibonacci is exponential by design due to repetition in the computation:
At the root you are computing:
F(n) depends on F(n-1) and F(n-2)
F(n-1) depends on F(n-2) again and F(n-3)
F(n-2) depends on F(n-3) again and F(n-4)
so you have at each level 2 recursive calls that repeat a lot of the computation, and the time function will look like this:
T(n) = T(n-1) + T(n-2) + C, with C constant
T(n-1) = T(n-2) + T(n-3) > T(n-2) then
T(n) > 2*T(n-2)
...
T(n) > 2^(n/2) * T(1) = O(2^(n/2))
This is just a lower bound, which for the purpose of your analysis should be enough, but the real time function is within a constant factor of the same Fibonacci formula, and the closed form is known to be exponential in the golden ratio.
In addition, you can find optimized versions of Fibonacci using dynamic programming like this:
static int fib(int n)
{
/* memory */
int f[] = new int[n+1];
int i;
/* Init */
f[0] = 0;
f[1] = 1;
/* Fill */
for (i = 2; i <= n; i++)
{
f[i] = f[i-1] + f[i-2];
}
return f[n];
}
That is optimized and does only n steps, but it is also exponential.
Cost functions are defined from input size to the number of steps to solve the problem. When you see the dynamic version of Fibonacci (n steps to compute the table) or the easiest algorithm to know if a number is prime (sqrt(n) steps to check the valid divisors of the number), you may think that these algorithms are O(n) or O(sqrt(n)), but this is simply not true for the following reason:
The input to your algorithm is a number: n. Using binary notation, the input size for an integer n is log2(n), so doing the variable change
m = log2(n) // your real input size
let find out the number of steps as a function of the input size
m = log2(n)
2^m = 2^log2(n) = n
then the cost of your algorithm as a function of the input size is:
T(m) = n steps = 2^m steps
and this is why the cost is an exponential.
Well, according to me, it is O(2^n), since in this function only the recursion takes considerable time (divide and conquer). We see that the above function continues in a tree until the leaves are reached, i.e. when we get to the level F(n-(n-1)), that is F(1). So, when we jot down the time complexity encountered at each depth of the tree, the summation series is:
1 + 2 + 4 + ... + 2^(n-1)
= 1*((2^n)-1)/(2-1)
= 2^n - 1
which is of the order of 2^n, i.e. O(2^n).
No answer emphasizes probably the fastest and most memory efficient way to calculate the sequence. There is a closed form exact expression for the Fibonacci sequence. It can be found by using generating functions or by using linear algebra as I will now do.
Let f_1,f_2, ... be the Fibonacci sequence with f_1 = f_2 = 1. Now consider a sequence of two dimensional vectors
(f_1, f_2), (f_2, f_3), (f_3, f_4), ...
Observe that the next element v_{n+1} in the vector sequence is M.v_{n} where M is a 2x2 matrix given by
M = [0 1]
[1 1]
due to f_{n+1} = f_{n+1} and f_{n+2} = f_{n} + f_{n+1}
M is diagonalizable over the complex numbers (in fact it is diagonalizable over the reals as well, though that is not the case in general). There are two distinct eigenvectors of M, given by
(1, x_1) and (1, x_2)
where x_1 = (1+sqrt(5))/2 and x_2 = (1-sqrt(5))/2 are the distinct solutions to the polynomial equation x*x-x-1 = 0. The corresponding eigenvalues are x_1 and x_2. Think of M as a linear transformation and change your basis to see that it is equivalent to
D = [x_1 0]
[0 x_2]
In order to find f_n find v_n and look at the first coordinate. To find v_n apply M n-1 times to v_1. But applying M n-1 times is easy, just think of it as D. Then using linearity one can find
f_n = 1/sqrt(5)*(x_1^n-x_2^n)
Since the norm of x_2 is smaller than 1, the corresponding term vanishes as n tends to infinity; therefore, obtaining the integer closest to (x_1^n)/sqrt(5) is enough to find the answer exactly. By making use of the trick of repeated squaring, this can be done using only O(log_2(n)) multiplication (and addition) operations. Memory complexity is even more impressive, because it can be implemented in a way that you always need to hold at most one number in memory, whose value is smaller than the answer. However, since this number is not a natural number, the memory complexity changes depending on whether you use a fixed number of bits to represent each number (and hence do calculations with error), which gives O(1) memory complexity, or use a better model like Turing machines, in which case some more analysis is needed.
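Here is a rough Java sketch of that idea using exact integer matrix exponentiation (repeated squaring of M = [[0,1],[1,1]]) instead of floating-point powers of the golden ratio, so there is no rounding error; it uses O(log n) 2x2 matrix multiplications. The class and method names are made up, and plain longs overflow past fib(92).
class FastFib {
    // Multiply two 2x2 matrices of longs.
    static long[][] mul(long[][] a, long[][] b) {
        return new long[][]{
            {a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]},
            {a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]}
        };
    }

    // Computes fib(n) as the top-right entry of M^n, M = [[0,1],[1,1]],
    // using repeated squaring: O(log n) matrix multiplications.
    static long fib(long n) {
        long[][] result = {{1, 0}, {0, 1}};   // identity matrix
        long[][] m = {{0, 1}, {1, 1}};
        while (n > 0) {
            if ((n & 1) == 1) result = mul(result, m);
            m = mul(m, m);
            n >>= 1;
        }
        return result[0][1];
    }

    public static void main(String[] args) {
        System.out.println(fib(10));   // 55
        System.out.println(fib(50));   // 12586269025
    }
}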