How does this code have O(2^N) complexity as opposed to O(N * 2^N)? - time-complexity

def allFib(n):
    for i in range(n):
        print(str(i) + ": " + str(fib(i)))

def fib(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)
Shouldn't fib being called N times be accounted for here?
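One way to see why it doesn't change the bound (a quick empirical sketch, not part of the original question) is to count the calls directly: the cost of allFib(n) is the sum of the costs of fib(0) through fib(n - 1), and that sum is dominated by its largest term rather than multiplied by N:

calls = 0

def fib(n):
    # same fib as above, instrumented with a call counter
    global calls
    calls += 1
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

for n in (10, 15, 20):
    calls = 0
    for i in range(n):  # what allFib(n) does
        fib(i)
    total = calls
    calls = 0
    fib(n - 1)          # the single most expensive call
    last = calls
    print(n, total, last, round(total / last, 2))

The ratio total/last settles near a small constant (roughly 2.6), so the whole loop costs only a constant factor more than its last call: O(2^N), not O(N * 2^N).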

Time and Space Complexity of a Recursive Algorithm

Convert n to its English words representation, where 0 <= n < 1,000,000,000
Python Solution:
class Solution:
    def helper(self, n):
        ones = ['', 'One', 'Two', 'Three', 'Four', 'Five', 'Six', 'Seven', 'Eight', 'Nine']
        teens = ['Ten', 'Eleven', 'Twelve', 'Thirteen', 'Fourteen', 'Fifteen', 'Sixteen', 'Seventeen', 'Eighteen', 'Nineteen']
        tens = ['', '', 'Twenty', 'Thirty', 'Forty', 'Fifty', 'Sixty', 'Seventy', 'Eighty', 'Ninety']
        res = ''
        if n < 10:
            res = ones[n]
        elif n < 20:
            res = teens[n - 10]
        elif n < 100:
            res = tens[n // 10] + ' ' + self.helper(n % 10)
        elif n < 1000:
            res = self.helper(n // 100) + ' Hundred ' + self.helper(n % 100)
        elif n < 1000000:
            res = self.helper(n // 1000) + ' Thousand ' + self.helper(n % 1000)
        elif n < 1000000000:
            res = self.helper(n // 1000000) + ' Million ' + self.helper(n % 1000000)
        return res.strip()

    def convert_to_word(self, n):
        if n == 0:
            return 'Zero'
        return self.helper(n)
I've been trying to calculate the time and space complexity of this solution. I've seen different answers. Some say that the time complexity is O(1) because the helper function is called a fixed number of times (even if n is a large number). Others say it's O(log(n)).
The space complexity seems to be O(1)?
I am so confused. Please help me clarify. Thank you.
On all inputs n that are at least 1000000000, the function returns immediately with an empty string, without any computation or recursive calls. So of course the time complexity is O(1), and likewise the space complexity (since what happens for smaller n is completely irrelevant).
The case would be different and more interesting if you removed the line
elif n < 1000000000
so that for large n you would get a (virtually) unbounded result string with an unbounded number of 'Million' substrings (ignoring the fact that integers have a maximum size on a real computer, and ignoring the fact that you would get nonsensical number words). In this case you would get time complexity O(log(n)^2), since you are concatenating O(log n) strings of length O(log n) each, and space complexity O(log n), because of the call stack for the recursive calls. The time complexity could easily be reduced to O(log n) by handling the string concatenations more efficiently.
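As an illustration of that last point, here is a minimal sketch (my own, not the original author's code) of the same table-driven decomposition that appends words to a list and joins once at the end, so each character is copied only a constant number of times:

ONES = ['', 'One', 'Two', 'Three', 'Four', 'Five', 'Six', 'Seven', 'Eight', 'Nine']
TEENS = ['Ten', 'Eleven', 'Twelve', 'Thirteen', 'Fourteen', 'Fifteen', 'Sixteen', 'Seventeen', 'Eighteen', 'Nineteen']
TENS = ['', '', 'Twenty', 'Thirty', 'Forty', 'Fifty', 'Sixty', 'Seventy', 'Eighty', 'Ninety']

def to_words(n, parts):
    # Same decomposition as helper above, but collecting the words in a
    # list; the single ' '.join at the end copies each character once.
    if n < 10:
        if ONES[n]:
            parts.append(ONES[n])
    elif n < 20:
        parts.append(TEENS[n - 10])
    elif n < 100:
        parts.append(TENS[n // 10])
        to_words(n % 10, parts)
    elif n < 1000:
        to_words(n // 100, parts)
        parts.append('Hundred')
        to_words(n % 100, parts)
    else:
        # the unbounded variant discussed above: no upper guard
        to_words(n // 1000, parts)
        parts.append('Thousand')
        to_words(n % 1000, parts)

parts = []
to_words(123456, parts)
print(' '.join(parts))  # One Hundred Twenty Three Thousand Four Hundred Fifty Six

Each recursive level appends O(1) words, there are O(log n) levels, and the final join is linear in the output length, so the whole thing is O(log n).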
Some additional explanation
It seems from the comments that it's not obvious why the time complexity is O(1). If we say that the time complexity T(n) is in O(1), that means that there exists a constant c and a constant k such that for all n > k, T(n) <= c.
In this example, choosing c = 9 and k = 0 does the trick, or alternatively choosing c = 1 and k = 1000000000, hence the time complexity is O(1). It should now also be clear that the following function is also O(1) (albeit with a very large hidden constant factor):
void f(int n) {
    if (n < 1000000000) {
        for (int i = 0; i < n; i++) {
            for (int k = 0; k < n; k++)
                print(i);
        }
    }
    else print("");
}

Time complexity Recursive program

I can't figure out the time complexity of the following snippet of code:
void f(int N) {
    sum++;
    if (N > 1) {
        f(N / 2);
        f(N / 2);
    }
}
It's the double recursive call that gives me problems.
I know (or think) that the time complexity of
void f(int N) {
    sum++;
    if (N > 1) {
        f(N / 2);
    }
}
is ~log2(N), but I don't know what to do with the other code.
You are calling the recursion twice on N/2. Let's write the formula:
T(N) = 2*T(N/2) + 1
Using the master theorem, we fall into the first case, where:
a = 2
b = 2
f(N) = O(1)
T(N) = Θ(N)
We also find T(n) = 2*T(n/2) + 1 here, which shows it is bounded by O(n).
A good way to solve this problem would be:
1. Finding the recurrence relation
For each call to f we have time complexity T(N). Each call contains:
A constant amount of work C: sum++, comparison N > 1, recursive calling overhead, etc.
2 recursive calls to f, each with time complexity T(N / 2).
Thus the recurrence relation is given by T(N) = 2T(N/2) + C.
2. Finding a solution by inspection
If we repeatedly substitute T(N) into itself, we can see an emerging pattern:
T(N) = 2T(N/2) + C
     = 4T(N/4) + 2C + C
     = 8T(N/8) + 4C + 2C + C
     = ...
     = 2^m * T(N/2^m) + C * (2^(m-1) + ... + 2 + 1)
What is the upper limit of the summation, m? Since the stopping condition is N > 1, after repeated substitutions the requirement would be N / 2^m = 1, i.e. m = log2(N).
Thus the summation equals (dropping the round-down, and the constant C because it is simply multiplicative):
2^0 + 2^1 + ... + 2^log2(N) = 2N - 1, so T(N) = Θ(N).
3. Proof by induction
Base step: test if the summation result is correct for the lowest possible value of N, i.e. 2 (taking C = 1):
T(2) = 2T(1) + 1 = 2*1 + 1 = 3, and the closed form gives 2*2 - 1 = 3.
The result is consistent.
Recurrence step: confirm that if the summation is correct for N / 2, i.e. T(N/2) = 2(N/2) - 1 = N - 1, it is also correct for N:
T(N) = 2N - 1 = 2(N - 1) + 1 = 2T(N/2) + 1
Which is exactly our original recurrence relation.
Hence by induction, the summation result is correct, and T(N) is indeed Θ(N).
4. Numerical testing:
We can write code to confirm our result if needed:
function T(n) {
    return n > 1 ? 2 * T(Math.floor(n / 2)) + 1 : 1;
}
Results:
N T(N)
-------------------------
2 3
4 7
8 15
16 31
32 63
64 127
128 255
256 511
512 1023
1024 2047
2048 4095
4096 8191
8192 16383
16384 32767
32768 65535
65536 131071
131072 262143
262144 524287
524288 1048575
1048576 2097151
2097152 4194303
4194304 8388607
8388608 16777215
16777216 33554431
33554432 67108863
67108864 134217727
134217728 268435455
268435456 536870911
536870912 1073741823
1073741824 2147483647
Graph: T(N) plotted against N gives a straight line (slope 2), confirming linear growth.
Let's try tracing through it:
Let N be 8.
1 + f(4) + f(4)
=> 1 + 2 + f(2) + f(2) + f(2) + f(2)
=> 1 + 6 + f(1) + f(1) + f(1) + f(1) + f(1) + f(1) + f(1) + f(1)
When n = 8; work = 15
Let N be 4.
1 + f(2) + f(2)
=> 1 + 2 + f(1) + f(1) + f(1) + f(1)
When n = 4; work = 7
Let N be 2
1 + f(1) + f(1)
When n = 2; work = 3;
Let N be 1
1
When n = 1; work = 1
So at a glance the work pattern seems to be 2n - 1
We still need to prove it!
From the algorithm the recursive relation is:
W(1) = 1
W(N) = 1 + 2 * W(N / 2)
Proof by induction
Base Case:
W(1) = 2(1) - 1 = 1 as required.
Recursive case:
Assume W(N / 2) = 2(N/2) - 1 = N - 1
W(N) = 1 + 2 * W(N / 2)
Applying the induction hypothesis...
W(N) = 1 + 2(N - 1) = 1 + 2N - 2 = 2N - 1 as required
Therefore the complexity is O(2N - 1) because O is reflexive
=> O( max { 2N, -1 } ) because of the rule of sums
=> O(2N)
=> O(N) because of the rule of scaling
This code is very similar to a binary tree traversal:
void f(int N) {
    sum++;
    if (N > 1) {
        f(N / 2);  // like traversing the left subtree
        f(N / 2);  // like traversing the right subtree
    }
}
It basically visits each node of the implicit call tree once, giving O(N) time complexity.
                n/8
        n/4 ----
                n/8
    n/2 --------
                n/8
        n/4 ----
                n/8
n --------------
                n/8
        n/4 ----
                n/8
    n/2 --------
                n/8
        n/4 ----
                n/8
This goes on until the passed value becomes 1 or 0; in total the call tree has about 2N - 1 nodes.

Solve: T(n) = T(n/2) + n/2 + 1

I'm struggling to define the running time of the following algorithm in O notation. My first guess was O(n), but the gap between iterations and the argument passed down isn't steady. What have I defined incorrectly?
public int function(int n) {
    if (n == 0) {
        return 0;
    }
    int i = 1;
    int j = n;
    while (i < j) {
        i = i + 1;
        j = j - 1;
    }
    return function(i - 1) + 1;
}
The while loop is executed about n/2 times.
The recursion is then invoked with an argument that is about half of the original n, so:
n/2 (first call)
n/4 (second call, equal to (n/2)/2)
n/8
n/16
n/32
...
This is similar to a geometric series. In fact the total work can be represented as
n * (1/2 + 1/4 + 1/8 + 1/16 + ...)
which converges to n * 1 = n.
So the O notation is O(n).
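A quick numerical sanity check of this (my own sketch, not part of the original answer), evaluating the recurrence T(n) = T(n/2) + n/2 + 1 directly:

def T(n):
    # T(n) = T(n // 2) + n // 2 + 1, with T(0) = 0
    if n == 0:
        return 0
    return T(n // 2) + n // 2 + 1

for n in (16, 256, 4096, 65536, 1048576):
    print(n, T(n), T(n) / n)

The ratio T(n)/n approaches 1 from above, consistent with Θ(n).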
Another approach is to write it down as T(n) = T(n/2) + n/2 + 1:
the while loop does n/2 work, and the argument passed to the next call is n/2.
Solving this using the master theorem, where:
a = 1
b = 2
f(n) = n/2 + 1
Here log_b(a) = 0 and f(n) = Θ(n), so we are in the third case and need the regularity condition a*f(n/b) <= c*f(n) for some c < 1. Let c = 0.9:
1 * f(n/2) <? c * f(n)
n/4 + 1 <? 0.9 * (n/2 + 1)
0.25n + 1 <? 0.45n + 0.9
0 < 0.2n - 0.1, which holds for all n >= 1.
Which gives:
T(n) = Θ(n)

Running Time Calculation/Complexity of an Algorithm

I have to calculate the time complexity or theoretical running time of an algorithm (given the pseudocode), line by line, as T(n). I've given it a try, but a couple of things are confusing me. For example, what is the time complexity of an "if" statement? And how do I deal with nested loops? The code is below along with my attempt, which is commented.
length[A] = n
for i = 0 to length[A] - 1      // n - 1
    k = i + 1                   // n - 2
    for j = 1 + 2 to length[A]  // (n - 1)(n - 3)
        if A[k] > A[j]          // 1(n - 1)(n - 3)
            k = j               // 1(n - 1)(n - 3)
    if k != i + 1               // 1(n - 1)
        temp = A[i + 1]         // 1(n - 1)
        A[i + 1] = A[k]         // 1(n - 1)
        A[k] = temp             // 1(n - 1)
Blender is right, the result is O(n^2): two nested loops that each have an iteration count dependent on n.
A longer explanation:
The if, in this case, does not really matter: since O-notation only looks at the worst-case execution time of an algorithm, you simply choose the execution path that is worse for the overall execution time. Since, in your example, both execution paths (k != i + 1 being true or false) have no further implication for the runtime, you can disregard the if. If there were a third nested loop inside the if, also running to n, you'd end up with O(n^3).
A line-by-line overview:
for i = 0 to length[A] - 1      // n + 1 [1]
    k = i + 1                   // n
    for j = 1 + 2 to length[A]  // (n)(n - 3 + 1) [1]
        if A[k] > A[j]          // (n)(n - 3)
            k = j               // (n)(n - 3)*x [2]
    if k != i + 1               // n
        temp = A[i + 1]         // n*y [2]
        A[i + 1] = A[k]         // n*y
        A[k] = temp             // n*y
[1] The for loop statement will be executed n+1 times with the following values for i: 0 (true, continue loop), 1 (true, continue loop), ..., length[A] - 1 (true, continue loop), length[A] (false, break loop)
[2] Without knowing the data, you have to guess how often the if's condition is true. This guess can be done mathematically by introducing a variable 0 <= x <= 1. This is in line with what I said before: x is independent of n and therefore influences the overall runtime complexity only as a constant factor; pinning it down would require looking at the actual execution paths.
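To make the counting concrete, here is a small instrumented sketch (my own illustration; it assumes the inner loop was meant to read "for j = i + 2", the usual selection-sort shape) that counts executions for a worst-case, reverse-sorted input:

def count_ops(A):
    # inner counts executions of the comparison A[k] > A[j];
    # swaps counts how often the trailing if-branch fires.
    n = len(A)
    inner = swaps = 0
    for i in range(n):
        k = i + 1
        for j in range(i + 2, n):  # assumed: "for j = i + 2 to length[A]"
            inner += 1
            if A[k] > A[j]:
                k = j
        if k != i + 1:
            A[i + 1], A[k] = A[k], A[i + 1]
            swaps += 1
    return inner, swaps

for n in (10, 100, 1000):
    print(n, count_ops(list(range(n, 0, -1))))

The inner count grows like n^2 / 2, matching the O(n^2) conclusion, while the if inside the loop only decides which constant-time path runs.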

How to optimize code for finding Amicable Pairs

Please see the code I've used to find what I believe are all Amicable Pairs (n, m), n < m, 2 <= n <= 65 million. My code: http://tutoree7.pastebin.com/wKvMAWpT. The found pairs: http://tutoree7.pastebin.com/dpEc0RbZ.
I'm finding that each additional million now takes 24 minutes on my laptop. I'm hoping there are substantial numbers of n that can be filtered out in advance. This comes close, but no cigar: odd n that don't end in '5'. There is only one counterexample pair so far, but that's one too many: (34765731, 36939357). As a filter, that would remove 40% of all n.
I'm hoping for some ideas, not necessarily the Python code for implementing them.
Here is a nice article that summarizes all the optimization techniques for finding amicable pairs, with sample C++ code. It finds all amicable numbers up to 10^9 in less than a second.
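The core trick in fast approaches like that one is usually a divisor-sum sieve: compute the sum of proper divisors for every number below the limit in one pass, then scan for pairs. A rough sketch (my own; the article's code isn't quoted here, and LIMIT is an arbitrary placeholder):

LIMIT = 1_000_000

def amicable_pairs(limit):
    # divisor_sum[n] accumulates the sum of proper divisors of n:
    # every d contributes itself to all of its multiples 2d, 3d, ...
    divisor_sum = [0] * limit
    for d in range(1, limit // 2):
        for multiple in range(2 * d, limit, d):
            divisor_sum[multiple] += d
    pairs = []
    for n in range(2, limit):
        m = divisor_sum[n]
        if n < m < limit and divisor_sum[m] == n:
            pairs.append((n, m))
    return pairs

print(amicable_pairs(LIMIT))  # starts (220, 284), (1184, 1210), ...

This does O(limit log limit) work in total instead of O(n) per candidate, which is where the brute-force versions below lose their time.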
#include <stdio.h>
#include <stdlib.h>

int sumOfFactors(int);

int main() {
    int x, y, start, end;
    printf("Enter start of the range:\n");
    scanf("%d", &start);
    printf("Enter end of the range:\n");
    scanf("%d", &end);
    for (x = start; x <= end; x++) {
        for (y = end; y >= start; y--) {
            if (x == sumOfFactors(y) && y == sumOfFactors(x) && x != y) {
                printf("The numbers %d and %d are Amicable pair\n", x, y);
            }
        }
    }
    return 0;
}

int sumOfFactors(int x) {
    int sum = 1, j;
    for (j = 2; j <= x / 2; j++) {
        if (x % j == 0)
            sum += j;
    }
    return sum;
}
def findSumOfFactors(n):
    total = 1
    for i in range(2, n // 2 + 1):
        if n % i == 0:
            total += i
    return total

start = int(input())
end = int(input())

for i in range(start, end + 1):
    for j in range(end, start - 1, -1):
        # use != rather than `is not`: identity comparison is unreliable for ints
        if i != j and findSumOfFactors(i) == j and findSumOfFactors(j) == i and j > 1:
            print(i, j)