Time complexity of a loop where i is squared on each iteration

i = 2;
while (i < n) {
    i = i * i;
    // O(1) complexity here
}
I'm new to time complexity and I'm trying to figure out what this would be.
I know that if the update had been i = 2*i, the loop would be O(log(n)), but I don't know how to count the iterations when i is squared instead.
Intuitively it should be at most O(log(n)) as well, because it "iterates faster", but I don't know how to explain this formally.
Any help would be appreciated, thanks in advance.

You can neatly translate this into the i = 2 * i case you mentioned by considering the mathematical log of i. Pseudocode for how this value changes:
log_i = log(2);
while (log_i < log_n) {
    log_i = 2 * log_i;
    // O(1) stuff here
}
It should be clear from this that the time complexity is O(log log n), assuming constant multiplication cost of course.

I think it's easier to approach this problem just using mathematics.
Consider your variable i. What sequence does it take? It seems to be
2, 4, 16, 256, ...
If you look at it for a bit, you notice that this sequence is just
2, 2^2, (2^2)^2, ((2^2)^2)^2, ... or
2^1, 2^2, 2^4, 2^8, ... which is
2^(2^0), 2^(2^1), 2^(2^2), 2^(2^3), ...
so the general term of this sequence is 2^(2^k) where k = 0, 1, 2, ...
Now, how many iterations does your loop make? It will be on the order of the k for which 2^(2^k) = n. So let us solve this equation:
2^(2^k) = n        (apply log2 to both sides)
2^k = log2(n)      (apply log2 to both sides again)
k = log2(log2(n))
In big-O notation, the base of the logarithm doesn't matter, so we say your algorithm has a time complexity of:
O(log log n).
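As a quick sanity check, here is a minimal Java sketch (my own addition, with arbitrary test values of n) that counts the iterations and compares them with log2(log2(n)):

public static void main(String[] args) {
    for (long n : new long[]{16L, 256L, 65536L, 4294967296L}) {
        long i = 2, iterations = 0;
        while (i < n) {
            i = i * i;   // square i each pass, as in the question
            iterations++;
        }
        // expected iteration count: log2(log2(n))
        double expected = Math.log(Math.log(n) / Math.log(2)) / Math.log(2);
        System.out.printf("n = %d: %d iterations, log2(log2(n)) = %.1f%n",
                n, iterations, expected);
    }
}

For these values of n the printed counts are exactly 2, 3, 4 and 5, matching the formula.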

There are different ways to compute a power x^n. The iterative approach (the one that you have) will have a time complexity of O(N), because we iterate once for each value up to N.
Another method is recursive, something like:
static long pow(int x, int power) {
    // base conditions
    if (power == 0)
        return 1L;
    if (power == 1)
        return x;
    // one multiplication per decrement of power
    power--;
    return x * pow(x, power);
}
This will also have a time complexity of O(N), because pow(x, n) recurses once for each value from 1 to n.
The last method (and the more efficient one) is divide and conquer, which improves the time complexity by recursing only on pow(x, power / 2):
static long pow(int x, int power) {
    // base conditions
    if (power == 0)
        return 1L;
    if (power == 1)
        return x;
    // recurse only on power / 2 and reuse the result
    long res = pow(x, power / 2);
    if (power % 2 == 0)
        return res * res;       // power is even
    else
        return x * res * res;   // power is odd
}
The time complexity in this case would be O(log N), because pow(x, n/2) is computed once, stored in res, and then reused.
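A quick usage sketch (a hypothetical main method I added, not part of the original answer) showing that the exponent is halved at every recursive step:

public static void main(String[] args) {
    System.out.println(pow(2, 10));  // 1024, reached via exponents 10 -> 5 -> 2 -> 1
    System.out.println(pow(3, 5));   // 243, odd exponents take the x * res * res branch
}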


Time complexity of the pseudocode below

void buzz(int num, int[] a)
{
    for (int i = 1; i < num; i *= 2) {
        for (int j = 0; j < a[i] % num; j++) {
            print(num);
        }
    }
}
I understand that the outer for loop runs log n times. I need help understanding the complexity of the inner loop and the overall complexity.
Let us assume num = 2^n for simplicity.
The inner loop runs a[i] % num times, a value that lies in the range [0, num-1]. The outer loop runs n times, so you can conclude a worst-case complexity of O(num * log(num)) = O(n * 2^n).
For a precise evaluation, let r[i] := a[i] % num. Then the number of prints is
r[1] + r[2] + r[4] + r[8] + ... + r[2^(n-1)].
This is n times the average r over the relevant indexes (the expectation of each r could be (num-1)/2).
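To make that sum concrete, here is a small Java sketch I added (the array contents are hypothetical) that counts the prints directly instead of performing them:

static long countPrints(int num, int[] a) {
    long prints = 0;
    for (int i = 1; i < num; i *= 2) {
        prints += a[i] % num;   // the inner loop body runs a[i] % num times
    }
    return prints;              // at most (num - 1) * log2(num)
}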

Time and Space Complexity of a Recursive Algorithm

Convert n to its English words representation, where 0 <= n < 1,000,000,000
Python Solution:
class Solution:
    def helper(self, n):
        ones = ['', 'One', 'Two', 'Three', 'Four', 'Five', 'Six', 'Seven', 'Eight', 'Nine']
        teens = ['Ten', 'Eleven', 'Twelve', 'Thirteen', 'Fourteen', 'Fifteen', 'Sixteen', 'Seventeen', 'Eighteen', 'Nineteen']
        tens = ['', '', 'Twenty', 'Thirty', 'Forty', 'Fifty', 'Sixty', 'Seventy', 'Eighty', 'Ninety']
        res = ''
        if n < 10:
            res = ones[n]
        elif n < 20:
            res = teens[n - 10]
        elif n < 100:
            res = tens[n // 10] + ' ' + self.helper(n % 10)
        elif n < 1000:
            res = self.helper(n // 100) + ' Hundred ' + self.helper(n % 100)
        elif n < 1000000:
            res = self.helper(n // 1000) + ' Thousand ' + self.helper(n % 1000)
        elif n < 1000000000:
            res = self.helper(n // 1000000) + ' Million ' + self.helper(n % 1000000)
        return res.strip()

    def convert_to_word(self, n):
        if n == 0:
            return 'Zero'
        return self.helper(n)
I've been trying to calculate the time and space complexity of this solution. I've seen different answers. Some say that the time complexity is O(1) because the helper function is called a fixed number of times (even if n is a large number). Others say it's O(log(n)).
The space complexity seems to be O(1)?
I am so confused. Please help me clarify. Thank you.
For all inputs n of at least 1,000,000,000, the function returns immediately with an empty string, without any computation or recursive calls. So of course the time complexity is O(1), and likewise the space complexity (since asymptotic complexity concerns large n, what happens for the finitely many smaller inputs is irrelevant).
The case would be different and more interesting if you removed the line
elif n < 1000000000
so that for large n you would get a (virtually) unbounded result string with an unbounded number of 'Million' substrings (ignoring the fact that integers have a maximum size on a real computer, and that you would get nonsensical number words). In this case you would get time complexity O(log(n)^2), since you are concatenating O(log n) strings of length O(log n), and space complexity O(log n), because of the call stack for the recursive calls. The time complexity could easily be reduced to O(log n) by handling the string concatenations more efficiently.
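The usual way to do that, sketched here in Java to match the other snippets in this document (my own illustration, not the poster's code), is to append the O(log n) pieces into a single buffer so the total work is proportional to the final length:

static String joinParts(java.util.List<String> parts) {
    // One StringBuilder instead of repeated '+' concatenation:
    // each character is copied once, so the join is linear in the output size.
    StringBuilder sb = new StringBuilder();
    for (String part : parts) {
        if (sb.length() > 0) sb.append(' ');
        sb.append(part);
    }
    return sb.toString();
}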
Some additional explanation
It seems from the comments that it's not obvious why the time complexity is O(1). If we say that the time complexity T(n) is in O(1), that means that there exists a constant c and a constant k such that for all n > k, T(n) <= c.
In this example, choosing c = 9 and k = 0 does the trick, or alternatively choosing c = 1 and k = 1000000000, hence the time complexity is O(1). It should now also be clear that the following function is also O(1) (albeit with a very large hidden constant factor):
void f(int n) {
    if (n < 1000000000) {
        for (int i = 0; i < n; i++) {
            for (int k = 0; k < n; k++)
                print(i);
        }
    }
    else print("");
}

Time complexity for all Fibonacci numbers from 0 to n

I was calculating the time complexity of this code, which prints all Fibonacci numbers from 0 to n. According to my calculation, the fib() method takes O(2^n), and since it is called n times from the loop, the total came out to be O(n*2^n). However, the book says it is O(2^n). Can anyone explain why the time complexity here is O(2^n)?
Here is the code:
void allFib(int n) {
    for (int i = 0; i < n; i++) {
        System.out.println(i + ": " + fib(i));
    }
}
int fib(int n) {
    if (n <= 0) return 0;
    else if (n == 1) return 1;
    return fib(n - 1) + fib(n - 2);
}
I figured out my own way to understand the book's solution; hope it helps those who are still struggling.
Imagine we now call allFib(n).
Since we have a for loop from 0 to n, the following calls will be made:
i = 0, call fib(0)
i = 1, call fib(1)
i = 2, call fib(2)
...
i = n-1, call fib(n-1)
As discussed before, fib(n) takes O(2^n), i.e., roughly 2^n steps.
Therefore,
i = 0, call fib(0) takes 2^0 steps
i = 1, call fib(1) takes 2^1 steps
i = 2, call fib(2) takes 2^2 steps
...
i = n-1, call fib(n-1) takes 2^(n-1) steps
Thus, the runtime of allFib(n) will be
2^0 + 2^1 + 2^2 + ... + 2^(n-1).
By the formula for the sum of powers of 2, this equals
2^(n-1+1) - 1 = 2^n - 1.
Thus it is O(2^n)
I finally got the answer from my professor and I'll post it here:
According to him, you should not simply look at the for loop iterating from 0 to n; you must work out the actual computation by counting the steps.
fib(1) takes 2^1 steps
fib(2) takes 2^2 steps
fib(3) takes 2^3 steps
..........
fib(n) takes 2^n steps
now adding these:
2^1 + 2^2 + 2^3 + ... + 2^n = 2^(n+1) - 2
and ignoring the constant factor, this is on the order of 2^n, hence the time complexity is O(2^n).
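To see the sum empirically, here is a small counting sketch I added (Java, matching the thread's code). Note the exact call count grows like 1.618^n; the book's 2^n is an upper bound on that:

static long calls = 0;

static int countedFib(int n) {
    calls++;                       // count every invocation
    if (n <= 0) return 0;
    if (n == 1) return 1;
    return countedFib(n - 1) + countedFib(n - 2);
}

public static void main(String[] args) {
    for (int n = 5; n <= 25; n += 5) {
        calls = 0;
        for (int i = 0; i < n; i++) countedFib(i);  // the same calls allFib(n) makes
        System.out.println("n = " + n + ": " + calls + " calls, bound 2^n = " + (1L << n));
    }
}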

Time complexity of this code?

These are the for loops whose time complexity I have to find, but I don't clearly understand how to calculate it.
for (int i = n; i > 1; i /= 3) {
    for (int j = 0; j < n; j += 2) {
        ... ...
    }
    for (int k = 2; k < n; k = k * k) {
        ...
    }
}
The first loop header, (int i = n; i > 1; i /= 3), keeps dividing i by 3, and once i is no longer greater than 1 the loop stops, right?
But what is its time complexity? I think it is n, but I am not really sure. My reasoning: if n is 30, then i takes the values 30, 10, 3, 1 and the loop stops. Doesn't that mean it runs n times?
As for the last for loop, I think its time complexity is also n, because
k starts at 2 and keeps multiplying itself by itself until k is greater than n.
So if n is 20, k takes the values 2, 4, 16 and then stops. Doesn't that run n times too?
I don't think I really understand these kinds of questions, because time complexity can be log(n) or n^2 and so on, but all I ever see is n.
I don't really know when log or a square comes in, or anything else.
Every for loop seems to me to run n times. How can a log or a square be involved?
Can anyone help me understand this? Please.
If you want to calculate the time complexity of an algorithm, go through this post here: How to find time complexity of an algorithm
That said, you're currently thinking about algorithmic complexity in small, linear terms. It helps to think about it in orders of magnitude, then plot it that way. If you take:
x, z = 0
for (int i = n; i > 1; i /= 3) {
    for (int j = 0; j < n; j += 2) {
        x = x + 1
    }
    for (int k = 2; k < n; k = k * k) {
        z = z + 1
    }
}
and plot x and z on a graph where n goes from 1 -> 10 -> 100 -> 1000 -> 10^15 or so, you'll see how each counter actually grows. When analyzing algorithmic complexity you're primarily interested in the maximum number of times your inputs are looped through, in either the worst or the most common case, omitting constants. Here the outer loop runs about log3(n) times, the j loop adds about n/2 to x on each pass, and the k loop adds only about log2(log2(n)) to z, so x dominates and the overall complexity is O(n log n).
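Here is a runnable Java version of that experiment (my own sketch; the test values of n are arbitrary):

public static void main(String[] args) {
    for (long n : new long[]{10, 100, 10_000, 1_000_000}) {
        long x = 0, z = 0;
        for (long i = n; i > 1; i /= 3) {           // ~log3(n) passes
            for (long j = 0; j < n; j += 2) x++;    // ~n/2 increments per pass
            for (long k = 2; k < n; k = k * k) z++; // ~log2(log2(n)) increments per pass
        }
        System.out.println("n=" + n + "  x=" + x + "  z=" + z
                + "  n*log3(n) ~ " + Math.round(n * Math.log(n) / Math.log(3)));
    }
}

x tracks n*log3(n) while z stays tiny, which is why the n log n term dominates.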
For further reading, I suggest https://en.wikipedia.org/wiki/Introduction_to_Algorithms ; it's not exactly easy but covers this in depth.

Calculate the time complexity of the following function

How do I calculate the time complexity of the following function?
int Compute(int n)
{
    int j = 0;
    int i = 0;
    while (i <= n)
    {
        i = 2*j + i + 1;
        j++;
    }
    return j - 1;
}
Now, a loop like this looks like O(n) at first glance, but here i grows at a much faster rate. Working through it iteration by iteration, I found that after the m-th iteration, i = m^2. But I'm still confused about how to calculate the Big-O.
If you look at the values of i and j for a few iterations:
i=1,  j=1
i=4,  j=2
i=9,  j=3
i=16, j=4
and so on. By mathematical induction we can prove that i takes the square values: if i = j^2 at the start of an iteration, the update produces 2*j + j^2 + 1 = (j+1)^2.
Since we loop only while i <= n, and i takes the values 1^2, 2^2, 3^2, ..., k^2 <= n, the loop stops as soon as the counter exceeds sqrt(n). Hence the complexity is O(k), which means O(sqrt(n)).
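A quick empirical check (a minimal Java sketch I added, not part of the original answer) confirms that the iteration count tracks sqrt(n):

public static void main(String[] args) {
    for (int n : new int[]{100, 10_000, 1_000_000}) {
        int i = 0, j = 0;
        while (i <= n) {
            i = 2 * j + i + 1;   // i becomes (j+1)^2
            j++;
        }
        System.out.println("n = " + n + ": Compute returns " + (j - 1)
                + ", sqrt(n) = " + (int) Math.sqrt(n));
    }
}

For these inputs the function returns 10, 100 and 1000, exactly sqrt(n).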