What is the time complexity of the following algorithm, in Big Theta notation?

res = 0
for i in range(1, n):
    j = i
    while j % 2 == 0:
        j = j // 2   # integer halving
        res = res + j
I understand that the upper bound is O(n log n), but I'm wondering whether a tighter bound is possible. I'm stuck with the analysis.

Some ideas that may be helpful:
You could create a function g(n) that instruments your function f(n), counting how many elementary operations occur when f(n) runs:
def f(n):
    res = 0
    for i in range(1, n):
        j = i
        while j % 2 == 0:
            j = j // 2
            res = res + j
    return res
def g(n):
    comparisons = 0
    operations = 0
    assignments = 0
    assignments += 1   # res = 0
    res = 0
    assignments += 1   # i = 1
    comparisons += 1   # i < n
    for i in range(1, n):
        assignments += 1   # j = i
        j = i
        operations += 1    # j % 2
        comparisons += 1   # == 0
        while j % 2 == 0:
            operations += 1    # j / 2
            assignments += 1   # assign to j
            j = j // 2
            operations += 1    # res + j
            assignments += 1   # assign to res
            res = res + j
            operations += 1    # j % 2
            comparisons += 1   # == 0
        operations += 1    # i + 1
        assignments += 1   # assign to i
        comparisons += 1   # i < n ?
    return operations + comparisons + assignments
For n = 1, the code runs without hitting any loops: it assigns res, assigns i = 1, compares i to n, and skips the loop as a result.
For n > 1, you get into the for loop; since the for statement is all that changes the loop variable, the complexity of the rest of the code is at least linear, i.e. Ω(n).
Once in the loop:
if i is odd, you only assign j, perform the mod operation, and compare the result to zero. That is the case for half the values of i, so half of the runs of the loop body from 2 to n add only a fixed handful of operations (including the loop bookkeeping). That alone is still O(n), just with a larger constant.
if i is even, then we divide by 2 until it is odd. This is what we need to work out the impact of.
Based on my counting of the different operations, I get:
g_initial_setup = 3 (every time)
g_for_any_i = 6 (half the time, it is just this)
g_for_even_i = 6 for each time we divide by two (the other half of the time)
For a random even i between 2 and n, half the time we only need to divide by two once, half the remaining time twice, and so on. The fraction of values of i requiring at least m halvings is 1/2^m, so as n goes to infinity the average extra work per i approaches sum(1/2^m) for m >= 1, multiplied by the 6 operations done for each halving of j.
I would expect from this:
g(n) = 3 + (n * 6) + (n * 6) * sum( 1 / pow(2,m) for m between 1 and n )
Given that the infinite series sum(1/2^m) converges to 1, we simplify that to:
g(n) ≈ 3 + 12n as n approaches infinity.
That implies that the algorithm is O(n). Huh. I did not expect that.
Let's try out the function g(n) from above, counting all the operations that are occurring as f(n) is computed.
g(1) = 3 operations
g(2) = 9
g(3) = 21
g(4) = 27
g(5) = 45
g(10) = 123
g(100) = 1167
g(1000) = 11943
g(10000) = 119943
g(100000) = 1199931
g(1000000) = 11999919
g(10000000) = 119999907
Okay, unless I've really made a serious error here, it's O(n); and since the for loop alone performs n - 1 iterations, it is in fact Θ(n).
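
Another way to see the linear bound, independent of the operation counting above: among 1, ..., n-1, exactly floor((n-1)/2^m) numbers are divisible by 2^m, and each of them triggers one halving at level m. A minimal sketch checking this (the function names are mine, not from the question):

def inner_iterations(n):
    """Count the inner while-loop halvings by direct simulation."""
    total = 0
    for i in range(1, n):
        j = i
        while j % 2 == 0:
            j //= 2
            total += 1
    return total

def analytic_count(n):
    """Sum of floor((n-1)/2^m) over m >= 1; always below n - 1."""
    total, p = 0, 2
    while p <= n - 1:
        total += (n - 1) // p
        p *= 2
    return total

for n in (10, 100, 1000, 10000):
    assert inner_iterations(n) == analytic_count(n) <= n - 1

Since the total halving work stays below n - 1, the extra cost of the even cases is linear, consistent with the Θ(n) conclusion.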


Runtime of power of 2 iteration

For example:
for (int i = 2; i <= n; i = pow(i, 2))  // O(1) work per iteration
Would O(log(log(n))) be correct, given that i takes the values 2, 4, 16, 256, ..., 2^(2^k)?
Yes, the time complexity of this loop would be O(log(log(n))).
Motivation:
i grows as ((2^2)^2)^...^2 = 2^(2 · 2 · ... · 2) = 2^(2^(k-1)), where k is the number of iterations. Thus we must solve:
n = 2^(2^(k-1))
log2(n) = 2^(k-1)
log2(log2(n)) = k - 1
k = log2(log2(n)) + 1 = O(log(log(n))). ▢
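
A quick empirical check of that count (a small sketch; I assume an O(1) loop body as in the question):

import math

def iterations(n):
    """Count iterations of: for (i = 2; i <= n; i = i * i)."""
    count, i = 0, 2
    while i <= n:
        count += 1
        i = i * i
    return count

for n in (16, 256, 65536, 2**32):
    print(n, iterations(n), math.log2(math.log2(n)) + 1)  # the two counts agree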

determine the time complexity in Java

public class complexity {
    public int calc(int n) {
        int a = 2 * Math.abs(n);
        int b = Math.abs(n) - a;
        int result = n;
        for (int i = 0; i < a; i++) {
            result += i;
            for (int j = 0; j > b; j--) {
                result -= j;
            }
            for (int j = a / 2; j > 1; j /= 2) {
                System.out.println(j);
            }
        }
        int tmp = result;
        while (tmp > 0) {
            result += tmp;
            tmp--;
        }
        return result;
    }
}
I have been given the Java program above and need to determine its time complexity (Tw(n)).
I checked these sites:
Big O, how do you calculate/approximate it?
How to find time complexity of an algorithm
But I still have trouble understanding it.
Here is the solution given by the professor. From the for loop onwards I didn't understand how he came up with the different time complexities. Can anybody explain it?
Let's go over it step by step:
Before the for loop, every instruction is executed in constant time.
Then:
for (int i = 0; i < a; i++) is executed 2n + 1 times, since a = 2n: the condition is checked once for each i from 0 to 2n - 1, plus one extra check at i = 2n, where the test fails and the program breaks out of the loop.
Each instruction inside the loop is executed 2n times (we loop from 0 to 2n - 1), which explains why result += i is done 2n times.
The next instruction is another for loop, so we apply the same logic:
for (int j = 0; j > b; j--): as b = -n, j goes from 0 down to -(n - 1), i.e. n steps, plus the extra check mentioned for the first for loop, giving n + 1 checks. As this instruction sits inside the outer loop, it is executed 2n times, hence 2n · (n + 1).
The instructions inside this for loop are done n times per outer iteration, so result -= j is done n · 2n times in total.
The same goes for the last for loop, except that here we go from a/2 (which is n, since a = 2n) down to 1, dividing by 2 each time. This is a bit more involved, so let's trace some steps: in the first iteration j = n, then j = n/2, then j = n/4, until j <= 1. How many steps do we need to reach 1?
to reach n/2 we need 1 = log2(2) step: n => n/2
to reach n/4 we need 2 = log2(4) steps: n => n/2 => n/4
We see that to reach n/x we need log2(x) steps, and since 1 = n/n, we need log2(n) steps to reach 1.
Therefore each instruction in this loop is executed log(n) times, and as it sits inside the parent loop it is done 2n times, i.e. 2n · log(n) times.
For the while loop: by the time it starts, the for loops have built up
result = n + (0 + 1 + 2 + ... + (2n - 1)) + 2n · (0 + 1 + ... + (n - 1))
Evaluating the arithmetic-series sums gives
result = n + n(2n - 1) + n²(n - 1) = n²(n + 1)
so the while loop iterates n²(n + 1) times and dominates the running time, giving O(n³) overall.
I hope this is clear; don't hesitate to ask otherwise!
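
To sanity-check the claim that result = n²(n + 1) when the while loop starts (and hence that the while loop dominates with Θ(n³) iterations), here is a small simulation; the Python port and its names are mine, not part of the course material:

def result_before_while(n):
    """Run calc(n) from the Java code above, stopping just before
    the final while loop, and return result at that point."""
    a = 2 * abs(n)
    b = abs(n) - a
    result = n
    for i in range(a):
        result += i
        j = 0
        while j > b:              # for (int j = 0; j > b; j--)
            result -= j
            j -= 1
        # the println loop does not change result
    return result

for n in (3, 5, 10, 50):
    print(n, result_before_while(n), n * n * (n + 1))  # columns 2 and 3 agree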
I would like to clarify the last while loop in particular, since that is where the dominant cost comes from; please check my arithmetic carefully.
result starts at n.
for i from 0 up to 2|n| - 1:
    result += i
Done 2|n| times with average i about |n|, so result increases by roughly 2n · n = 2n².
for j from 0 down to -(|n| - 1):
    result -= j
The body runs n times per outer iteration, adding about n²/2 per pass; over the 2n outer passes, result increases by roughly 2n · n²/2 = n³.
So result is about n³ + 2n², which agrees with the professor's n²(n + 1) = n³ + n² up to lower-order terms.
The nested for loops themselves cost only O(n²), as the inner println loop adds just O(log n) per outer iteration.
The last while loop is then the same as:
for tmp from about n³ down to 0:
    result += tmp
That is O(n³), which dominates the O(n²) of the for loops, so the entire complexity is O(n³).
The final value of result is roughly n³ · n³/2, i.e. on the order of n⁶.

Division by Zero error in calculating series

I am trying to compute a series, and I am running into an issue whose cause I can't find:
"RuntimeWarning: divide by zero encountered in double_scalars"
When I checked the code, it didn't seem to have any singularities, so I am confused. Here is the current code (log stands for the natural logarithm; edit: extended the code in case that helps):
from numpy import pi, log

# Create functions to calculate the sums
def phi(z: int):
    k = 0
    phi = 0
    # Loop about 100 times to approximate the series value as if it went to infinity
    while k <= 100:
        phi += ((1 / (k + 1)) - (1 / (k + (2 * z))))
        k += 1
    return phi

def psi(z: int):
    psi = 0
    k = 1
    while k <= 101:
        psi += ((log(k)) / (k ** (2 * z)))
        k += 1
    return psi

def sig(z: int):
    sig = 0
    k = 1
    while k <= 101:
        sig += ((log(k)) ** 2) / (k ^ (2 * z))
        k += 1
    return sig

def beta(z: int):
    beta = 0
    k = 1
    while k <= 101:
        beta += (1 / (((2 * z) + k) ^ 2))
        k += 1
    return beta

# Create the formula to approximate the value. For higher accuracy, either
# calculate more derivatives of Bernoulli numbers or increase the bound on k.
def Bern(z: int):
    # Define the Euler–Mascheroni constant
    c = 0.577215664901532860606512
    # Begin computations (only an approximation)
    B = (pi/6) * (phi(1) - c - 2 * log(2 * pi) - 1) - z * ((pi/6) * ((phi(1) - c - (2 * log(2 * pi)) - 1) * (phi(1) - c) + beta(1) - 2 * psi(1)) - 2 * (psi(1) * (phi(1) - c) + sig(1) + 2 * psi(1) * log(2 * pi)))
    # Output
    return B

A = int(input("Choose any value: "))
print("The answer is", Bern(A + 1))
Any help would be much appreciated.
Are you sure you need ^, the bitwise exclusive-or operator, instead of **? I tried running your code with input parameter z = 1, and on the second iteration the result of k ^ (2*z) was equal to 0, which is where your division-by-zero comes from: with k = 2 and z = 1, 2 ^ 2 == 0, since the XOR of equal numbers is zero.
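
A minimal illustration of the difference, together with the corrected update lines (the fix is simply replacing ^ with ** in sig and beta):

# ^ is bitwise XOR in Python, not exponentiation:
print(2 ^ 2)   # 0, because 2 XOR 2 == 0
print(2 ** 2)  # 4

# Corrected loop bodies:
# in sig:  sig  += ((log(k)) ** 2) / (k ** (2 * z))
# in beta: beta += (1 / (((2 * z) + k) ** 2))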

How to find the value of integer k efficiently for which q divides b ^ k finitely?

We are given two integers b and q, and we want to find the minimum integer k for which q divides b^k exactly, or to determine that no such k exists. Can we find k efficiently, rather than just iterating over k = 0, 1, 2, 3, ... and checking whether (b^k) % q == 0?
First of all, k = 0 works only when q = 1 (since b^0 = 1), and k = 1 works exactly when q divides b.
Next, if you can factorize q and b, then you can reason about them.
If any prime factor of q is not a factor of b at all, then k does not exist. Otherwise, k has to be large enough that every prime factor of q is covered in b^k with at least the multiplicity it has in q.
Here's some pseudo-code:
if (q == 1) return 0;
if (q == b) return 1;
// qfactors and bfactors are multisets, one element per prime factor
let qfactors = prime_factorization(q);
let bfactors = prime_factorization(b);
let kmin = 0;
foreach (f in qfactors.unique) {
    let qcount = qfactors.count(f);
    let bcount = bfactors.count(f);
    if (bcount == 0) return -1; // a prime factor of q does not divide b: k does not exist
    let kmin_f = ceiling(qcount / bcount);
    if (kmin_f > kmin) let kmin = kmin_f;
}
return kmin;
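
A direct Python translation of that pseudo-code, with a simple trial-division factorization (this is my sketch of the approach, not the answerer's code):

def prime_factorization(x):
    """Trial division; returns {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= x:
        while x % d == 0:
            factors[d] = factors.get(d, 0) + 1
            x //= d
        d += 1
    if x > 1:
        factors[x] = factors.get(x, 0) + 1
    return factors

def min_k(b, q):
    """Smallest k with q | b**k, or -1 if no such k exists."""
    if q == 1:
        return 0
    bfactors = prime_factorization(b)
    kmin = 0
    for p, qcount in prime_factorization(q).items():
        bcount = bfactors.get(p, 0)
        if bcount == 0:
            return -1                           # a prime of q does not divide b
        kmin = max(kmin, -(-qcount // bcount))  # ceiling(qcount / bcount)
    return kmin

assert min_k(2, 8) == 3    # 2**3 = 8
assert min_k(12, 8) == 2   # 12**2 = 144 is divisible by 8
assert min_k(6, 2) == 1
assert min_k(3, 10) == -1  # 2 and 5 never divide a power of 3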
If q = 1: k = 0.
If b = q: k = 1.
If q divides b: k = 1.
If every prime factor of q also divides b: k exists and is finite.
If some prime factor of q does not divide b: no integer k exists.
We know:
Dividend = Divisor x Quotient + Remainder
=> Dividend = Divisor x Quotient [Here, Remainder = 0]
The smaller the quotient, the smaller k can be. If you take the quotient to be 1 (the lowest possible, a special case where b^k = q exactly), the formula gives a lower bound for k:
k >= log q / log b
For example, with b = 2 and q = 8 this gives k >= log 8 / log 2 = 3, which here is also the exact answer.
I found a solution:
If q divides pow(b, k), then all prime factors of q are prime factors of b. So iterate q = q ÷ gcd(b, q) while gcd(q, b) ≠ 1. If q ≠ 1 after the iterations, then q has prime factors that are not prime factors of b and k does not exist; otherwise k equals the number of iterations.
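
That gcd-based idea fits in a few lines of Python (a sketch; the function name and the asserts are mine):

from math import gcd

def min_k_gcd(b, q):
    """Smallest k with q | b**k, via repeated division by gcd(b, q), or -1."""
    if q == 1:
        return 0
    k = 0
    while gcd(b, q) != 1:
        q //= gcd(b, q)
        k += 1
    return k if q == 1 else -1

assert min_k_gcd(2, 8) == 3
assert min_k_gcd(4, 8) == 2
assert min_k_gcd(3, 10) == -1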

Complexity of the program [duplicate]

In the book Programming Interviews Exposed it says that the complexity of the program below is O(N), but I don't understand how this is possible. Can someone explain why this is?
int var = 2;
for (int i = 0; i < N; i++) {
    for (int j = i + 1; j < N; j *= 2) {
        var += var;
    }
}
You need a bit of math to see that. The inner loop iterates Θ(1 + log [N/(i+1)]) times (the 1 + is necessary since for i >= N/2, [N/(i+1)] = 1 and the logarithm is 0, yet the loop iterates once). j takes the values (i+1)*2^k until it is at least as large as N, and
(i+1)*2^k >= N <=> 2^k >= N/(i+1) <=> k >= log_2 (N/(i+1))
using real (not integer) division. So the update j *= 2 is called ceiling(log_2 (N/(i+1))) times and the condition is checked 1 + ceiling(log_2 (N/(i+1))) times. Thus we can write the total work
∑_{i=0}^{N-1} (1 + log(N/(i+1))) = N + N·log N - ∑_{j=1}^{N} log j
                                 = N + N·log N - log N!
Now, Stirling's formula tells us
log N! = N*log N - N + O(log N)
so we find the total work done is indeed O(N).
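
An empirical check of the O(N) result (a sketch that just counts how many times var += var would run; the ratio to N settles near a constant below 2):

def inner_work(N):
    """Total inner-loop iterations of the nested loops above."""
    count = 0
    for i in range(N):
        j = i + 1
        while j < N:
            count += 1
            j *= 2
    return count

for N in (10, 100, 1000, 10000, 100000):
    print(N, inner_work(N), inner_work(N) / N)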
The outer loop runs n times. Everything now depends on the inner loop, which is the tricky one. Let's follow it:
i = 0 --> j = 1 ---> log(n) iterations
...
i = (n/2) - 1 --> j = n/2 ---> 1 iteration
i >= (n/2) ---> 1 iteration
(n/2) > i >= (n/4) ---> 2 iterations
(n/4) > i >= (n/8) ---> 3 iterations
(n/8) > i >= (n/16) ---> 4 iterations
(n/16) > i >= (n/32) ---> 5 iterations
Summing over all i:
(n/2)·1 + (n/4)·2 + (n/8)·3 + (n/16)·4 + ... + (n/2^i)·i
= n · ∑_{i>=1} i/2^i <= 2n
--> O(n)
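
The last step uses the standard identity ∑_{i>=1} i/2^i = 2, which is easy to confirm numerically:

print(sum(i / 2 ** i for i in range(1, 60)))  # ≈ 2.0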
@Daniel Fischer's answer is correct.
I would like to add the fact that this algorithm's exact running time can be written in closed form; the original answer gave that formula and its expansion as images, which are not reproduced here.