Complexity classes examples - time-complexity

I wanted to know if my answers are indeed correct for the following statements:
3(n^3) + 5(n^2) + 25n + 10 = BigOmega(n^3) -> T
-> Grows at a rate equal to or faster than n^3
3(n^3) + 5(n^2) + 25n + 10 = Theta(n^3) -> T
-> Grows at exactly the same rate as n^3
3(n^3) + 5(n^2) + 25n + 10 = BigO(n^3) -> T
-> Grows at a rate equal to or slower than n^3
Thanks!!!

The formal definitions of the asymptotic notations are:
f(n) = O(g(n)) means that there exist positive constants c and n0 such that
f(n) <= c*g(n) for all n >= n0.
f(n) = Ω(g(n)) means that there exist positive constants c and n0 such that
f(n) >= c*g(n) for all n >= n0.
f(n) = Θ(g(n)) means that there exist positive constants c1, c2 and n0 such
that c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.
Proof for O:
3(n^3) + 5(n^2) + 25n + 10 <= 3*n^3 + n^3 + n^3 = 5*n^3
since 5(n^2) <= n^3 and 25n + 10 <= n^3 once n >= 10.
So there exist c = 5 and n0 = 10, thus it is O(n^3).
Proof for Ω:
3(n^3) + 5(n^2) + 25n + 10 > 3*n^3 for all n >= 1, since the remaining terms are positive.
So c = 3 and n0 = 1, thus it is Ω(n^3).
Because both the O and Ω bounds hold (take c1 = 3 and c2 = 5), the statement for Θ is also true.
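If you want to convince yourself numerically, here is a quick C sanity check (a finite-range test, not a substitute for the proof) that the constants picked above really work:

#include <stdio.h>

/* Check 3*n^3 <= f(n) for n >= 1, and f(n) <= 5*n^3 for n >= 10,
 * over a sample range of n. */
int main(void) {
    for (long long n = 1; n <= 1000; n++) {
        long long f = 3*n*n*n + 5*n*n + 25*n + 10;
        if (f < 3*n*n*n)                /* lower bound, c = 3, n0 = 1 */
            printf("lower bound fails at n = %lld\n", n);
        if (n >= 10 && f > 5*n*n*n)     /* upper bound, c = 5, n0 = 10 */
            printf("upper bound fails at n = %lld\n", n);
    }
    printf("bounds hold on the sampled range\n");
    return 0;
}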


What will be the complexity of the recurrence relation T(n) = 0.5T(n/2) + 1/n?

How do I calculate the complexity in this case, since the master method fails for a < 1?
Let's solve the recurrence relation below:
T(n) = (1/2) * T(n/2) + (1/n)
T(n/2) = (1/2) * T(n/4) + 1/(n/2) = (1/2) * T(n/4) + 2/n
Therefore:
T(n) = (1/4) * T(n/4) + (1/n) + (1/n)
Notice the pattern. In general:
T(n) = (1/2^k) * T(n/2^k) + (1/n)*k
2^k = n --> k = log(n)
T(n) = (1/n) * T(1) + (1/n)*log(n)
T(n) = (log(n) + c)/n, where c = T(1) is a constant
T(n) ~ log(n)/n
Notice that as n goes to infinity, the total number of operations goes to zero.
Recall the formal definition of Big-Oh notation:
f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k.
T(n) = O(1) since 0 <= T(n) <= c*1 for all n >= k for some k>0 and c>0.
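As a sanity check, here is a small C program that unrolls the recurrence numerically and compares it against the closed form derived above; the base case T(1) = 1 is an assumption (any other constant base case only changes c):

#include <stdio.h>
#include <math.h>

/* Evaluate T(n) = 0.5*T(n/2) + 1/n directly, for n a power of two. */
static double T(long n) {
    if (n <= 1) return 1.0;     /* assumed base case */
    return 0.5 * T(n / 2) + 1.0 / n;
}

int main(void) {
    for (long n = 2; n <= (1L << 20); n <<= 4) {
        /* T(n) should track (log2(n) + 1)/n and vanish as n grows. */
        printf("n = %8ld  T(n) = %.8f  log2(n)/n = %.8f\n",
               n, T(n), log2((double)n) / n);
    }
    return 0;
}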

Is log(n!) equivalent to n log(n)? (Big O notation)

In my book there is a multiple choice question:
What is the big O notation of the following function:
n^log(2) + log(n^n) + n*log(n!)
I know that log(n!) belongs to O(n log n), but I read online that they are equivalent. How is log(n!) the same thing as n log n?
How is
log(n!) = log(n) + log(n-1) + ... + log(2) + log(1)
equivalent to n log(n)?
Let n/2 be the quotient of the integer division of n by 2. We have:
log(n!) = log(n) + log(n-1) + ... + log(n/2) + log(n/2 - 1) + ... + log(2) + log(1)
>= log(n/2) + log(n/2) + ... + log(n/2) + log(n/2 - 1) + ... + log(2)
>= (n/2)log(n/2) + (n/2)log(2)
= (n/2)(log(n) - log(2) + log(2))
= (n/2)log(n)
then
n log(n) <= 2log(n!) = O(log(n!))
and n log(n) = O(log(n!)). Conversely,
log(n!) <= log(n^n) = n log(n)
and log(n!) = O(n log(n)).
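You can also watch the equivalence numerically. This C sketch computes log(n!) as a running sum of logarithms and prints its ratio to n*log(n); the ratio creeps toward 1, slowly, because of the -n term in Stirling's approximation:

#include <stdio.h>
#include <math.h>

int main(void) {
    double log_fact = 0.0;                  /* accumulates log(n!) */
    for (long n = 2; n <= 10000000; n++) {
        log_fact += log((double)n);         /* log(n!) = log(2) + ... + log(n) */
        if ((n & (n - 1)) == 0)             /* report at powers of two */
            printf("n = %8ld  log(n!)/(n log n) = %.6f\n",
                   n, log_fact / (n * log((double)n)));
    }
    return 0;
}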

Time complexity for nested loops?

a)
sum = 0;
for (i = 1; i < 2*n; i++)
    for (j = 1; j < i*i; j++)
        for (k = 1; k < j; k++)
            if (j % i == 1)
                sum++;
b)
sum = 0;
for (i = 1; i < 2*n; i++)
    for (j = 1; j < i*i; j++)
        for (k = 1; k < j; k++)
            if (j % i)
                sum++;
I chanced upon the two pseudocode snippets above while looking for algorithm analysis questions to practice on. The answers for the two snippets are O(n^4) and O(n^5) respectively.
Note that the running time corresponds here to the number of times the operation
sum++ is executed.
How is it that the time complexity of the two algorithms differs by a factor of n when the only difference is the if statement testing for equality to 1? How would I go about working out the complexity for such a question?
Algorithm A
Let's call f(n) the number of operations aggregated at the level of the outer loop, g(i) that of the first inner loop, and h(j) that of the innermost loop.
We can see that
f(n) = sum(i=1; i < 2n; g(i))
g(i) = sum(j=1; j < i*i; h(j))
h(j) = sum(k=1; k < j; 1 if j%i == 1, else 0)
     = j - 1 if j%i == 1, else 0
How many times is j%i == 1 while j varies from 1 to i*i - 1? Exactly i times, for the following values of j:
j = 0*i + 1
j = 1*i + 1
j = 2*i + 1
...
j = (i-1)*i + 1
Each of these i values of j is less than i*i, so each contributes h(j) = j - 1 < i^2. So:
g(i) = sum(j=1; j < i*i; h(j))
     <= i * i^2          // i qualifying values of j, each contributing fewer than i^2 increments
     = i^3
=> f(n) = sum(i=1; i < 2n; g(i))
        <= sum(i=1; i < 2n; i^3)
        <= sum(i=1; i < 2n; 8*n^3)   // cap every i^3 with (2n)^3 = 8*n^3
        <= 16*n^4
=> f(n) = O(n^4)
Algorithm B
(With the same naming conventions as algorithm A)
How many times does j%i evaluate to true? Every time j%i is different from 0. And how many times does this happen? We need to remove the occurrences for which j is a multiple of i, namely i, 2*i, ..., (i-1)*i, from the range of integers 1 to i*i - 1, which has i*i - 1 numbers. That leaves (i^2 - 1) - (i - 1) = i^2 - i values of j.
As a result,
h(j) = sum(k=1; k < j; 1 if j%i != 0, else 0)
     = j - 1 if j%i != 0, else 0
Each qualifying j is less than i*i, so each contributes h(j) = j - 1 < i^2. So:
g(i) = sum(j=1; j < i*i; h(j))
     <= (i^2 - i) * i^2
     <= i^4
=> f(n) = sum(i=1; i < 2n; g(i))
        <= sum(i=1; i < 2n; i^4)
        <= sum(i=1; i < 2n; 16*n^4)  // cap every i^4 with (2n)^4 = 16*n^4
        <= 32*n^5
=> f(n) = O(n^5)
Bottom line
The difference in complexity between algorithms A and B comes from the fact that, for each i:
You have i values of j for which j%i == 1
You have i^2 - i values of j for which j%i != 0
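A quick empirical check supports these bounds. The C sketch below runs both variants for a few values of n and prints the sum++ counts divided by n^4 and n^5 respectively; the ratios settle near constants, as the analysis predicts:

#include <stdio.h>

int main(void) {
    for (long n = 4; n <= 32; n *= 2) {
        long long a = 0, b = 0;
        for (long long i = 1; i < 2*n; i++)
            for (long long j = 1; j < i*i; j++)
                for (long long k = 1; k < j; k++) {
                    if (j % i == 1) a++;    /* variant (a) */
                    if (j % i)      b++;    /* variant (b) */
                }
        printf("n = %3ld  a/n^4 = %.3f  b/n^5 = %.3f\n",
               n, (double)a / ((double)n*n*n*n),
               (double)b / ((double)n*n*n*n*n));
    }
    return 0;
}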

Complexity of the program

In the book Programming Interviews Exposed it says that the complexity of the program below is O(N), but I don't understand how this is possible. Can someone explain why this is?
int var = 2;
for (int i = 0; i < N; i++) {
    for (int j = i+1; j < N; j *= 2) {
        var += var;
    }
}
You need a bit of math to see that. The inner loop iterates Θ(1 + log [N/(i+1)]) times (the 1 + is necessary since for i >= N/2, [N/(i+1)] = 1 and the logarithm is 0, yet the loop iterates once). j takes the values (i+1)*2^k until it is at least as large as N, and
(i+1)*2^k >= N <=> 2^k >= N/(i+1) <=> k >= log_2 (N/(i+1))
using mathematical division. So the update j *= 2 is called ceiling(log_2 (N/(i+1))) times and the condition is checked 1 + ceiling(log_2 (N/(i+1))) times. Thus we can write the total work
sum(i=0 to N-1) (1 + log(N/(i+1))) = N + N*log N - sum(j=1 to N) log j
                                   = N + N*log N - log N!
Now, Stirling's formula tells us
log N! = N*log N - N + O(log N)
so we find the total work done is N + N*log N - (N*log N - N + O(log N)) = 2N + O(log N), which is indeed O(N).
The outer loop runs n times. Now it all depends on the inner loop, which is the tricky one.
Let's follow it:
i=0 --> j=1 ---> log(n) iterations
...
...
i=(n/2)-1 --> j=n/2 ---> 1 iteration.
i=(n/2) --> j=(n/2)+1 --->1 iteration.
i > (n/2) ---> 1 iteration
(n/2)-1 >= i > (n/4) ---> 2 iterations
(n/4) >= i > (n/8) ---> 3 iterations
(n/8) >= i > (n/16) ---> 4 iterations
(n/16) >= i > (n/32) ---> 5 iterations
(n/2)*1 + (n/4)*2 + (n/8)*3 + (n/16)*4 + ... + [n/(2^i)]*i
= n * sum(i=1 to log n) [i / 2^i] <= 2*n   (since sum(i>=1) i/2^i = 2)
--> O(n)
@Daniel Fischer's answer is correct.
I would like to add that, from the counting above, this algorithm's exact number of inner-loop iterations is
sum(i=0 to N-1) ceiling(log_2(N/(i+1)))
which means the total running time is Θ(N).
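For an empirical confirmation, this C sketch counts how often the inner-loop body runs and prints the count divided by N; the ratio approaches a constant a bit below 2, so the total work is linear:

#include <stdio.h>

int main(void) {
    for (long N = 1000; N <= 100000000; N *= 10) {
        long long steps = 0;
        for (long i = 0; i < N; i++)
            for (long j = i + 1; j < N; j *= 2)
                steps++;                    /* one inner-loop body execution */
        printf("N = %9ld  steps = %10lld  steps/N = %.4f\n",
               N, steps, (double)steps / N);
    }
    return 0;
}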

The space complexity of Fibonacci Sequence

I saw a textbook exercise about the worst-case space complexity of the recursive Fibonacci sequence, but I don't see how the bound is derived.
For the naive recursion, the two recursive calls are never active at the same time, so the space is driven by the maximum recursion depth, which satisfies S(n) = S(n-1) + c for some constant c. You can start with a concrete example and generalize. Start with n = 5.
S(5) = S(4) + c
= (S(3) + c) + c
= ((S(2) + c) + c) + c
= (((S(1) + c) + c) + c) + c
= S(1) + 4c
There are 4 c's when n = 5. In general, there are n-1 c's, so S(n) = S(1) + (n-1)*c, which is O(n).
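To see this on real code, here is the naive recursive Fibonacci in C with a depth counter added purely for illustration; the maximum recursion depth (and hence the stack space) grows linearly with n, even though the running time is exponential:

#include <stdio.h>

/* The two recursive calls are never on the stack at the same time,
 * so the space used is proportional to the maximum recursion depth. */
static long long fib(int n, int depth, int *max_depth) {
    if (depth > *max_depth) *max_depth = depth;
    if (n <= 1) return n;
    return fib(n - 1, depth + 1, max_depth) + fib(n - 2, depth + 1, max_depth);
}

int main(void) {
    for (int n = 5; n <= 30; n += 5) {
        int max_depth = 0;
        long long f = fib(n, 1, &max_depth);
        printf("fib(%2d) = %8lld  max recursion depth = %d\n", n, f, max_depth);
    }
    return 0;
}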