What is the big O notation for this algorithm?

What is the big O notation for this algorithm?
i <- 0
k <- 0
while (i <= n)
{
    for (j <- i to n**2)
    {
        k <- k + 1
    }
    i <- i * 2
}
Possible Answers:
a. O(log n)
b. O(n)
c. O(n log n)
d. None of the answers

Since i is doubled on each pass, the while loop runs O(log n) times (this assumes i starts at 1; as written, i <- 0 stays 0 after doubling and the loop would never terminate). The inner for loop runs O(n^2) times, since j goes from i to n^2 and i is at most n. Hence the time complexity in O notation is O(n^2 log n), so the correct choice is d (none of the answers).
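As a quick sanity check, here is a small Python sketch (my own, assuming i starts at 1 rather than 0) that counts the inner-loop iterations and compares them with n^2 * (log2 n + 1):

import math

def count_ops(n):
    """Count inner-loop iterations; i starts at 1 (assumption, see above)."""
    count = 0
    i = 1
    while i <= n:
        for j in range(i, n ** 2 + 1):  # j <- i to n**2
            count += 1
        i *= 2
    return count

for n in (4, 16, 64):
    predicted = n ** 2 * (math.log2(n) + 1)  # ~ n^2 log n
    print(n, count_ops(n), round(predicted))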

Related

Big O Time Complexity of While Loops

I am a bit confused about the time complexity in the case of two separate while loops.
I am aware that the code:
i = 0;
while (i < N) {
    // code
    k = 0;   // k must be reset here so the inner loop runs N times per outer pass
    while (k < N) {
        // code
        k++;
    }
    i++;
}
will have a complexity of O(n^2)
What about the case where we don't have nested loops, though?
i = 0; k = 0;
while (i < N) {
    i++;
}
while (k < N) {
    k++;
}
So you run two loops, one after the other. If they both perform n iterations, then your code performs 2n loop iterations in total.
Now, 2n = O(2n) = O(n) (by the properties of big-O notation), so that is your time complexity.
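To make the difference concrete, here is a small Python sketch (my own illustration) counting how many times the loop bodies execute in each shape:

def nested(n):
    ops = 0
    i = 0
    while i < n:
        k = 0
        while k < n:
            ops += 1  # inner body runs n times per outer pass
            k += 1
        i += 1
    return ops

def sequential(n):
    ops = 0
    i = 0
    while i < n:
        ops += 1
        i += 1
    k = 0
    while k < n:
        ops += 1
        k += 1
    return ops

print(nested(100))      # 10000 = n^2
print(sequential(100))  # 200   = 2n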

Determining time complexity of solution by reductions

Suppose that you found a solution to problem A and are trying to get some idea of its complexity. You solve A by calling your B sub-routine a total of n^2 times, and also doing a constant amount of additional work.
If B is selection sort, what is the time complexity of this solution?
If B is merge sort, what is the time complexity of this solution?
My answer to the 1st question is n^2 and to the 2nd one is n log n. Any feedback on my answers would be appreciated.
I assume that by "solution" you mean "algorithm", and by "this solution", you mean the algorithm that solves problem A by calling B n^2 times. Furthermore, I assume that by n you mean the size of the input.
Then if B is selection sort, which is an O(n^2) algorithm, the algorithm for solving A would be O(n^2 * n^2) = O(n^4).
If B is merge sort, which is O(n log n), the algorithm for solving A would be O(n^2 * n log n) = O(n^3 log n).
Yeah, you are right about the cost of B itself:
O(B) = O(n^2) -> selection sort;
O(B) = O(n log n) -> merge sort.
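The shape of such a reduction, as a hypothetical Python sketch (solve_A and the call pattern are illustrative names, not from the question):

def solve_A(data, B):
    # n^2 calls to the sub-routine B, plus O(1) extra work
    n = len(data)
    result = None
    for _ in range(n * n):
        result = B(list(data))  # each call costs O(B) on input of size n
    return result

# B = selection sort: O(B) = O(n^2)     -> total O(n^2 * n^2)     = O(n^4)
# B = merge sort:     O(B) = O(n log n) -> total O(n^2 * n log n) = O(n^3 log n)
print(solve_A([3, 1, 4, 1, 5], sorted))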

Does O(N/2) simplify to O(log n)?

I have one algorithm that only needs to run in O(N/2) time (basically it is a for loop that only runs over half of the elements).
How does this simplify in big-O notation? Is it O(log n)?
Big O notation drops the factors. O(N/2) = O(1/2 * N) and simplifies to O(N).
If you want to know why the factor is dropped, I would refer you to this other SO question.
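As an illustration (my own sketch): a loop over half of the input still performs work proportional to N.

def sum_first_half(a):
    total = 0
    for i in range(len(a) // 2):  # N/2 iterations: O(N/2) = O(N)
        total += a[i]
    return total

print(sum_first_half(list(range(10))))  # visits 5 of 10 elements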

Time complexity of enumerating all the subsets

for (i = 0; i < n; i++)
{
    // enumerate all subsets of size i (there are 2^n subsets in total)
    // each subset of size i takes O(n log n) to search for a solution
    // from all these solutions I want to find the minimum subset of size S
}
I want to know the complexity of this algorithm. Is it 2^n O(nlogn*n) = o(2^n n²)?
If I understand you right:
You iterate over all subsets of a sorted set of n numbers.
For each subset you test in O(n log n) whether it is a solution (however you do this).
After you have all these solutions, you look for the one with exactly S elements and the smallest sum.
The way you write it, the complexity would be O(2^n * n log n) * O(log(2^n)) = O(2^n * n^2 log n). The O(log(2^n)) = O(n) factor is for searching the minimum solution, and you do this in every round of the for loop; the worst case is i = n/2, where every subset is a solution.
Now I'm not sure if you are mixing up O() and o().
2^n O(nlogn*n) = o(2^n n²) is only right if you mean 2^n O(n log(n*n)).
f = O(g) means the complexity of f is not bigger than the complexity of g.
f = o(g) means the complexity of f is strictly smaller than the complexity of g.
So 2^n O(n log(n*n)) = O(2^n n log(n^2)) = O(2^n n * 2 log n) = O(2^n n log n) < O(2^n n^2).
Notice: O(g) = o(h) is never good notation. You will (most likely every time) find a function f with f = o(h) but f != O(g), if g = o(h).
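For reference, the standard limit characterizations behind these statements (not part of the original answer):

\[
f(n) = O(g(n)) \iff \limsup_{n \to \infty} \frac{|f(n)|}{g(n)} < \infty,
\qquad
f(n) = o(g(n)) \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0.
\]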
Improvements:
If I understand your algorithm right, you can speed it up a little. You know the size of the subset you are looking for, so only look at the subsets that have size S. The worst case is S = n/2, and C(n, n/2) is still exponential in n, so this will not reduce the asymptotic complexity, but it saves you at least a factor of 2.
You can also just save the best solution so far and check whether the next solution is smaller. This way you get the smallest solution without searching for it again, so the complexity would be O(2^n * n log n).
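A Python sketch of this improved version (is_solution is a hypothetical placeholder for the problem-specific O(n log n) test; all names here are mine, not the OP's):

from itertools import combinations

def is_solution(subset):
    # placeholder for the problem-specific O(n log n) check
    return sum(subset) % 2 == 0

def min_solution_of_size(nums, S):
    best = None
    for subset in combinations(nums, S):  # only the C(n, S) subsets of size S
        if is_solution(subset):           # assumed O(n log n) per subset
            if best is None or sum(subset) < sum(best):
                best = subset             # running minimum: no second search needed
    return best

print(min_solution_of_size([5, 3, 8, 1, 4, 9], 3))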

T(n) of the nested loop, I get the answer as (log n + 1)(log n + 2), am I right?

i = n;
while (i >= 1) {
    j = i;
    while (j <= n) {
        theta(1)
        j = j * 2;
    }
    i = i / 2;
}
Edit: changed the code because of the OP's comment below.
Yes, you are correct in that the outer loop is log(n) and the inner loop is log(n), which yields (log n)(log n).
The reason for the log(n) complexity is that the number of remaining iterations in each loop is halved at each iteration. Whether this is achieved by dividing the iterating variable i by 2 or by multiplying the variable j by 2 is irrelevant: the time taken to complete each loop grows as log(n).
The multiplication (log n)(log n) is due to the fact that each iteration of the outer loop executes log(n) iterations of the inner loop.
The additions are unnecessary because in big-O notation we are only concerned with the rate at which a function grows relative to another function. Offsetting by a constant (or multiplying by a constant) does not change the relative complexity of the functions, so the end result is (log n)(log n).
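As a quick empirical check (my own sketch): for n a power of two, the loop body runs exactly (log2 n + 1)(log2 n + 2)/2 times, which is Θ((log n)^2) and consistent with the bound above.

import math

def count_iterations(n):
    ops = 0
    i = n
    while i >= 1:
        j = i
        while j <= n:
            ops += 1  # the theta(1) work
            j *= 2
        i //= 2
    return ops

for n in (16, 256, 4096):
    L = math.log2(n)
    print(n, count_iterations(n), round((L + 1) * (L + 2) / 2))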
In the while(i>=1){ ... } loop, i is bigger than 1 (strictly bigger except for the last iteration). Thus, after j=i, j is bigger than 1 (strictly bigger except for the last iteration).
For that reason, your code is more or less equivalent to:
i = n;
while (i > 1) {
    i = i / 2;
}
if (i == 1) {
    j = i;
    while (j <= 1) {
        theta(1)
        j = j * 2;
    }
}
Which can be rewritten:
i = n;
while (i > 1) {
    i = i / 2;
}
if (i == 1) {
    theta(1)
}
And the overall complexity is the complexity of the while loop, which is O(log n).