Big O Time Complexity of While Loops

I am a bit confused about the time complexity in the case of two separate while loops.
I am aware that the code:
while (i < N) {
    // code
    k = 0;          // reset k so the inner loop runs N times per outer iteration
    while (k < N) {
        // code
        k++;
    }
    i++;
}
will have a time complexity of O(n^2).
What about the case where we don't have nested loops, though?
while (i < N) {
    i++;
}
while (k < N) {
    k++;
}

So you run two loops, one after the other. If they both perform n iterations, then your code performs 2n loop iterations in total.
Now, 2n = O(2n) = O(n), since big-O notation discards constant factors, so your time complexity is O(n).
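The difference between the two cases can be made concrete by counting iterations directly. Here is a minimal Python sketch (the function names are my own):

```python
def sequential(n):
    """Two separate loops, one after the other: 2n iterations total."""
    count = 0
    i = 0
    while i < n:
        count += 1
        i += 1
    k = 0
    while k < n:
        count += 1
        k += 1
    return count

def nested(n):
    """One loop nested inside the other: n * n iterations total."""
    count = 0
    i = 0
    while i < n:
        k = 0
        while k < n:
            count += 1
            k += 1
        i += 1
    return count

print(sequential(1000))  # 2000    -> grows like 2n, i.e. O(n)
print(nested(1000))      # 1000000 -> grows like n^2, i.e. O(n^2)
```

Doubling n doubles the work of the sequential version but quadruples the work of the nested one, which is exactly the difference between O(n) and O(n^2).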

Related

How to calculate time complexity of this loop?

for (let i = 0; i < n; i+=2){
...operation
}
I have tried various docs on time complexity, but I didn't properly understand them.
The loop simply increments linearly, stepping by 2, so it performs n/2 iterations. Since big-O notation drops constant factors, the time complexity is therefore O(n).
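To see the iteration count concretely, here is a small Python equivalent of that loop (count_iterations is my own helper name):

```python
def count_iterations(n, step):
    # counts how many times the body of this loop runs:
    #   for (let i = 0; i < n; i += step) { ... }
    count = 0
    i = 0
    while i < n:
        count += 1
        i += step
    return count

print(count_iterations(1000, 2))  # 500: n/2 iterations, still O(n)
print(count_iterations(2000, 2))  # 1000: doubling n doubles the work
```

The step size only changes the constant factor (n/2 instead of n), and constant factors are ignored by big-O.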

Computational complexity depending on two variables

I have an algorithm that is mainly composed of k-NN, followed by a computation involving finding permutations, followed by some for loops. Line by line, my computational complexity is:
O(n) - for k-NN
O(2^k) - for a part that computes singlets, pairs, triplets, etc.
O(k!) - for a part that deals with combinatorics.
O(k*k!) - for the final part.
k here is a parameter that can be chosen by the user; in general it is somewhat small (10-100). n is the number of examples in my dataset, and this can get very large.
What is the overall complexity of my algorithm? Is it simply O(n) ?
As k <= 100, k is bounded by a constant, so f(k) = O(1) for every function f that depends on k alone.
In your case, the overall running time is O(n + f(k)) for some such function f, so it is O(n).
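To see concretely why the k-dependent terms vanish once k is bounded, here is a small Python sketch; total_ops is a hypothetical stand-in for the per-part operation counts listed above:

```python
import math

def total_ops(n, k):
    # n for k-NN, 2^k for singlets/pairs/triplets, k! and k*k! for combinatorics
    return n + 2**k + math.factorial(k) + k * math.factorial(k)

# For fixed k, once n dominates, doubling n roughly doubles the total work:
k = 10
big = total_ops(10**9, k)
bigger = total_ops(2 * 10**9, k)
print(bigger / big)  # close to 2, i.e. linear growth in n
```

For k = 10 the combinatorial terms sum to about 4 * 10^7, which is swamped by n = 10^9; that constant offset is exactly what O(n + f(k)) = O(n) expresses.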

How to calculate time complexity O(n) of the algorithm?

What I have done:
I measured the time spent processing 100, 1000, 10000, 100000, 1000000 items.
Measurements here: https://github.com/DimaBond174/cache_single_thread
Then I assumed that O(n) grows in proportion to n, and estimated the complexity of the remaining algorithms relative to those measurements.
Having time measurements for processing 100, 1000, 10000, 100000, 1000000 items, how can we now attribute the algorithm to O(1), O(log n), O(n), O(n log n), or O(n^2)?
Let's define N as the size of one of the possible inputs. An algorithm can have a different big-O depending on which input you're referring to, but generally there is one dominant input that you care about. Without the algorithm in question, you can only guess; however, there are some guidelines that will help you determine which class it is.
General Rule:
O(1) - the speed of the program barely changes regardless of size of data. To get this, a program must not have loops operating on the data in question at all.
O(log N) - the program slows down slightly when N increases dramatically, in a logarithmic curve. To get this, loops must repeatedly discard a constant fraction of the remaining data (for example, binary search).
O(N) - the program's speed is directly proportional to the size of the data input. If you perform an operation on each unit of the data, you get this. You must not have any kind of nested loops (that act on the data).
O(N log N) - the program's speed is significantly reduced by larger input. This occurs when you have an O(log N) operation NESTED in a loop that would otherwise be O(N). For example, a loop that does a binary search for each unit of data.
O(N^2) - the program will slow down to a crawl with larger input and eventually stall with large enough data. This happens when you have NESTED loops. Same as above, but this time the nested operation is O(N) instead of O(log N).
So, try to think of a looping operation as O(N) or O(log N). Then, whenever you have nesting, multiply the factors together. If the loops are NOT nested, add them instead: two loops separate from each other are simply O(N + N) = O(2N) = O(N), not O(N^2).
Also remember that you may have loops under the hood, so you should think about them too. For example, a call like Arrays.sort(X) in Java is an O(N log N) operation, so if you have that inside a loop for some reason, your program is going to be a lot slower than you think.
Hope that answers your question.
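The measurement-based question above can also be attacked mechanically: if T(n) is roughly c * f(n) for the right candidate f, then the ratio T(n) / f(n) stays nearly constant across measurements. Here is a sketch along those lines (classify_growth is a hypothetical helper, not a library function):

```python
import math

def classify_growth(ns, times):
    """Guess the complexity class from (n, time) measurements by checking
    which candidate f keeps time / f(n) closest to constant."""
    candidates = {
        "O(1)": lambda n: 1.0,
        "O(log n)": lambda n: math.log(n),
        "O(n)": lambda n: float(n),
        "O(n log n)": lambda n: n * math.log(n),
        "O(n^2)": lambda n: float(n) ** 2,
    }
    best, best_spread = None, float("inf")
    for name, f in candidates.items():
        # If times[i] ~ c * f(n), these ratios are all close to the same c.
        ratios = [t / f(n) for n, t in zip(ns, times)]
        spread = max(ratios) / min(ratios)
        if spread < best_spread:
            best, best_spread = name, spread
    return best

# Synthetic timings proportional to n log n:
ns = [100, 1000, 10000, 100000]
times = [n * math.log(n) * 1e-7 for n in ns]
print(classify_growth(ns, times))  # → O(n log n)
```

Real timings are noisy, so in practice you would average several runs per n and treat the result as a hint, not a proof.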

What is the big O notation for this algorithm?

i <- 0
k <- 0
while (i <= n)
{
    for (j <- i to n**2)
    {
        k <- k + 1
    }
    i <- i * 2
}
Possible Answers:
a. O(logn)
b. O(n)
c. O(nlogn)
d. None of the answers
Assuming i starts at 1 rather than 0 (with i <- 0, doubling never advances i and the while loop would not terminate), i is multiplied by 2 each time through the while loop, so the outer loop runs log(n) times. The inner for loop runs from i up to n^2, which is O(n^2) iterations since i is at most n. Hence, the time complexity of the code is O(n^2 log(n)), which corresponds to answer d.
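A direct iteration count supports this. Here is a minimal Python sketch, starting i at 1 so that the doubling makes progress (count_ops is my own helper name):

```python
def count_ops(n):
    # counts how many times k <- k + 1 executes
    i, k = 1, 0  # i starts at 1; with i = 0, i * 2 would stay 0 forever
    while i <= n:
        for j in range(i, n**2 + 1):  # "for j <- i to n**2", inclusive
            k += 1
        i *= 2
    return k

print(count_ops(2))  # 7: (4 ops for i=1) + (3 ops for i=2)
```

Each of the ~log(n) outer iterations contributes close to n^2 inner iterations, giving the O(n^2 log n) total.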

T(n) of the nested loop: I get the answer as (log n + 1)(log n + 2), am I right?

i = n;
while (i >= 1) {
    j = i;
    while (j <= n) {
        theta(1)
        j = j * 2;
    }
    i = i / 2;
}
Edit: changed the code because of the OP's comment below.
Yes, you are correct in that the outer loop runs log(n) times and the inner loop runs up to log(n) times, which yields (log n)(log n).
The reason for the log(n) complexity is that the number of remaining iterations of a loop is halved at each step. Whether this is achieved by dividing the iterating variable i by 2 or by multiplying the variable j by 2 is irrelevant; either way, the number of iterations grows as log(n).
The multiplication in (log n)(log n) is due to the fact that each iteration of the outer loop executes up to log(n) iterations of the inner loop.
The additive terms are unnecessary because in big-O notation we are only concerned with the rate at which a function grows relative to another function. Offsetting by a constant (or multiplying by a constant) does not change the complexity class, so the end result is (log n)(log n).
To make the count exact: in the while (i >= 1) { ... } loop, i takes the values n, n/2, n/4, ..., 1, so the outer loop runs about log(n) + 1 times. On the outer iteration where i = n/2^t, the inner loop doubles j from i up to n, which takes t + 1 steps. Summing over all outer iterations (taking n as a power of 2 for simplicity):
T(n) = sum over t = 0 .. log(n) of (t + 1) = (log n + 1)(log n + 2) / 2
So your count of (log n + 1)(log n + 2) has the right form, off only by a constant factor of 2 that big-O ignores, and the overall complexity is theta(log^2 n).
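As a sanity check, the number of theta(1) calls in that nested loop can be counted directly; a Python sketch (count_theta_calls is a hypothetical helper, and n is taken as a power of two so the arithmetic is exact):

```python
import math

def count_theta_calls(n):
    # counts how many times theta(1) executes in the nested loop
    count = 0
    i = n
    while i >= 1:
        j = i
        while j <= n:
            count += 1
            j *= 2
        i //= 2
    return count

n = 1024  # 2^10
log_n = int(math.log2(n))
expected = (log_n + 1) * (log_n + 2) // 2
print(count_theta_calls(n), expected)  # both 66
```

The measured count matches (log n + 1)(log n + 2) / 2, i.e. theta(log^2 n) growth.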