I'm trying to figure out the time complexity of this function:
Pseudo-code:

def inc(m):
    if m > 12:
        return
    for i in range(1, m):
        # some code
    mergeSort(array, 0, n-1)
    for j in range(1, m):
        # some code
    inc(m + 1)
Is the time complexity O(n^2 log n)? As you can see, this example is a recursive function that calls a different recursive function (merge sort) and then, at the end, itself. I don't know whether the for loops affect the complexity, or how the call to another recursive function such as merge sort does.
The time complexity of this code is O(m·n log n). Since m is a constant (it never exceeds 12), the complexity is bounded by the complexity of merge sort, which is O(n log n). At each recursive call, the function performs two loops, one before the merge sort and one after it, each with complexity O(m). The merge sort has a time complexity of O(n log n).
There are at most m recursive calls (m counts up to 12), so the total time complexity is m * (2m + n * log n), which is O(m * (m + n * log n)) = O(m^2) + O(m·n log n). With m bounded by the constant 12, this reduces to O(n log n).
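To make this concrete, here is a minimal sketch (hypothetical JavaScript; the loop bodies are assumed to be O(1) and each merge sort call is charged n*log2(n) units — an assumed cost model, not the real sort) that tallies the work done by the recursion:

// Hypothetical cost model: each mergeSort call costs n*log2(n) units.
function work(m, n) {
  if (m > 12) return 0;
  let ops = 0;
  ops += m - 1;                // first loop: range(1, m) runs m-1 times
  ops += n * Math.log2(n);     // mergeSort(array, 0, n-1)
  ops += m - 1;                // second loop: range(1, m) runs m-1 times
  return ops + work(m + 1, n); // recursive call with m+1
}

console.log(work(1, 1000)); // ~ 12 * 1000 * log2(1000); the n log n term dominates
console.log(work(1, 2000)); // grows like n log n, since the recursion depth is capped at 12

Because the recursion depth is capped at 12, doubling n roughly doubles the count (plus a log factor), while the starting value of m barely moves it.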
Related
for(i=1;i<=n;i=i*2)
{
    for(j=1;j<=i;j++)
    {
    }
}
How is the complexity of the above code O(n log n)?
Time complexity in terms of what? If you want to know how many inner-loop operations the algorithm performs, it is not O(n log n). If you also want to take the arithmetic operations into account, see further below. If you were literally to run that code in a programming language, chances are the compiler would notice that it does nothing and optimise the loop away, resulting in constant O(1) time complexity.

Based only on what you've given us, I would interpret the question as time complexity in terms of whatever might be inside the inner loop, not counting the arithmetic operations of the loops themselves. If so:
Consider an iteration of your inner loop to be a constant-time operation; then we just need to count how many iterations the inner loop makes.
You will find that it will make
1 + 2 + 4 + 8 + ... + n
iterations, if n is a power of two. If it is not, the loop will stop a bit sooner, but this will be our upper limit.
We can write this more generally as
the sum of 2^i where i ranges from 0 to log2(n).
Now, if you do the math, e.g. using the formula for geometric sums, you will find that this sum equals
2n - 1.
So we have a time complexity of O(2n - 1) = O(n), if we don't take the arithmetic operations of the loops into account.
If you wish to verify this experimentally, the best way is to write code that counts how many times the inner loop runs. In JavaScript, you could write it like this:
function f(n) {
  let c = 0;
  for (let i = 1; i <= n; i = i * 2) {
    for (let j = 1; j <= i; j++) {
      ++c; // count each inner-loop iteration
    }
  }
  console.log(c);
}
f(2);
f(4);
f(32);
f(1024);
f(1 << 20);
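For these powers of two, each call should print 2n - 1: 3, 7, 63, 2047, and 2097151, matching the O(n) count derived above.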
If you do want to take the arithmetic operations into account, then it depends a bit on your assumptions, on how you formulate the question, and on how you define an operation, but you can indeed pick up some logarithmic factors to account for.
First, we need to estimate the number of high-level operations executed for different n. In this case, the inner loop is the operation you want to count, if I understood the question correctly.
If that is difficult, you can automate it. I used Matlab for the example code since there was no tag for a specific language. The testing code looks like this:
% Reasonable amount of input elements placed in array, change it to fit your needs
x = 1:1:100;
% Plot linear function
plot(x,x,'DisplayName','O(n)', 'LineWidth', 2);
hold on;
% Plot n*log(n) function
plot(x, x.*log(x), 'DisplayName','O(nln(n))','LineWidth', 2);
hold on;
% Apply our function to each element of x
measured = arrayfun(@(v) test(v), x);
% Plot number of high level operations performed by our function for each element of x
plot(x,measured, 'DisplayName','Measured','LineWidth', 2);
legend
% Our function
function k = test(n)
% Counter for operations
k = 0;
% Outer loop, same as for(i=1;i<=n;i=i*2)
i = 1;
while i <= n
% Inner loop
for j=1:1:i
% Count operations
k=k+1;
end
i = i*2;
end
end
And the result will look like the following plot (the O(n) curve, the O(n·ln(n)) curve, and the measured operation counts):
Our complexity is worse than linear but not worse than O(nlogn), so we choose O(nlogn) as an upper bound.
Furthermore, the upper bound should be:
O(n*log2(n))
The worst case is when n is a power of two, i.e. n = 2^x for some integer x. The inner loop is evaluated up to n times per pass, and the outer loop log2(n) (logarithm base 2) times.
I am a novice at analysing time complexity. Can someone help me with the time complexity of the algorithm below?
public void test(int n)
{
    int i = 1;
    while (i < n)
    {
        int j = 1;
        while (j < i)
        {
            j = j * 2;
        }
        i = i * 2;
    }
}
The outer loop will run log(n) times. How many times will the inner loop run? How can we calculate the frequency of the inner loop in terms of n, given that it depends on the variable i and runs log(i) times?
Can someone help me find the time complexity of the above code.
The time complexity of the given function is O(log n · log n) = O(log^2 n).
The outer loop has time complexity O(log n).
Similarly, the inner loop also has time complexity O(log n) because the value of i is bounded above by n. Hence log i = O(log n).
The outer loop will run log n times.
The inner loop is bounded by i, but i only takes the values
1, 2, 4, ..., 2^k, i.e. 2^0, 2^1, ..., 2^k where k = log n.
So across all iterations, the inner loop runs for
log(1) + log(2) + log(4) + ... + log(2^k) times.
Since log(2^j) = j, this sum equals
0 + 1 + 2 + ... + k = k(k+1)/2
so the inner loop has total complexity O(log^2 n).
So the time complexity of the above algorithm is
log n + log^2 n
which is O(log^2 n).
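As a sanity check, here is a small JavaScript sketch (mirroring the Java above) that counts the inner-loop iterations:

// Count the inner-loop iterations of test(n) from the question.
function count(n) {
  let c = 0;
  let i = 1;
  while (i < n) {
    let j = 1;
    while (j < i) {
      j = j * 2;
      ++c; // one inner-loop iteration
    }
    i = i * 2;
  }
  return c;
}

console.log(count(1 << 10)); // 45 = 0 + 1 + ... + 9, about (log2 n)^2 / 2
console.log(count(1 << 20)); // 190 = 0 + 1 + ... + 19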
How to calculate the time complexity of the following algorithm?
for(i=1;i<=n;i++)
    for(k=i;k*k<=n;k++)
    {
        Statements;
    }
From what I know, the time complexity of nested for loops is equal to the number of times the innermost loop executes. So here the innermost loop is executed n*n times, hence it's O(n^2).
Could it be O(n) depending upon the condition k*k<=n given in the second loop?
Thank you!
Time complexity of an algorithm is always measured in terms of a certain type of operation. For example, if your Statements; have an unknown time complexity which depends on n, then it would be misleading to describe the time complexity in the first place.
But what you are probably after is to know the time complexity in terms of Statements; operations. If Statements; is a constant-time operation, this becomes especially meaningful. And in this case, what we are looking for is simply to count how many times Statements; are executed. If this number is, say, 3*n, then the time complexity would be O(n).
To answer this question, let us break your nested loop apart. The outer loop iterates from (and including) 1 to n, so it will run exactly n times, regardless of anything.
For each iteration of the outer loop, the inner loop will execute once. It starts from k=i and iterates until k*k > n, i.e. k > sqrt(n). Notice that whenever i > sqrt(n), it will not run at all. Summing over all iterations of the outer loop, we can see that the inner loop will run for
sqrt(n) + (sqrt(n)-1) + (sqrt(n)-2) + ... + 0
iterations in total. By the formula for the sum of an arithmetic series, this equals
O( sqrt(n) * (sqrt(n) + 1) / 2 ) = O( (n + sqrt(n))/2 ) = O( n + sqrt(n) ) = O(n).
So yes, the time complexity in this case is O(n) as you suggested.
You can see this in action by writing a simple script which simulates your algorithm and counts the number of Statements;. Below in JavaScript, so it can be run as a snippet:
// Simulation
function f(n) {
let res = 0;
for(let i=1;i<=n;i++)
for(let k=i;k*k<=n;k++)
++res;
return res;
}
// Estimation
function g(n) {
return ~~((n + Math.sqrt(n))/2);
}
console.log(
f(10),
f(100),
f(1000),
f(10000),
);
console.log(
g(10),
g(100),
g(1000),
g(10000),
);
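Running this, f should print 6, 55, 496, 5050 and g should print 6, 55, 515, 5050: the estimate tracks the simulation closely (exactly, whenever n is a perfect square), consistent with the O(n) bound.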
I hope you found this useful.
Hi, could anyone explain why the first one is True and the second one is False?
For the first loop, the number of times it executes is k, where for a given n, i takes the values 1, 2, 4, ..., staying less than n. So
2^k <= n
or k <= log(n).
This implies that k, the number of times the first loop executes, is log(n); that is, the time complexity here is O(log(n)).
The second loop's execution does not depend on p, since p is not used in the loop's decision statement. p does take different values inside the loop, but it doesn't influence the decision statement; counting the number of times p*p executes, the time complexity is O(n).
O(logn):
for(i=1;i<n;i=i*c){ /* any O(1) expression */ }
Here, the time complexity is O(logn) when the index i is multiplied/divided by a constant value. (Note that i must start from a non-zero value; starting from 0 would loop forever, since 0*c stays 0.)
In the second case,
for(p=2,i=1;i<n;i++){ p=p*p; }
the incremental increase is constant, i.e. i=i+1, so the loop will run n times irrespective of the value of p. Hence the loop alone has a complexity of O(n). Considering naive multiplication, p = p*p is an O(n) expression, where n here is the size of p. Hence the complexity should be O(n^2).
Let me summarize with an example. Suppose the value of n is 8; then the possible values of i are 1, 2, 4, 8, and as soon as i reaches 8 the loop will break. You can see the loop runs 3 times, i.e. log(n) times, as the value of i keeps increasing by 2x. Hence, True.
For the second part, it is a normal loop which runs for all values of i from 1 to n, and the value of p grows by squaring on each iteration (p, p^2, p^4, ...), which does not affect the loop count. That's why it is False.
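A quick sketch (hypothetical JavaScript, assuming c = 2 for the first loop) that counts the iterations of both loops:

// First loop: i is multiplied by a constant c each pass.
function firstLoop(n, c = 2) {
  let count = 0;
  for (let i = 1; i < n; i = i * c) count++; // i: 1, c, c^2, ...
  return count; // ~ log_c(n)
}

// Second loop: i increments by 1; p never appears in the condition.
function secondLoop(n) {
  let count = 0;
  let p = 2;
  for (let i = 1; i < n; i++) {
    p = p * p; // p grows very fast (and overflows to Infinity), but the count ignores it
    count++;
  }
  return count; // n - 1
}

console.log(firstLoop(1024));  // 10 = log2(1024)
console.log(secondLoop(1024)); // 1023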
In order to understand why some algorithm is O(log n) it is enough to check what happens when n = 2^k (i.e., we can restrict ourselves to the case where log n happens to be an integer k).
If we inject this into the expression
for(i=1; i<2^k; i=i*2) s+=i;
we see that i will adopt the values 1, 2, 4, 8, ..., i.e., 2^0, 2^1, 2^2, 2^3, ..., until reaching the last one, 2^(k-1) (since the condition is i < 2^k). In other words, the body of the loop will be evaluated k times. Therefore, if we assume that the body is O(1), we see that the complexity is k*O(1) = O(k) = O(log n).
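Here is a tiny check of the claim that the body runs k times (a sketch, assuming the body is just s += i):

// For n = 2^k, count how many times the loop body executes.
function bodyRuns(k) {
  const n = 2 ** k;
  let s = 0, runs = 0;
  for (let i = 1; i < n; i = i * 2) {
    s += i;
    runs++;
  }
  return runs; // expect k: i takes the values 2^0 ... 2^(k-1)
}

console.log(bodyRuns(5));  // 5  (i = 1, 2, 4, 8, 16)
console.log(bodyRuns(10)); // 10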
Consider a tree where the cost of an insertion is in O(log n). Say you start from an empty tree and add N elements iteratively. We want to know the total time complexity. I did this:
number of operations in iteration i = log i
number of operations in all iterations from 1 to N = log 1 + log 2 + ... + log N = log(N!)
total complexity = O(log(N!)) ~ O(N log N)
(cf the Stirling approximation http://en.wikipedia.org/wiki/Stirling%27s_approximation )
Is this correct?
Yes, it's nearly correct.
A small correction: in the i-th step, the number of operations is not exactly log i, as most of the time that's an irrational number; it's O(log i). So for a mathematically tight proof you have to work a bit harder, but in short, what you wrote is the essence of the proof.
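To see numerically that log(N!) grows like N log N, here is a small JavaScript sketch:

// Compare the running sum of log2(i) (i.e. log2(N!)) against N*log2(N).
function sumLogs(N) {
  let s = 0;
  for (let i = 1; i <= N; i++) s += Math.log2(i);
  return s;
}

const N = 1 << 16;
console.log(sumLogs(N));       // ~ 9.5e5, i.e. log2(N!)
console.log(N * Math.log2(N)); // 1048576; same order of growth: O(N log N)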