I want to know the time complexity of the code attached.
I get O(n^2logn), while my friends get O(nlogn) and O(n^2).
someMethod() runs in log n time.
Here is the code:
j = i**2;
for (k = 0; k < j; k++) {
    for (p = 0; p < j; p++) {
        x += p;
    }
    someMethod();
}
The question is not very clear about the variable N and the expression i**2.
i**2 gives a compilation error in Java.
Assuming someMethod() takes log N time (as mentioned in the question), and completely ignoring the value of N:
let's call i**2 Z.
someMethod() runs Z times, and the time complexity of the method is log N, so that becomes:
Z * log N ----------------------------------------- A
Let's call this expression A.
Now, x += p runs Z^2 times (the k loop times the p loop) and takes constant time to run. That makes the following expression:
( Z^2 ) * 1 = ( Z^2 ) ---------------------- B
Let's call this expression B.
The total run time is the sum of expressions A and B, which brings us to:
O((Z * log N) + (Z^2))
where Z = i**2
so the final expression will be O(((i**2) * log N) + ((i**2)^2))
If we can assume i**2 means i^2, the expression becomes
O(((i^2) * log N) + (i^4))
Considering only the highest-order term, just as we keep n^2 in n^2 + 2n + 5, the complexity can be expressed as follows:
i^4
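As a quick sanity check (my own sketch, not part of the question; the stub someMethod here just stands in for the O(log N) call), we can count how many times each part actually runs for a concrete i:

public class CountOps {
    static long bodyRuns = 0, methodCalls = 0;

    // Stand-in for the O(log N) call from the question.
    static void someMethod() { methodCalls++; }

    static void run(long i) {
        long j = i * i;    // this is Z in the analysis above
        long x = 0;
        for (long k = 0; k < j; k++) {
            for (long p = 0; p < j; p++) {
                x += p;
                bodyRuns++;
            }
            someMethod();
        }
        System.out.println("Z = " + j
                + ", x += p ran " + bodyRuns + " times (Z^2 = " + j * j + ")"
                + ", someMethod ran " + methodCalls + " times (Z = " + j + ")");
    }

    public static void main(String[] args) { run(10); }
}

For i = 10 this prints Z = 100, 10000 executions of x += p and 100 calls to someMethod, matching the Z^2 and Z counts used above.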
Based on the code in the question, the complexity is O((I^2 * log N) + I^4).
We cannot give a complexity class with one variable because the question does not explain the relationship between N and I. They must be treated as separate variables.
And likewise, we cannot eliminate the I^2 * log N term, because the N variable will dominate the I variable in some regions of the N x I space.
If we treat N as a constant, then the complexity class reduces to O(I^4).
We get the same if we treat N as being the same thing as I; i.e. there is a typo in the question.
(I think there is mistake in the way the question was set / phrased. If not, this is a trick question designed to see if you really understood the mathematical principles behind complexity involving multiple independent variables.)
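To see the earlier point that neither term can be dropped in general, here is a tiny numeric sketch (my own, not part of the answer) that evaluates both terms in two different corners of the N x I space; each term dominates in one of them:

public class TwoVariables {
    // Compare the I^2 * log N term with the I^4 term for a given (I, N).
    static void print(double I, double N) {
        double a = I * I * Math.log(N);   // I^2 * log N
        double b = Math.pow(I, 4);        // I^4
        System.out.printf("I=%.0f N=%.0e  I^2*logN=%.1f  I^4=%.1f%n", I, N, a, b);
    }

    public static void main(String[] args) {
        print(2, 1e18);    // small I, huge N: the log N term wins
        print(1000, 16);   // large I, small N: the I^4 term wins
    }
}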
Can anybody help me find the time complexity of this recursive function?
int test(int m, int n) {
    if (n == 0)
        return m;
    else
        return (3 + test(m + n, n - 1));
}
The recursive call test(m + n, n - 1) is made n times before the base case (n == 0) is reached, so the complexity is O(n).
Also, this is a duplicate of Determining complexity for recursive functions (Big O notation)
It is really important to understand recursion and the time-complexity of recursive functions.
The first step to understanding easy recursive functions like this one is being able to write the same function in an iterative way. This is not always easy and not always reasonable, but in the case of an easy function like yours it shouldn't be a problem.
So what happens in your function in every recursive call?
Is n > 0?
If yes:
    m = m + n + 3
    n = n - 1
If no:
    return m
Now it should be pretty easy to come up with the following (iterative) alternative:
int testIterative(int m, int n) {
    while (n != 0) {
        m = m + n + 3;
        n = n - 1;
    }
    return m;
}
Please note: you should pay attention to negative n. Do you understand what the problem is there?
Time complexity
After looking at the iterative version, it is easy to see that the time complexity depends on n: the time complexity therefore is O(n).
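To convince yourself that the two versions really compute the same thing (and therefore have the same O(n) behaviour), here is a minimal comparison driver (my addition, assuming n >= 0):

public class TestCompare {
    static int test(int m, int n) {
        if (n == 0)
            return m;
        else
            return 3 + test(m + n, n - 1);
    }

    static int testIterative(int m, int n) {
        while (n != 0) {
            m = m + n + 3;
            n = n - 1;
        }
        return m;
    }

    public static void main(String[] args) {
        // Both versions should agree for every non-negative n.
        for (int n = 0; n <= 10; n++)
            System.out.println("n=" + n + ": recursive=" + test(5, n)
                    + ", iterative=" + testIterative(5, n));
    }
}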
I have a simple loop like this:
for (int i = 0; i < n; i++) {
    // constant time operation
}
It’s very easy to see that it’s of O(n) time complexity, but if we calculate it, why is it 2*n + 2 + c*n (given answer) and not (1+ (n+1) + 2*n + c*n) = (3+c)*n + 2? I see i++ as 2 operations: addition and assignment; thus, it should be 2*n, and the constant operation is executed n times, so it’s c*n.
2*n: n comparisons (i < n) + n increments (i++)
2: 1 for the assignment i = 0 + 1 for the allocation of i
c*n: n constant-time operations in the loop body
From what I can tell, increment/decrement is treated as a single operation. This makes sense because in assembly, you can perform an increment or decrement with a single line of assembly code. Furthermore, most lines of assembly code translate directly to a single binary instruction. Thus, increment/decrement is effectively a constant time operation.
Therefore, we have n operations from incrementing. We also run the body of the loop n times, and the body of the loop performs a constant time operation, so we have an additional c * n operations. When we enter the loop the first time, there is an additional assignment operation. This yields another operation. Finally, after the loop runs the nth time, the loop checks the condition of the loop one more time. This means there are n + 1 comparisons that the loop performs.
Adding these up, we have n + c * n + 1 + (n + 1) = 2 * n + c * n + 2, which is the answer you saw.
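If you want to check that total numerically, here is a small tally (my own sketch) that uses the same counting model as above: one assignment, n + 1 comparisons, n increments, and c operations per loop body:

public class OpCount {
    public static void main(String[] args) {
        int n = 1000, c = 1;   // c = cost of the constant-time loop body
        long ops = 0;

        ops += 1;              // i = 0: one assignment
        int i = 0;
        while (true) {
            ops += 1;          // comparison i < n (runs n + 1 times in total)
            if (!(i < n)) break;
            ops += c;          // constant-time loop body
            ops += 1;          // increment i++
            i++;
        }

        System.out.println("counted " + ops
                + ", formula 2*n + c*n + 2 = " + (2L * n + (long) c * n + 2));
    }
}

Both the tally and the formula give 3002 for n = 1000 and c = 1.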
Suppose I have a recursive procedure with a formal parameter p. This procedure
wraps the recursive call in a Θ(1) (deferred) operation
and executes a Θ(g(k)) operation before that call.
k is dependent upon the value of p. [1]
The procedure calls itself with the argument p/b, where b is a constant (assume the recursion terminates at some point, once the argument falls between 0 and 1).
Question 1.
If n is the value of the argument to p in the initial call to the procedure, what are the orders of growth of the space and the number of steps executed, in terms of n, for the process this procedure generates
if k = p? [2]
if k = f(p)? [3]
Footnotes
[1] i.e., upon the value of the argument passed into p.
[2] i.e., the size of the input to the nested operation is same as that for our procedure.
[3] i.e., the size of the input to the nested operation is some function of the input size of our procedure.
Sample procedure
(define (* a b)
  (cond ((= b 0) 0)
        ((even? b) (double (* a (halve b))))
        (else (+ a (* a (- b 1))))))
This procedure performs integer multiplication as repeated additions based on the rules
a * b = double (a * (b / 2)) if b is even
a * b = a + (a * (b - 1)) if b is odd
a * b = 0 if b is zero
Pseudo-code:
define *(a, b) as
{
    if (b is 0) return 0
    if (b is even) return double of *(a, halve(b))
    else return a + *(a, b - 1)
}
Here
the formal parameter is b.
the argument to the recursive call is b/2.
double x is a Θ(1) operation, like return x + x.
halve k is a Θ(g(k)) operation with k = b, i.e., it is Θ(g(b)).
Question 2.
What will be the orders of growth, in terms of n, when *(a, n) is evaluated?
Before You Answer
Please note that the primary questions are the two parts of question 1.
Question 2 can be answered like the first part of question 1. For the second part, you can assume f(p) to be any function you like: log p, p/2, p^2, etc.
I saw someone has already answered question 2, so I'll answer question 1 only.
The first thing to notice is that the two parts of the question are equivalent. In the first part, k = p, so we execute a Θ(g(p)) operation for some function g. In the second, k = f(p) and we execute a Θ(g(f(p))) = Θ((g∘f)(p)) operation. Replace g from the first part by g∘f and the second part is solved.
Thus, let's consider the first case only, i.e. k = p. Denote the time complexity of the recursive procedure by T(n); we have:
T(n) = T(n/b) + g(n) [the g(n) term should be multiplied by a constant c, but we can measure complexity in "amounts of c" and the theta bound obviously remains the same]
The solution of this recurrence is T(n) = g(n) + g(n/b) + ... + g(n/b^i) + ... + g(1).
We cannot simplify it further unless we are given additional information about g. For example, if g is a polynomial, g(n) = n^k, we get
T(n) = n^k * (1 + b^(-k) + b^(-2k) + b^(-3k) + ... + b^(-k*log_b(n))) <= n^k * (1 + b^(-k) + b^(-2k) + ...) <= n^k * c for a constant c, thus T(n) = Θ(n^k).
But if g(n) = log_b(n) [from now on I omit the base of the log], we get T(n) = log(n) + log(n/b) + ... + log(n/b^(log_b(n))) = log(n^(log n) / b^(1 + 2 + ... + log(n))) = log(n)^2 - log(n)^2/2 - log(n)/2 = Θ(log(n)^2) = Θ(g(n)^2).
You can easily prove, using a proof similar to the polynomial case, that when g = Ω(n), i.e. at least linear, the complexity is Θ(g(n)). But when g is sublinear, the complexity may well be bigger than g(n), since g(n/b) may be much bigger than g(n)/b.
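To see the two behaviours concretely, here is a small numeric sketch (my own, not part of the proof) that unrolls T(n) = T(n/b) + g(n) for a polynomial g and for g = log, and divides by the predicted bound; both ratios settle near constants, illustrating Θ(g(n)) and Θ(g(n)^2) respectively:

import java.util.function.DoubleUnaryOperator;

public class RecurrenceSketch {
    // Unroll T(n) = T(n/b) + g(n) directly.
    static double T(double n, double b, DoubleUnaryOperator g) {
        if (n <= 1) return g.applyAsDouble(1);
        return T(n / b, b, g) + g.applyAsDouble(n);
    }

    public static void main(String[] args) {
        double b = 2;
        for (double n = 1e3; n <= 1e12; n *= 1e3) {
            double poly = T(n, b, x -> x * x);   // g(n) = n^2
            double logg = T(n, b, Math::log);    // g(n) = log n
            System.out.printf("n=%.0e  T/n^2=%.3f  T/log(n)^2=%.3f%n",
                    n, poly / (n * n), logg / (Math.log(n) * Math.log(n)));
        }
    }
}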
You need to apply worst-case analysis.
First,
you can approximate the solution by using powers of two:
If n is a power of two, n = 2^m, then clearly the algorithm takes Θ(m) steps, where m = log_2(n).
If n is an odd number, then after applying -1 you get an even number and you divide by 2; you can repeat this only log_2(n) times, so the number of steps is also Θ(log n). The case of b being an odd number is clearly the worst case, and this gives you the answer.
(I think you need an additional base case for: b = 1)
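For a concrete feel of the worst case, here is a small step counter (my own Java translation of the pseudocode above, counting one step per recursive call); powers of two need about log2(n) steps, while all-ones numbers like 1023 come close to the 2 * log2(n) worst case:

public class FastMulSteps {
    static long steps;

    // Multiply by repeated doubling/addition, as in the pseudocode above.
    static long mul(long a, long b) {
        steps++;
        if (b == 0) return 0;
        if (b % 2 == 0) return 2 * mul(a, b / 2);   // double of *(a, halve(b))
        return a + mul(a, b - 1);                    // a + *(a, b - 1)
    }

    public static void main(String[] args) {
        for (long n : new long[]{16, 15, 1024, 1023, 1_000_000}) {
            steps = 0;
            long r = mul(3, n);
            System.out.println("n=" + n + "  result=" + r + "  steps=" + steps
                    + "  bound ~2*log2(n)+2=" + Math.round(2 * Math.log(n) / Math.log(2) + 2));
        }
    }
}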
I am trying to calculate the time complexity of this function
Code
int Almacen::poner_items(id_sala s, id_producto p, int cantidad) {
    it_prod r = productos.find(p);
    if (r != productos.end()) {
        int n = salas[s - 1].size();
        int m = salas[s - 1][0].size();
        for (int i = n - 1; i >= 0 && cantidad > 0; --i) {
            for (int j = 0; j < m && cantidad > 0; ++j) {
                if (salas[s - 1][i][j] == "NULL") {
                    salas[s - 1][i][j] = p;
                    r->second += 1;
                    --cantidad;
                }
            }
        }
    }
    else {
        displayError();
        return -1;
    }
    return cantidad;
}
The variable productos is a std::map and its find method has a time complexity of O(log n); the other variable, salas, is a std::vector.
I calculated the time and found it to be log(n) + nm, but I am not sure whether that is the correct expression, whether I should leave it as nm because that is the dominant part, or whether I should use n^2 only.
Thanks
The overall function is O(nm). Big-O notation is all about "in the limit of large values" (and ignores constant factors). "Small" overheads (like an O(log n) lookup, or even an O(n log n) sort) are ignored.
Actually, the O(n log n) sort case is a bit more complex. If you expect m to be typically the same sort of size as n, then O(nm + nlogn) == O(nm), if you expect n ≫ m, then O(nm + nlogn) == O(nlogn).
Incidentally, this is not a question about C++.
In general when using big O notation, you only leave the most dominant term when taking all variables to infinity.
n by itself is much larger than log n at infinity, so even without m you can (and generally should) drop the log n term, so O(nm) looks fine to me.
In non-theoretical use cases, it is sometimes important to understand the actual complexity (for non-infinite inputs), since algorithms that are slow in the limit can sometimes produce better results for shorter inputs (there are examples where O(1) algorithms have such a terrible constant that an exponential algorithm does better in real life). Quicksort is a practical example of a worst-case O(n^2) algorithm that often does better than its O(n log n) counterparts.
Read about "Big O Notation" for more info.
Let
k = productos.size()
n = salas[s - 1].size()
m = salas[s - 1][0].size()
Then your algorithm is O(log(k) + nm). You need to use a distinct name for each independent variable.
Now it might be the case that there is a relation between k, n and m, and you could re-label with a reduced set of variables, but that is not discernible from your code; you need to know about the data.
It may also be the case that some of these terms won't grow large, in which case they are actually constants, i.e. O(1).
E.g. you may know that k << n, k << m and n ~= m, which allows you to describe it as O(n^2).
So these are the for loops that I have to find the time complexity of, but I do not really understand how to calculate it.
for (int i = n; i > 1; i /= 3) {
    for (int j = 0; j < n; j += 2) {
        ... ...
    }
    for (int k = 2; k < n; k = k * k) {
        ...
    }
}
For the first loop, (int i = n; i > 1; i /= 3), it keeps dividing i by 3, and when i is no longer greater than 1 the loop stops, right?
But what is the time complexity of that? I think it is n, but I am not really sure.
The reason I think it is n is: if I assume that n is 30, then i will be 30, 10, 3, 1 and then the loop stops. It runs n times, doesn't it?
And for the last for loop, I think its time complexity is also n, because what it does is:
k starts at 2 and keeps multiplying itself by itself until k is greater than or equal to n.
So if n is 20, k will be 2, 4, 16 and then stop. It runs n times too.
I don't think I really understand this kind of question, because time complexity can be log(n) or n^2 or something else, but all I see is n.
I don't really know when a log or a square comes into it, or anything else.
Every for loop runs n times, I think. How can a log or a square be involved?
Can anyone help me understand this? Please.
Since the three loops are independent of each other, we can analyse them separately and combine the results (the outer count times the sum of the inner counts) at the end.
1. i loop
A classic logarithmic loop. There are countless examples on SO, this being a similar one. Using the result given on that page and replacing the division constant:
The exact number of times that this loop will execute is ceil(log3(n)).
2. j loop
As you correctly figured, this runs O(n / 2) times.
The exact number is ceil(n / 2).
3. k loop
Another classic known result - the log-log loop. The code happens to be an exact replica of the one in this SO post.
The exact number is ceil(log2(log2(n))).
Combining the above steps, the total number of inner-loop executions is ceil(log3(n)) * (ceil(n / 2) + ceil(log2(log2(n)))), which is O(n log n).
Note that the j-loop overshadows the k-loop.
Numerical tests for confirmation
JavaScript code:
T = function(n) {
    var m = 0;
    for (var i = n; i > 1; i /= 3) {
        for (var j = 0; j < n; j += 2)
            m++;
        for (var k = 2; k < n; k = k * k)
            m++;
    }
    return m;
}

M = function(n) {
    return Math.ceil(Math.log(n) / Math.log(3)) * (Math.ceil(n / 2) + Math.ceil(Math.log2(Math.log2(n))));
}
M(n) is what the math predicts that T(n) will exactly be (the number of inner loop executions):
n T(n) M(n)
-----------------------
100000 550055 550055
105000 577555 577555
110000 605055 605055
115000 632555 632555
120000 660055 660055
125000 687555 687555
130000 715055 715055
135000 742555 742555
140000 770055 770055
145000 797555 797555
150000 825055 825055
M(n) matches T(n) perfectly, as expected. A plot of T(n) against n log n (the predicted time complexity) is a convincing straight line.
tl;dr: I describe a couple of examples first, then analyze the complexity of OP's problem at the bottom of this post.
In short, the big O notation tells you something about how a program is going to perform if you scale the input.
Imagine a program (P0) that counts to 100. No matter how often you run the program, it's going to count to 100 roughly as fast each time. Obvious, right?
Now imagine a program (P1) that counts to a number that is variable, i.e. it takes a number as input and counts up to it. We call this variable n. Each time P1 runs, its performance depends on the size of n. If we make n 100, P1 will run very quickly. If we make n a googolplex, it's going to take a lot longer.
Basically, the performance of P1 depends on how big n is, and this is what we mean when we say that P1 has time complexity O(n).
Now imagine a program (P2) that counts to the square of n rather than to n itself. Clearly the performance of P2 is going to be worse than P1's, because the numbers they count to differ immensely (especially for larger n, i.e. when scaling). You'll know by intuition that P2's time complexity is O(n^2) if P1's complexity is O(n).
Now consider a program (P3) that looks like this:
var length = input.Length;
for (var i = 0; i < length; i++) {
    for (var j = 0; j < length; j++) {
        Console.WriteLine($"Product is {input[i] * input[j]}");
    }
}
There's no n to be found here, but as you might realise, this program still depends on an input, called input here. Simply because the program depends on some kind of input, we refer to that input as n when we talk about time complexity. If a program takes multiple inputs, we simply give them different names, so that a time complexity could be expressed as O(n * n2 + m * n3), where this hypothetical program would take 4 inputs.
For P3, we can find its time complexity by first identifying the inputs and then analysing how its performance depends on them.
P3 uses 3 variables, called length, i and j. The first line of code does a simple assignment, whose performance does not depend on any input, meaning the time complexity of that line is O(1), i.e. constant time.
The second line of code is a for loop, implying we're going to do something that may depend on the length of something. And indeed this first for loop (and everything in it) will be executed length times. If we increase the size of length, this line of code will do linearly more work, so this line's time complexity is O(length) (called linear time).
The next line of code takes O(length) time again, following the same logic as before; however, since we execute this loop every time we execute the loop around it, the time complexities are multiplied: O(length) * O(length) = O(length^2).
The inside of the second for loop does not depend on the size of the input (even though the input is needed), because indexing into the input (for arrays!) does not become slower if we increase the size of the input. This means the inside is constant time, O(1). Since it runs inside the other for loops, we again multiply to get the total time complexity of the nested lines of code: enclosing for-loops * current block of code = O(length^2) * O(1) = O(length^2).
The total time complexity of the program is just the sum of everything we've calculated: O(1) + O(length^2) = O(length^2) = O(n^2). The first line of code was O(1) and the for loops were analysed to be O(length^2). You will notice 2 things:
We rename length to n: we do this because we express time complexity in terms of generic parameters, not the names that happen to live inside the program.
We removed O(1) from the equation: we do this because we're only interested in the biggest (= fastest growing) terms. Since O(n^2) is way 'bigger' than O(1), the time complexity is defined equal to it (this only works like that for terms, e.g. split by +, not for factors, e.g. split by *).
OP's problem
Now we can consider your program (P4), which is a little trickier because the variables within the program are defined a little more loosely than the ones in my examples.
for (int i = n; i > 1; i /= 3) {
    for (int j = 0; j < n; j += 2) {
        ... ...
    }
    for (int k = 2; k < n; k = k * k) {
        ...
    }
}
If we analyse it, we can say this:
The first for loop is executed O(log3(n)) times: since i is divided by 3 on every iteration, it takes about log base 3 of n iterations before i becomes smaller than or equal to 1.
The second for loop is linear in time, because j is incremented by 2 rather than 1 (which would be 'normal'), so its body is executed O(n / 2) times. Since we know that O(n / 2) = O(n), we can say that this for loop is executed O(log3(n)) * O(n) = O(n * log(n)) times (the first for times the nested for).
The third for is also nested in the first for, but since it is not nested in the second for, we're not going to multiply it by the second one (obviously, because it is only executed once each time the first for is executed). Here, k is bounded by n; however, since it is multiplied by a factor of itself each time (we square it), we cannot say it is linear: its growth is governed by the variable itself rather than by a constant. Since we square k each time, it reaches n in log2(log2(n)) steps. Deducing this is easy if you understand how logs work; if you don't get this, you need to understand that first. In any case, since this loop runs O(log2(log2(n))) times per outer iteration, the total complexity of the third for is O(log3(n)) * O(log2(log2(n))) = O(log(n) * log(log(n))).
The total time complexity of the program is now the sum of the different sub-complexities: O(n * log(n)) + O(log(n) * log(log(n))).
As we saw before, we only care about the fastest-growing term in big O notation, so we say that the time complexity of your program is O(n * log(n)).