I want to calculate the worst-case time complexity for this recursive function.
list is a list of m*n pieces.
matrix is an m*n matrix to fill with these pieces.
Backtrack(list, matrix):
if(matrix is complete) //O(1)
return true
from the list of m*n pieces, make a list of candidatePieces to put in the matrix. // O(m*n)
for every candidatePiece // Worst case of n*m calls
Put that piece in the matrix // O(1)
if(Backtrack(list, matrix) is true)
return true
What I guess is that the formula is something like:
T(n*m) = T(n*m - 1) + O(n*m) + O(1) = T(n*m - 1) + O(n*m)
Is this correct?
I can't use the master theorem; which other method can I use to get a closed formula?
If you unwind your formula, you'll get
T(n*m) = T(n*m-1)+O(n*m) = T(n*m-2)+O(n*m-1) + O(n*m) = ...
= O(n*m) + O(n*m-1) + O(n*m-2) +... + O(1) => ~ O(n^2*m^2)
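If you want to see that numerically, here is a small Python sketch (with an assumed unit cost c = 1 per level) that unrolls the recurrence and compares it with k^2, where k stands for n*m; the ratio settles near 1/2, consistent with O(n^2*m^2):

# Unroll T(k) = T(k - 1) + c*k with an assumed base cost T(0) = c.
def unrolled_cost(k, c=1):
    total = c
    for step in range(1, k + 1):
        total += c * step
    return total

for k in (10, 100, 1000):
    t = unrolled_cost(k)
    print(k, t, t / k**2)   # ratio tends towards 0.5, i.e. T(k) ~ k^2 / 2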
But I wonder, is that algorithm complete? It does not seem to return false at all.
So we have:
Backtrack(list, matrix):
if(matrix is complete) //O(1)
return true
from the list of m*n pieces, make a list of candidatePieces to put in the matrix. // O(m*n)
for every candidatePiece // Worst case of n*m calls
Put that piece in the matrix // O(1)
if(Backtrack(list, matrix) is true)
return true
We assume worst-case behaviour, so matrix is complete is false for as long as possible. I'm going to assume this holds for up to O(n·m) insertions.
So consider for every candidatePiece. In order to get to the second item, you'll need Backtrack(list, matrix) is true to be false at least once. This requires termination other than by return true. As the only way to do that requires exhausting the loop, and exhausting the loop requires the same thing one level deeper (and so on), this can only happen if the list of candidatePieces is empty.
I assume pieces can only be used once. In this case, there are O(n·m) things in the matrix. This means matrix is complete has returned true, so this actually doesn't let the loop run a second time.
If pieces don't run out, this is also obviously true.
We can then simplify to
Backtrack(list, matrix):
if(matrix is complete)
return true
if(candidatePiece is not empty)
Put the first piece in the matrix
if(Backtrack(list, matrix) is true)
return true
Then we have
T(n·m) = T(n·m - 1) + O(n·m)
as you said, which leads to Ashalynd's answer.
Assume instead that matrix is complete can return false even when the matrix is full. Assume also that pieces run out, so the matrix can never be over-full. I also assume that pieces are removed as appropriate.
The outermost loop (loop 0) costs O(n·m) and runs the inner loop (loop 1) n·m times.
The inner loop (loop 1) costs O(n·m - 1) and runs the inner inner loop (loop 2) n·m -1 times.
...
Loop n·m - 1 costs 1 and runs loop n·m once.
Loop n·m costs 0 (call overhead moves into loop n·m - 1) and terminates.
So, let T(n·m) be the cost for loop 0, and T(0) be the cost for loop n·m.
T(x) = O(x) + x · T(x - 1)
and WolframAlpha solves this to be O(Γ(x+1) + x·Γ(x, 1)).
By some research, this is equal to
O(x! + x· [ (x-1)! · e⁻¹ · eₓ₋₁(1) ])
= O(x!) + O(x!· eₓ₋₁(1))
= O(x!) + O(x!· ∑ 1/k! from k=0 to x-1)
= O(x!)
So for x = n·m we are talking O((n·m)!) time. That's pretty poor :/.
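To see the factorial growth concretely, here is a small Python sketch (again with an assumed unit cost c = 1) that evaluates the recurrence directly; the ratio T(x)/x! levels off at a constant (about e), i.e. T(x) = Θ(x!):

from math import factorial

# Evaluate T(x) = c*x + x*T(x - 1) with T(0) = 0 and an assumed cost c = 1.
def T(x, c=1):
    if x == 0:
        return 0
    return c * x + x * T(x - 1)

for x in range(1, 11):
    print(x, T(x), factorial(x), T(x) / factorial(x))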
Related
for(i=1;i<=n;i=i*2)
{
for(j=1;j<=i;j++)
{
}
}
How is the complexity of this code O(n log n)?
Time complexity in terms of what? If you want to know how many inner-loop operations the algorithm performs, it is not O(n log n). If you also want to take the arithmetic operations into account, see further below. If you literally plug that code into a programming language, chances are the compiler will notice that your code does nothing and optimise the loop away, resulting in constant O(1) time complexity. Based only on what you've given us, I would interpret it as time complexity in terms of whatever might be inside the inner loop, not counting the arithmetic operations of the loops themselves. If so:
Consider an iteration of your inner loop a constant-time operation, then we just need to count how many iterations the inner loop will make.
You will find that it will make
1 + 2 + 4 + 8 + ... + n
iterations, if n is a power of two. If it is not, it will stop a bit sooner, but this gives our upper bound.
We can write this more generally as
the sum of 2^i where i ranges from 0 to log2(n).
Now, if you do the math, e.g. using the formula for geometric sums, you will find that this sum equals
2n - 1.
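(Spelled out with the geometric-sum formula: 2^0 + 2^1 + ... + 2^(log2(n)) = 2^(log2(n)+1) - 1 = 2n - 1.)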
So we have a time complexity of O(2n - 1) = O(n), if we don't take the arithmetic operations of the loops into account.
If you wish to verify this experimentally, the best way is to write code that counts how many times the inner loop runs. In javascript, you could write it like this:
function f(n) {
let c = 0;
for(let i = 1; i <= n; i = i*2) {
for(let j = 1; j <= i; j++) {
++c;
}
}
console.log(c);
}
f(2);
f(4);
f(32);
f(1024);
f(1 << 20);
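For these power-of-two inputs the calls should print 3, 7, 63, 2047 and 2097151 respectively, i.e. 2n - 1 each time.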
If you do want to take the arithmetic operations into account, then it depends a bit on your assumptions but you can indeed get some logarithmic coefficients to account for. It depends on how you formulate the question and how you define an operation.
First, we need to estimate the number of high-level operations executed for different n. In this case the inner loop body is the operation you want to count, if I understood the question right.
If it is difficult, you may automate it. I used Matlab for the example code since there was no tag for a specific language. Testing code will look like this:
% Reasonable amount of input elements placed in array, change it to fit your needs
x = 1:1:100;
% Plot linear function
plot(x,x,'DisplayName','O(n)', 'LineWidth', 2);
hold on;
% Plot n*log(n) function
plot(x, x.*log(x), 'DisplayName','O(nln(n))','LineWidth', 2);
hold on;
% Apply our function to each element of x
measured = arrayfun(@(v) test(v), x);
% Plot number of high level operations performed by our function for each element of x
plot(x,measured, 'DisplayName','Measured','LineWidth', 2);
legend
% Our function
function k = test(n)
% Counter for operations
k = 0;
% Outer loop, same as for(i=1;i<=n;i=i*2)
i = 1;
while i <= n
% Inner loop
for j=1:1:i
% Count operations
k=k+1;
end
i = i*2;
end
end
And the result will be a plot of the three curves: the linear reference, the n*log(n) reference, and the measured operation counts.
Our complexity is worse than linear but not worse than O(nlogn), so we choose O(nlogn) as an upper bound.
Furthermore the upper bound should be:
O(n*log2(n))
The worst case is n being a power of two, i.e. n = 2^x for some natural number x.
The inner loop is evaluated at most n times per outer iteration, and the outer loop runs log2(n) (logarithm base 2) times.
int a = 0, i = N;
while (i > 0)
{
a += i;
i /= 2;
}
How do I calculate the time complexity of this code? Can anyone explain?
Time complexity is basically the number of times a loop will run. Big O is the worst-case complexity that a particular loop can have. For example, if linear search were used to find K, which is, say, the (n-1)th element of a 0-indexed array, the program would have to loop through the entire array to find the element. This means the loop has to run n times in the worst case, giving linear search a time complexity of O(n).
In the case of your problem, i is initially equal to N and is halved on each iteration. The loop keeps running while N/pow(2, m) > 0 (where m is the number of iterations performed so far), so it stops after roughly m = log2(N) iterations:
log2(N) = log2(pow(2, m)) ==> log2(N) = m
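If you want to check this numerically, here is a small Python sketch (illustrative, not from the question) that counts the iterations and compares them with floor(log2(N)) + 1:

from math import log2, floor

# Count how many times the halving loop body runs for a given N.
def iterations(N):
    i, count = N, 0
    while i > 0:
        i //= 2          # integer halving, same as i /= 2 on ints
        count += 1
    return count

for N in (1, 10, 100, 1024, 10**6):
    print(N, iterations(N), floor(log2(N)) + 1)   # the two columns agree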
These are the for loops whose time complexity I have to find, but I don't really understand how to calculate it.
for (int i = n; i > 1; i /= 3) {
for (int j = 0; j < n; j += 2) {
... ...
}
for (int k = 2; k < n; k = k * k) {
...
}
}
For the first loop, (int i = n; i > 1; i /= 3) keeps dividing i by 3, and once i is no longer greater than 1 the loop stops there, right?
But what is the time complexity of that? I think it is n, but I am not really sure.
The reason I think it is n is that if n is 30, then i will go 30, 10, 3, 1 and then the loop stops. It runs n times, doesn't it?
And for the last for loop, I think its time complexity is also n because what it does is
k starts at 2 and keeps multiplying itself by itself until k is greater than n.
So if n is 20, k will be like 2, 4, 16 then stop. It runs n times too.
I don't think I really understand this kind of question, because time complexity can be log(n) or n^2 etc., but all I see is n.
I don't really know when it comes to log or square. Or anything else.
Every for loop runs n times, I think. How can log or square be involved?
Can anyone help me understanding this? Please.
Since the three loops are independent of each other, we can analyse them separately and combine the results at the end (the two inner counts add, and the outer count multiplies their sum).
1. i loop
A classic logarithmic loop; there are countless similar examples on SO. Using the standard result for such loops and replacing the division constant:
The exact number of times that this loop will execute is ceil(log3(n)).
2. j loop
As you correctly figured, this runs O(n / 2) times;
The exact number is ceil(n / 2), which equals floor(n / 2) for even n.
3. k loop
Another classic known result - the log-log loop. The code just happens to be an exact replica of a previously answered SO post;
The exact number is ceil(log2(log2(n)))
Combining the above steps, the total number of inner-loop executions is ceil(log3(n)) * (floor(n / 2) + ceil(log2(log2(n)))), which is O(n log n).
Note that the j-loop overshadows the k-loop.
Numerical tests for confirmation
JavaScript code:
T = function(n) {
var m = 0;
for (var i = n; i > 1; i /= 3) {
for (var j = 0; j < n; j += 2)
m++;
for (var k = 2; k < n; k = k * k)
m++;
}
return m;
}
M = function(n) {
return Math.ceil(Math.log(n)/Math.log(3)) * (Math.floor(n/2) + Math.ceil(Math.log2(Math.log2(n))));
}
M(n) is what the math predicts that T(n) will exactly be (the number of inner loop executions):
n T(n) M(n)
-----------------------
100000 550055 550055
105000 577555 577555
110000 605055 605055
115000 632555 632555
120000 660055 660055
125000 687555 687555
130000 715055 715055
135000 742555 742555
140000 770055 770055
145000 797555 797555
150000 825055 825055
M(n) matches T(n) perfectly as expected. Plotting T(n) against n log n (the predicted time complexity) gives what I'd say is a convincing straight line.
tl;dr: I describe a couple of examples first, then analyze the complexity of OP's stated problem at the bottom of this post.
In short, the big O notation tells you something about how a program is going to perform if you scale the input.
Imagine a program (P0) that counts to 100. No matter how often you run the program, it's going to count to 100 about as fast each time (give or take). Obvious, right?
Now imagine a program (P1) that counts to a number that is variable, i.e. it takes a number as input and counts up to it. We call this variable n. Now each time P1 runs, its performance depends on the size of n. If we make n 100, P1 will run very quickly. If we make n equal to a googolplex, it's going to take a little longer.
Basically, the performance of P1 is dependent on how big n is, and this is what we mean when we say that P1 has time-complexity O(n).
Now imagine a program (P2) where we count to the square of n (that is, n*n) rather than to n itself. Clearly the performance of P2 is going to be worse than P1, because the number to which they count differs immensely (especially for larger n's (= scaling)). You'll know by intuition that P2's time-complexity is equal to O(n^2) if P1's complexity is equal to O(n).
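To make the contrast concrete, here is a minimal Python sketch (the function names are mine, purely illustrative) that counts the work done by a P1-style and a P2-style program; doubling n doubles the first count but quadruples the second:

# P1-style: count to n -> O(n) operations.
def count_to_n(n):
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

# P2-style: count to n*n -> O(n^2) operations.
def count_to_n_squared(n):
    ops = 0
    for _ in range(n * n):
        ops += 1
    return ops

for n in (10, 100, 1000):
    print(n, count_to_n(n), count_to_n_squared(n))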
Now consider a program (P3) that looks like this:
var length = input.Length;
for(var i = 0; i < length; i++) {
for (var j = 0; j < length; j++) {
Console.WriteLine($"Product is {input[i] * input[j]}");
}
}
There's no n to be found here, but as you might realise, this program still depends on an input, called input here. Simply because the program depends on some kind of input, we declare this input as n when we talk about time-complexity. If a program takes multiple inputs, we simply give those different names, so that a time-complexity could be expressed as e.g. O(n·n₂ + m·n₃), where this hypothetical program would take 4 inputs.
For P3, we can discover its time-complexity by first analyzing the number of different inputs, and then by analyzing in what way its performance depends on the input.
P3 has 3 variables that it's using, called length, i and j. The first line of code does a simple assignment, whose performance does not depend on any input, meaning the time-complexity of that line of code is O(1), i.e. constant time.
The second line of code is a for loop, implying we're going to do something that might depend on the length of something. And indeed we can tell that this first for loop (and everything in it) will be executed length times. If we increase the size of length, this line of code will do linearly more, thus this line of code's time complexity is O(length) (called linear time).
The next line of code will take O(length) time again, following the same logic as before; however, since we execute it every time the for loop around it runs, the time complexity is multiplied by it, which results in O(length) * O(length) = O(length^2).
The insides of the second for loop do not depend on the size of the input (even though the input is necessary) because indexing into the input (for arrays!!) will not become slower if we increase the size of the input. This means that the insides will be constant time = O(1). Since this runs inside the other for loop, we again have to multiply it to obtain the total time complexity of the nested lines of code: outside for-loops * current block of code = O(length^2) * O(1) = O(length^2).
The total time-complexity of the program is just the sum of everything we've calculated: O(1) + O(length^2) = O(length^2) = O(n^2). The first line of code was O(1) and the for loops were analyzed to be O(length^2). You will notice 2 things:
1. We rename length to n: we do this because we express time-complexity based on generic parameters and not on the ones that happen to live within the program.
2. We removed O(1) from the equation: we do this because we're only interested in the biggest (= fastest growing) terms. Since O(n^2) is way 'bigger' than O(1), the time-complexity is defined to be equal to it (this only works like that for terms (e.g. split by +), not for factors (e.g. split by *)).
OP's problem
Now we can consider your program (P4), which is a little trickier because the variables within the program are defined a little less cleanly than the ones in my examples.
for (int i = n; i > 1; i /= 3) {
for (int j = 0; j < n; j += 2) {
... ...
}
for (int k = 2; k < n; k = k * k) {
...
}
}
If we analyze we can say this:
The outer for loop is executed O(log3(n)) times, where log3 is the base-3 logarithm of its input: since i is divided by 3 every loop, about log3(n) divisions are needed before i drops to 1 or below.
The second for loop is linear in time because its body is executed O(n / 2) times, j being increased by 2 rather than the 1 which would be 'normal'. Since we know that O(n/2) = O(n), we can say that this for loop is executed O(log3(n)) * O(n) = O(n * log(n)) times (first for * the nested for).
The third for is also nested in the first for, but since it is not nested in the second for, we're not going to multiply it by the second one (obviously, because it is only executed each time the first for is executed). Here, k is bound by n, but since it is increased by a factor of itself each time (we square it), we cannot say it is linear: squaring repeatedly reaches n in about log2(log2(n)) steps. Deducing this is easy if you understand how log works; if you don't get this, you need to understand that first. In any case, since this loop runs O(log2(log2(n))) times, the total contribution of the third for is O(log3(n)) * O(log2(log2(n))) = O(log(n) * log(log(n))).
The total time-complexity of the program is now calculated as the sum of the different sub-complexities: O(n * log(n)) + O(log(n) * log(log(n)))
As we saw before, we only care about the fastest growing term when we talk about big O notation, so we say that the time-complexity of your program is equal to O(n * log(n)).
I have a sparse array: term_doc
its size is 622256x715 of Float64. It is very sparse:
Of its ~444,913,040 cells, normally only about 22,215 are nonzero.
Of the 622256 rows only 4,699 are occupied
though of the 715 columns all are occupied.
The operation I would like to perform can be described as returning the row-normalized and column-normalized versions of this matrix.
The naive non-sparse version I wrote is:
function doUnsparseWay()
gc() # Force garbage collection before I start (and periodically during). This uses a lot of memory
term_doc
N = term_doc./sum(term_doc,1)
println("N done")
gc()
P = term_doc./sum(term_doc,2)
println("P done")
gc()
N[isnan(N)] = 0.0
P[isnan(P)] = 0.0
N,P,term_doc
end
Running this:
> @time N,P,term_doc = doUnsparseWay()
outputs:
N done
P done
elapsed time: 30.97332475 seconds (14466 MB allocated, 5.15% gc time in 13 pauses with 3 full sweep)
It is fairly simple.
It chews memory, and will crash if the garbage collection does not occur at the right times (Thus I call it manually).
But it is fairly fast
I wanted to get it to work on the sparse matrix.
So as not to chew my memory out,
and because logically it is a faster operation -- less cells need operating on.
I followed suggestions from this post and from the performance page of the docs.
function doSparseWay()
term_doc::SparseMatrixCSC{Float64,Int64}
N= spzeros(size(term_doc)...)
N::SparseMatrixCSC{Float64,Int64}
for (doc,total_terms::Float64) in enumerate(sum(term_doc,1))
if total_terms == 0
continue
end
@fastmath @inbounds N[:,doc] = term_doc[:,doc]./total_terms
end
println("N done")
P = spzeros(size(term_doc)...)'
P::SparseMatrixCSC{Float64,Int64}
gfs = sum(term_doc,2)[:]
gfs::Array{Float64,1}
nterms = size(term_doc,1)
nterms::Int64
term_doc = term_doc'
@inbounds @simd for term in 1:nterms
@fastmath @inbounds P[:,term] = term_doc[:,term]/gfs[term]
end
println("P done")
P=P'
N[isnan(N)] = 0.0
P[isnan(P)] = 0.0
N,P,term_doc
end
It never completes.
It gets up to outputting "N Done",
but never outputs "P Done".
I have left it running for several hours.
How can I optimize it so it can complete in reasonable time?
Or if this is not possible, explain why.
First, you're making term_doc a global variable, which is a big problem for performance. Pass it as an argument, doSparseWay(term_doc::SparseMatrixCSC). (The type annotation at the beginning of your function does not do anything useful.)
You want to use an approach similar to the answer by walnuss:
function doSparseWay(term_doc::SparseMatrixCSC)
I, J, V = findnz(term_doc)
normI = sum(term_doc, 1)
normJ = sum(term_doc, 2)
NV = similar(V)
PV = similar(V)
for idx = 1:length(V)
NV[idx] = V[idx]/normI[J[idx]]
PV[idx] = V[idx]/normJ[I[idx]]
end
m, n = size(term_doc)
sparse(I, J, NV, m, n), sparse(I, J, PV, m, n), term_doc
end
This is a general pattern: when you want to optimize something for sparse matrices, extract the I, J, V and perform all your computations on V.
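For comparison, here is the same I, J, V pattern sketched in Python/SciPy (a rough illustration of the idea, not a translation of the answer above; scipy.sparse.find plays the role of findnz):

import numpy as np
from scipy import sparse

# Normalize a sparse matrix by column sums (N) and row sums (P),
# touching only the stored entries.
def normalize_sparse(term_doc):
    I, J, V = sparse.find(term_doc)                      # row idx, col idx, values
    col_sums = np.asarray(term_doc.sum(axis=0)).ravel()  # one sum per column
    row_sums = np.asarray(term_doc.sum(axis=1)).ravel()  # one sum per row
    NV = V / col_sums[J]                                 # column-normalized values
    PV = V / row_sums[I]                                 # row-normalized values
    m, n = term_doc.shape
    N = sparse.coo_matrix((NV, (I, J)), shape=(m, n)).tocsc()
    P = sparse.coo_matrix((PV, (I, J)), shape=(m, n)).tocsc()
    return N, P

A = sparse.random(6, 4, density=0.3, format="csc")       # tiny random example
N, P = normalize_sparse(A)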
I just wanted to make sure I'm going in the right direction. I want to find the max value of an array by recursively splitting it and finding the max of each separate part. Because I am splitting it, it would be 2*T(n/2). And because I have to make a comparison at the end between the 2 parts, I have T(1).
So would my recurrence relation be like this:
T(n) = { 2*T(n/2) + 1, when n >= 2; T(1), when n = 1 }
and therefore my complexity would be Theta(n lg n)?
The formula you composed seems about right, but your analysis isn't perfect.
T = 2*T(n/2) + 1 = 2*(2*T(n/4) + 1) + 1 = ...
For the i-th iteration you'll get:
Ti(n) = 2^i*T(n/2^i) + i
Now what you want to know is for which i the term n/2^i equals 1 (or just about any constant, if you like), so that you reach the end condition of n = 1.
That would be the solution of n/2^I = 1 -> I = log2(n). Plug it into the equation for Ti and you get:
TI(n) = 2^log2(n)*T(n/2^log2(n)) + log2(n) = n*1+log2(n) = n + log2(n)
and you get T(n) = O(n + log2(n)) = O(n) (just like @bdares said)
No, no... you are taking O(1) time for each recursion.
How many are there?
There are N leaves, so you know it's at least O(N).
How many do you need to compare to find the absolute maximum? That's O(log(N)).
Add them together, don't multiply. O(N+log(N)) is your time complexity.
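A small Python sketch (illustrative, the names are mine) makes this countable: the recursion has N leaves and performs N - 1 comparisons at the internal nodes, so the total work is O(N), consistent with O(N + log(N)) = O(N):

# Recursive max over arr[lo:hi]; counter[0] tracks comparisons.
def recursive_max(arr, lo, hi, counter):
    if hi - lo == 1:
        return arr[lo]
    mid = (lo + hi) // 2
    left = recursive_max(arr, lo, mid, counter)
    right = recursive_max(arr, mid, hi, counter)
    counter[0] += 1                    # one comparison per merge step
    return left if left >= right else right

for N in (8, 64, 1024):
    counter = [0]
    recursive_max(list(range(N)), 0, N, counter)
    print(N, counter[0])               # prints N - 1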