What would be the time complexity of this recursive algorithm?

How do you calculate the complexity of a slightly more complicated recursive algorithm like this one?
In this case, what would be the complexity of something(0, n)?
void something(int b, int e) {
    if (b >= e) return;
    int a = 0;
    for (int i = b; i < e; i++)
        a++;
    int k = (b + e) / 2;
    something(b, k);
    something(k + 1, e);
}
I tried to analyze this algorithm and I think its complexity is n*log(n), but I still can't produce a formal proof.

Initially, the loop runs (e-b) times, and then the function calls itself twice, each time on a range half the size.
So a loop of ((e-b)/2) iterations runs twice: once for (b, (b+e)/2) and once for ((b+e)/2+1, e).
Likewise, both of those calls again call the function twice, halving the range each time.
So ((e-b)/4) iterations run 4 times, and so on.
The height of the recursion tree is log(e-b) (each node has 2 children).
So, time complexity = (e-b) + 2*((e-b)/2) + 4*((e-b)/4) + ..... (log(e-b) levels)
This expression evaluates to (e-b)*log(e-b).
Therefore, time complexity = O(n log n).
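More formally, writing n = e - b for the size of the current range, the pattern above is the standard divide-and-conquer recurrence:

T(n) = 2 T(n/2) + Θ(n),   T(1) = Θ(1)

By case 2 of the Master theorem (a = 2, b = 2, f(n) = Θ(n) = Θ(n^(log_b a))), this gives T(n) ∈ Θ(n log n), matching the level-by-level sum above.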


Calculating the time and space complexity of function f2

I have this function and I am trying to calculate its time and space complexity. I got an answer, but I am not sure if it's correct, so I'd like to know whether it is correct or not.
void f1(int n)
{
    int i, j, k = 0;
    for (int i = n/2; i <= n; i++)
    {
        for (int j = 2; j <= i; j *= 2)
        {
            k += n;
        }
    }
    free(malloc(k));
}
My results:
For the outer for loop it's pretty straightforward O(n).
For the inner loop, we get about log(i) iterations for each value of i, and since i is between n/2 and n, that is basically log(n) per outer iteration.
And so the total time complexity is O(n*log(n)).
For space complexity, it's O(k) once k reaches its final value; since k is increased by n roughly n*log(n) times, after all iterations k ≈ n^2*log(n), and so the space complexity is O((n^2)*log(n)).
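Written out as a rough count (my own estimate, treating the inner loop as running about log2(i) times for each i):

number of times k += n runs  =  Sum { floor(log2(i)), for i = n/2..n }  ≈  (n/2) * log2(n)
final value of k             ≈  n * (n/2) * log2(n)  ∈  Θ(n^2 * log(n))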
Is this correct?

How to identify time and space complexity of recursive backtracking algorithms with step-by-step analysis

Background Information: I solved the N-Queens problem with the C# algorithm below, which returns the total number of solutions given the board of size n x n. It works, but I do not understand why this would be O(n!) time complexity, or if it is a different time complexity. I am also unsure of the space used in the recursion stack (but am aware of the extra space used in the boolean jagged array). I cannot seem to wrap my mind around understanding the time and space complexity of such solutions. Having this understanding would be especially useful during technical interviews, for complexity analysis without the ability to run code.
Preliminary Investigation: I have read several SO posts where the author directly asks the community to provide the time and space complexity of their algorithms. Rather than doing the same and asking for the quick and easy answers, I would like to understand how to calculate the time and space complexity of backtracking algorithms so that I can do so moving forward.
I have also read in numerous locations within and outside of SO that generally, recursive backtracking algorithms are O(n!) time complexity since at each of the n iterations, you look at one less item: n, then n - 1, then n - 2, ... 1. However, I have not found any explanation as to why this is the case. I also have not found any explanation for the space complexity of such algorithms.
Question: Can someone please explain the step-by-step problem-solving approach to identify time and space complexities of recursive backtracking algorithms such as these?
public class Solution {
    public int NumWays { get; set; }

    public int TotalNQueens(int n) {
        if (n <= 0)
        {
            return 0;
        }
        NumWays = 0;
        bool[][] board = new bool[n][];
        for (int i = 0; i < board.Length; i++)
        {
            board[i] = new bool[n];
        }
        Solve(n, board, 0);
        return NumWays;
    }

    private void Solve(int n, bool[][] board, int row)
    {
        if (row == n)
        {
            // Terminate since we've hit the bottom of the board
            NumWays++;
            return;
        }
        for (int col = 0; col < n; col++)
        {
            if (CanPlaceQueen(board, row, col))
            {
                board[row][col] = true;  // Place queen
                Solve(n, board, row + 1);
                board[row][col] = false; // Remove queen
            }
        }
    }

    private bool CanPlaceQueen(bool[][] board, int row, int col)
    {
        // We only need to check diagonal-up-left, diagonal-up-right, and straight up.
        // This is because we should not have a queen in a later row anywhere, and we should not have a queen in the same row.
        for (int i = 1; i <= row; i++)
        {
            if (row - i >= 0 && board[row - i][col]) return false;
            if (col - i >= 0 && board[row - i][col - i]) return false;
            if (col + i < board[0].Length && board[row - i][col + i]) return false;
        }
        return true;
    }
}
First of all, it's definitely not true that recursive backtracking algorithms are all in O(n!): of course it depends on the algorithm, and it could well be worse. Having said that, the general approach is to write down a recurrence relation for the time complexity T(n), and then try to solve it or at least characterize its asymptotic behaviour.
Step 1: Make the question precise
Are we interested in the worst-case, best-case or average-case? What are the input parameters?
In this example, let us assume we want to analyze the worst-case behaviour, and the relevant input parameter is n in the Solve method.
In recursive algorithms, it is useful (though not always possible) to find a parameter that starts off with the value of the input parameter and then decreases with every recursive call until it reaches the base case.
In this example, we can define k = n - row. So with every recursive call, k is decremented starting from n down to 0.
Step 2: Annotate and strip down the code
Now we look at the code, strip it down to just the relevant bits, and annotate it with complexities.
We can boil your example down to the following:
private void Solve(int n, bool[][] board, int row)
{
    if (row == n)                           // base case
    {
        [...]                               // O(1)
        return;
    }
    for (...)                               // loop n times
    {
        if (CanPlaceQueen(board, row, col)) // O(k)
        {
            [...]                           // O(1)
            Solve(n, board, row + 1);       // recurse on k - 1 = n - (row + 1)
            [...]                           // O(1)
        }
    }
}
Step 3: Write down the recurrence relation
The recurrence relation for this example can be read off directly from the code:
T(0) = 1 // base case
T(k) = k * // loop n times
(O(k) + // if (CanPlaceQueen(...))
T(k-1)) // Solve(n, board, row + 1)
= k T(k-1) + O(k)
Step 4: Solve the recurrence relation
For this step, it is useful to know a few general forms of recurrence relations and their solutions. The relation above is of the general form
T(n) = n T(n-1) + f(n)
which has the exact solution
T(n) = n!(T(0) + Sum { f(i)/i!, for i = 1..n })
which we can easily prove by induction:
T(n) = n T(n-1) + f(n) // by def.
= n((n-1)!(T(0) + Sum { f(i)/i!, for i = 1..n-1 })) + f(n) // by ind. hypo.
= n!(T(0) + Sum { f(i)/i!, for i = 1..n-1 } + f(n)/n!)
= n!(T(0) + Sum { f(i)/i!, for i = 1..n }) // qed
Now, we don't need the exact solution; we just need the asymptotic behaviour when n approaches infinity.
So let's look at the infinite series
Sum { f(i)/i!, for i = 1..infinity }
In our case, f(n) = O(n), but let's look at the more general case where f(n) is an arbitrary polynomial in n (because it will turn out that it really doesn't matter). It is easy to see that the series converges, using the ratio test:
L = lim { | (f(n+1)/(n+1)!) / (f(n)/n!) |, for n -> infinity }
= lim { | f(n+1) / (f(n)(n+1)) |, for n -> infinity }
= 0 // if f is a polynomial
< 1, and hence the series converges
Therefore, for n -> infinity,
T(n) -> n!(T(0) + Sum { f(i)/i!, for i = 1..infinity })
= C n!, for some constant C, if f is a polynomial
Step 5: The result
Since T(n)/n! approaches the constant C, we can write
T(n) ∈ Θ(n!)
which is a tight bound on the worst-case complexity of your algorithm.
In addition, we've proven that it doesn't matter how much work you do within the for-loop in addition to the recursive calls: as long as it's polynomial, the complexity stays Θ(n!) (for this form of recurrence relation). (In bold because there are lots of SO answers that get this wrong.)
For a similar analysis with a different form of recurrence relation, see here.
Update
I made a mistake in the annotation of the code (I'll leave it because it is still instructive). Actually, both the loop and the work done within the loop do not depend on k = n - row but on the initial value n (let's call it n0 to make it clear).
So the recurrence relation becomes
T(k) = n0 T(k-1) + n0
for which the exact solution is
T(k) = n0^k (T(0) + Sum { n0^(1-i), for i = 1..k })
But since the top-level call has k = n0, we have
T(n0) = n0^n0 (T(0) + Sum { n0^(1-i), for i = 1..n0 })
∈ Θ(n0^n0)
which is a bit worse than Θ(n0!).
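To see this concretely, here is a minimal numerical check (my own sketch, not part of the original answer): it evaluates the corrected recurrence T(k) = n0 T(k-1) + n0 directly and prints the ratio T(n0) / n0^n0, which settles near a small constant, as the closed form predicts.

public class RecurrenceCheck {
    public static void main(String[] args) {
        for (int n0 = 2; n0 <= 12; n0++) {
            double t = 1.0;                      // T(0) = 1
            for (int k = 1; k <= n0; k++) {
                t = n0 * t + n0;                 // T(k) = n0 * T(k-1) + n0
            }
            double ratio = t / Math.pow(n0, n0); // approaches a constant (around 2)
            System.out.println("n0 = " + n0 + ", T(n0)/n0^n0 = " + ratio);
        }
    }
}

Seeing the ratio stabilise is just another way of saying T(n0) ∈ Θ(n0^n0).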

Time complexity of 2 nested loops

I want to detect the time complexity for this block of code that consists of 2 nested loops:
function(x) {
    for (var i = 4; i <= x; i++) {
        for (var k = 0; k <= ((x * x) / 2); k++) {
            // data processing
        }
    }
}
I guess the time complexity here is O(n^3), could you please correct me if I am wrong with a brief explanation?
The time complexity here is O(n^3), taking n = x, so your guess is right. The outer loop increments i by one up to x, so it runs O(x) times. The inner loop, however, is not bounded by x but by (x * x) / 2, so it runs O(x^2) times for every single outer iteration. Nesting the two gives O(x) * O(x^2) = O(x^3). The complexity does depend on x, because x is the input size here; Big O only ignores constant factors such as the 1/2 in the inner bound.
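As a worked count (exact iteration counts, assuming x >= 4 and taking n = x):

outer iterations: x - 3              (i = 4..x)
inner iterations: x^2/2 + 1          (k = 0..x^2/2)
total work ≈ (x - 3) * (x^2/2 + 1) ∈ Θ(x^3)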

How is it possible that O(1) constant time code is slower than O(n) linear time code?

"...It is very possible for O(N) code to run faster than O(1) code for specific inputs. Big O just describes the rate of increase."
According to my understanding:
O(N) - Time taken for an algorithm to run based on the varying values of input N.
O(1) - Constant time taken for the algorithm to execute irrespective of the size of the input e.g. int val = arr[10000];
Can someone help me understand based on the author's statement?
Can O(N) code run faster than O(1) code?
What are the specific inputs the author is alluding to?
Rate of increase of what?
O(n) linear-time code can absolutely be faster than O(1) constant-time code. The reason is that constant amounts of work are completely ignored in Big O, which is a measure of how fast an algorithm's cost increases as the input size n increases, and nothing else. It's a measure of growth rate, not running time.
Here's an example:
int constant(int[] arr) {
    int total = 0;
    for (int i = 0; i < 10000; i++) {
        total += arr[0];
    }
    return total;
}

int linear(int[] arr) {
    int total = 0;
    for (int i = 0; i < arr.length; i++) {
        total += arr[i];
    }
    return total;
}
In this case, constant does a lot of work, but it's fixed work that will always be the same regardless of how large arr is. linear, on the other hand, appears to have few operations, but those operations are dependent on the size of arr.
In other words, as arr increases in length, constant's performance stays the same, but linear's running time increases linearly in proportion to its argument array's size.
Call the two functions with a single-item array like
constant(new int[] {1});
linear(new int[] {1});
and it's clear that constant runs slower than linear.
But call them like:
int[] arr = new int[10000000];
constant(arr);
linear(arr);
Which runs slower?
After you've thought about it, run the code given various inputs of n and compare the results.
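If you want to do the comparison in code, here is a rough timing harness (my own sketch; it repeats the two methods above as static functions so it can run standalone, and the exact numbers will depend on your machine and JIT warm-up):

public class BigOTiming {
    static int constant(int[] arr) {
        int total = 0;
        for (int i = 0; i < 10000; i++) {
            total += arr[0];              // fixed 10,000 iterations regardless of arr.length
        }
        return total;
    }

    static int linear(int[] arr) {
        int total = 0;
        for (int i = 0; i < arr.length; i++) {
            total += arr[i];              // one iteration per element
        }
        return total;
    }

    static void time(String label, int[] arr, java.util.function.ToIntFunction<int[]> f) {
        long start = System.nanoTime();
        int result = f.applyAsInt(arr);   // printing the result keeps the call from being optimised away
        System.out.println(label + ": " + (System.nanoTime() - start) + " ns (result " + result + ")");
    }

    public static void main(String[] args) {
        time("constant, n = 1       ", new int[] {1}, BigOTiming::constant);
        time("linear,   n = 1       ", new int[] {1}, BigOTiming::linear);

        int[] big = new int[10_000_000];
        time("constant, n = 10000000", big, BigOTiming::constant);
        time("linear,   n = 10000000", big, BigOTiming::linear);
    }
}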
Just to show that this phenomenon of run time != Big O isn't just for constant-time functions, consider:
void exponential(int n) throws InterruptedException {
    for (int i = 0; i < Math.pow(2, n); i++) {
        Thread.sleep(1);
    }
}

void linear(int n) throws InterruptedException {
    for (int i = 0; i < n; i++) {
        Thread.sleep(10);
    }
}
Exercise (using pen and paper): up to which n does exponential run faster than linear?
Consider the following scenario:
Op1) Given an array of length n where n>=10, print the first ten elements twice on the console. --> This is a constant time (O(1)) operation, because for any array of size>=10, it will execute 20 steps.
Op2) Given an array of length n where n>=10, find the largest element in the array. --> This is a linear time (O(N)) operation, because for any array, it will execute N steps.
Now if the array size is between 10 and 20 (exclusive), Op1 will be slower than Op2. But say we take an array of size > 20 (e.g., size = 1000): Op1 will still take 20 steps to complete, but Op2 will take 1000 steps to complete.
That's why big-O notation is about the growth (rate of increase) of an algorithm's cost, not its absolute running time.
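A minimal sketch of the two operations (the method names are my own, just to make the scenario concrete):

// Op1: constant time, always 20 prints no matter how long arr is (requires arr.length >= 10).
static void printFirstTenTwice(int[] arr) {
    for (int pass = 0; pass < 2; pass++) {
        for (int i = 0; i < 10; i++) {
            System.out.println(arr[i]);
        }
    }
}

// Op2: linear time, one comparison per element, so N steps for an array of length N.
static int findLargest(int[] arr) {
    int max = arr[0];
    for (int i = 1; i < arr.length; i++) {
        if (arr[i] > max) {
            max = arr[i];
        }
    }
    return max;
}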

Determine the time complexity of the algorithms

I have just started to learn time complexity, but I don't really get the idea. Could you help with these questions and explain the way of thinking?
int Fun1(int n)
{
    for (i = 0; i < n; i += 1) {
        for (j = 0; j < i; j += 1) {
            for (k = j; k < i; k += 1) {
                // do something
            }
        }
    }
}
void Fun2(int n){
    i = 0
    while (i < n) {
        for (j = 0; j < i; j += 1) {
            k = n
            while (k > j) {
                k = k/2
            }
            k = j
            while (k > 1) {
                k = k/2
            }
        }
    }
}
int Fun3(int n){
    for (i = 0; i < n; i += 1) {
        print("*")
    }
    if (n <= 1) {
        print("*")
        return
    }
    if (n % 2 != 0) {
        Fun3(n-1)
    }
    else {
        Fun3(n/2)
    }
}
For function 1, I think it's Theta(n^3) because it runs at most n*n*n times, but I am not sure how to prove this.
For the second, I think it's Theta(n^2 log(n)), but I am not sure.
Could you help, please?
First a quick note: in Fun2(n) there should be an i++ before the end of the outer while loop, otherwise it never terminates. Anyway, time complexity matters because it describes the efficiency of your algorithms. In this case you have these 3 functions:
Fun1(n)
In this function you have three nested for loops, and each loop iterates at most n times over the given input, so each level contributes O(n). The middle loop runs up to i < n times for each iteration of the outer loop, and the innermost loop runs up to i - j < n times for each iteration of the middle loop. The resulting complexity, as you correctly said, is O(n) * O(n) * O(n) = O(n^3); the worked sum below shows this bound is in fact tight.
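Summing the exact number of innermost iterations (assuming the innermost loop increments k, as the analysis does):

Sum { Sum { (i - j), for j = 0..i-1 }, for i = 0..n-1 }
  = Sum { i(i+1)/2, for i = 0..n-1 }
  ≈ n^3 / 6
  ∈ Θ(n^3)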
Fun2(n)
This function has a while loop that iterates n times over the given input (once the missing i++ is added), so the outer loop complexity is O(n). As before, we have an inner for loop that iterates up to n times on each cycle of the outer loop. So far we have O(n) * O(n), which is O(n^2). Inside the for loop we have two while loops that differ from the other loops: they do not visit each element in a range, but divide the range by 2 at each iteration. For example, from 0 to 31 we have 0-31 -> 0-15 -> 0-7 -> 0-3 -> 0-1.
As you know, the number of iterations of such a halving loop is logarithmic, log(n), so we end up with O(n^2) * O(log(n)) = O(n^2 log(n)) as the time complexity.
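Spelled out (a rough count, ignoring boundary cases like j = 0), for each pair (i, j) the two halving loops together cost about log(n):

first while:  k = n, n/2, n/4, ... until k <= j   ->  about log2(n/j) iterations
second while: k = j, j/2, j/4, ... until k <= 1   ->  about log2(j) iterations
together:     log2(n/j) + log2(j) = log2(n)

Summing log2(n) over at most n * n pairs (i, j) gives O(n^2 log(n)).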
Fun3(n)
In this function we have a for loop with no inner loops, but then we have a recursive call. The complexity of the for loop, as we know, is O(n), but how many times, and with which arguments, will the function call itself?
If we take a small number (like 6) as an example, the first call does 6 iterations, and since 6 mod 2 = 0 it calls Fun3(6/2) = Fun3(3).
Fun3(3) does 3 iterations and then recursively calls Fun3(2), since 3 mod 2 != 0; Fun3(2) does 2 iterations and calls Fun3(1), which hits the base case.
What do we have here? The argument drops by at most 1 and is then halved, so it at least halves every two calls, and the work of each call equals its argument.
The complexity result is O(n), as worked out below.
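Writing the bound down explicitly (each call costs its argument; the argument drops by at most 1 and is then halved):

T(n) <= n + T(n-1)   if n is odd
T(n) <= n + T(n/2)   if n is even
T(n) <= n + n + n/2 + n/2 + n/4 + n/4 + ... <= 4n

Together with the lower bound of n from the first loop alone, this gives T(n) ∈ Θ(n).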
Note that when we calculate time complexity we ignore the constant coefficients, since they are not relevant. The functions we usually consider, especially in CS, are:
O(1), O(log(n)), O(n), O(n^a) with a > 1, O(n!)
and we combine and simplify them in order to know which algorithm has the best (lowest) time complexity, to get an idea of which algorithm should be used.