Background Information: I solved the N-Queens problem with the C# algorithm below, which returns the total number of solutions given the board of size n x n. It works, but I do not understand why this would be O(n!) time complexity, or if it is a different time complexity. I am also unsure of the space used in the recursion stack (but am aware of the extra space used in the boolean jagged array). I cannot seem to wrap my mind around understanding the time and space complexity of such solutions. Having this understanding would be especially useful during technical interviews, for complexity analysis without the ability to run code.
Preliminary Investigation: I have read several SO posts where the author directly asks the community to provide the time and space complexity of their algorithms. Rather than doing the same and asking for the quick and easy answers, I would like to understand how to calculate the time and space complexity of backtracking algorithms so that I can do so moving forward.
I have also read in numerous locations within and outside of SO that generally, recursive backtracking algorithms are O(n!) time complexity since at each of the n iterations, you look at one less item: n, then n - 1, then n - 2, ... 1. However, I have not found any explanation as to why this is the case. I also have not found any explanation for the space complexity of such algorithms.
Question: Can someone please explain the step-by-step problem-solving approach to identify time and space complexities of recursive backtracking algorithms such as these?
public class Solution {
    public int NumWays { get; set; }

    public int TotalNQueens(int n) {
        if (n <= 0)
        {
            return 0;
        }
        NumWays = 0;
        bool[][] board = new bool[n][];
        for (int i = 0; i < board.Length; i++)
        {
            board[i] = new bool[n];
        }
        Solve(n, board, 0);
        return NumWays;
    }

    private void Solve(int n, bool[][] board, int row)
    {
        if (row == n)
        {
            // Terminate since we've hit the bottom of the board
            NumWays++;
            return;
        }
        for (int col = 0; col < n; col++)
        {
            if (CanPlaceQueen(board, row, col))
            {
                board[row][col] = true;  // Place queen
                Solve(n, board, row + 1);
                board[row][col] = false; // Remove queen
            }
        }
    }

    private bool CanPlaceQueen(bool[][] board, int row, int col)
    {
        // We only need to check diagonal-up-left, diagonal-up-right, and straight up.
        // This is because no queen has been placed in a later row yet, and we never
        // place two queens in the same row.
        for (int i = 1; i <= row; i++)
        {
            if (row - i >= 0 && board[row - i][col]) return false;
            if (col - i >= 0 && board[row - i][col - i]) return false;
            if (col + i < board[0].Length && board[row - i][col + i]) return false;
        }
        return true;
    }
}
First of all, it's definitely not true that recursive backtracking algorithms are all in O(n!): of course it depends on the algorithm, and it could well be worse. Having said that, the general approach is to write down a recurrence relation for the time complexity T(n), and then try to solve it or at least characterize its asymptotic behaviour.
Step 1: Make the question precise
Are we interested in the worst-case, best-case or average-case? What are the input parameters?
In this example, let us assume we want to analyze the worst-case behaviour, and the relevant input parameter is n in the Solve method.
In recursive algorithms, it is useful (though not always possible) to find a parameter that starts off with the value of the input parameter and then decreases with every recursive call until it reaches the base case.
In this example, we can define k = n - row. So with every recursive call, k is decremented starting from n down to 0.
Step 2: Annotate and strip down the code
Now we look at the code, strip it down to just the relevant bits, and annotate it with complexities.
We can boil your example down to the following:
private void Solve(int n, bool[][] board, int row)
{
    if (row == n) // base case
    {
        [...]     // O(1)
        return;
    }
    for (...)     // loop n times
    {
        if (CanPlaceQueen(board, row, col)) // O(k)
        {
            [...]                     // O(1)
            Solve(n, board, row + 1); // recurse on k - 1 = n - (row + 1)
            [...]                     // O(1)
        }
    }
}
Step 3: Write down the recurrence relation
The recurrence relation for this example can be read off directly from the code:
T(0) = 1            // base case
T(k) = k *          // loop n times
       (O(k) +      // if (CanPlaceQueen(...))
        T(k-1))     // Solve(n, board, row + 1)
     = k T(k-1) + O(k)
Step 4: Solve the recurrence relation
For this step, it is useful to know a few general forms of recurrence relations and their solutions. The relation above is of the general form
T(n) = n T(n-1) + f(n)
which has the exact solution
T(n) = n!(T(0) + Sum { f(i)/i!, for i = 1..n })
which we can easily prove by induction:
T(n) = n T(n-1) + f(n)                                          // by def.
     = n((n-1)!(T(0) + Sum { f(i)/i!, for i = 1..n-1 })) + f(n) // by ind. hypo.
     = n!(T(0) + Sum { f(i)/i!, for i = 1..n-1 } + f(n)/n!)
     = n!(T(0) + Sum { f(i)/i!, for i = 1..n })                 // qed
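As a quick sanity check (a small Java sketch of my own, not part of the proof), we can iterate the recurrence with f(n) = n and compare it against the closed form; the two columns should agree up to floating-point rounding:

public class RecurrenceCheck {
    public static void main(String[] args) {
        double t = 1.0;          // T(0) = 1
        double factorial = 1.0;  // n!
        double sum = 0.0;        // Sum { f(i)/i!, for i = 1..n }
        for (int n = 1; n <= 10; n++) {
            t = n * t + n;                  // T(n) = n T(n-1) + f(n), with f(n) = n
            factorial *= n;
            sum += (double) n / factorial;  // add f(n)/n!
            double closedForm = factorial * (1.0 + sum);
            System.out.printf("n=%2d  recurrence=%12.0f  closed form=%12.0f%n",
                    n, t, closedForm);
        }
    }
}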
Now, we don't need the exact solution; we just need the asymptotic behaviour when n approaches infinity.
So let's look at the infinite series
Sum { f(i)/i!, for i = 1..infinity }
In our case, f(n) = O(n), but let's look at the more general case where f(n) is an arbitrary polynomial in n (because it will turn out that it really doesn't matter). It is easy to see that the series converges, using the ratio test:
L = lim { | (f(n+1)/(n+1)!) / (f(n)/n!) |, for n -> infinity }
  = lim { | f(n+1) / (f(n)(n+1)) |, for n -> infinity }
  = 0    // if f is a polynomial
  < 1, and hence the series converges
Therefore, for n -> infinity,
T(n) -> n!(T(0) + Sum { f(i)/i!, for i = 1..infinity })
     = C n!, for some constant C, if f is a polynomial
Step 5: The result
Since T(n) approaches C n! for a constant C, we can write
T(n) ∈ Θ(n!)
which is a tight bound on the worst-case complexity of your algorithm.
In addition, we've proven that it doesn't matter how much work you do within the for-loop in addition to the recursive calls: as long as it's polynomial, the complexity stays Θ(n!) (for this form of recurrence relation). This is worth emphasizing, because there are lots of SO answers that get it wrong.
For a similar analysis with a different form of recurrence relation, see here.
Update
I made a mistake in the annotation of the code (I'll leave it as is, because it is still instructive). Actually, neither the loop nor the work done within the loop depends on k = n - row; both depend on the initial value of n (let's call it n0 to make that clear).
So the recurrence relation becomes
T(k) = n0 T(k-1) + n0
for which the exact solution is
T(k) = n0^k (T(0) + Sum { n0^(1-i), for i = 1..k })
But since initially n0 = k, we have
T(k) = k^k (T(0) + Sum { n0^(1-i), for i = 1..k })
∈ Θ(k^k)
which is a bit worse than Θ(k!).
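To get a feel for how loose this worst-case bound is in practice, one can instrument the solver with a call counter. The following is a hypothetical Java port of the Solve/CanPlaceQueen methods above (names and structure are mine); the measured counts stay far below n^n, and even below n!, because pruning cuts most branches early:

public class QueensCount {
    static long calls = 0;

    public static void main(String[] args) {
        for (int n = 4; n <= 10; n++) {
            calls = 0;
            solve(n, new boolean[n][n], 0);
            System.out.printf("n=%2d  calls=%,10d  n!=%,12d%n", n, calls, factorial(n));
        }
    }

    static void solve(int n, boolean[][] board, int row) {
        calls++;                       // count every recursive call
        if (row == n) return;
        for (int col = 0; col < n; col++) {
            if (canPlaceQueen(board, row, col)) {
                board[row][col] = true;
                solve(n, board, row + 1);
                board[row][col] = false;
            }
        }
    }

    static boolean canPlaceQueen(boolean[][] board, int row, int col) {
        for (int i = 1; i <= row; i++) {
            if (board[row - i][col]) return false;
            if (col - i >= 0 && board[row - i][col - i]) return false;
            if (col + i < board.length && board[row - i][col + i]) return false;
        }
        return true;
    }

    static long factorial(int n) {
        long f = 1;
        for (int i = 2; i <= n; i++) f *= i;
        return f;
    }
}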
For those Big O experts out there... I'm trying to deduce the time complexity of a function with two recursive calls, where the input is halved each time:
function myFunc(n) {
    let something = 0;   // declared here so it is in scope for the return below
    if (n > 1) {
        for (let i = 0; i < n; i++) {
            // Do some linear stuff in here
        }
        myFunc(n / 2);
        myFunc(n / 2);
    }
    return something;
}
I'm unsure how, exactly, the halving affects the analysis. Any help super appreciated!
The first step in the analysis of a recursive function would be to write down the recurrence relation. In this case you have:
T(n) = 2T(n/2) + O(n)
This is one of the most common forms of recurrence relations, so we can immediately state the solution without any further calculations:
T(n) = O(n log n)
It's easy to prove this result by induction. See e.g. here.
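To see why, one can count the work directly. The sketch below (my own Java translation of myFunc, with a hypothetical ops counter standing in for the "linear stuff") compares the total against n·log2(n); for powers of two the two agree exactly, since each of the log2(n) levels of the recursion does n work in total:

public class HalvingCount {
    static long ops = 0;

    static void myFunc(long n) {
        if (n > 1) {
            ops += n;        // the O(n) loop body
            myFunc(n / 2);   // two recursive calls on half the input
            myFunc(n / 2);
        }
    }

    public static void main(String[] args) {
        for (long n = 1 << 10; n <= 1 << 20; n <<= 5) {
            ops = 0;
            myFunc(n);
            System.out.printf("n=%,9d  ops=%,12d  n*log2(n)=%,12.0f%n",
                    n, ops, n * Math.log(n) / Math.log(2));
        }
    }
}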
What is the exact time complexity of this code?
I know it's exponential, but what kind of exponential: 2^n, sqrt(n)^sqrt(n), etc.?
If you can attach some proof, that would be great.
https://www.geeksforgeeks.org/minimum-number-of-squares-whose-sum-equals-to-given-number-n/
class squares {
    // Returns count of minimum squares that sum to n
    static int getMinSquares(int n)
    {
        // base cases
        if (n <= 3)
            return n;

        // getMinSquares rest of the table using recursive formula
        int res = n; // Maximum squares required is n (1*1 + 1*1 + ..)

        // Go through all smaller numbers to recursively find minimum
        for (int x = 1; x <= n; x++) {
            int temp = x * x;
            if (temp > n)
                break;
            else
                res = Math.min(res, 1 + getMinSquares(n - temp));
        }
        return res;
    }

    public static void main(String args[])
    {
        System.out.println(getMinSquares(6));
    }
}
In my opinion, since the for loop makes the same recursive call sqrt(n) times, and each of those calls is on an argument (n - x*x) ~ n...
So it should be n^sqrt(n).
Is this answer correct?
The recurrence relation for that function is
T(n) = sum from i=1 to i=sqrt(n) of T(n - i*i)
which is bounded above by
T(n) = sqrt(n) * T(n-1)
as each term in the sum above will be at most T(n-1) and there are sqrt(n) of them. T(n) = sqrt(n) * T(n-1) is O( sqrt(n)^n ). I'm sure there is some clever way to get a better bound, but this function looks like it is exponential.
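One way to see the exponential growth concretely is to count the recursive calls. Below is a hypothetical instrumented copy of getMinSquares (the counter and driver loop are mine); the call count multiplies by a roughly constant factor as n increases, which is characteristic of exponential growth:

class SquaresCount {
    static long calls = 0;

    static int getMinSquares(int n) {
        calls++;                                 // count every invocation
        if (n <= 3) return n;
        int res = n;
        for (int x = 1; x * x <= n; x++)         // try every square x*x <= n
            res = Math.min(res, 1 + getMinSquares(n - x * x));
        return res;
    }

    public static void main(String[] args) {
        for (int n = 10; n <= 40; n += 5) {
            calls = 0;
            getMinSquares(n);
            System.out.printf("n=%2d  calls=%,d%n", n, calls);
        }
    }
}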
In the following code, I know that the time complexity is O(n), but how do I prove it in a proper way?
Is saying that searching the array is O(n) enough?
int f[N];

F(n)
{
    if (f[n] >= 0) return f[n];   // already computed: return the memoized value
    f[n] = F(n-1) + F(n-2);
    return f[n];
}

int main()
{
    read n;
    f[0] = 0; f[1] = 1;
    for (i = 2; i <= n; i++)
        f[i] = -1;                // mark entries as "not computed yet"
    print F(n);
}
Each element of the array is computed by at most one "real" call to F. It may look like full recursion, but because of the memo table, the F(n-1) and F(n-2) calls either fill in their entry once or simply return the cached value.
You will have about 3n calls to F in total, so it is still O(n).
If you are not obligated to use recursion, you can program it with a single loop, as sketched below.
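For example, here is a minimal iterative version (a Java sketch of my own, mirroring the table above) that computes the same value with one loop and O(1) extra space:

class FibIterative {
    static long fib(int n) {
        if (n < 2) return n;          // f[0] = 0, f[1] = 1
        long prev = 0, curr = 1;
        for (int i = 2; i <= n; i++) {
            long next = prev + curr;  // f[i] = f[i-1] + f[i-2]
            prev = curr;
            curr = next;
        }
        return curr;
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // prints 55
    }
}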
I was training in Codility solving the first lesson: Tape-Equilibrium.
It is said it has to be of complexity O(N). Therefore I was trying to solve the problem with just one for loop. I knew how to do it with two for loops, but I assumed that would imply a complexity of O(2N), so I skipped those solutions.
I looked for it on the Internet and, of course, on SO there was an answer.
To my astonishment, all the solutions first calculate the sum of the elements of the vector and afterwards do the remaining calculations. I understand this is complexity O(2N), yet it gets a score of 100%.
At this point, I think I am mistaken about my comprehension of time complexity limits. If they ask you for a time complexity of O(N), is it acceptable to end up with O(X*N), where X is some small constant?
How does this work?
Let f and g be functions.
The Big-O notation f ∈ O(g) means that you can find a constant c such that f(n) ≤ c⋅g(n) for all sufficiently large n. So if your algorithm has complexity 2N (or X⋅N for a constant X), it is in O(N), because c = 2 (or c = X) gives 2N ≤ c⋅N (or X⋅N ≤ c⋅N).
This is how I managed to keep it O(N) along with a 100% score:
class Solution {
    public int solution(int[] A) {
        int result = Integer.MAX_VALUE;
        int[] s1 = new int[A.length - 1];
        int[] s2 = new int[A.length - 1];

        // s1[i] = sum of A[0..i]
        for (int i = 0; i < A.length - 1; i++) {
            if (i > 0) {
                s1[i] = s1[i - 1] + A[i];
            } else {
                s1[i] = A[i];
            }
        }

        // s2[i] = sum of A[i+1..A.length-1]
        for (int i = A.length - 1; i > 0; i--) {
            if (i < A.length - 1) {
                s2[i - 1] = s2[i] + A[i];
            } else {
                s2[i - 1] = A[A.length - 1];
            }
        }

        // take the minimum difference over all split points
        for (int i = 0; i < A.length - 1; i++) {
            if (Math.abs(s1[i] - s2[i]) < result) {
                result = Math.abs(s1[i] - s2[i]);
            }
        }
        return result;
    }
}
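For comparison, here is a minimal sketch (my own, not from the post above) of the "compute the total sum first, then sweep once" approach mentioned in the question. It does two passes, i.e. roughly 2N operations, which is still O(N), and it avoids the two extra arrays:

class TapeEquilibriumSketch {
    static int solution(int[] A) {
        long total = 0;
        for (int a : A) total += a;           // first pass: total sum, O(N)

        long left = 0;
        int best = Integer.MAX_VALUE;
        for (int p = 1; p < A.length; p++) {  // second pass: try every split point
            left += A[p - 1];                 // sum of A[0..p-1]
            long right = total - left;        // sum of A[p..A.length-1]
            best = Math.min(best, (int) Math.abs(left - right));
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(solution(new int[] {3, 1, 2, 4, 3})); // prints 1
    }
}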