Finding the time complexity of a recursive algorithm with a double for loop

I am trying to find the tightest upper bound for the following algorithm, but I am not able to get the correct answer. The algorithm is as follows:
public static int recursiveLoopy(int n) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            System.out.println("Hello.");
        }
    }
    if (n <= 2) {
        return 1;
    } else if (n % 2 == 0) {
        return recursiveLoopy(n + 1);
    } else {
        return recursiveLoopy(n - 2);
    }
}
I tried to draw out the recursion tree for this. I know that each run of the algorithm takes O(n^2) time plus the time taken for the recursive call, and that the recursion tree will have n levels. I then calculated the total time taken for each level:
For the first level, the time taken will be n^2. For the second level, since there are two recursive calls, the time taken will be 2n^2. For the third level, the time taken will be 4n^2, and so on until n becomes <= 2.
Thus, the time complexity should be n^2 * (1 + 2 + 4 + ... + 2^n). 1 + 2 + 4 + ... + 2^n is a geometric sequence and its sum is equal to 2^n - 1. Thus, the total time complexity should be O(2^n * n^2). However, the answer says O(n^3). What am I doing wrong?

Consider the below fragment:
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        System.out.println("Hello.");
    }
}
This doesn't need any introduction and is O(n^2).
Now consider the below fragment:
if (n <= 2) {
    return 1;
} else if (n % 2 == 0) {
    return recursiveLoopy(n + 1);
} else {
    return recursiveLoopy(n - 2);
}
How many times do you think this fragment will be executed?
If n % 2 == 0, the method recursiveLoopy is executed one extra time: n + 1 is odd, so every call after that takes the n - 2 branch. Otherwise it keeps decreasing n by 2, so the method is executed about n/2 times (or (n + 1)/2 times if you include the extra call for even n).
Thus the total number of times the method recursiveLoopy is executed is roughly n/2, which expressed as a complexity is O(n).
And each invocation of recursiveLoopy runs the O(n^2) fragment once, so the total time complexity becomes
O(n) * O(n^2) = O(n^3).
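If you want to convince yourself empirically, here is a minimal sketch (my own instrumented version, with a counter in place of the println) that measures the total work; doubling n should multiply the count by roughly 8, which is consistent with O(n^3) and rules out 2^n growth:
public class RecursiveLoopyCount {
    static long ops = 0;

    static int recursiveLoopy(int n) {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                ops++;                         // stands in for System.out.println
            }
        }
        if (n <= 2) {
            return 1;
        } else if (n % 2 == 0) {
            return recursiveLoopy(n + 1);      // even -> odd, one extra call
        } else {
            return recursiveLoopy(n - 2);      // odd -> odd, about n/2 calls
        }
    }

    public static void main(String[] args) {
        for (int n : new int[]{50, 100, 200}) {
            ops = 0;
            recursiveLoopy(n);
            System.out.println("n=" + n + " ops=" + ops);
            // ops grows roughly 8x when n doubles: cubic, not exponential
        }
    }
}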

Related

Best time complexity of a single loop?

I have a really simple question. I have this loop:
for (int i = 0; i < n; i++) {
    // some O(n) stuff here
}
what will be the BEST time complexity of this algorithm?
O(n)? (for loop O(1) * O(n) stuff)
or
O(n^2)? (for loop O(n) * O(n) stuff inside the loop)
Will the for loop itself be considered O(n) as it normally would, or will it be considered O(1),
since it would make only 1 pass in the BEST case scenario?
You are right, the best time complexity is O(N) (and even Θ(N)), if the best running time of "stuff" is constant (even zero).
Anyway, if "stuff" is known to be best case Ω(f(N)), then the best total time is Ω(N f(N)).
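For a concrete picture, here is a small illustrative sketch (my own example, not from the question) where the O(n) "stuff" is a scan that can exit on its first comparison, so the best case of the whole construct collapses to Theta(n) while the worst case stays Theta(n^2):
// "stuff" is O(n) worst case but O(1) best case: it stops at the
// first non-zero entry it sees.
static int countLeadingZeroScans(int[] a) {
    int n = a.length, total = 0;
    for (int i = 0; i < n; i++) {          // always n iterations
        for (int j = 0; j < n; j++) {      // the O(n) "stuff"
            if (a[j] != 0) break;          // best case: breaks at j == 0
            total++;
        }
    }
    return total;  // best case Theta(n) overall, worst case Theta(n^2)
}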
If your loop is doing O(n) stuff n times, then the time complexity will be O(n^2). You may call that the worst case. The best and average cases depend on the "some O(n) stuff" that is executed on every iteration of your loop.
Let's take the simple example of the bubble sort algorithm:
for (int i = 0; i < n - 1; ++i) {
    for (int j = 0; j < n - i - 1; ++j) {
        if (a[j] > a[j + 1]) {
            swap(&a[j], &a[j + 1]);
        }
    }
}
The time complexity of this is always O(n^2), whether the array is already sorted (in ascending or descending order) or not.
But this can be optimised. The nth pass finds the nth largest element and puts it into its final place, which is why the inner loop already stops at n - i - 1. On top of that, if a whole pass completes without performing a single swap, the array must already be sorted and we can stop early:
for (int i = 0; i < n - 1; ++i) {
    bool swapped = false;
    for (int j = 0; j < n - i - 1; ++j) {
        if (a[j] > a[j + 1]) {
            swap(&a[j], &a[j + 1]);
            swapped = true;
        }
    }
    if (!swapped) {
        break;    // no swaps in this pass: the array is sorted
    }
}
Now the best case time complexity is O(n), i.e. when the array is already sorted in ascending order (in the context of the above implementation). The average and worst cases are still O(n^2).
So, to identify the best case time complexity of your algorithm, you have to show us the implementation of "some O(n) stuff", or at least the algorithm you are trying to implement.
As you stated it, it's O(n^2), because you are doing an O(n) operation n times.

determine the time complexity of the algorithms

I have just started to learn time complexity, but I don't really get the idea. Could you help with these questions and explain the way of thinking?
void Fun1(int n)
{
    for (int i = 0; i < n; i += 1) {
        for (int j = 0; j < i; j += 1) {
            for (int k = j; k < i; k += 1) {
                // do something
            }
        }
    }
}
void Fun2(int n) {
    int i = 0;
    while (i < n) {
        for (int j = 0; j < i; j += 1) {
            int k = n;
            while (k > j) {
                k = k / 2;
            }
            k = j;
            while (k > 1) {
                k = k / 2;
            }
        }
    }
}
void Fun3(int n) {
    for (int i = 0; i < n; i += 1) {
        print("*");
    }
    if (n <= 1) {
        print("*");
        return;
    }
    if (n % 2 != 0) {
        Fun3(n - 1);
    } else {
        Fun3(n / 2);
    }
}
For function 1, I think it's Theta(n^3) because it runs at most n*n*n times, but I am not sure how to prove this.
For the second, I think it's Theta(n^2 log(n)), but I am not sure.
Could you help, please?
First a quick note: in Fun2(n) there should be an i++ before the end of the while loop's body, otherwise the function never terminates. Anyway, time complexity is important in order to understand the efficiency of your algorithms. In this case you have these 3 functions:
Fun1(n)
In this function you have three nested for loops, each of which iterates at most n times over the given input, and a loop that iterates at most n times over its input is O(n). Since the loops are nested, the second loop runs in full for each iteration of the outer loop, and the innermost loop does the same. The resulting complexity, as you correctly said, is O(n) * O(n) * O(n) = O(n^3).
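To make that rigorous, count the innermost iterations exactly: for fixed i and j the inner loop runs i - j times, so the total is sum_{i=0}^{n-1} sum_{j=0}^{i-1} (i - j) = sum_{i=0}^{n-1} i(i+1)/2 ≈ n^3/6, which is Theta(n^3) and confirms your guess.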
Fun2(n)
This function has an outer while loop that iterates n times over the given input (once the missing i++ is added). So the outer loop complexity is O(n). As before, there is an inner for loop that iterates up to n times on each cycle of the outer loop, which so far gives O(n) * O(n) = O(n^2). Inside the for loop we have a while loop that differs from the other loops: it does not step through every element of a range, but divides the range by 2 at each iteration. For example, starting from a range of 0 to 31 we get 0-31 -> 0-15 -> 0-7 -> 0-3 -> 0-1.
As you know, the number of iterations of such a loop is logarithmic in n, log(n), so we end up with O(n^2) * O(log(n)) = O(n^2 log(n)) as the time complexity.
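As a quick check on that last step: a loop of the form k = n; while (k > 1) { k = k / 2; } runs t times, where t is the smallest integer with n / 2^t <= 1, i.e. t = ceil(log2(n)). That is why each of the inner while loops contributes O(log(n)).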
Fun3(n)
In this function we have a for loop with no inner loops, followed by a recursive call. The complexity of the for loop, as we know, is O(n), but how many times will the function be called?
If we take a small number like 6 as an example: we do 6 iterations, and since 6 mod 2 = 0 we call Fun3(6 / 2) = Fun3(3).
Fun3(3) does 3 iterations and, since 3 mod 2 != 0, calls Fun3(2); Fun3(2) does 2 iterations and calls Fun3(1), which is the base case.
What do we have here? n at least halves every two calls, so there are O(log(n)) calls, and the loop costs form a roughly geometric series: n + n/2 + n/4 + ...
The complexity result is O(n).
Note that when we calculate time complexity we ignore the coefficients, since they are not relevant. The functions we usually consider, especially in CS, are:
O(1), O(log(n)), O(n), O(n^a) with a > 1, O(n!)
and we combine and simplify them in order to know which algorithm has the best (lowest) time complexity, to get an idea of which algorithm should be used.
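If you want to verify the Fun3 result empirically, here is a minimal Java sketch (my own, with a counter standing in for the prints); the measured work roughly doubles when n doubles, which is linear growth:
public class Fun3Count {
    static long work = 0;

    static void fun3(int n) {
        for (int i = 0; i < n; i++) {
            work++;                 // stands in for print("*")
        }
        if (n <= 1) {
            return;                 // base case
        }
        if (n % 2 != 0) {
            fun3(n - 1);            // odd -> even
        } else {
            fun3(n / 2);            // even -> half
        }
    }

    public static void main(String[] args) {
        for (int n : new int[]{1000, 2000, 4000}) {
            work = 0;
            fun3(n);
            System.out.println("n=" + n + " work=" + work);
            // work roughly doubles when n doubles: linear, O(n)
        }
    }
}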

time complexity (studying for exam)

I am currently studying for an exam on algorithms and I am trying to solve a question about time complexity in Java, but I can't really figure out how to do it. I am supposed to calculate the expected time complexity. N is a positive integer.
for (int i = 0; i < N; i++)
    for (int j = i + 1; j < N; j++) {
        int x = j + 1; int h = N - 1; int k;
        while (x < h) {
            k = (x + h) / 2;
            if (a[i] + a[j] + a[k] == 0) { cnt++; break; }
            if (a[i] + a[j] + a[k] < 0) x = k + 1;
            else h = k - 1;
        }
    }
The first for loop should run N times and the second should run N - 1 times. Since x is j + 1, I guessed that x = N - 2. I don't know how to think after that with the while loop, or if I have done anything right. Would really appreciate help!
Create your time complexity function in parts.
for (int i = 0; i < N; i++)                      // takes linear O(N)
    for (int j = i + 1; j < N; j++) {            // takes linear O(N); the -1 in N - 1 is irrelevant in big-O
        int x = j + 1; int h = N - 1; int k;     // 3 x O(1)
        while (x < h) {                          // the interval [x, h] halves on every iteration,
                                                 // so this loop runs O(log N) times, not O(N)
            k = (x + h) / 2;                     // O(1)
            if (a[i] + a[j] + a[k] == 0) {       // O(1); if true we break out of the while loop
                cnt++;                           // O(1)
                break;                           // O(1)
            }
            if (a[i] + a[j] + a[k] < 0) {        // O(1)
                x = k + 1;                       // discard the lower half
            } else {
                h = k - 1;                       // discard the upper half
            }
        }
    }
The key observation is the while loop: k = (x + h) / 2 picks the midpoint of [x, h] and then either x = k + 1 or h = k - 1 discards half of the interval, exactly like a binary search. So it performs O(log N) iterations in the worst case, not O(N).
So in summary T(N) = O(N^2 log N): the two for loops produce about N^2 / 2 pairs (i, j), and the while loop costs O(log N) per pair. The best case is Ω(N^2), when the while loop breaks on its first iteration for every pair; the pairs themselves still have to be enumerated.
We are only interested in worst and best cases.
One more piece of notation, since it confuses many people: T(N) = O(g(N)) means there exist constants c > 0 and N0 such that T(N) <= c * g(N) for all N >= N0.
I hope this answer helps even a little bit...

When can an algorithm have square root(n) time complexity?

Can someone give me an example of an algorithm that has square root(n) time complexity? What does square root time complexity even mean?
Square root time complexity means that the algorithm requires O(N^(1/2)) evaluations where the size of input is N.
As an example for an algorithm which takes O(sqrt(n)) time, Grover's algorithm is one which takes that much time. Grover's algorithm is a quantum algorithm for searching an unsorted database of n entries in O(sqrt(n)) time.
Let us take an example to understand how we can arrive at an O(sqrt(N)) runtime complexity for a given problem. It is going to be elaborate, but interesting to understand. (The following example, in the context of answering this question, is taken from Coding Contest Byte: The Square Root Trick, a very interesting problem with an interesting trick for arriving at O(sqrt(n)) complexity.)
Given an array A of n elements, implement a data structure for point updates and range sum queries.
update(i, x) -> A[i] := x (point update query)
query(lo, hi) -> returns A[lo] + A[lo+1] + ... + A[hi] (range sum query)
The naive solution uses an array. It takes O(1) time for an update (array-index access) and O(hi - lo) = O(n) for the range sum (iterating from start index to end index and adding up).
A more efficient solution splits the array into slices of length k and stores the slice sums in an array S.
An update takes constant time, because we only have to update the value in A and the sum of the corresponding slice in S. For example, with slice length k = 4, update(6, 5) changes A[6] to 5, which means recomputing S[1], the sum of the slice containing index 6, to keep S up to date.
The range-sum query is more interesting. The elements of the first and last slice (partially contained in the queried range) have to be traversed one by one, but for slices completely contained in our range we can use the values in S directly, which gives a performance boost.
In query(2, 14), again with k = 4 and the sample values from the linked post, we get:
query(2, 14) = A[2] + A[3] + (A[4] + A[5] + A[6] + A[7]) + (A[8] + A[9] + A[10] + A[11]) + A[12] + A[13] + A[14]
query(2, 14) = A[2] + A[3] + S[1] + S[2] + A[12] + A[13] + A[14]
query(2, 14) = 0 + 7 + 11 + 9 + 5 + 2 + 0
query(2, 14) = 34
The code for update and query is:
def update(S, A, i, k, x):
    S[i // k] = S[i // k] - A[i] + x  # adjust the slice sum
    A[i] = x

def query(S, A, lo, hi, k):
    s = 0
    i = lo
    # Section 1: elements taken from A itself, up to the first slice boundary
    while i % k != 0 and i <= hi:
        s += A[i]
        i += 1
    # Section 2: whole slices taken directly from S
    while i + k <= hi:
        s += S[i // k]
        i += k
    # Section 3: the remaining elements taken from A itself
    while i <= hi:
        s += A[i]
        i += 1
    return s
Let us now determine the complexity.
Each query takes, on average:
Section 1 takes k/2 time on average (you iterate at most k - 1 times).
Section 2 takes n/k time on average, basically the number of slices.
Section 3 takes k/2 time on average (you iterate at most k - 1 times).
So, in total, we get k/2 + n/k + k/2 = k + n/k time.
This is minimized for k = sqrt(n), where it equals sqrt(n) + n/sqrt(n) = 2*sqrt(n).
So we get a query with O(sqrt(n)) time complexity.
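To see why k = sqrt(n) is the right slice length, minimize f(k) = k + n/k: setting f'(k) = 1 - n/k^2 = 0 gives k^2 = n, i.e. k = sqrt(n). (Alternatively, by the AM-GM inequality, k + n/k >= 2*sqrt(n), with equality exactly at k = sqrt(n).)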
Prime numbers
As mentioned in some other answers, some basic things related to prime numbers take O(sqrt(n)) time (a small sketch follows this list):
Find number of divisors
Find sum of divisors
Find Euler's totient
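As a concrete illustration of the pattern these share, here is a small sketch (my own, written in Java for consistency with the rest of this page) that counts the divisors of n in O(sqrt(n)) time by pairing each divisor d <= sqrt(n) with its partner n / d:
// Counts the divisors of n in O(sqrt(n)) time by pairing d with n / d.
static int countDivisors(int n) {
    int count = 0;
    for (int d = 1; (long) d * d <= n; d++) {
        if (n % d == 0) {
            count += 2;             // both d and n / d divide n
            if (d == n / d) {
                count--;            // n is a perfect square: d was counted twice
            }
        }
    }
    return count;
}
For example, countDivisors(12) returns 6 (for 1, 2, 3, 4, 6 and 12) after only 3 loop iterations.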
Below I mention two advanced algorithms which also bear a sqrt(n) term in their complexity.
Mo's Algorithm
Try this problem: Powerful array
My solution:
#include <bits/stdc++.h>
using namespace std;
const int N = 1E6 + 10, k = 500;
struct node {
    int l, r, id;
    bool operator<(const node &a) const {
        // Mo's ordering: sort queries by block of l, ties broken by r
        if (l / k == a.l / k) return r < a.r;
        else return l < a.l;
    }
} q[N];
long long a[N], cnt[N], ans[N], cur_count;
void add(int pos) {
    // adding one occurrence of value v changes v * cnt[v]^2 by v * (2 * cnt[v] + 1)
    cur_count += a[pos] * cnt[a[pos]];
    ++cnt[a[pos]];
    cur_count += a[pos] * cnt[a[pos]];
}
void rm(int pos) {
    cur_count -= a[pos] * cnt[a[pos]];
    --cnt[a[pos]];
    cur_count -= a[pos] * cnt[a[pos]];
}
int main() {
int n, t;
cin >> n >> t;
for(int i = 1; i <= n; i++) {
cin >> a[i];
}
for(int i = 0; i < t; i++) {
cin >> q[i].l >> q[i].r;
q[i].id = i;
}
sort(q, q + t);
memset(cnt, 0, sizeof(cnt));
memset(ans, 0, sizeof(ans));
int curl(0), curr(0), l, r;
for(int i = 0; i < t; i++) {
l = q[i].l;
r = q[i].r;
/* This part takes O(n * sqrt(n)) time */
while(curl < l)
rm(curl++);
while(curl > l)
add(--curl);
while(curr > r)
rm(curr--);
while(curr < r)
add(++curr);
ans[q[i].id] = cur_count;
}
for(int i = 0; i < t; i++) {
cout << ans[i] << '\n';
}
return 0;
}
Query Buffering
Try this problem: Queries on a Tree
My solution:
#include <bits/stdc++.h>
using namespace std;
const int N = 2e5 + 10, k = 333;
vector<int> t[N], ht;
int tm_, h[N], st[N], nd[N];
inline void hei(int v, int p) {
for(int ch: t[v]) {
if(ch != p) {
h[ch] = h[v] + 1;
hei(ch, v);
}
}
}
inline void tour(int v, int p) {
st[v] = tm_++;
ht.push_back(h[v]);
for(int ch: t[v]) {
if(ch != p) {
tour(ch, v);
}
}
ht.push_back(h[v]);
nd[v] = tm_++;
}
int n, tc[N];
vector<int> loc[N];
long long balance[N];
vector<pair<long long,long long>> buf;
inline long long cbal(int v, int p) {
long long ans = balance[h[v]];
for(int ch: t[v]) {
if(ch != p) {
ans += cbal(ch, v);
}
}
tc[v] += ans;
return ans;
}
inline void bal() {
memset(balance, 0, sizeof(balance));
for(auto arg: buf) {
balance[arg.first] += arg.second;
}
buf.clear();
cbal(1,1);
}
int main() {
int q;
cin >> n >> q;
for(int i = 1; i < n; i++) {
int x, y; cin >> x >> y;
t[x].push_back(y); t[y].push_back(x);
}
hei(1,1);
tour(1,1);
for(int i = 0; i < (int)ht.size(); i++) {
loc[ht[i]].push_back(i);
}
vector<int>::iterator lo, hi;
int x, y, type;
for(int i = 0; i < q; i++) {
cin >> type;
if(type == 1) {
cin >> x >> y;
buf.push_back(make_pair(x,y));
}
else if(type == 2) {
cin >> x;
long long ans(0);
for(auto arg: buf) {
hi = upper_bound(loc[arg.first].begin(), loc[arg.first].end(), nd[x]);
lo = lower_bound(loc[arg.first].begin(), loc[arg.first].end(), st[x]);
ans += arg.second * (hi - lo);
}
cout << tc[x] + ans/2 << '\n';
}
else assert(0);
if(i % k == 0) bal();
}
}
There are many cases. These are a few problems which can be solved with sqrt(n) complexity (better may also be possible):
Finding if a number is prime or not.
Grover's algorithm: allows search (in a quantum context) on an unsorted input in time proportional to the square root of the size of the input.
Factorization of a number.
There are many problems you will face which will demand an algorithm of sqrt(n) complexity.
As an answer to the second part: sqrt(n) complexity means that if the input size to your algorithm is n, then there are approximately sqrt(n) basic operations (like comparisons in the case of sorting). Then we can say that the algorithm has sqrt(n) time complexity.
Let's analyze the 3rd problem and it will be clear.
Let n be a positive integer, and suppose there exist positive integers x and y such that
x * y = n.
Whatever the values of x and y, at least one of them must be <= sqrt(n). For if both were greater, i.e. x > sqrt(n) and y > sqrt(n), then x * y > sqrt(n) * sqrt(n) = n, which contradicts x * y = n.
So if we check every candidate from 2 to sqrt(n), we will have considered all the factors (1 and n are trivial factors).
Code snippet:
#include <iostream>
using namespace std;

int main() {
    int n;
    cin >> n;
    cout << 1 << " " << n << " ";       // the trivial factors
    for (int i = 2; i * i <= n; i++) {  // i * i <= n is equivalent to i <= sqrt(n)
        if (n % i == 0)
            cout << i << " ";
    }
}
Note: you might think that, not bothering with the paired divisor n/i, we could also achieve the above by looping from 1 to n. Yes, that's possible, but who wants to run in O(n) a program that can run in O(sqrt(n))? We always look for the best option.
Go through the book Introduction to Algorithms by Cormen et al.
I would also ask you to read the following Stack Overflow questions and answers; they will clear all the doubts for sure :)
Are there any O(1/n) algorithms?
Plain english explanation Big-O
Which one is better?
How do you calculate big-O complexity?
This link provides a very basic, beginner-level understanding of O(sqrt(n)) time complexity. It is the last example in the video, but I would suggest that you watch the whole video.
https://www.youtube.com/watch?v=9TlHvipP5yA&list=PLDN4rrl48XKpZkf03iYFl-O29szjTrs_O&index=6
The simplest example of an algorithm with O(sqrt(n)) time complexity in the video is:
p = 0;
for (i = 1; p <= n; i++) {
    p = p + i;
}
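Why this is O(sqrt(n)): after i iterations, p = 1 + 2 + ... + i = i(i + 1)/2. The loop stops once p > n, i.e. once i(i + 1)/2 > n, which happens at i ≈ sqrt(2n). So the loop runs about sqrt(2n) = O(sqrt(n)) times.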
Mr. Abdul Bari is renowned for his simple explanations of data structures and algorithms.
Primality test
Solution in JavaScript
const isPrime = n => {
    // note: assumes n >= 2; the version below also handles n <= 1
    for (let i = 2; i <= Math.sqrt(n); i++) {
        if (n % i === 0) return false;
    }
    return true;
};
Complexity
O(N^(1/2)), because for a given value of n you only need to check whether it is divisible by the numbers from 2 up to its square root.
JS Primality Test
O(sqrt(n))
A slightly more performant version, thanks to Samme Bae for enlightening me with this. 😉
function isPrime(n) {
    if (n <= 1)
        return false;
    if (n <= 3)
        return true;

    // handles multiples of 2 and 3 (e.g. 4, 6, 8, 9, 10)
    if (n % 2 === 0 || n % 3 === 0)
        return false;

    for (let i = 5; i * i <= n; i += 6) {
        if (n % i === 0 || n % (i + 2) === 0)
            return false;
    }
    return true;
}
isPrime(677);

Performance analysis of 3 sum

I have a method that finds 3 numbers in an array that add up to a desired number.
code:
public static void threeSum(int[] arr, int sum) {
    quicksort(arr, 0, arr.length - 1);
    for (int i = 0; i < arr.length - 2; i++) {
        for (int j = 1; j < arr.length - 1; j++) {
            for (int k = arr.length - 1; k > j; k--) {
                if ((arr[i] + arr[j] + arr[k]) == sum) {
                    System.out.println(Integer.toString(i) + "+" + Integer.toString(j) + "+" + Integer.toString(k) + "=" + sum);
                }
            }
        }
    }
}
I'm not sure about the big O of this method. I have a hard time wrapping my head around this right now. My guess is O(n^2) or O(n^2 log n), but these are complete guesses. I can't prove them. Could someone help me wrap my head around this?
You have three passes over the array (the i, j and k loops), each of a size that depends primarily on n, the size of the array. Hence, this is an O(n^3) operation.
Even though your quicksort is O(n log n), it is overshadowed by the fact that you have 3 nested for loops. So the time complexity with respect to the number of elements (n) is O(n^3).
It's O(n^3) complexity because there are three nested for loops. The inner for loop only runs while k > j, so you could think of it as n^2 * (n/2), but in big-O notation you can drop the constant factor.
Methodically speaking, the order of growth can be accurately inferred by counting how many times the innermost comparison executes:
T(n) = sum_{i=0}^{n-3} sum_{j=1}^{n-2} sum_{k=j+1}^{n-1} 1 = (n - 2) * sum_{j=1}^{n-2} (n - 1 - j) = (n - 2) * (n - 1)(n - 2)/2 ≈ n^3/2, which is Theta(n^3).
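As an aside, the sort that the method already performs can be put to real use: the standard two-pointer formulation of 3-sum (a well-known technique, sketched here; note it reports values rather than indices) brings the total down to O(n^2):
// Standard two-pointer 3-sum: O(n log n) sort + O(n^2) scan.
public static void threeSumFast(int[] arr, int sum) {
    java.util.Arrays.sort(arr);
    for (int i = 0; i < arr.length - 2; i++) {
        int lo = i + 1, hi = arr.length - 1;
        while (lo < hi) {
            int s = arr[i] + arr[lo] + arr[hi];
            if (s == sum) {
                System.out.println(arr[i] + "+" + arr[lo] + "+" + arr[hi] + "=" + sum);
                lo++;
                hi--;
            } else if (s < sum) {
                lo++;    // total too small: move the left pointer up
            } else {
                hi--;    // total too large: move the right pointer down
            }
        }
    }
}
The sort costs O(n log n) and the scan does O(n) work for each of the n choices of i, so the scan dominates at O(n^2).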