Performance analysis of 3-sum (time complexity)

I have a method that finds 3 numbers in an array that add up to a desired number.
code:
public static void threeSum(int[] arr, int sum) {
    quicksort(arr, 0, arr.length - 1);
    for (int i = 0; i < arr.length - 2; i++) {
        for (int j = 1; j < arr.length - 1; j++) {
            for (int k = arr.length - 1; k > j; k--) {
                if ((arr[i] + arr[j] + arr[k]) == sum) {
                    System.out.println(Integer.toString(i) + "+" + Integer.toString(j) + "+" + Integer.toString(k) + "=" + sum);
                }
            }
        }
    }
}
I'm not sure about the big O of this method, and I'm having a hard time wrapping my head around it right now. My guess is O(n^2) or O(n^2 log n), but these are complete guesses and I can't prove either. Could someone help me reason through it?

You have three nested runs over the array (the i, j and k loops), each with a size that depends linearly on n, the size of the array. Hence, this is an O(n^3) operation.

Even though your quicksort is O(n log n), it is overshadowed by the three nested for loops, so the time complexity with respect to the number of elements n is O(n^3).

It's O(n^3) because there are three nested for loops. The inner for loop only runs while k > j, so you could count the work as n^2 * (n/2), but constant factors like the 1/2 are dropped in big O notation.
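As a quick sanity check, here is a small Python sketch (my own illustration, not the original Java) that counts the innermost iterations of the same three loops instead of timing them; the count is (n-2) * (n-2)(n-1)/2, i.e. cubic in n:

```python
def three_sum_ops(n):
    """Count innermost iterations of the i/j/k loops for an array of size n."""
    count = 0
    for i in range(n - 2):                 # i loop: n - 2 iterations
        for j in range(1, n - 1):          # j loop: n - 2 iterations
            for k in range(n - 1, j, -1):  # k loop runs while k > j
                count += 1
    return count

print(three_sum_ops(10))   # 288
print(three_sum_ops(20))   # 3078, roughly 2^3 times more work for 2x the input
```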


Related

Prove that the time complexity of a function is O(n^3)

public void function2(long input) {
    long s = 0;
    for (long i = 1; i < input * input; i++) {
        for (long j = 1; j < i * i; j++) {
            s++;
        }
    }
}
I'm pretty certain that the time complexity of this function is n^3, but if someone could provide a line-by-line explanation, that would be great.
First of all, you need to define what n is if you write something like O(n^3), otherwise it doesn't make any sense. Let's say n is the value (as opposed to e.g. the bit-length) of input, so n = input.
The outer loop has k iterations, where k = n^2. The inner loop runs 1^2, 2^2, 3^2, ..., up to k^2 times, so summing everything up you get O(k^3) iterations (since the sum of the p-th powers of the first m integers is always O(m^(p+1))).
Hence the overall time complexity is O(n^6).
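To make the O(n^6) bound concrete, here is a small Python counter (an illustration, not part of the original question) that reproduces the loop structure and counts the increments of s. The total is the sum over i = 1 .. n^2-1 of (i^2 - 1), which grows like n^6 / 3:

```python
def function2_ops(n):
    """Count how many times s++ runs in function2 when input = n."""
    s = 0
    for i in range(1, n * n):      # outer loop: n^2 - 1 iterations
        for j in range(1, i * i):  # inner loop: i^2 - 1 iterations
            s += 1
    return s

print(function2_ops(3))   # 196
print(function2_ops(4))   # 1225
```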

Finding the time complexity of a recursive algorithm with a double for loop

I am trying to find the tightest upper bound for the following algorithm, but I am not able to get the correct answer. The algorithm is as follows:
public static int recursiveloopy(int n) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            System.out.println("Hello.");
        }
    }
    if (n <= 2) {
        return 1;
    } else if (n % 2 == 0) {
        return recursiveloopy(n + 1);
    } else {
        return recursiveloopy(n - 2);
    }
}
I tried to draw out the recursion tree for this. I know that for each run of the algorithm the time complexity will be O(n^2) plus the time taken for the recursive calls. Also, the recursion tree will have n levels. I then calculated the total time taken for each level:
For the first level, the time taken will be n^2. For the second level, since there are two recursive calls, the time taken will be 2n^2. For the third level, the time taken will be 4n^2, and so on until n becomes <= 2.
Thus, the time complexity should be n^2 * (1 + 2 + 4 + ... + 2^n). 1 + 2 + 4 + ... + 2^n is a geometric sequence and its sum is 2^n - 1. Thus, the total time complexity should be O(2^n * n^2). However, the answer says O(n^3). What am I doing wrong?
Consider the below fragment
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        System.out.println("Hello.");
    }
}
This doesn't need any introduction: it is O(n^2).
Now consider the below fragment
if (n <= 2) {
    return 1;
} else if (n % 2 == 0) {
    return recursiveloopy(n + 1);
} else {
    return recursiveloopy(n - 2);
}
How many times do you think this fragment will be executed?
If n % 2 == 0, the method recursiveloopy is executed one extra time (with n+1, which is odd). Otherwise it goes about decreasing n by 2, so it is executed n/2 times (or (n+1)/2 if you include the extra even step).
Thus the total number of times recursiveloopy is executed is roughly n/2, which expressed as a complexity is O(n).
And each call to recursiveloopy runs a part with complexity O(n^2), so the total time complexity becomes
O(n) * O(n^2) = O(n^3).
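One way to convince yourself is to count the total number of println calls directly. A small Python model (hypothetical, mirroring the Java) adds n^2 for each call and follows the same recursion; the totals are about n^2 + (n-2)^2 + (n-4)^2 + ..., roughly n/2 terms each at most n^2, i.e. Θ(n^3):

```python
def recursiveloopy_cost(n):
    """Total number of println calls made by recursiveloopy(n)."""
    cost = n * n                                    # the double loop prints n^2 times
    if n <= 2:
        return cost
    elif n % 2 == 0:
        return cost + recursiveloopy_cost(n + 1)    # even -> odd
    else:
        return cost + recursiveloopy_cost(n - 2)    # odd -> odd, 2 smaller

print(recursiveloopy_cost(5))   # 25 + 9 + 1 = 35
print(recursiveloopy_cost(6))   # 36 + 49 + 35 = 120
```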

Nested loops with time complexity log(log n)

Can there be an algorithm with two nested loops such that the overall time complexity is O(log(log n))?
This came up after solving the following:
for (i = N; i > 0; i = i/2) {
    for (j = 0; j < i; j++) {
        print "hello world"
    }
}
The above code has time complexity O(N) (using the sum of a geometric progression). Can there exist a similar pair of nested loops with time complexity O(log(log n))?
For a loop to iterate O(log log n) times, where the loop index variable counts up to n, then the index variable has to grow like the inverse function of log log k where k is the number of iterations; i.e. it has to grow like 2^2^k, or some other base than 2.
One way to achieve this is by starting at 2 and repeatedly squaring until you reach n - if the index variable is (((2^2)^2)...^2) with k squarings, then this equals 2^2^k as required:
for (int i = 2; i < n; i = i*i) {
    // ...
}
This loop iterates O(log log n) times, as required. If you absolutely must do it with nested loops, we can trivially add an extra loop which iterates O(1) times, leaving the total number of iterations asymptotically the same:
for (int i = 2; i < n; i = i*i) {
    for (int j = 0; j < 10; j++) {
        // ...
    }
}
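To verify the iteration count, here is a quick Python sketch (illustrative) counting how many times the squaring loop runs; after k squarings the index is 2^(2^k), so the loop runs about log2(log2(n)) times:

```python
def squaring_loop_iterations(n):
    """Count iterations of: for (i = 2; i < n; i = i*i)."""
    count, i = 0, 2
    while i < n:
        i = i * i
        count += 1
    return count

print(squaring_loop_iterations(2 ** 16))   # 4  (log2(log2(65536)) = 4)
print(squaring_loop_iterations(2 ** 32))   # 5
```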

Determine the time complexity of these algorithms

I have just started to learn about time complexity, but I don't really get the idea yet. Could you help with these functions and explain the way of thinking?
int Fun1(int n)
{
    for (i = 0; i < n; i += 1) {
        for (j = 0; j < i; j += 1) {
            for (k = j; k < i; k += 1) {  // the original had "i += 1" here, presumably a typo
                // do something
            }
        }
    }
}
void Fun2(int n) {
    i = 0
    while (i < n) {
        for (j = 0; j < i; j += 1) {
            k = n
            while (k > j) {
                k = k / 2
            }
            k = j
            while (k > 1) {
                k = k / 2
            }
        }
    }
}
int Fun3(int n) {
    for (i = 0; i < n; i += 1) {
        print("*")
    }
    if (n <= 1) {
        print("*")
        return
    }
    if (n % 2 != 0) {
        Fun3(n - 1)
    } else {
        Fun3(n / 2)
    }
}
For function 1, I think it's Theta(n^3) because it runs at most n*n*n times, but I am not sure how to prove this. For the second I think it's Theta(n^2 log(n)), but I am not sure. Could you help, please?
First a quick note: in Fun2(n) there should be an i += 1 before the end of the outer while loop, otherwise it never terminates. Anyway, time complexity is important for understanding the efficiency of your algorithms. In this case you have these 3 functions:
Fun1(n)
In this function you have three nested for loops. The outer loop runs n times; the middle loop runs up to i times and the inner loop up to i - j times, both of which are bounded by n. Multiplying the three levels together, the total number of innermost iterations is on the order of n * n * n, so the resulting complexity, as you correctly said, is O(n^3).
Fun2(n)
This function has an outer while loop that (once the missing i += 1 is added) iterates n times, so the outer loop complexity is O(n). As before, there is an inner for loop that iterates up to n times on each cycle of the outer loop; so far we have O(n) * O(n) = O(n^2). Inside the for loop we have two while loops that differ from the other loops: instead of stepping through a range, each divides its counter by 2 on every iteration. For example, starting from 31: 31 -> 15 -> 7 -> 3 -> 1.
As you know, the number of iterations of such a loop is logarithmic, log(n), so we end up with O(n^2) * O(log(n)) = O(n^2 log(n)) as the time complexity.
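As an empirical check, here is a Python sketch (my own, with the missing i increment added) that counts the halving steps instead of timing; the totals grow like n^2 log n:

```python
def fun2_ops(n):
    """Count iterations of the two halving while loops in Fun2."""
    ops = 0
    i = 0
    while i < n:
        for j in range(i):
            k = n
            while k > j:          # about log2(n/j) halvings
                k //= 2
                ops += 1
            k = j
            while k > 1:          # about log2(j) halvings
                k //= 2
                ops += 1
        i += 1                    # the increment missing from the question
    return ops

print(fun2_ops(2))   # 2
print(fun2_ops(4))   # 15
```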
Fun3(n)
In this function we have a for loop with no inner loops, but then a recursive call. The loop costs O(n), but how much do all the recursive calls cost in total?
Take n = 6 as an example: 6 is even, so after 6 loop iterations we call Fun3(6/2) = Fun3(3). That does 3 iterations and, since 3 is odd, calls Fun3(2), which does 2 iterations and calls Fun3(1), which hits the base case.
What do we have here? An odd argument drops by 1 to become even, and an even argument is halved, so n is at least halved every two calls. The total work is about n + n/2 + n/4 + ..., a geometric series bounded by a constant times n.
The complexity result is O(n).
Note that when we calculate time complexity we ignore the constant coefficients, since they are not relevant. The functions we usually consider, especially in CS, are:
O(1), O(log(n)), O(n), O(n^a) with a > 1, O(n!)
and we combine and simplify them in order to compare which algorithm has the better (lower) time complexity and get an idea of which one could be used.
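For what it's worth, counting the "*" prints of Fun3 in a quick Python model (my own sketch) is a useful cross-check on the growth rate; the counts stay within a small constant multiple of n, as the geometric series n + n/2 + n/4 + ... suggests:

```python
def fun3_stars(n):
    """Count '*' prints made by Fun3(n)."""
    stars = n                   # the for loop prints n stars
    if n <= 1:
        return stars + 1        # base case prints one more star
    if n % 2 != 0:
        return stars + fun3_stars(n - 1)   # odd -> even
    return stars + fun3_stars(n // 2)      # even -> halved

print(fun3_stars(6))                  # 6 + 3 + 2 + 1 + 1 = 13
print(fun3_stars(1000) < 5 * 1000)    # True: linear growth
```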

When can an algorithm have square root(n) time complexity?

Can someone give me example of an algorithm that has square root(n) time complexity. What does square root time complexity even mean?
Square root time complexity means that the algorithm requires O(N^(1/2)) basic operations, where N is the size of the input.
As an example for an algorithm which takes O(sqrt(n)) time, Grover's algorithm is one which takes that much time. Grover's algorithm is a quantum algorithm for searching an unsorted database of n entries in O(sqrt(n)) time.
Let us take an example to understand how we can arrive at an O(sqrt(N)) runtime complexity for a problem. This is going to be elaborate, but it is interesting to understand. (The following example, in the context of answering this question, is taken from Coding Contest Byte: The Square Root Trick, a very interesting problem and an interesting trick for arriving at O(sqrt(n)) complexity.)
Given an array A of n elements, implement a data structure for point updates and range sum queries:
update(i, x) -> A[i] := x (point update)
query(lo, hi) -> returns A[lo] + A[lo+1] + ... + A[hi] (range sum query)
The naive solution uses an array. It takes O(1) time for an update (array-index access) and O(hi - lo) = O(n) for the range sum (iterating from start index to end index and adding up).
A more efficient solution splits the array into length k slices and stores the slice sums in an array S.
The update takes constant time, because we have to update one value in A and the one corresponding slice sum in S. For example, update(6, 5) changes A[6] to 5, which only requires recomputing S[1] to keep S up to date.
The range-sum query is interesting. The elements of the first and last slice (partially contained in the queried range) have to be traversed one by one, but for slices completely contained in our range we can use the values in S directly and get a performance boost.
In query(2, 14) we get,
query(2, 14) = A[2] + A[3]+ (A[4] + A[5] + A[6] + A[7]) + (A[8] + A[9] + A[10] + A[11]) + A[12] + A[13] + A[14] ;
query(2, 14) = A[2] + A[3] + S[1] + S[2] + A[12] + A[13] + A[14] ;
query(2, 14) = 0 + 7 + 11 + 9 + 5 + 2 + 0;
query(2, 14) = 34;
The code for update and query is:
def update(S, A, i, k, x):
    S[i // k] = S[i // k] - A[i] + x
    A[i] = x

def query(S, A, lo, hi, k):
    s = 0
    i = lo
    # Section 1: sum from A itself, element by element (partial starting slice)
    while i % k != 0 and i <= hi:
        s += A[i]
        i += 1
    # Section 2: sum from the slice array S directly (full intermediate slices)
    while i + k <= hi:
        s += S[i // k]
        i += k
    # Section 3: sum from A itself again (partial ending slice)
    while i <= hi:
        s += A[i]
        i += 1
    return s
Let us now determine the complexity. On average, each query breaks down as:
Section 1 takes k/2 time on average (you iterate at most k steps within the first partial slice).
Section 2 takes n/k time, basically the number of slices.
Section 3 takes k/2 time on average (at most k steps in the last partial slice).
So, in total, we get k/2 + n/k + k/2 = k + n/k time.
This is minimized for k = sqrt(n): sqrt(n) + n/sqrt(n) = 2*sqrt(n).
So we get a O(sqrt(n)) time complexity query.
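Here is a compact, runnable Python sketch of the whole structure (slice length k close to sqrt(n); the names and the test array are my own, not from the linked article):

```python
def build(A, k):
    # S[m] holds the sum of slice A[m*k : (m+1)*k]
    return [sum(A[m:m + k]) for m in range(0, len(A), k)]

def update(S, A, i, k, x):
    S[i // k] += x - A[i]      # adjust the slice sum, then the element
    A[i] = x

def query(S, A, lo, hi, k):
    s, i = 0, lo
    while i % k != 0 and i <= hi:   # partial first slice, element by element
        s += A[i]
        i += 1
    while i + k <= hi + 1:          # full slices, one lookup each
        s += S[i // k]
        i += k
    while i <= hi:                  # partial last slice
        s += A[i]
        i += 1
    return s

A = list(range(16))                 # 0, 1, ..., 15
k = 4                               # about sqrt(len(A))
S = build(A, k)
print(query(S, A, 2, 14, k))        # 104  (2 + 3 + ... + 14)
update(S, A, 6, k, 100)             # A[6]: 6 -> 100
print(query(S, A, 2, 14, k))        # 198
```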
Prime numbers
As mentioned in some other answers, some basic things related to prime numbers take O(sqrt(n)) time:
Find number of divisors
Find sum of divisors
Find Euler's totient
Below I mention two advanced algorithms which also bear sqrt(n) term in their complexity.
Mo's algorithm
try this problem: Powerful array
My solution:
#include <bits/stdc++.h>
using namespace std;

const int N = 1e6 + 10, k = 500;

struct node {
    int l, r, id;
    bool operator<(const node &a) const {
        if (l / k == a.l / k) return r < a.r;
        else return l < a.l;
    }
} q[N];

long long a[N], cnt[N], ans[N], cur_count;

void add(int pos) {
    cur_count += a[pos] * cnt[a[pos]];
    ++cnt[a[pos]];
    cur_count += a[pos] * cnt[a[pos]];
}

void rm(int pos) {
    cur_count -= a[pos] * cnt[a[pos]];
    --cnt[a[pos]];
    cur_count -= a[pos] * cnt[a[pos]];
}

int main() {
    int n, t;
    cin >> n >> t;
    for (int i = 1; i <= n; i++) {
        cin >> a[i];
    }
    for (int i = 0; i < t; i++) {
        cin >> q[i].l >> q[i].r;
        q[i].id = i;
    }
    sort(q, q + t);
    memset(cnt, 0, sizeof(cnt));
    memset(ans, 0, sizeof(ans));
    int curl(0), curr(0), l, r;
    for (int i = 0; i < t; i++) {
        l = q[i].l;
        r = q[i].r;
        /* This part takes O(n * sqrt(n)) time in total */
        while (curl < l)
            rm(curl++);
        while (curl > l)
            add(--curl);
        while (curr > r)
            rm(curr--);
        while (curr < r)
            add(++curr);
        ans[q[i].id] = cur_count;
    }
    for (int i = 0; i < t; i++) {
        cout << ans[i] << '\n';
    }
    return 0;
}
Query Buffering
try this problem: Queries on a Tree
My solution:
#include <bits/stdc++.h>
using namespace std;

const int N = 2e5 + 10, k = 333;

vector<int> t[N], ht;
int tm_, h[N], st[N], nd[N];

inline void hei(int v, int p) {   // was "inline int" but returns nothing
    for (int ch : t[v]) {
        if (ch != p) {
            h[ch] = h[v] + 1;
            hei(ch, v);
        }
    }
}

inline void tour(int v, int p) {
    st[v] = tm_++;
    ht.push_back(h[v]);
    for (int ch : t[v]) {
        if (ch != p) {
            tour(ch, v);
        }
    }
    ht.push_back(h[v]);
    nd[v] = tm_++;
}

int n, tc[N];
vector<int> loc[N];
long long balance[N];
vector<pair<long long, long long>> buf;

inline long long cbal(int v, int p) {
    long long ans = balance[h[v]];
    for (int ch : t[v]) {
        if (ch != p) {
            ans += cbal(ch, v);
        }
    }
    tc[v] += ans;
    return ans;
}

inline void bal() {
    memset(balance, 0, sizeof(balance));
    for (auto arg : buf) {
        balance[arg.first] += arg.second;
    }
    buf.clear();
    cbal(1, 1);
}

int main() {
    int q;
    cin >> n >> q;
    for (int i = 1; i < n; i++) {
        int x, y; cin >> x >> y;
        t[x].push_back(y); t[y].push_back(x);
    }
    hei(1, 1);
    tour(1, 1);
    for (int i = 0; i < (int)ht.size(); i++) {
        loc[ht[i]].push_back(i);
    }
    vector<int>::iterator lo, hi;
    int x, y, type;
    for (int i = 0; i < q; i++) {
        cin >> type;
        if (type == 1) {
            cin >> x >> y;
            buf.push_back(make_pair(x, y));
        } else if (type == 2) {
            cin >> x;
            long long ans(0);
            for (auto arg : buf) {
                hi = upper_bound(loc[arg.first].begin(), loc[arg.first].end(), nd[x]);
                lo = lower_bound(loc[arg.first].begin(), loc[arg.first].end(), st[x]);
                ans += arg.second * (hi - lo);
            }
            cout << tc[x] + ans / 2 << '\n';
        } else assert(0);
        if (i % k == 0) bal();
    }
}
There are many cases. Here are a few problems which can be solved in O(sqrt(n)) complexity (better may also be possible):
Find whether a number is prime or not.
Grover's algorithm: allows searching (in a quantum context) an unsorted input in time proportional to the square root of the size of the input.
Factorization of a number.
There are many problems you will face which demand an O(sqrt(n)) algorithm.
As an answer to the second part: sqrt(n) complexity means that if the input size to your algorithm is n, then it performs approximately sqrt(n) basic operations (like comparisons in the case of sorting). Then we can say that the algorithm has sqrt(n) time complexity.
Let's analyze the third problem (factorization) and it will become clear.
Let n be a positive integer, and suppose x and y are positive integers such that
x*y = n.
Whatever the values of x and y, at least one of them must be at most sqrt(n). For if both were greater than sqrt(n),
x > sqrt(n) and y > sqrt(n) would give x*y > sqrt(n)*sqrt(n) = n, i.e. n > n, a contradiction.
So if we check 2 to sqrt(n), we will have considered all the factors (1 and n are trivial factors).
Code snippet:
int n;
cin >> n;
cout << 1 << " " << n << " ";  // trivial divisors
for (int i = 2; i <= sqrt(n); i++)  // or: for (int i = 2; i * i <= n; i++)
    if ((n % i) == 0)
        cout << i << " ";
Note: you might think that we could achieve the same behaviour by looping from 1 to n. Yes, that's possible, but who wants to run in O(n) a program that can run in O(sqrt(n))? We always look for the best one.
Go through the book Introduction to Algorithms by Cormen et al.
I would also suggest reading the following Stack Overflow questions and answers; they will clear up any remaining doubts :)
Are there any O(1/n) algorithms?
Plain english explanation Big-O
Which one is better?
How do you calculate big-O complexity?
This video provides a very basic beginner-level understanding of O(sqrt(n)) time complexity. It is the last example in the video, but I would suggest that you watch the whole thing.
https://www.youtube.com/watch?v=9TlHvipP5yA&list=PLDN4rrl48XKpZkf03iYFl-O29szjTrs_O&index=6
The simplest example of an O(sqrt(n)) time complexity algorithm in the video is:
p = 0;
for (i = 1; p <= n; i++) {
    p = p + i;
}
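Counting the iterations of this loop in Python (an illustrative sketch) shows the square-root growth: after k iterations p = k(k+1)/2, so the loop stops once k reaches about sqrt(2n):

```python
def triangular_loop_iterations(n):
    """Count iterations of: for (i = 1; p <= n; i++) p = p + i."""
    p, i, count = 0, 1, 0
    while p <= n:
        p += i
        i += 1
        count += 1
    return count

print(triangular_loop_iterations(100))     # 14, close to sqrt(2 * 100) = 14.1
print(triangular_loop_iterations(10000))   # 141
```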
Mr. Abdul Bari is renowned for his simple explanations of data structures and algorithms.
Primality test
Solution in JavaScript
const isPrime = n => {
    if (n <= 1) return false;   // guard: 0 and 1 are not prime
    for (let i = 2; i <= Math.sqrt(n); i++) {
        if (n % i === 0) return false;
    }
    return true;
};
Complexity
O(N^(1/2)), because for a given value of n you only need to check whether it is divisible by the numbers from 2 up to its square root.
JS Primality Test
O(sqrt(n))
A slightly more performant version, thanks to Samme Bae, for enlightening me with this. 😉
function isPrime(n) {
    if (n <= 1)
        return false;
    if (n <= 3)
        return true;
    // Skip 4, 6, 8, 9, and 10
    if (n % 2 === 0 || n % 3 === 0)
        return false;
    for (let i = 5; i * i <= n; i += 6) {
        if (n % i === 0 || n % (i + 2) === 0)
            return false;
    }
    return true;
}
isPrime(677);