Big-O Complexity of Two Problems - time-complexity

I was practicing a few Big-O complexity problems for one of my classes and these two problems seem to stump me the most.
For both of these, I need to determine the best and worst-case complexity.
Q1
function FUNC3(int array[n], int n, int key)
    int i = 1;
    while (i < n) do {
        if (key == array[0]) then
            i = i + n^0.25;
        else
            i = i + n^0.5;
    }
The best case I got was O(n / n^0.5), while my worst case was O(n / n^0.25).
Q2
function FUNC4(int array[n], int n, int key)
    for (int i=1; i<n; i = i * 2) do
        for (int j=0; j<sqrt(n); j++) do {
            if (array[0] == key) then {
                int k = 1;
                while (k < n) do
                    k = k * sqrt(n);
            }
        }
For this one, I got a best case of O(log n * sqrt(n)) and a worst case of O(log n * n).
I am not very confident in these answers, though. Do any of these look about right?

Let's visit each of these individually. Here's your first function:
function FUNC3(int array[n], int n, int key)
    int i = 1;
    while (i < n) do {
        if (key == array[0]) then
            i = i + n^0.25;
        else
            i = i + n^0.5;
    }
You are correct that the best-case runtime is Θ(n / n^0.5) and that the worst case is Θ(n / n^0.25). It might help to rewrite these by simplifying the exponents; the first runtime is
Θ(n / n^0.5) = Θ(n^0.5) = Θ(√n)
and the second runtime is
Θ(n / n^0.25) = Θ(n^0.75) = Θ(n^(3/4)).
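If you'd like to sanity-check those exponents numerically, here is a minimal C++ harness (my own addition, not part of the original problem) that models each branch by its step size and compares the resulting iteration counts against n^0.5 and n^0.75:

#include <cmath>
#include <iostream>

// Counts FUNC3's loop iterations for a given step exponent:
// step_exp = 0.5 models the best case, 0.25 the worst case.
long long count_iterations(long long n, double step_exp) {
    long long iterations = 0;
    double step = std::pow((double)n, step_exp);
    for (double i = 1; i < (double)n; i += step) iterations++;
    return iterations;
}

int main() {
    for (long long n : {1000LL, 1000000LL, 1000000000LL}) {
        std::cout << "n=" << n
                  << "  best: " << count_iterations(n, 0.5)
                  << " vs n^0.5=" << (long long)std::pow((double)n, 0.5)
                  << "  worst: " << count_iterations(n, 0.25)
                  << " vs n^0.75=" << (long long)std::pow((double)n, 0.75) << '\n';
    }
}

The counts track n^0.5 and n^0.75 closely, matching the Θ bounds above.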
Now, let's look at the second function:
function FUNC4(int array[n], int n, int key)
    for (int i=1; i<n; i = i * 2) do
        for (int j=0; j<sqrt(n); j++) do {
            if (array[0] == key) then {
                int k = 1;
                while (k < n) do
                    k = k * sqrt(n);
            }
        }
To determine the runtime, let's use the time-honored maxim
"When in doubt, work inside out!"
Let's begin with the innermost loop:
int k = 1;
while (k < n) do
    k = k * sqrt(n);
This loop is sneaky - its body never executes more than twice, because k takes the values 1, then √n, then n, at which point the loop exits. (For example, with n = 16, k goes 1 → 4 → 16.) This means that the loop does O(1) total work. As a result, we can rewrite the overall code as
function FUNC4(int array[n], int n, int key)
    for (int i=1; i<n; i = i * 2) do
        for (int j=0; j<sqrt(n); j++) do {
            if (array[0] == key) then {
                do O(1) work;
            }
        }
Since the if statement does O(1) work regardless of whether it executes, we're left with
function FUNC4(int array[n], int n, int key)
    for (int i=1; i<n; i = i * 2) do
        for (int j=0; j<sqrt(n); j++) do {
            do O(1) work;
        }
If we do O(1) work √n times, then the runtime is Θ(√n), so the inner loop becomes
function FUNC4(int array[n], int n, int key)
    for (int i=1; i<n; i = i * 2) do
        do sqrt(n) work
Since the work done in the inner loop is independent of the value of i, the total work is simply the product of the number of outer loop iterations and the work done by one iteration. The outer loop runs Θ(log n) times, so the total work is Θ(√n log n), regardless of the array contents. The best- and worst-case runtimes for the function are therefore the same, since the total work done is (asymptotically) always the same.
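As a quick sanity check, here is a small C++ sketch (my own illustration) that counts the iterations of FUNC4's two loops and compares the total with sqrt(n) * log2(n):

#include <cmath>
#include <iostream>

// Counts how many times FUNC4's inner O(1) body executes.
long long count_func4(long long n) {
    long long count = 0;
    for (long long i = 1; i < n; i *= 2)
        for (long long j = 0; j < (long long)std::sqrt((double)n); j++)
            count++;
    return count;
}

int main() {
    for (long long n : {1 << 10, 1 << 14, 1 << 18}) {
        std::cout << "n=" << n << "  counted=" << count_func4(n)
                  << "  sqrt(n)*log2(n)=" << std::sqrt((double)n) * std::log2((double)n) << '\n';
    }
}

For powers of two the two numbers agree exactly, and in general they differ by at most a constant factor.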
Hope this helps!

Related

3 Nested for loops where third loop is dependent on first time complexity

I'm trying to find the time complexity of 3 nested for loops. I'm a little lost on how to do this because the first and third loops are dependent. From what I did, I found that the pattern is n(1 + 2 + 3 + ...), so O(n^2), but I'm unsure if that's right. I'm also unsure if this includes the j loop, or whether I have to multiply my current answer by n. Any help is much appreciated.
for (int i = 0; i < n*n; i++) {
    for (int j = 0; j < n; j++) {
        for (int k = 0; k < i; k++) {
            // print some statement here
        }
    }
}
Short Answer:
Assuming the innermost loop operation is O(1), the time complexity of your code is O(n^5).
Longer Answer:
Let's start with a simpler example of 2 dependent loops:
for (int i=0; i<n; ++i) {
    for (int j=0; j<i; ++j) {
        // Some O(1) operation
    }
}
The outer loop will run n times, and the inner loop will run 0 to n-1 times depending on i; on average:
(0 + 1 + ... + (n-1)) / n = (n(n-1)/2) / n = O(n)
So the overall complexity for this simpler example is O(n^2). (Equivalently, the exact total count is 0 + 1 + ... + (n-1) = n(n-1)/2 = O(n^2).)
Now to your case:
Note that I assumed the operation in the innermost loop is done in O(1).
for (int i=0; i<n*n; i++) {
    for (int j=0; j<n; j++) {
        for (int k=0; k<i; k++) {
            // Some O(1) operation
        }
    }
}
The 1st (outermost) loop will run n^2 times.
The 2nd loop (i.e. the middle loop) will run n times.
So the two outer loops together run O(n^3) times.
The number of times the innermost loop runs is now O(n^2) on average, because its iteration count ranges over 0..n^2 - 1 (instead of 0..n - 1):
(1 + 2 + ... + n^2) / n^2 = (n^2(n^2 + 1)/2) / n^2 = O(n^2).
Therefore the overall time complexity is O(n^5).
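As a quick cross-check (an exact count I'm adding, not part of the original answer): for each value of i the innermost statement executes n * i times across the middle loop, so the total is
n * (0 + 1 + ... + (n^2 - 1)) = n * (n^2 - 1) * n^2 / 2 = (n^5 - n^3) / 2 ≈ n^5 / 2,
which agrees with O(n^5) and with the measured counters in the addendum below (e.g. (10^5 - 10^3)/2 = 49500 for n = 10).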
Addendum:
The code below is not in any way a proof of the complexity, since measuring specific values of n proves nothing about the asymptotic behavior of the time function, but it can give you a "feel" for the number of operations that are done.
#include <iostream>
#include <cstdint>  // for int64_t

void test(int n)
{
    int64_t counter = 0;
    for (int i = 0; i < n * n; i++) {
        for (int j = 0; j < n; j++) {
            for (int k = 0; k < i; k++) {
                counter++;
            }
        }
    }
    std::cout << "n:" << n << ", counter:" << counter << std::endl;
}

int main()
{
    test(10);
    test(100);
    test(1000);  // ~5e14 increments; compile with optimizations enabled
}
Output:
n:10, counter:49500
n:100, counter:4999500000
n:1000, counter:499999500000000
I believe it is quite clear that the number of operations is close to n^5/2, and since constant factors like 1/2 are dropped in big-O notation, the complexity is O(n^5).

Am I calculating the big-O correctly?

The following loops:
for(var i = 0; i < A; i++) {
    for(var j = 0; j < B; j++) {
        for(var k = 0; k < C; k++) {
            //not concerned with instructions here
        }
    }
}
As I understand it, each loop's complexity is 2n+2, so based on that I calculate the complexity of the above nested loops to be (2A+2) * (2B+2) * (2C+2). Is this correct? If so, how do I get the big-O out of it?
Edit 1
I've learned so much about big-O since this question was asked and have found an interesting visualization that I'd like to place here in case others come across this thread. For a detailed reference (way better than student textbooks) and the original drawing, check out Wikipedia. There are a variety of time complexities explained there.
Since the original question involves three nested loops each with a different n, the big-O is O(A * B * C), as mentioned in the answers. The difficulty arises when we try to determine the big-O of something like the following, where A is an array of objects (a.k.a. a hash in some languages). The algorithm itself is nonsense and is for demonstration only (although I've been asked nonsense in interviews before):
var cache = {}
for(var i = 0; i < A.length; i++) {
    var obj = A[i]
    if(!obj.someProperty) {
        continue;
    }
    else if(cache[obj.someProperty]) {
        return obj;
    }
    else if(obj.someProperty === 'some value') {
        for(var j = 1; j < A.length; j++) {
            if(A[j].someProperty === obj.someProperty) {
                cache[obj.someProperty] = obj.someProperty
                break
            }
        }
    }
    else {
        for(var j = i; j < A.length; j++) {
            //do something linear here
        }
    }
}
The outer loop is O(A.length). For the inner branches:
1. obj.someProperty does not exist: no extra complexity per theory.
2. obj.someProperty is in the cache: no extra complexity per theory.
3. obj.someProperty is equal to 'some value', either:
3.1. we have O(A.length - 1) where there are no duplicates, or
3.2. we have O(A.length - x), where A.length - x refers to a duplicate's index within A.
4. Everything else: we have O(log A.length).
At best, this algorithm gives us O(3) (i.e. constant time) when A[0] and A[1] are considered duplicates and A[0].someProperty === 'some value': one iteration of the outer loop, one iteration of the inner loop (case 3.2, with A.length - x = index 1), and finally a 3rd iteration that returns the cached value, breaking out of the outer loop entirely. At worst we'll have O(A.length log A.length), as the outer loop and the inner loop of case 4 are exhausted when no object has someProperty === 'some value'.
To "optimize" this algorithm we can simply write as follows:
for(var i = 0; i < A.length; i++) {
    if(A[i].someProperty === 'some value') {
        return obj
    }
    else {
        for(var j = i; j < A.length; j++) {
            //do something linear here
        }
    }
}
The outermost for-loop runs a total of A times. For each iteration, the second-level for-loop runs B times, each time triggering C iterations of the omitted instructions.
Thus, the time complexity is O(A * B * C).
Constants are ignored while calculating the time complexity
O((2A+2) * (2B+2) * (2C+2))
=> O(2A * 2B * 2C)
=> O(8 * ABC)
=> O(ABC)
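To make the constant-dropping concrete, here is a tiny C++ sketch (an added illustration with arbitrary sizes A, B, C) showing that the innermost statement executes exactly A * B * C times:

#include <iostream>

int main() {
    const int A = 7, B = 11, C = 13;  // arbitrary example sizes
    long long counter = 0;
    for (int i = 0; i < A; i++)
        for (int j = 0; j < B; j++)
            for (int k = 0; k < C; k++)
                counter++;
    std::cout << counter << " == " << (long long)A * B * C << '\n';  // prints 1001 == 1001
}

The per-loop overhead (the 2n+2 bookkeeping) only changes the constant factor, which is why it disappears in O(A * B * C).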

Binary search to solve 'Kth Smallest Element in a Sorted Matrix'. How can one ensure the correctness of the algorithm?

I'm referring to the leetcode question: Kth Smallest Element in a Sorted Matrix
There are two well-known solutions to the problem. One using Heap/PriorityQueue and other is using Binary Search. The Binary Search solution goes like this (top post):
public class Solution {
    public int kthSmallest(int[][] matrix, int k) {
        int lo = matrix[0][0], hi = matrix[matrix.length - 1][matrix[0].length - 1] + 1; // [lo, hi)
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            int count = 0, j = matrix[0].length - 1;
            for (int i = 0; i < matrix.length; i++) {
                while (j >= 0 && matrix[i][j] > mid) j--;
                count += (j + 1);
            }
            if (count < k) lo = mid + 1;
            else hi = mid;
        }
        return lo;
    }
}
While I understand how this works, I have trouble figuring out one issue.
How can we be sure that the returned lo is always in the matrix?
Since the search space is the range between the min and max values of the matrix, mid need NOT be a value that is in the matrix. However, the returned lo always is.
Why is this happening?
For the sake of argument, we can move the calculation of count to a separate function like the following:
boolean valid(int mid, int[][] matrix, int k) { // assumes an m x m matrix
    int count = 0, m = matrix.length;
    for (int i = 0; i < m; i++) {
        int j = 0;
        while (j < m && matrix[i][j] <= mid) j++;
        count += j;
    }
    return (count < k);
}
This predicate performs exactly the same counting as your original loop. The loop invariant is that the range [lo, hi] always contains the kth smallest number of the 2D matrix.
In other words, lo <= solution <= hi.
Now, when the loop terminates, it is evident that lo >= hi.
Merging those two properties, we get lo = solution = hi. Since the solution is a member of the matrix, lo is always in the matrix after loop termination and rightly points to the kth smallest element.
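As an illustration, here is the same binary search transcribed to C++ (my own sketch, using the familiar LeetCode sample matrix), so you can check that the returned lo is indeed an element of the matrix:

#include <iostream>
#include <vector>

// Binary search on the value range; counts entries <= mid per candidate.
int kthSmallest(const std::vector<std::vector<int>>& matrix, int k) {
    int n = matrix.size();
    int lo = matrix[0][0], hi = matrix[n - 1][n - 1] + 1;  // [lo, hi)
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        int count = 0, j = n - 1;
        for (int i = 0; i < n; i++) {
            while (j >= 0 && matrix[i][j] > mid) j--;
            count += j + 1;
        }
        if (count < k) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}

int main() {
    std::vector<std::vector<int>> matrix = {{1, 5, 9}, {10, 11, 13}, {12, 13, 15}};
    // Sorted order: 1, 5, 9, 10, 11, 12, 13, 13, 15 -> the 8th smallest is 13.
    std::cout << kthSmallest(matrix, 8) << '\n';  // prints 13, an element of the matrix
}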
Because we are finding the lower bound using binary search, and there cannot be any number smaller than lo in the matrix which could be the kth smallest element.

Time complexity (studying for exam)

I am currently studying for an exam on algorithms and I am trying to solve a question about time complexity in Java, but can't really figure out how to do it. I am supposed to calculate the expected time complexity. N is a positive integer.
for (int i = 0; i < N; i++)
    for (int j = i + 1; j < N; j++) {
        int x = j + 1; int h = N - 1; int k;
        while (x < h) {
            k = (x + h) / 2;
            if (a[i] + a[j] + a[k] == 0) { cnt++; break; }
            if (a[i] + a[j] + a[k] < 0) x = k + 1;
            else h = k - 1;
        }
    }
The first for loop should run N times and the second should run N-1 times. Since x is j+1, I guessed that x = N-2. I don't know how to reason about the while loop after that, or whether I have done anything right so far. Would really appreciate help!
Create your time complexity function in parts.
for (int i = 0; i < N; i++)                  // runs N times: O(N)
    for (int j = i + 1; j < N; j++) {        // runs at most N-1 times; the -1 is irrelevant in big-O, so O(N)
        int x = j + 1; int h = N - 1; int k; // 3 x O(1)
        while (x < h) {                      // binary search: [x, h] is roughly halved each iteration, so O(log N), not O(N)
            k = (x + h) / 2;                 // O(1)
            if (a[i] + a[j] + a[k] == 0) {   // O(1); if true we break out of the while loop
                cnt++;                       // O(1)
                break;                       // O(1)
            }
            if (a[i] + a[j] + a[k] < 0) {    // O(1)
                x = k + 1;                   // O(1)
            } else {
                h = k - 1;                   // O(1)
            }
        }
    }
So in summary, T(N) is O(N^2 log N) in the worst case and Ω(N^2) in the best case (even when the break triggers immediately, the two nested loops still execute about N^2/2 times).
More specifically, the two outer loops produce about N(N-1)/2 pairs (i, j), and for each pair the while loop performs a binary search over at most N positions, which takes O(log N) time.
We are only interested in worst and best cases.
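If you want an empirical feel for the bound, here is a small C++ harness (my own addition, not from the original answer) that counts the while-loop iterations on a worst-case input where the triple sum is never zero:

#include <iostream>
#include <vector>

int main() {
    // All elements are 1, so a[i]+a[j]+a[k] is never 0 and every
    // binary search runs to completion (a worst-case input).
    for (int N : {100, 200, 400, 800}) {
        std::vector<int> a(N, 1);
        long long iterations = 0;
        int cnt = 0;
        for (int i = 0; i < N; i++)
            for (int j = i + 1; j < N; j++) {
                int x = j + 1, h = N - 1, k;
                while (x < h) {
                    iterations++;
                    k = (x + h) / 2;
                    if (a[i] + a[j] + a[k] == 0) { cnt++; break; }
                    if (a[i] + a[j] + a[k] < 0) x = k + 1;
                    else h = k - 1;
                }
            }
        std::cout << "N=" << N << "  while-iterations=" << iterations << '\n';
    }
}

Doubling N roughly quadruples the count (times a slowly growing factor), which is consistent with Θ(N^2 log N) and clearly below Θ(N^3).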
Big-O notation can be confusing at first; formally, T(N) = O(g(N)) means that there exist positive constants c and N0 such that T(N) <= c * g(N) for all N >= N0.
I hope this answer helps, even a little bit...

When can an algorithm have square root(n) time complexity?

Can someone give me an example of an algorithm that has square root(n) time complexity? What does square root time complexity even mean?
Square root time complexity means that the algorithm requires O(N^(1/2)) evaluations where the size of input is N.
As an example for an algorithm which takes O(sqrt(n)) time, Grover's algorithm is one which takes that much time. Grover's algorithm is a quantum algorithm for searching an unsorted database of n entries in O(sqrt(n)) time.
Let us take an example to understand how we can arrive at O(sqrt(N)) runtime complexity for a given problem. This is going to be elaborate, but is interesting to understand. (The following example, in the context of answering this question, is taken from Coding Contest Byte: The Square Root Trick, a very interesting problem and an interesting trick to arrive at O(sqrt(n)) complexity.)
Given an array A containing n elements, implement a data structure for point updates and range sum queries:
update(i, x) -> A[i] := x (point update query)
query(lo, hi) -> returns A[lo] + A[lo+1] + ... + A[hi] (range sum query)
The naive solution uses an array. It takes O(1) time for an update (array-index access) and O(hi - lo) = O(n) for the range sum (iterating from the start index to the end index and adding up).
A more efficient solution splits the array into slices of length k and stores the slice sums in an array S.
The update takes constant time, because we have to update the value in A and the corresponding slice sum in S. For example, with slice length k = 4 (as in the query example below), update(6, 5) changes A[6] to 5, which means S[1] (the slice covering indices 4..7) must be recomputed to keep S up to date.
The range-sum query is interesting. The elements of the first and last slices (only partially contained in the queried range) have to be traversed one by one, but for slices completely contained in our range we can use the values in S directly, which gives a performance boost.
In query(2, 14) we get,
query(2, 14) = A[2] + A[3]+ (A[4] + A[5] + A[6] + A[7]) + (A[8] + A[9] + A[10] + A[11]) + A[12] + A[13] + A[14] ;
query(2, 14) = A[2] + A[3] + S[1] + S[2] + A[12] + A[13] + A[14] ;
query(2, 14) = 0 + 7 + 11 + 9 + 5 + 2 + 0; (plugging in the concrete array and slice values from the linked article)
query(2, 14) = 34;
The code for update and query is:
def update(S, A, i, k, x):
    S[i // k] = S[i // k] - A[i] + x
    A[i] = x

def query(S, A, lo, hi, k):
    s = 0
    i = lo
    # Section 1: partial first slice, summed element by element from A
    while i % k != 0 and i <= hi:
        s += A[i]
        i += 1
    # Section 2: whole slices, taken directly from S
    while i + k <= hi:
        s += S[i // k]
        i += k
    # Section 3: remaining elements, summed element by element from A
    while i <= hi:
        s += A[i]
        i += 1
    return s
Let us now determine the complexity.
Each query takes, on average:
Section 1: k/2 time on average (you iterate through at most about k elements of the first partial slice).
Section 2: n/k time, which is basically the number of slices.
Section 3: k/2 time on average (at most about k elements of the last partial slice).
So, in total, we get k/2 + n/k + k/2 = k + n/k time.
And this is minimized for k = sqrt(n): sqrt(n) + n/sqrt(n) = 2*sqrt(n).
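To see why k = sqrt(n) is the minimizer (a standard step, added for completeness): setting the derivative of f(k) = k + n/k to zero gives
f'(k) = 1 - n/k^2 = 0  =>  k^2 = n  =>  k = sqrt(n).
Alternatively, by the AM-GM inequality, k + n/k >= 2 * sqrt(k * (n/k)) = 2 * sqrt(n), with equality exactly when k = n/k.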
So we get an O(sqrt(n)) time complexity query.
Prime numbers
As mentioned in some other answers, some basic tasks related to prime numbers take O(sqrt(n)) time (see the sketch after this list):
Find number of divisors
Find sum of divisors
Find Euler's totient
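For instance, here is a minimal C++ sketch (an added illustration) that counts and sums the divisors of n in O(sqrt(n)) by pairing every divisor d <= sqrt(n) with its cofactor n/d:

#include <iostream>

int main() {
    long long n = 36, count = 0, sum = 0;
    for (long long d = 1; d * d <= n; d++) {
        if (n % d == 0) {
            count += 2; sum += d + n / d;           // d and its cofactor n/d
            if (d * d == n) { count--; sum -= d; }  // perfect square: count sqrt(n) once
        }
    }
    std::cout << "n=" << n << "  divisors=" << count << "  sum=" << sum << '\n';
    // 36 has 9 divisors (1, 2, 3, 4, 6, 9, 12, 18, 36) summing to 91.
}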
Below I mention two advanced algorithms which also bear a sqrt(n) term in their complexity.
Mo's Algorithm
Try this problem: Powerful array
My solution:
#include <bits/stdc++.h>
using namespace std;

const int N = 1e6 + 10, k = 500;  // k = block size for Mo's query ordering

struct node {
    int l, r, id;
    bool operator<(const node &a) const {
        if (l / k == a.l / k) return r < a.r;
        else return l < a.l;
    }
} q[N];

long long a[N], cnt[N], ans[N], cur_count;

// cur_count maintains the sum of cnt[v]^2 * v over the current window.
void add(int pos) {
    cur_count += a[pos] * cnt[a[pos]];
    ++cnt[a[pos]];
    cur_count += a[pos] * cnt[a[pos]];
}

void rm(int pos) {
    cur_count -= a[pos] * cnt[a[pos]];
    --cnt[a[pos]];
    cur_count -= a[pos] * cnt[a[pos]];
}

int main() {
    int n, t;
    cin >> n >> t;
    for (int i = 1; i <= n; i++) {
        cin >> a[i];
    }
    for (int i = 0; i < t; i++) {
        cin >> q[i].l >> q[i].r;
        q[i].id = i;
    }
    sort(q, q + t);
    memset(cnt, 0, sizeof(cnt));
    memset(ans, 0, sizeof(ans));
    int curl(0), curr(0), l, r;
    for (int i = 0; i < t; i++) {
        l = q[i].l;
        r = q[i].r;
        /* This part takes O(n * sqrt(n)) time in total */
        while (curl < l)
            rm(curl++);
        while (curl > l)
            add(--curl);
        while (curr > r)
            rm(curr--);
        while (curr < r)
            add(++curr);
        ans[q[i].id] = cur_count;
    }
    for (int i = 0; i < t; i++) {
        cout << ans[i] << '\n';
    }
    return 0;
}
Query Buffering
Try this problem: Queries on a Tree
My solution:
#include <bits/stdc++.h>
using namespace std;

const int N = 2e5 + 10, k = 333;  // k = how often the buffered updates are flushed

vector<int> t[N], ht;
int tm_, h[N], st[N], nd[N];

// Computes the depth h[] of every vertex (declared void: it returns nothing).
inline void hei(int v, int p) {
    for (int ch : t[v]) {
        if (ch != p) {
            h[ch] = h[v] + 1;
            hei(ch, v);
        }
    }
}

// Euler tour: records entry/exit times st[]/nd[] and the depth sequence ht.
inline void tour(int v, int p) {
    st[v] = tm_++;
    ht.push_back(h[v]);
    for (int ch : t[v]) {
        if (ch != p) {
            tour(ch, v);
        }
    }
    ht.push_back(h[v]);
    nd[v] = tm_++;
}

int n, tc[N];
vector<int> loc[N];
long long balance[N];
vector<pair<long long, long long>> buf;

inline long long cbal(int v, int p) {
    long long ans = balance[h[v]];
    for (int ch : t[v]) {
        if (ch != p) {
            ans += cbal(ch, v);
        }
    }
    tc[v] += ans;
    return ans;
}

inline void bal() {
    memset(balance, 0, sizeof(balance));
    for (auto arg : buf) {
        balance[arg.first] += arg.second;
    }
    buf.clear();
    cbal(1, 1);
}

int main() {
    int q;
    cin >> n >> q;
    for (int i = 1; i < n; i++) {
        int x, y; cin >> x >> y;
        t[x].push_back(y); t[y].push_back(x);
    }
    hei(1, 1);
    tour(1, 1);
    for (size_t i = 0; i < ht.size(); i++) {
        loc[ht[i]].push_back(i);
    }
    vector<int>::iterator lo, hi;
    int x, y, type;
    for (int i = 0; i < q; i++) {
        cin >> type;
        if (type == 1) {
            cin >> x >> y;
            buf.push_back(make_pair(x, y));
        }
        else if (type == 2) {
            cin >> x;
            long long ans(0);
            for (auto arg : buf) {
                hi = upper_bound(loc[arg.first].begin(), loc[arg.first].end(), nd[x]);
                lo = lower_bound(loc[arg.first].begin(), loc[arg.first].end(), st[x]);
                ans += arg.second * (hi - lo);
            }
            cout << tc[x] + ans / 2 << '\n';
        }
        else assert(0);
        if (i % k == 0) bal();
    }
}
There are many cases.
Here are a few problems which can be solved in O(sqrt(n)) complexity (better bounds may also be possible):
1. Find whether a number is prime or not.
2. Grover's Algorithm: allows search (in the quantum context) on an unsorted input in time proportional to the square root of the size of the input.
3. Factorization of a number.
There are many problems that you will face which will demand use of a sqrt(n) complexity algorithm.
As an answer to the second part: sqrt(n) complexity means that if the input size to your algorithm is n, then the algorithm performs approximately sqrt(n) basic operations (like comparisons, in the case of sorting). Then we can say that the algorithm has sqrt(n) time complexity.
Let's analyze the 3rd problem, and it will become clear.
Let n be a positive integer, and suppose there exist two positive integers x and y such that
x * y = n.
Now, whatever the values of x and y, at least one of them will be less than or equal to sqrt(n). For if both were greater:
x > sqrt(n) and y > sqrt(n) imply x * y > sqrt(n) * sqrt(n) => n > n, a contradiction.
So if we check 2 to sqrt(n), we will have considered all the factors (1 and n are the trivial factors).
Code snippet:
int n;
cin >> n;
cout << 1 << " " << n << " ";  // the trivial factors
for (int i = 2; i <= sqrt(n); i++)  // or: for (int i = 2; i * i <= n; i++)
    if ((n % i) == 0)
        cout << i << " ";
Note: You might think that, by not considering the duplicate factors, we could also achieve the above behaviour by looping from 1 to n. Yes, that's possible, but who wants to run a program in O(n) when it can run in O(sqrt(n))? We always look for the best one.
Go through the book Introduction to Algorithms by Cormen et al.
I would also ask you to read the following Stack Overflow questions and answers; they will clear up all the doubts for sure :)
Are there any O(1/n) algorithms?
Plain english explanation Big-O
Which one is better?
How do you calculate big-O complexity?
This link provides a very basic, beginner-level understanding of O(sqrt(n)) time complexity. It is the last example in the video, but I would suggest that you watch the whole video.
https://www.youtube.com/watch?v=9TlHvipP5yA&list=PLDN4rrl48XKpZkf03iYFl-O29szjTrs_O&index=6
The simplest example of an O(sqrt(n)) time complexity algorithm in the video is:
p = 0;
for (i = 1; p <= n; i++) {
    p = p + i;
}
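To see why this is O(sqrt(n)): after i iterations, p = 1 + 2 + ... + i = i(i + 1)/2, and the loop stops as soon as i(i + 1)/2 > n. That first happens around i ≈ sqrt(2n), so the number of iterations grows proportionally to sqrt(n).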
Mr. Abdul Bari is renowned for his simple explanations of data structures and algorithms.
Primality test
Solution in JavaScript
const isPrime = n => {
    if (n <= 1) return false;  // 0 and 1 are not prime
    for (let i = 2; i <= Math.sqrt(n); i++) {
        if (n % i === 0) return false;
    }
    return true;
};
Complexity
O(N^(1/2)), because for a given value of n you only need to check whether it is divisible by the numbers from 2 up to its square root.
JS Primality Test
O(sqrt(n))
A slightly more performant version (thanks to Samme Bae for enlightening me with this). 😉
function isPrime(n) {
    if (n <= 1)
        return false;
    if (n <= 3)
        return true;
    // Eliminates multiples of 2 and 3 (covers 4, 6, 8, 9, and 10)
    if (n % 2 === 0 || n % 3 === 0)
        return false;
    // Test divisors of the form 6k - 1 and 6k + 1 up to sqrt(n)
    for (let i = 5; i * i <= n; i += 6) {
        if (n % i === 0 || n % (i + 2) === 0)
            return false;
    }
    return true;
}
isPrime(677);