How to find the time complexity of the following code?

Could you explain how to find the time complexity of the following code? Any help is appreciated.
int boo(int n) {
    if (n > 0) {
        return 1 + boo(n / 2) + boo(n / 2);
    } else {
        return 0;
    }
}

Sometimes it is good to write it down. At the top level there is one call, which computes 1 + boo(n/2) + boo(n/2). Each of those boo(n/2) calls in turn makes two calls on n/4, and so on, until the argument reaches 0. So the number of calls doubles at every level, but there are only about log2(n) levels, because the argument is halved each time. Summing the levels gives 1 + 2 + 4 + ... + 2^(log2 n) ≈ 2n calls, each doing constant work: the exponential branching and the logarithmic depth cancel each other out, and you get O(n).
PS: It is enough to count the last level of the recursion tree. The rest of the tree always has exactly one node fewer than that level, so the whole tree is at most twice its size, and a constant factor of two is negligible in complexity analysis.
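If you want to check this empirically, here is a minimal Java sketch (the instrumentation and class name are mine, not from the question) that counts the calls; the ratio of calls to n settles near a constant (about 4 for these inputs), which is exactly the linear growth described above:

public class BooCount {
    static long calls = 0;

    static int boo(int n) {
        calls++;                              // count every invocation
        if (n > 0) {
            return 1 + boo(n / 2) + boo(n / 2);
        }
        return 0;
    }

    public static void main(String[] args) {
        for (int n : new int[]{1_000, 10_000, 100_000}) {
            calls = 0;
            boo(n);
            System.out.printf("n=%d calls=%d calls/n=%.2f%n", n, calls, (double) calls / n);
        }
    }
}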

Related

Iterative solution to compute powers

I am working on developing efficient iterative code to compute m^n. After some thinking and googling I found this code:
public static int power(int n, int m)
// Efficiently calculates m to the power of n iteratively
{
    int pow = m, acc = 1, count = n;
    while (count != 0)
    {
        if (count % 2 == 1)
            acc = acc * pow;
        pow = pow * pow;
        count = count / 2;
    }
    return acc;
}
This logic makes sense to me except for one thing: why do we square the value of pow at the end of each iteration? I am familiar with the similar recursive approach, but the squaring does not look intuitive to me. Can I kindly get some help here? An example with an explanation would be really helpful.
The base (pow) is squared each iteration because count (the remaining exponent) is halved each iteration.
If the count is odd, the accumulator is multiplied by the current base. This relies on integer arithmetic, which discards the fractional part of a division, so halving an odd count effectively also decrements it by 1.
This is a very tricky solution to understand. I was solving this problem on LeetCode and found this iterative solution; I spent a whole day understanding this beautiful approach. The main difficulty is that the iterative solution does not work the same way as its recursive counterpart.
Let's pick an example to demonstrate. But first I have to rewrite your code, renaming some variables, because the names in the given code are confusing.
// Find m^n
public static int power(int n, int m)
{
    int pow = n, result = 1, base = m;
    while (pow > 0)
    {
        if (pow % 2 == 1) result = result * base;
        base = base * base;
        pow = pow / 2;
    }
    return result;
}
Let's understand the beauty of it step by step. Say base = 2 and power = 10.
Each step below shows the calculation, whether the power is odd or even, and what happens:

2^10 = (2*2)^5 = 4^5 (even): we change the base to 4 and the power to 5, so it is now enough to find 4^5. The base is multiplied by itself and the power is halved.
4^5 = 4 * 4^4 (odd): we separate out a single factor of 4, the base of the current iteration, and store it in the result variable. We will now find the value of 4^4 and then multiply it by result.
4^4 = (4*4)^2 = 16^2 (even): we change the base to 16 and the power to 2. It is now enough to find 16^2.
16^2 = (16*16)^1 = 256^1 (even): we change the base to 256 and the power to 1. It is now enough to find 256^1.
256^1 = 256 * 256^0 (odd): we separate out a single factor of 256, the base of the current iteration (this value is the evaluation of 4^4), and multiply it into the previous result, giving result = 4 * 256 = 1024 = 2^10. We continue evaluating the remaining 256^0.
256^0 (power is zero): stop the iteration.
So, after translating the process into pseudocode, it looks similar to this:

If power is even:
    base = base * base
    power /= 2
If power is odd:
    result = result * base
    power -= 1
Now let's make another observation: floor(5 / 2) and (5 - 1) / 2 are the same. So for an odd power we can simply set power = power / 2 instead of power -= 1, because integer division performs the decrement and the halving in one step. One caveat: the multiplication into result must happen before the base is squared, exactly as in the code above. The pseudocode then becomes:

If power is odd:
    result = result * base
Then, whether power was odd or even:
    base = base * base
    power /= 2
I hope you now see what is going on behind the scenes.
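As a sanity check on the explanation above, here is a minimal, self-contained Java sketch (the class name and the assertion are mine, not from the answer) that asserts the loop invariant result * base^pow == m^n at the top of every iteration; run it with java -ea to enable assertions. It assumes inputs small enough that neither the int arithmetic nor the double comparison loses precision:

public class PowerCheck {
    // Exponentiation by squaring: at the top of every iteration,
    // result * base^pow == m^n holds.
    public static int power(int n, int m) {
        int pow = n, result = 1, base = m;
        while (pow > 0) {
            assert result * Math.pow(base, pow) == Math.pow(m, n);
            if (pow % 2 == 1) result = result * base; // peel off one factor
            base = base * base;                       // square the base
            pow = pow / 2;                            // halve the exponent
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(power(10, 2)); // prints 1024
    }
}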

What is the time complexity of the function below?

I was reading a book about competitive programming and encountered a problem where we have to count all possible paths in an n*n matrix.
The conditions are:
1. Every cell must be visited exactly once (no cell may be left unvisited or visited more than once)
2. The path must start at (1,1) and end at (n,n)
3. The possible moves are right, left, up, and down from the current cell
4. You cannot go out of the grid
This is my code for the problem:
typedef long long ll;

ll path_count(ll n, vector<vector<bool>>& done, ll r, ll c) {
    ll count = 0;
    done[r][c] = true;
    if (r == (n - 1) && c == (n - 1)) {
        // Accept the path only if every cell has been visited.
        for (ll i = 0; i < n; i++) {
            for (ll j = 0; j < n; j++) {
                if (!done[i][j]) {
                    done[r][c] = false;
                    return 0;
                }
            }
        }
        count++;
    }
    else {
        if ((r + 1) < n  && !done[r + 1][c]) count += path_count(n, done, r + 1, c);
        if ((r - 1) >= 0 && !done[r - 1][c]) count += path_count(n, done, r - 1, c);
        if ((c + 1) < n  && !done[r][c + 1]) count += path_count(n, done, r, c + 1);
        if ((c - 1) >= 0 && !done[r][c - 1]) count += path_count(n, done, r, c - 1);
    }
    done[r][c] = false;
    return count;
}
If we define a recurrence relation here, it could be: T(n) = 4T(n-1) + n^2.
Is this recurrence relation correct? I don't think so, because the master theorem would then give O(4^n * n^2), and I don't believe the code can be of that order.
The reason I say this is that for a 7*7 matrix the code takes around 110.09 seconds, and for n = 7, O(4^n * n^2) should not take that long: the approximate instruction count would be 4^7 * 7^2 = 802816 ≈ 10^6, and that many instructions should run almost instantly. So I conclude that my recurrence relation is wrong.
The code outputs 111712 for n = 7, which matches the book's answer, so the code itself is correct.
So what is the correct time complexity?
No, the complexity is not O(4^n * n^2).
Consider the 4^n in your notation. It would mean recursing to a depth of at most n (7 in your case) with 4 choices at each level. But that is not what happens: at the 8th level you still have choices of where to go next. In fact, you keep branching until a complete path is found, and a complete path has length n^2, so the recursion depth is n^2, not n.
A non-tight bound is therefore O(4^(n^2) * n^2). This bound is far from tight, however, because it assumes you have 4 valid choices at every recursive call, which is not the case.
I am not sure how much tighter it can be made, but a first attempt drops it to O(3^(n^2) * n^2), since you can never move back to the cell you just came from. This bound is still far from optimal.
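If you want to reproduce the timing yourself, here is a direct Java port of the code above (a sketch; the original is C++, and the class name and timing harness are mine). Since it mirrors the original exactly, for n = 7 it should print 111712, the same count the question reports:

public class GridPaths {
    // Brute-force DFS that counts paths from (0,0) to (n-1,n-1)
    // visiting every cell exactly once.
    static long pathCount(int n, boolean[][] done, int r, int c) {
        long count = 0;
        done[r][c] = true;
        if (r == n - 1 && c == n - 1) {
            boolean allVisited = true;
            for (int i = 0; i < n && allVisited; i++)
                for (int j = 0; j < n; j++)
                    if (!done[i][j]) { allVisited = false; break; }
            if (allVisited) count = 1;
        } else {
            if (r + 1 < n  && !done[r + 1][c]) count += pathCount(n, done, r + 1, c);
            if (r - 1 >= 0 && !done[r - 1][c]) count += pathCount(n, done, r - 1, c);
            if (c + 1 < n  && !done[r][c + 1]) count += pathCount(n, done, r, c + 1);
            if (c - 1 >= 0 && !done[r][c - 1]) count += pathCount(n, done, r, c - 1);
        }
        done[r][c] = false;
        return count;
    }

    public static void main(String[] args) {
        int n = 7;
        long start = System.nanoTime();
        long paths = pathCount(n, new boolean[n][n], 0, 0);
        System.out.printf("n=%d: %d paths in %.2f s%n",
                n, paths, (System.nanoTime() - start) / 1e9);
    }
}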

Determining growth function and Big O

Before anyone asks: yes, this was a previous test question that I got wrong, and I knew I got it wrong, because I honestly just don't understand growth functions and Big O. I've read the technical definitions; I know what they are, but not how to calculate them. My textbook gives examples based on real-life situations, but I still find it hard to interpret code. If someone could walk me through their thought process when determining these, that would seriously help (e.g. "this section of code tells me to multiply n by x", etc.).
public static int sort(int lowI, int highI, int nums[]) {
    int i = lowI;
    int j = highI;
    int pivot = nums[lowI + (highI - lowI) / 2];
    int counter = 0;
    while (i <= j) {
        while (nums[i] < pivot) {
            i++;
            counter++;
        }
        while (nums[j] > pivot) {
            j--;
            counter++;
        }
        counter++;
        if (i <= j) {
            NumSwap(i, j, nums); // saves nums[i] to temp and makes nums[i] = nums[j], nums[j] = temp
            i++;
            j--;
        }
    }
    if (lowI < j) {
        return counter + sort(lowI, j, nums);
    }
    if (i < highI) {
        return counter + sort(i, highI, nums);
    }
    return counter;
}
It might help for you to read some explanations of Big-O. I think of Big-O as the number of "basic operations" computed as the "input size" increases. For sorting algorithms, "basic operations" usually means comparisons (or counter increments, in your case), and the "input size" is the size of the list to sort.
When I analyze for runtime, I'll start by mentally dividing the code into sections. I ignore one-off lines (like int i = lowI;) because they're only run once, and Big-O doesn't care about constants (though, note in your case that int i = lowI; runs once with each recursion, so it's not only run once overall).
For example, I'd mentally divide your code into three overall parts to analyze: there's the main while loop while (i <= j), the two while loops inside of it, and the two recursive calls at the end. How many iterations will those loops run for, depending on the values of i and j? How many times will the function recurse, depending on the size of the list?
If I'm having trouble thinking about all these different parts at once, I'll isolate them. For example, how long will one of the inner while loops run for, depending on the values of i and j? Then, how long does the outer while loop run for?
Once I've thought about the runtime of each part, I'll bring them back together. At this stage, it's important to think about the relationships between the different parts. "Nested" relationships (i.e. the nested block loops a bunch of times each time the outer thing loops once) usually mean the run times are multiplied. For example, since the inner while loops are nested within the outer while loop, the total number of iterations is (inner run time + other inner) * outer; the total run time here would then look something like ((inner + other inner) * outer) * recursions.
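To make the "nested relationships multiply" rule concrete, here is a toy Java sketch (entirely illustrative, not from the question): the inner loop runs n times for each of the outer loop's n iterations, so the counter ends at n * n, which is exactly the O(n^2) shape described above:

public class NestedLoopCount {
    public static void main(String[] args) {
        int n = 1000;
        long ops = 0;
        for (int i = 0; i < n; i++) {       // outer loop: n iterations
            for (int j = 0; j < n; j++) {   // inner loop: n iterations each
                ops++;                      // one "basic operation"
            }
        }
        System.out.println(ops);            // prints 1000000 = n * n
    }
}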

Calculating the expected running time of function

I have a question about calculating the expected running time of a given function. I understand just fine how to calculate code fragments with loops in them (for / while / if, etc.), but functions without them seem a bit odd to me. For example, let's say we have the following code fragment:
public void Add(T item)
{
    var newArr = new T[this.arr.Length + 1];
    Array.Copy(this.arr, newArr, this.arr.Length);
    newArr[newArr.Length - 1] = item;
    this.arr = newArr;
}
If my logic is correct, the function Add has a complexity of O(1), because in the best/worst/average case it just executes each line of code once, right?
You always have to consider the time complexity of the function calls, too. I don't know how Array.Copy is implemented, but I'm going to guess it's O(N), making the whole Add function O(N) as well. Your intuition is right, though - the rest of it is in fact O(1).
If there are multiple sub-operations, e.g. O(n) + O(log n), the costliest step determines the cost of the whole operation (by default, big O refers to the worst case). Here, since you copy the whole array, Add is an O(n) operation.
Complexity can be roughly estimated with these two rules:
- Calling a method: complexity + 1
- Encountering one of the keywords if, while, repeat, for, &&, ||, catch, case, etc.: complexity + 1
In your case, since you are copying an entire array rather than a single value, the algorithm performs N copy operations, giving an O(N) operation.
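To see the O(N) cost concretely, here is a minimal Java analogue of the C# method above (Arrays.copyOf plays the role of Array.Copy; the class and field names are mine): every call copies the entire backing array before appending, so a single add touches all N existing elements:

import java.util.Arrays;

public class GrowByOne {
    private int[] arr = new int[0];

    // O(N) per call: the whole backing array is copied every time.
    public void add(int item) {
        int[] newArr = Arrays.copyOf(arr, arr.length + 1); // copies all N elements
        newArr[newArr.length - 1] = item;
        arr = newArr;
    }

    public static void main(String[] args) {
        GrowByOne list = new GrowByOne();
        for (int i = 0; i < 5; i++) list.add(i);
        System.out.println(Arrays.toString(list.arr)); // [0, 1, 2, 3, 4]
    }
}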

Complexity of subset sum solver algorithm

I wrote a subset sum solver, but I am still confused about its complexity.
Please find the algorithm here:
http://www.vinaysofs.com/P=NP%20proved%20with%20Subset%20Sum%20Problem%20solution%20manuscript.pdf
Basically, the essence is:

i = 0;
func1(i)
{
    if (func1(i + 1))
        return true;
    else
        func1(i + 2 /* with a few modifications to the other arguments */);
}
Here, i is the index of an element in an integer set such as {1, 1, 1, 1, 1, 1}.
The worst case for this algorithm occurs when all elements are 1 and we ask for a sum that is 1 more than the total sum of all elements.
I want to know the complexity of this algorithm; someone told me it is non-polynomial (exponential time, 2^n). If it were polynomial, it would be a big achievement.
In my view it is also not polynomial, since T(n) = 2T(n-1) + 6, but that is the worst case, i.e. when the first recursive call fails every time.
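For completeness, here is the standard unrolling of that worst-case recurrence (a sketch, assuming T(0) is some constant c):

T(n) = 2T(n-1) + 6
     = 4T(n-2) + 2*6 + 6
     = 8T(n-3) + 4*6 + 2*6 + 6
     ...
     = 2^n * c + 6 * (2^(n-1) + ... + 2 + 1)
     = 2^n * c + 6 * (2^n - 1)
     = O(2^n)

So the worst case is indeed exponential, not polynomial.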