Homework: How do I calculate the time complexity of this function? - time-complexity

void func(int n) {
    int i = 1, k = n;
    while (i <= k) {
        k = k / 2;  /* k shrinks by a factor of 2 each iteration */
        i = i * 2;  /* i grows by a factor of 2 each iteration */
    }
}
How do I calculate the time complexity of this function? I understand that the assignments i = 1 and k = n take two basic steps, and that dividing k and multiplying i take two basic steps per iteration as well. But because the value of i is increasing and the value of k is decreasing exponentially, will the time complexity be O(log base 4 N) or O(log base 2 sqrt(N))?

Your answer is O(log √n); in the comments @Eraklon says it's O((log2 n)/2), and @matri70boss says it's O(log4 n). All three of you are correct, but the answer in its simplest form is O(log n).
log √n = log n^0.5 = 0.5 log n, and we discard the constant factor 0.5 when we write it in big O notation.
(log2 n)/2 = (log n)/(2 log 2) by the change-of-base identity, and 1/(2 log 2) is another constant factor we can discard.
Likewise, log4 n = (log n)/(log 4), and we can discard the constant factor 1/(log 4).
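For completeness, here is the derivation all three expressions come from: after t iterations, i = 2^t and k = n / 2^t, so the loop exits at the smallest t for which

2^t > \frac{n}{2^t} \iff 4^t > n \iff t > \log_4 n = \frac{\log_2 n}{2} = \log_2 \sqrt{n}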

Related

Not sure whether it's smaller or larger - Big O notation

Could one of you kindly tell me whether it's smaller or bigger?
Is O(N * log K) bigger than O(N)? I think it is bigger, because O(N log N) is bigger than O(N), the linear one.
Yes, it should be bigger, unless for some reason K is always one, in which case you wouldn't put the log K in O(N * log K) and it would just be O(N).
Think of it this way: what are O(N) and O(N * log K) saying?
Well, O(N) is saying, for example, that you have something like an array with N elements in it, and for each element you are doing an operation that takes constant time, i.e. adding a number to that element.
O(N * log K) is saying that not only do you need to do an operation for each element, but that operation takes log K time. It's important to note that K denotes something different than N in this case; for example, you could have the array from the O(N) example plus another array with K elements. Here's a code example:
public void SomeNLogKOperation(int[] nElements, int[] kElements) {
    // for each element in nElements, i.e. O(N)
    for (int i = 0; i < nElements.length; i++) {
        // do an operation that takes O(log K) time; now we have O(N * log K)
        int val = operationThatTakesLogKTime(nElements[i], kElements);
    }
}

public void SomeNOperation(int[] nElements) {
    // for each element in nElements, i.e. O(N)
    for (int i = 0; i < nElements.length; i++) {
        // simple operation that takes O(1) time, so we have O(N * 1) = O(N)
        int val = nElements[i] + 1;
    }
}
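The answer leaves operationThatTakesLogKTime undefined. One concrete stand-in (an illustrative assumption, not part of the original post) is a binary search over kElements, the classic O(log K) step; this assumes kElements is sorted in ascending order:

import java.util.Arrays;

// Hypothetical body for operationThatTakesLogKTime (illustrative only):
// a binary search over a sorted kElements array costs O(log K).
private int operationThatTakesLogKTime(int nElement, int[] kElements) {
    int index = Arrays.binarySearch(kElements, nElement); // O(log K)
    return index >= 0 ? kElements[index] : -1;            // -1 when absent
}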
I completely missed that you used log(K) in the expression; this answer is invalid if K is not dependent on N, and more so if K is less than 1. But you use O(N log N) in the next sentence, so let's go with N log N.
So for N = 1000, O(N) is exactly that.
O(N log N) is a factor of log N more. Usually we are looking at a base-2 log, so O(N log N) is about 10,000.
The difference is not large, but it is very measurable.
For N = 1,000,000:
you have O(N) at 1 million, while
O(N log N) would sit comfortably at 20 million.
It is helpful to know the logs of common values:
8 bits, max 255 => log 256 = 8
10 bits, max 1023 => log 1024 = 10; conclude that log 1000 is very close to 10.
16 bits, max 65535 => log 65536 = 16
20 bits, max 1,048,575 => log 1,048,576 = 20; very close to 1 million.
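A throwaway snippet (not from the original answer) to sanity-check these rules of thumb:

public class LogTable {
    public static void main(String[] args) {
        // log2(v) = ln(v) / ln(2)
        int[] values = {256, 1000, 1024, 65536, 1_000_000, 1 << 20};
        for (int v : values) {
            System.out.printf("log2(%d) = %.2f%n", v, Math.log(v) / Math.log(2));
        }
    }
}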
This question is not really asked in the context of algorithmic time complexity; only math is required here.
So we are comparing two functions, and it all depends on context. What do we know of N and K? If K and N are both free variables that tend to infinity, then yes, O(N * log K) is "bigger" than O(N), in the sense that
N = O(N * log K), but
N * log K ≠ O(N).
However, if K is some constant parameter greater than 1, then they are the same complexity class.
On the other hand, log K could be zero or negative (when K ≤ 1), in which case we obtain different relationships. So you need to define/provide more context to be able to make this comparison.

Why Are Time Complexities Like O(N + N) Equal To O(N)? [duplicate]

This question already has answers here:
Why is the constant always dropped from big O analysis?
I commonly use a site called LeetCode to practice problems. On a lot of answers in the discussion section of a problem, I noticed that run times like O(N + N) or O(2N) get changed to O(N). For example:
int[] nums = {1, 2, 3, 4, 5};
for (int i = 0; i < nums.length; i++) {
    System.out.println(nums[i]);
}
for (int i = 0; i < nums.length; i++) {
    System.out.println(nums[i]);
}
This becomes O(N), even though it iterates through nums twice. Why is it not O(2N) or O(N + N)?
In time complexity, constant coefficients do not play a role. This is because the actual time it takes an algorithm to run depends also on the physical constraints of the machine. This means that if you run your code on a machine which is twice as fast as another, all other conditions being equal, it would run in about half the time with the same input.
But that’s not the same thing as comparing two algorithms with different time complexities. For example, when you compare the running time of an O(N^2) algorithm to that of an O(N) algorithm, the running time of the O(N^2) one grows so fast with the input size that the O(N) one cannot catch up with it, no matter how big a constant coefficient you choose for it.
Let’s say your constant coefficient is 1000 instead of just 2. Then for input sizes N > 1000, the running time of the O(N^2) algorithm is proportional to N * N, which already exceeds 1000 * N, while the running time of the O(N) algorithm remains proportional to 1000 * N.
The time complexity O(n + n) reduces to O(2n). Now 2 is a constant, so the time complexity essentially depends on n.
Hence the time complexity O(2n) equates to O(n).
Also, something like O(2n + 3) would still be O(n), as the time essentially depends on the size of n.
Now suppose there is code which is O(n^2 + n); it will be O(n^2), because as the value of n increases, the effect of n becomes less significant compared to the effect of n^2.
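The same point can be made precise with simple inequalities (a standard argument, added here for illustration; it is not part of the original answers): for all n ≥ 1,

n^2 \le n^2 + n \le 2n^2 \implies n^2 + n = \Theta(n^2), \qquad 2n + 3 \le 5n \implies 2n + 3 = O(n)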

How to Understand Time Complexity of Happy Number Problem Solution from Leetcode

I have some difficulties understanding the time complexity analysis of one solution to the Happy Number question from LeetCode. My doubts on the complexity analysis are marked below ("My doubt: ..."), and I'd really appreciate your advice.
Here is the question:
Link: https://leetcode.com/problems/happy-number/
Question:
Write an algorithm to determine if a number is "happy".
A happy number is a number defined by the following process: Starting with any positive integer, replace the number by the sum of the squares of its digits, and repeat the process until the number equals 1 (where it will stay), or it loops endlessly in a cycle which does not include 1. Those numbers for which this process ends in 1 are happy numbers.
Example:
Input: 19
Output: true
Explanation:
1^2 (the square of 1) + 9^2 = 82
8^2 + 2^2 = 68
6^2 + 8^2 = 100
1^2 + 0^2 + 0^2 = 1
Here is the code:
class Solution(object):
    def isHappy(self, n):
        # getnext computes the sum of the squares of the digits of n
        def getnext(n):
            totalsum = 0
            while n > 0:
                n, v = divmod(n, 10)
                totalsum += v ** 2
            return totalsum

        # seen is a set tracking the numbers we have already visited
        seen = set()
        # stop checking when either the number reaches one or the number
        # was already visited (i.e. a cycle)
        while n != 1 and n not in seen:
            seen.add(n)
            n = getnext(n)
        return n == 1
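For example, Solution().isHappy(19) returns True, following exactly the chain 19 → 82 → 68 → 100 → 1 from the problem statement.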
Note: feel free to let me know if I need to explain how the code works
Time Complexity Analysis:
Time complexity: O(243 * 3 + log N + log log N + log log log N + ...) = O(log N).
Finding the next value for a given number has a cost of O(log n), because we are processing each digit in the number, and the number of digits in a number is given by log N.
My doubt: why is the number of digits in a number given by log N? What is N here, the value of a specific number or something else?
To work out the total time complexity, we'll need to think carefully about how many numbers are in the chain, and how big they are.
We determined above that once a number is below 243, it is impossible for it to go back up above 243. Therefore, based on our very shallow analysis, we know for sure that once a number is below 243, it is impossible for it to take more than another 243 steps to terminate.
Each of these numbers has at most 3 digits. With a little more analysis, we could replace the 243 with the length of the longest number chain below 243; however, because the constant doesn't matter anyway, we won't worry about it.
My doubt: I think the above paragraph is related to the 243 * 3 component of the time complexity, but I cannot understand why we multiply 243 by 3.
For an n above 243, we need to consider the cost of each number in the chain that is above 243. With a little math, we can show that in the worst case, these costs will be O(log n) + O(log log n) + O(log log log n) + ... Luckily for us, the O(log n) is the dominating part, and the others are all tiny in comparison (collectively, they add up to less than log n), so we can ignore them.
My doubt: what is the reasoning behind O(log log n), O(log log log n), ... for an n above 243?
Well, my guess for the first doubt is that the number of digits of a base-10 number is given by its value (N) taken to the logarithm at base 10, rounded down, plus one. So, for example, 1023 would have floor(log10(1023)) + 1 = 4 digits. So yes, N is the value of the number. The log in a time complexity just indicates a logarithm, not specifically one of base 2 or base e; the base only changes a constant factor.
As for the second doubt, it probably has to do with the work required to reduce a number to below 243, but I am not sure. I'll edit this answer once I work that bit out.
Let's say N has M digits. Then getnext(N) <= 81 * M. Equality happens when N consists only of 9's.
When N < 1000, i.e. at most 3 digits, getnext(N) <= 3 * 81 = 243. Now, you will have to call getnext(.) at most O(243) times to figure out whether N is indeed happy.
If M > 3, the number of digits of getnext(N) must be less than M. Try getnext(9999), getnext(99999), and so on [1].
Notes:
[1] Adding a digit to N can make it at most 10 * N + 9 (i.e. appending a 9 at the end), but the number of digits increases only to M + 1. It's a logarithmic relationship between N and M; hence the same relationship holds between N and 81 * M.
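Restating note [1] symbolically (same content as above): if N has M digits, then

M = \lfloor \log_{10} N \rfloor + 1, \qquad \text{getnext}(N) \le 81M = O(\log N)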
Using the Leetcode solution:
import java.util.HashSet;
import java.util.Set;

class Solution {
    private int getNext(int n) {
        int totalSum = 0;
        while (n > 0) {
            int d = n % 10;  // take the last digit
            n = n / 10;      // drop it from n
            totalSum += d * d;
        }
        return totalSum;
    }

    public boolean isHappy(int n) {
        Set<Integer> seen = new HashSet<>();
        while (n != 1 && !seen.contains(n)) {
            seen.add(n);
            n = getNext(n);
        }
        return n == 1;
    }
}
O(243 * 3) for n < 243
3 is the max number of digits in n.
For example, for n = 243:
getNext() will take a maximum of 3 iterations, because there are 3 digits for us to loop over.
isHappy() can take a maximum of 243 iterations to find a cycle or terminate, because we can store at most 243 numbers in our hash set.
O(log n) + O(log log n) + O(log log log n) + ... for n > 243
(1st iteration + 2nd iteration + 3rd iteration + ...)
The first call to getNext() takes O(log n) time, because log10 n is the number of digits.
The value getNext() returns is at most 9^2 per digit, which bounds how large the subsequent numbers we store in the hash set can be before we find a cycle or terminate.
First iteration:
9^2 * number of digits
O(81 * (log n)), drop the constant
O(log n)
+
Second iteration:
O(log(81 * (log n))), drop the constant
O(log log n)
+
Third iteration:
O(log log log n)
+
etc...
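To see why the whole series still collapses to O(log n) (a standard bound, sketched here; it is not spelled out in the original answer): for n large enough, each term is at most half of the one before it, so the series is dominated by a geometric one:

\log n + \log\log n + \log\log\log n + \cdots \;\le\; \log n \cdot \left(1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots\right) \;=\; 2\log n \;=\; O(\log n)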

Time complexity for while loop

What is the time complexity of
x = 1
while (x < SomeValue)
{
    x *= 2;
}
I believe it is O(N), as the loop will continue for a fixed number of iterations.
Is my assumption correct?
The time complexity would be O(log(n)) because x is increasing exponentially.
The loop will execute in O(log n) time, where n is SomeValue. Hopefully the math makes the reasoning clearer. Each iteration is constant time. Call SomeValue t. After the kth iteration, x will have the value 2^k, and the loop ends once x ≥ t. So the number of iterations needed to meet or exceed t is the smallest k with 2^k ≥ t, which is k = ⌈log₂(t)⌉. Hence, O(log t) = O(log SomeValue) time.
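A quick empirical check (a throwaway harness, not part of the original answer) confirms the iteration count tracks ⌈log₂(SomeValue)⌉:

public class DoublingLoop {
    public static void main(String[] args) {
        for (long someValue : new long[] {10, 1000, 1_000_000}) {
            long x = 1;
            int iterations = 0;
            while (x < someValue) { // the loop from the question
                x *= 2;
                iterations++;
            }
            System.out.printf("SomeValue=%d  iterations=%d  ceil(log2)=%d%n",
                    someValue, iterations,
                    (int) Math.ceil(Math.log(someValue) / Math.log(2)));
        }
    }
}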

Total time complexity when each step takes O(log n) operations

Consider a tree where the cost of an insertion is O(log n). Say you start from an empty tree and add N elements iteratively. We want to know the total time complexity. I did this:
number of operations in iteration i = log i
number of operations in all iterations from 1 to N = log 1 + log 2 + ... + log N = log(N!)
total complexity = O(log(N!)) ~ O(N log N)
(cf. the Stirling approximation, http://en.wikipedia.org/wiki/Stirling%27s_approximation)
Is this correct?
Yes, it's nearly correct.
A small correction: in the ith step, the number of operations is not exactly log i, as most of the time that's an irrational number; it's O(log i). So for a mathematically tight proof you have to work a bit harder, but in short, what you wrote is the essence of the proof.
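For the "work a bit harder" part, one standard way to pin down the bound without Stirling (a sketch, not from the original answer) is to bound the sum from above term by term, and from below by its upper half:

\log(N!) = \sum_{i=1}^{N} \log i \;\le\; N \log N
\qquad\text{and}\qquad
\log(N!) \;\ge\; \sum_{i=\lceil N/2 \rceil}^{N} \log i \;\ge\; \frac{N}{2} \log \frac{N}{2}

Both bounds are Θ(N log N), so log(N!) = Θ(N log N).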