Time complexity of calculating the final digit sum of a number

I want to calculate the time complexity of computing the "final" digit sum of a number n: summing the digits repeatedly until I get a single-digit number.
I know that in the first iteration an algorithm will perform O(log n) actions, in the second iteration O(log log n), and so on, up until log(log(...(log(n))...)) < 10. Thus the number of iterations is O(log* n), where log* n is the log-star (iterated logarithm) of n.
Is there a closed form for this sum?

This sum works out to Θ(log n). One way to see this is to notice that log n < n / 2 for all but tiny values of n, so we have that
log n + log log n + log log log n + …
≤ log n + (log n) / 2 + (log n) / 4 + (log n) / 8 + …
≤ log n (1 + 1/2 + 1/4 + 1/8 + …)
= 2 log n
= O(log n).
To see that the sum is Ω(log n), note that the first term itself is log n, so the sum is at least log n.
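For a concrete picture, here is a small sketch (my own illustration, not part of the original question) that performs the iterated digit sum and records how many digit operations each pass performs; the first pass dominates, matching the Θ(log n) bound:

def final_digit_sum(n):
    # Repeatedly sum the digits of n until a single digit remains,
    # recording the number of digits processed in each pass.
    work_per_pass = []
    while n >= 10:
        total, digits = 0, 0
        while n > 0:
            n, d = divmod(n, 10)
            total += d
            digits += 1
        work_per_pass.append(digits)  # roughly log10 of the current value
        n = total
    return n, work_per_pass

print(final_digit_sum(10**100 - 1))  # (9, [100, 3]): the first pass dominates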

Related

What is the time complexity of a log n while loop nested inside a for loop?

I'm having trouble figuring out the time complexity of the following code.
for i in range(0, n):
    x = 1
    while x < n:
        x = x * 2
I understand that the outer for loop runs n times, and that the while loop runs log n times (I think). So does that mean that, because the outer for loop has more of an impact, the run time is O(n)?
It's O(n * log(n)).
The outer loop runs n times, and each time it runs, it runs the inner loop. The inner loop is indeed O(log n). To see this, imagine n being a power of 2. If n is 4,
    x = 1
    while x < n:
        x = x * 2
will iterate twice, once when x is 1 and once when x is 2. If n is 8, this loop will iterate 3 times (on x = 1, 2, and 4). Note that 2^2 is four and 2^3 is eight. The number of iterations is the exponent; you could prove this by mathematical induction if you want to be rigorous. We can get that exponent by taking the base-2 log of n, but by the definition of asymptotic notation we can ignore the base and just say that the loop above runs in O(log n) time.
If we have a loop that runs in O(log n) that we are running O(n) times we can say that the whole thing runs in O(n * log(n)) time.
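To make that concrete, here is a quick sketch (my own, just for illustration) that counts the total number of times the inner loop body runs; for n = 2^k it comes out to exactly n * k, i.e. n * log2(n):

def count_inner_iterations(n):
    # Count how many times the inner while-loop body executes in total.
    count = 0
    for i in range(0, n):
        x = 1
        while x < n:
            x = x * 2
            count += 1
    return count

print(count_inner_iterations(1024))  # 1024 * 10 = 10240, since 2^10 = 1024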

Time Complexity on a TSP

I am trying to analyze the time complexity of each step of a TSP heuristic, for a Greedy algorithm and a Look-Ahead algorithm.
It would be great if someone could check whether the logic I have applied at each step to build the time complexity makes sense.
For the Greedy algorithm I have:
Iterations: max n iterations (n: nº of locations)
Select available arc (i,j) with lowest cost n log n operations
and label this arc as unavailable 1 operation
Label all arcs from location i or to location j as unavailable 1 operation
Label as unavailable all arcs that could result in subcycles, if more than one arc is available. 1 operation
If there are still available arcs (a check of max. n locations),
go back to 1.
Time Complexity
n * ( n log n + 1 + 1 + 1 + n) --> n^2 log n + n^2 + 3n --> O(n^2 log n)
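As a point of reference, here is a minimal sketch (my own, assuming the instance is given as an n x n cost matrix) of the greedy arc-selection procedure analysed above; in this sketch the one-off sort of all ~n^2 arcs costs O(n^2 log n), which matches the overall bound derived above:

def greedy_tsp(cost):
    # Greedy arc selection: repeatedly take the cheapest available arc,
    # skipping arcs that reuse a tail/head or would close a subtour early.
    # cost[i][j] is the cost of travelling from location i to location j.
    n = len(cost)
    succ = [None] * n  # chosen successor of each location
    pred = [None] * n  # chosen predecessor of each location
    arcs = sorted((cost[i][j], i, j)
                  for i in range(n) for j in range(n) if i != j)
    chosen = 0
    for c, i, j in arcs:
        if succ[i] is not None or pred[j] is not None:
            continue  # an arc from i or an arc to j is already selected
        # Walk to the end of the chain starting at j; if it ends at i,
        # arc (i, j) would close a subcycle, which is only allowed for
        # the final arc of the tour.
        k = j
        while succ[k] is not None:
            k = succ[k]
        if k == i and chosen < n - 1:
            continue
        succ[i], pred[j] = j, i
        chosen += 1
        if chosen == n:
            break
    return succ  # succ[i] is the location visited directly after i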
And for the Look-Ahead algorithm I have tried:
Iterations: n iterations (n: nº of locations)
For each row and column, n*n locations
determine available arcs with smallest and second smallest cost, 1 operation
and the corresponding cost difference, respectively. 1 operation
Select row or column with largest cost difference and select available arc (i,j) in that row or column with lowest cost 1 operation
and label this arc as unavailable 1 operation
Label all arcs from location i or to location j as unavailable 1 operation
Label as unavailable all arcs that could result in subcycles. 1 operation
If there are still more arcs available than arcs still to be selected, go back to 1 check n locations
Else: select remaining arcs. 1 operation
Time complexity
n * ( n*n + 1 + 1 + 1 + 1 + 1 + 1 + n + 1) --> n^3 + n^2 + 7n --> n^3 --> O(n^3)

How to Understand Time Complexity of Happy Number Problem Solution from Leetcode

I have some difficulty understanding the time complexity analysis for one solution to the Happy Number question from LeetCode. My doubts are marked below, and I would really appreciate your advice.
Here is the question:
Link: https://leetcode.com/problems/happy-number/
Question:
Write an algorithm to determine if a number is "happy".
A happy number is a number defined by the following process: Starting with any positive integer, replace the number by the sum of the squares of its digits, and repeat the process until the number equals 1 (where it will stay), or it loops endlessly in a cycle which does not include 1. Those numbers for which this process ends in 1 are happy numbers.
Example:
Input: 19
Output: true
Explanation:
1^2 (square of 1) + 9^2 = 82
8^2 + 2^2 = 68
6^2 + 8^2 = 100
1^2 + 0^2 + 0^2 = 1
Here is the code:
class Solution(object):
    def isHappy(self, n):
        # getnext computes the sum of the squares of the digits of n
        def getnext(n):
            totalsum = 0
            while n > 0:
                n, v = divmod(n, 10)
                totalsum += v**2
            return totalsum
        # seen is a set that tracks the numbers we have already visited
        seen = set()
        # we stop checking when either the number reaches 1 or the number
        # was already visited (i.e. we found a cycle)
        while n != 1 and (n not in seen):
            seen.add(n)
            n = getnext(n)
        return n == 1
Note: feel free to let me know if I need to explain how the code works
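For reference, a quick check of the example above (assuming the Solution class as defined):

print(Solution().isHappy(19))  # True: 19 -> 82 -> 68 -> 100 -> 1
print(Solution().isHappy(2))   # False: 2 falls into the 4 -> 16 -> ... cycle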
Time Complexity Analysis:
Time complexity: O(243 * 3 + log N + log log N + log log log N + ...) = O(log N).
Finding the next value for a given number has a cost of O(log N), because we are processing each digit in the number, and the number of digits in a number is given by log N.
My doubt: why is the number of digits in a number given by log N? What is N here: the value of a specific number, or something else?
To work out the total time complexity, we'll need to think carefully about how many numbers are in the chain, and how big they are.
We determined above that once a number is below 243, it is impossible for it to go back up above 243. Therefore, based on our very shallow analysis, we know for sure that once a number is below 243, it is impossible for it to take more than another 243 steps to terminate.
Each of these numbers has at most 3 digits. With a little more analysis, we could replace the 243 with the length of the longest number chain below 243, however because the constant doesn't matter anyway, we won't worry about it.
My doubt: I think the above paragraph is related to the 243 * 3 component of the time complexity, but I cannot understand why we multiply 243 by 3.
For an n above 243, we need to consider the cost of each number in the chain that is above 243. With a little math, we can show that in the worst case these costs will be O(log n) + O(log log n) + O(log log log n) + ... Luckily for us, the O(log n) is the dominating part, and the others are all tiny in comparison (collectively, they add up to less than log n), so we can ignore them.
My doubt: what is the reasoning behind O(log log n), O(log log log n), ... for an n above 243?
Well, my guess for the first doubt is that the number of digits of a base-10 number is the base-10 logarithm of its value (N), rounded down, plus one. So, for example, 1023 has floor(log10(1023)) + 1 = 4 digits. So yes, N is the value of the number. The log in the time complexity just indicates a logarithm; the base (2, e, 10, ...) doesn't matter asymptotically.
As for the second doubt, it probably has to do with the work required to reduce a number to below 243, but I am not sure. I'll edit this answer once I work that bit out.
Let's say N has M digits. Then getnext(N) <= 81*M. Equality happens when N consists only of 9's.
When N < 1000, i.e. N has at most 3 digits, getnext(N) <= 3*81 = 243. Now, you will have to call getnext(.) at most O(243) times to figure out whether N is indeed happy.
If M > 3, the number of digits of getnext(N) must be less than M. Try getnext(9999), getnext(99999), and so on [1].
Notes:
[1] Adding a digit to N can make it at most 10*N + 9 (i.e. appending a 9 at the end), but the number of digits only increases to M+1. It's a logarithmic relationship between N and M; hence, the same relationship holds between N and 81*M.
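A quick numeric illustration of [1], using a standalone version of the getnext from the solution above: for an all-9s input with M digits the result is exactly 81*M, whose digit count collapses:

def getnext(n):
    total = 0
    while n > 0:
        n, d = divmod(n, 10)
        total += d * d
    return total

for M in (3, 5, 10, 100):
    print(M, "digits ->", getnext(10**M - 1))  # 243, 405, 810, 8100: always 81*M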
Using the Leetcode solution
import java.util.HashSet;
import java.util.Set;

class Solution {
    private int getNext(int n) {
        int totalSum = 0;
        while (n > 0) {
            int d = n % 10;
            n = n / 10;
            totalSum += d * d;
        }
        return totalSum;
    }

    public boolean isHappy(int n) {
        Set<Integer> seen = new HashSet<>();
        while (n != 1 && !seen.contains(n)) {
            seen.add(n);
            n = getNext(n);
        }
        return n == 1;
    }
}
O(243*3) for n < 243
3 is the max number of digits in n
e.g. For n = 243
getNext() will take a maximum of 3 iterations because there are 3 digits for us to loop over.
isHappy() can take a maximum of 243 iterations to find a cycle or terminate, because we can store a max of 243 numbers in our hash set.
O(log n) + O(log log n) + O(log log log n) + ... for n > 243
(cost of the 1st iteration + 2nd iteration + 3rd iteration + ...)
A single call to getNext() costs O(log n), because log10 n is the number of digits we loop over.
The value getNext() returns is at most 9^2 per digit, i.e. at most 81 * log10 n; this bounds the size of the next number in the chain before we find a cycle or terminate.
First Iteration
9^2 * number of digits
O(81 * (log n)), drop the constant
O(log n)
+
Second Iteration
O(log (81 * (log n))), drop the constant
O(log log n)
+
Third Iteration
O(log log log n)
+
etc.
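To see the log n, log log n, ... pattern concretely, here is a throwaway sketch (my own) that prints the number of digits, i.e. the per-step cost, at each step of the chain for a very large starting value:

n = 10**1000 - 1  # a 1000-digit number
while n > 243:
    print(len(str(n)), "digits")        # the cost of this step
    n = sum(int(d)**2 for d in str(n))  # sum of the squares of the digits
# prints 1000, then 5: after a single step the chain is already tiny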

Why is Time complexity of the code O(log n)?

Here is the code given in the book "Cracking the Coding Interview" by Gayle Laakmann; we need to find its time complexity:
int sumDigits(int n)
{
    int sum = 0;
    while (n > 0)
    {
        sum += n % 10;
        n /= 10;
    }
    return sum;
}
I know the time complexity should be the number of digits in n.
According to the book, its run-time complexity is O(log n). The book provides a brief explanation, but I don't understand it.
while (n > 0)
{
    sum += n % 10;
    n /= 10;
}
So, how many steps does this while loop take for n to reach 0? In each step you divide n by 10, so you need to do it k times in order to reach 0. Note that k is the number of digits in n.
Let's go step by step:
In the first step, when n > 0, you divide n by 10. If the result is still positive, you divide it by 10 again; what you get is n/10/10, or n / (10^2). After the third time it's n / (10^3), and after k times it's n/(10^k) = 0, and the loop ends. But this is not 0 in the mathematical sense; it's 0 because we deal with integer division. What you really have is |n|/(10^k) < 1, where k ∈ N.
So, we have this now:
n/(10^k) < 1
n < 10^k
log n < k
By the way, it's also n/(10^(k-1)) > 1, so:
k-1 < log n < k (don't forget, this is base 10).
So, you need to do about log n + 1 steps to finish the loop, and that's why it's O(log n).
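A quick numeric check of the k-1 < log n < k relationship (a throwaway sketch, not from the book):

import math

def count_steps(n):
    # Count the iterations of the while loop from the question.
    steps = 0
    while n > 0:
        n //= 10
        steps += 1
    return steps

for n in (7, 42, 1023, 99999):
    print(n, count_steps(n), math.floor(math.log10(n)) + 1)
# the loop count always equals floor(log10 n) + 1, the number of digits in n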
The number of times the logic runs is log(n) to the base 10, which is the same as (log(n) to the base 2) / (log(10) to the base 2). In terms of time complexity, this is simply O(log n). Note that log(n) to the base 10 is how you would represent the number of digits in n.

time complexities (Big-O notation) of the following running times expressed as a function of the input size N

Give the time complexities (Big-O notation) of the following running times, expressed as a function of the input size N:
a) N^12 + 25N^10 + 8
b) N + 3 log N + 12N√N
c) 12N log N + 15N^2 log N
a) N^12 is the dominant term - O(N^12)
b) N√N is the dominant term - O(N√N). For a proof of why log n is smaller than any power term, see this page
c) N^2 > N so N^2 log N is the dominant term - O(N^2 log N)
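Not a proof, just a numeric sanity check (my own sketch): evaluating each term at a large N shows which one dominates:

import math

N = 10**6
print(N**12, 25 * N**10, 8)                             # a) N^12 dominates
print(N, 3 * math.log2(N), 12 * N * math.sqrt(N))       # b) N*sqrt(N) dominates
print(12 * N * math.log2(N), 15 * N**2 * math.log2(N))  # c) N^2 log N dominates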