Evaluate how many times a double sum is calculated - sum

Assume I have a double sum (I cannot figure out how to write LaTeX code here), and a variable c which is incremented in each iteration, i.e.
sum_{i=1}^n sum_{j>i}^n c
How many times are the sums evaluated, i.e. how many c do we have when the sums are finished? I would say we have n(n-1) copies of c (since the outer sum runs n times, and the inner runs (n-1) times), but if I write some quick code to check it numerically, I get n(n-1)/2:
n = 5
c = 0
for i in range(1, n + 1):
    for j in range(i + 1, n + 1):
        print(f"j: {j}")
        print(f"i: {i}")
        c += 1
print(c)
print(n * (n - 1) / 2)
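A quick way to see where the n(n-1)/2 comes from (a sketch, not part of the original question): the inner loop runs n - i times for each i, not n - 1 times, and those counts form a triangular sum:

```python
# Sketch: the inner loop runs n - i times for each i, so the total is triangular.
n = 5
per_i = [len(range(i + 1, n + 1)) for i in range(1, n + 1)]
print(per_i)             # [4, 3, 2, 1, 0]
print(sum(per_i))        # 10
print(n * (n - 1) // 2)  # 10
```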


Is this O(N) algorithm actually O(logN)?

I have an integer, N.
I denote f[i] = number of appearances of the digit i in N.
Now, I have the following algorithm.
FOR i = 0 TO 9
    FOR j = 1 TO f[i]
        k = k*10 + i;
My teacher said this is O(N). It seems to me more like a O(logN) algorithm.
Am I missing something?
I think that you and your teacher are saying the same thing, but it gets confusing because the integer you are using is named N, while it is also common to refer to an algorithm that is linear in the size of its input as O(N). N is getting overloaded as both the specific name and the generic figure of speech.
Suppose we say instead that your number is Z and its digits are counted in the array d and then their frequencies are in f. For example, we could have:
Z = 12321
d = [1,2,3,2,1]
f = [0,2,2,1,0,0,0,0,0,0]
Then the cost of going through all the digits in d and computing the count for each will be O(size(d)) = O(log Z). This is basically what your second loop is doing in reverse: it executes once for each occurrence of each digit. So you are right that there is something logarithmic going on here -- the number of digits of Z is logarithmic in the size of Z. But your teacher is also right that there is something linear going on here -- counting those digits is linear in the number of digits.
The time complexity of an algorithm is generally measured as a function of the input size. Your algorithm doesn't take N as an input; the input seems to be the array f. There is another variable named k which your code doesn't declare, but I assume that's an oversight and you meant to initialise e.g. k = 0 before the first loop, so that k is not an input to the algorithm.
The outer loop runs 10 times, and the inner loop runs f[i] times for each i. Therefore the total number of iterations of the inner loop equals the sum of the numbers in the array f. So the complexity could be written as O(sum(f)) or O(Σf) where Σ is the mathematical symbol for summation.
Since you defined N as the integer whose digits f counts, it is in fact possible to prove that O(Σf) is the same thing as O(log N), so long as N is a positive integer. This is because Σf equals the number of digits N has, which is approximately (log N) / (log 10). So by your definition of N, you are correct.
My guess is that your teacher disagrees with you because they think N means something else. If your teacher defines N = Σf then the complexity would be O(N). Or perhaps your teacher made a genuine mistake; that is not impossible. But the first thing to do is make sure you agree on the meaning of N.
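A small Python sketch can make this concrete (the names Z and f follow the answer above; this is just a numeric check, not part of the original answers):

```python
import math

Z = 12321  # the example number from the answer
f = [str(Z).count(str(i)) for i in range(10)]  # digit frequencies

# The sum of the frequencies equals the digit count of Z, i.e. floor(log10 Z) + 1.
print(sum(f))                         # 5
print(math.floor(math.log10(Z)) + 1)  # 5

# The double loop from the question performs exactly sum(f) inner iterations.
k, iterations = 0, 0
for i in range(10):
    for _ in range(f[i]):
        k = k * 10 + i
        iterations += 1
print(iterations)  # 5
```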
I find your explanation a bit confusing, but let's assume N = 9075936782959 is an integer. Then O(N) doesn't really make sense; O(length of N) makes more sense. I'll use n for the length of N.
Then f(i) = iterate over each digit in N and count how many times i appears, which makes O(f(i)) = O(n) (it's linear). I'm assuming f(i) is a function, not an array.
Your algorithm loops at most:
10 times (first loop)
0 to n times, but the total is n (the sum of f(i) for all digits must be n)
It's tempting to say the algorithm is then O(10 + n*f(i)) = O(n^2) (removing the constant), but f(i) is only computed 10 times, once each time the second loop is entered, so O(algo) = 10 + n + 10*f(i) = 10 + 11n = O(n). If f(i) is an array lookup, it's constant time.
I'm sure I didn't see the problem the same way as you did. I'm still a little confused about the definition in your question. How did you come up with log(n)?

time complexity for loop justification

Hi could anyone explain why the first one is True and second one is False?
First loop: the number of times the loop executes is k, where for a given n, i takes the values 1, 2, 4, ..., up to (but less than) n. So
2^k <= n
or, k <= log(n).
This implies that k, the number of times the first loop executes, is log(n); that is, the time complexity here is O(log(n)).
The second loop's execution does not depend on p, as p is not used in the for loop's decision statement. p does take different values inside the loop, but it doesn't influence the decision statement or the number of times p*p executes, so the time complexity is O(n).
O(log n):
for(i=1; i<n; i=i*c){ /* any O(1) expression */ }
Here, the time complexity is O(log n) when the index i is multiplied/divided by a constant c > 1 each iteration (note that i must start at a nonzero value, or multiplying it would never advance the loop).
In the second case,
for(p=2, i=1; i<n; i++){ p = p*p; }
The increment is constant, i.e. i = i+1, so the loop will run n times irrespective of the value of p. Hence the loop alone has a complexity of O(n). Considering naive multiplication, p = p*p is an O(s) operation where s is the size (number of digits) of p, so counting the cost of the multiplications the complexity is more than O(n).
Let me summarize with an example: suppose the value of n is 8; then the possible values of i are 1, 2, 4, 8, and as soon as i reaches 8 the loop will break. You can see the loop runs 3 times, i.e. log(n) times, as the value of i keeps increasing by 2x. Hence, True.
For the second part, it is a normal loop which runs for all values of i from 1 to n, and the value of p is squared on each iteration, so it grows on the order of 2^(2^n). Counting the cost of computing p*p, the total is more than O(n). That's why it is wrong.
In order to understand why some algorithm is O(log n) it is enough to check what happens when n = 2^k (i.e., we can restrict ourselves to the case where log n happens to be an integer k).
If we inject this into the expression
for(i=1; i<2^k; i=i*2) s+=i;
we see that i will take the values 1, 2, 4, 8, 16, ..., i.e., 2^0, 2^1, 2^2, 2^3, ..., until reaching the last one below 2^k, namely 2^(k-1). In other words, the body of the loop will be evaluated k times. Therefore, if we assume that the body is O(1), we see that the complexity is k*O(1) = O(k) = O(log n).
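This argument is easy to check numerically; here is a small sketch (the C loop translated to Python):

```python
def loop_count(n, c=2):
    """Count how many times the body of `for(i=1; i<n; i=i*c)` executes."""
    count, i = 0, 1
    while i < n:
        count += 1
        i *= c
    return count

# When n = 2^k, the body runs exactly k = log2(n) times.
for k in range(1, 11):
    assert loop_count(2 ** k) == k
print(loop_count(1024))  # 10
```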

Time complexity of dependent and conditional triple for-loop

for i in xrange(1, n+1):
    for j in xrange(1, i*i):
        if j % i == 0:
            for k in xrange(0, j):
                print("*")
What will be the time complexity of the above algorithm?
It sounds like a homework problem, but it is very interesting, so I'll take a shot. We will just count the number of times the asterisk is printed, because that dominates.
For each j, only those divisible by i trigger the execution of the innermost loop. How many of them are there? Well, in the range [1, i*i) those are i, 2*i, 3*i, ..., (i-1)*i. Let's go further. k iterates from 0 to j, so first we will have i iterations (for j=i), then 2*i (for j=2*i), then 3*i, ... until we iterate (i-1)*i times. This is a total of i + 2*i + 3*i + ... + (i-1)*i printed asterisks for each i. Since i goes from 1 to n, the total number of iterations is the sum of i + 2*i + 3*i + ... + (i-1)*i over all i from 1 to n. Let's sum it up:
Here we used the formula for the sum of the first m numbers, 1 + 2 + ... + m = m(m+1)/2, multiple times: for each i, the count is i*(1 + 2 + ... + (i-1)) = i * (i-1)*i/2 = (i^3 - i^2)/2. The factor which dominates the final sum is obviously i^3, and since the formula for the sum of the first n-1 cubes is ((n-1)*n/2)^2,
the total complexity is O(n^4).
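A brute-force count is a useful sanity check here (a sketch; count_stars below simply counts the printed asterisks and compares against the closed form derived above):

```python
def count_stars(n):
    # Count the asterisks printed by the triple loop from the question.
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i * i):
            if j % i == 0:
                count += j  # the innermost loop prints j asterisks
    return count

n = 20
brute = count_stars(n)
closed = sum((i**3 - i**2) // 2 for i in range(1, n + 1))
print(brute == closed)             # True
print(count_stars(2 * n) / brute)  # roughly 2^4 = 16, consistent with O(n^4)
```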

time complexity exercise (pseudo code)

Just started Data Structure. Got stuck on this one:
I am having trouble with the inner while and for loops, because their behavior changes depending on whether N is odd or even.
My best guess: the inner for loop runs log n (base 2) times, and the while loop log n (base 2) times.
Would love some help.
Concentrate on how many times do_something() is called.
The outer for loop clearly runs n times, and the while loop inside it is independent of the variable i. Thus do_something() is called n times the number of times it is called during one full run of the while loop.
In the first pass through the while loop, do_something() is called once. The second time it is called twice, the third time 4 times, etc.
The total number of times it is called is thus
1 + 2 + 4 + 8 + ... + 2^(k-1)
where k is maximal such that 2^(k-1) <= n.
There is a standard formula for the above sum. Use it then solve for k in terms of n and multiply the result by the n from the outer loop, and you are done.
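The standard formula referred to is the geometric series 1 + 2 + 4 + ... + 2^(k-1) = 2^k - 1. Since the original pseudocode is not reproduced here, this sketch only verifies the sum itself:

```python
# Verify the geometric series 1 + 2 + ... + 2^(k-1) == 2^k - 1 for small k.
for k in range(1, 12):
    assert sum(2 ** t for t in range(k)) == 2 ** k - 1
print(sum(2 ** t for t in range(10)))  # 1023
```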

Runtime complexity of the function

I have to find the time complexity of the following program:
function(int n)
{
    for(int i=0; i<n; i++)           // O(n) times
        for(int j=i; j<i*i; j++)     // O(n^2) times
            if(j%i==0)
            {                        // O(n) times
                for(int k=0; k<j; k++)  // O(n^2) times
                    printf("8");
            }
}
I analysed this function as follows:
i : O(n) : 1 2 3 4 5
j : : 1 2..3 3..8 4..15 5..24 (values taken by j)
O(n^2): 1 2 6 12 20 (Number of times executed)
j%i==0 : 1 2 3,6 4,8,12 5,10,15,20 (Values for which the condition is true)
O(n) : 1 1 2 3 4
k : 1 2 3,6 4,8,12 5,10,15,20 (Number of times printf is executed)
Total : 1 2 9 24 50 (Total)
However, I am unable to draw any conclusions, since I don't find any correlation between i (which is essentially O(n)) and the total of k (last line). In fact, I don't understand whether we should be looking at the time complexity in terms of the number of times printf is executed, since that would neglect the O(n^2) executions of the j-for loop. The answer given was O(n^5), which I presume is wrong, but then what's correct? To be more specific about my confusion: I am not able to figure out how the if(j%i==0) condition affects the overall runtime complexity of the function.
The answer is definitely not O(n^5). This can be seen very easily: suppose your second inner loop always ran n^2 times and your innermost loop always ran n times; even then the total time complexity would only be O(n^4).
Now let us see what is actual time complexity.
1. The outermost loop always runs O(n) times.
2. Now let us see how many times the second inner loop runs for a single iteration of the outer loop. The loop runs:
0 times for i = 0
0 times for i = 1
2 times for i = 2
...
i*i - i times for a given i,
and i*i - i is O(i^2).
3. Coming to the innermost loop: it runs only when j is divisible by i, and j varies from i to i*i - 1.
This means j goes through i*1, i*2, i*3, ..., up to the last multiple of i less than i*i, which is clearly O(i) values. Hence the innermost loop is entered O(i) times per iteration of the outer loop, and each time it is entered it runs j = O(i^2) times, so the total number of iterations of the two inner loops is O(i^3).
Summing up O(i^3) for i = 0 to n-1 gives a bound of O(n^4).
Therefore, the correct time complexity is O(n^4).
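As a numeric sanity check (a sketch: the C function translated to Python, counting printf calls instead of printing):

```python
def printf_calls(n):
    calls = 0
    for i in range(1, n):  # i = 0 contributes nothing since i*i == 0
        for j in range(i, i * i):
            if j % i == 0:
                calls += j  # the k-loop runs j times
    return calls

# If the complexity is O(n^4), doubling n should multiply the count by about 2^4 = 16.
for n in (20, 40, 80):
    print(n, printf_calls(2 * n) / printf_calls(n))
```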