Complexity of an example with only one for loop - time-complexity

I'm getting started with complexity and I wanted to know why the example given below is O(n^2) and not O(n), would you guys help me? I need it fast for an exam.
l1 = []
for e in range(0, n):
    if e in range(n, 2*n):
        l1.append(e**3)

I think the
    if e in range(n, 2*n):
is supposed to be
    for e in range(n, 2*n):
or even
    for j in range(n, 2*n):
If that is the case, then the O(n^2) complexity would make sense, since there are n iterations happening n times; the complexity is n * n = n^2.
Otherwise, the loop simply runs n times (the range 0 to n) and never executes
    if e in range(n, 2*n):
since e would never fall within that range.
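For what it's worth, a quick Python sketch (n = 50 is an arbitrary choice of mine) confirms this by counting how often each line actually runs:

```python
# Sketch with an arbitrary n: the loop body runs n times, but the append
# never executes, because every e in range(0, n) is < n and therefore
# not in range(n, 2*n). (In Python 3, `e in range(n, 2*n)` is an O(1)
# membership test for integers anyway, not a scan.)
n = 50
l1 = []
checks = 0
for e in range(0, n):
    checks += 1
    if e in range(n, 2 * n):
        l1.append(e**3)

print(checks)    # 50 -> the loop is O(n)
print(len(l1))   # 0  -> the if-branch never runs
```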

Related

Time complexity of these two while loops

x = 1;
while (x < n)
{
    x = x + n / 10;
    m = n * n;
    y = n;
    while (m > y)
    {
        m = m - 100;
        y = y + 20;
    }
}
The way I solved it: we are adding n/10 to x each time, so no matter how big n is, the number of repetitions of the outer loop is always 10.
The inner loop goes from n to n^2, and each variable in it changes linearly, so the inner loop should be O(n),
and because the outer loop is O(1), we get O(n) for the whole function.
but the optional answers for the question are: O(n^2), O(n^3), O(n^4), O(nlogn)
What am I missing? thanks.
You are correct that the outer loop runs a constant number of times (10), but your reasoning about the inner loop isn't all the way there. You say that there is a "linear increase" with each iteration of the inner loop; if that were the case, you would be correct that the whole function runs in O(n), but it's not. Try to figure it out from here, but if you're still stuck:
The inner loop has to close the gap between n^2 and n, measured as the difference between m and y. That difference is reduced by a constant amount with each iteration: 120, not a linear amount. Therefore the inner loop runs (n^2 - n)/120 times. Multiplying the number of times the outer loop runs by the number of times the inner loop runs, we get:
O(10) * O((n^2 - n)/120)
= O(1) * O(n^2)
= O(n^2)
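A quick Python sketch (with n = 1000, my own arbitrary choice) that counts the iterations directly bears this out:

```python
# Count the iterations of both loops to check that the outer loop runs
# 10 times and the inner loop runs (n^2 - n)/120 times per outer pass.
n = 1000
outer = inner = 0
x = 1
while x < n:
    outer += 1
    x = x + n // 10   # integer division; same effect as n/10 for this n
    m = n * n
    y = n
    while m > y:
        inner += 1
        m -= 100
        y += 20

print(outer)   # 10
print(inner)   # 10 * (n*n - n) // 120 = 83250
```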

Is this O(N) algorithm actually O(logN)?

I have an integer, N.
I denote f[i] = number of appearances of the digit i in N.
Now, I have the following algorithm.
FOR i = 0 TO 9
    FOR j = 1 TO f[i]
        k = k*10 + i;
My teacher said this is O(N). It seems to me more like an O(log N) algorithm.
Am I missing something?
I think that you and your teacher are saying the same thing, but it gets confusing because the integer you are using is named N, while it is also common to refer to an algorithm that is linear in the size of its input as O(N). N is being overloaded as both the specific name and the generic figure of speech.
Suppose we say instead that your number is Z and its digits are counted in the array d and then their frequencies are in f. For example, we could have:
Z = 12321
d = [1,2,3,2,1]
f = [0,2,2,1,0,0,0,0,0,0]
Then the cost of going through all the digits in d and computing the count for each will be O(size(d)) = O(log Z). This is basically what your second loop is doing in reverse: it executes once for each occurrence of each digit. So you are right that there is something logarithmic going on here -- the number of digits of Z is logarithmic in the size of Z. But your teacher is also right that there is something linear going on here -- counting those digits is linear in the number of digits.
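Here is a small Python sketch of the example above (the variable names are mine), checking that the total number of inner-loop iterations equals the number of digits of Z:

```python
# Build the frequency array f for Z = 12321, then run the question's
# double loop while counting how many times its body executes.
Z = 12321
f = [0] * 10
for ch in str(Z):           # count digit frequencies
    f[int(ch)] += 1

k = 0
iterations = 0
for i in range(10):
    for j in range(f[i]):
        k = k * 10 + i
        iterations += 1

print(iterations)           # 5, the number of digits of Z
```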
The time complexity of an algorithm is generally measured as a function of the input size. Your algorithm doesn't take N as an input; the input seems to be the array f. There is another variable named k which your code doesn't declare, but I assume that's an oversight and you meant to initialise e.g. k = 0 before the first loop, so that k is not an input to the algorithm.
The outer loop runs 10 times, and the inner loop runs f[i] times for each i. Therefore the total number of iterations of the inner loop equals the sum of the numbers in the array f. So the complexity could be written as O(sum(f)) or O(Σf) where Σ is the mathematical symbol for summation.
Since you defined N as an integer whose digits f counts, it is in fact possible to prove that O(Σf) is the same thing as O(log N), provided N is a positive integer. This is because Σf equals how many digits the number N has, which is approximately (log N) / (log 10). So by your definition of N, you are correct.
My guess is that your teacher disagrees with you because they think N means something else. If your teacher defines N = Σf then the complexity would be O(N). Or perhaps your teacher made a genuine mistake; that is not impossible. But the first thing to do is make sure you agree on the meaning of N.
I find your explanation a bit confusing, but let's assume N = 9075936782959 is an integer. Then O(N) doesn't really make sense; O(length of N) makes more sense. I'll use n for the length of N.
Then f(i) = iterate over each digit in N and count how many times i appears, which makes O(f(i)) = n (it's linear). I'm assuming f(i) is a function, not an array.
Your algorithm loops at most:
10 times (first loop)
0 to n times, but the total is n (the sum of f(i) over all digits must be n)
It's tempting to say the algorithm is then O(10 + n*f(i)) = O(n^2) (removing the constant), but f(i) is only calculated 10 times, once each time the second loop is entered, so O(algo) = 10 + n + 10*f(i) = 10 + 11n = O(n). If f(i) is an array lookup, it's constant time instead.
I'm sure I didn't see the problem the same way as you did; I'm still a little confused about the definitions in your question. How did you come up with log(n)?

Time Complexity of nested loops including if statement

I'm unsure of the general time complexity of the following code.
Sum = 0
for i = 1 to N
    if i > 10
        for j = 1 to i do
            Sum = Sum + 1
Assuming i and j are incremented by 1.
I know that the first loop is O(n), but the inner loop is only going to run when i > 10. Would the general time complexity then be O(n^2)? Any help is greatly appreciated.
Consider the definition of Big O Notation.
________________________________________________________________
Let f: ℜ → ℜ and g: ℜ → ℜ.
Then, f(x) = O(g(x))
⟺
∃ k ∈ ℜ ∋ ∃ M > 0 ∈ ℜ ∋ ∀ x ≥ k, |f(x)| ≤ M ⋅ |g(x)|
________________________________________________________________
Which can be read less formally as:
________________________________________________________________
Let f and g be functions defined on a subset of the real numbers.
Then, f is O of g if, for big enough x's (this is what the k is for in the formal definition), there is a constant M (from the real numbers, of course) such that M times g(x) will always be greater than or equal to f(x). (Really, you can always increase M so that it is strictly greater, but I digress.)
________________________________________________________________
(You may note that if a function is O(n), then it is also O(n²) and O(e^n), but of course we are usually interested in the "smallest" function g such that it is O(g). In fact, when someone says f is O of g then they almost always mean that g is the smallest such function.)
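To make the definition concrete, here is a toy check of my own (not from the question): f(x) = 3x + 5 is O(x), witnessed by k = 5 and M = 4:

```python
# For all x >= k = 5, |f(x)| <= M * |x| with M = 4,
# so f(x) = 3x + 5 satisfies the definition of O(x).
def f(x):
    return 3 * x + 5

M, k = 4, 5
ok = all(abs(f(x)) <= M * abs(x) for x in range(k, 10_000))
print(ok)  # True
```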
Let's translate this to your problem. Let f(N) be the amount of time your process takes to complete as a function of N. Now, pretend that addition takes one unit of time to complete (and checking the if statement and incrementing the for-loop take no time), then
f(1) = 0
f(2) = 0
...
f(10) = 0
f(11) = 11
f(12) = 23
f(13) = 36
f(14) = 50
We want to find a function g(N) such that for big enough values of N, f(N) ≤ M⋅g(N). We can satisfy this with g(N) = N² and M = 1 (maybe M could be smaller, but we don't really care). In this case, big enough means greater than 10 (of course, f is still less than M⋅g for N < 11).
tl;dr: Yes, the general time complexity is O(n²) because Big O assumes that your N is going to infinity.
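A short Python sketch (counting one unit per addition, as assumed above) reproduces the table and the N² bound:

```python
def f(N):
    """Count the Sum = Sum + 1 operations the pseudocode performs."""
    count = 0
    for i in range(1, N + 1):
        if i > 10:
            for j in range(1, i + 1):
                count += 1
    return count

print([f(N) for N in (10, 11, 12, 13, 14)])   # [0, 11, 23, 36, 50]
print(all(f(N) <= N * N for N in range(1, 200)))  # True
```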
Let's assume your code is
Sum = 0
for i = 1 to N
for j = 1 to i do
Sum = Sum + 1
There are N(N+1)/2 sum operations in total, which is O(N^2). Your code with if i > 10 skips only the inner loops for i = 1 to 10, that is 1 + 2 + ... + 10 = 55 sum operations, a constant. As a result, for big enough N we have
N(N+1)/2 - 55
operations. That is
O(N^2) - O(1) = O(N^2)
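As a sanity check (the helper names count and gated are mine): for every N ≥ 10, the version with the if i > 10 guard performs exactly 1 + 2 + ... + 10 = 55 fewer additions than the unguarded double loop, a constant difference:

```python
def count(N, gated):
    """Count additions; gated=True applies the `if i > 10` guard."""
    ops = 0
    for i in range(1, N + 1):
        if not gated or i > 10:
            for j in range(1, i + 1):
                ops += 1
    return ops

# The guard removes the same constant number of operations for any N >= 10.
diffs = {count(N, False) - count(N, True) for N in range(10, 100)}
print(diffs)  # {55}
```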

Is O(mn) in O(n^2)?

Simple question. Working with an m x n matrix and I'm doing some O(mn) operations. My question is if O(mn) is in O(n^2). Looking at the Wikipedia on big O I would think so but I've always been pretty bad at complexity bounds so I was hoping someone could clarify.
O(mn) for a m x n matrix means that you're doing constant work for each value of the matrix.
O(n^2) means that, for each of the n columns, you're doing work that is itself O(n); note this bound does not grow with the number of rows at all.
So, in the end, it's a matter of how m compares to n: if m >> n, O(n^2) is the smaller bound; if m << n, O(mn) is.
m * n is O(n^2) if m is O(n).
I assume that for a matrix you will probably have m = O(n), where m is the number of columns and n is the number of rows. So m * n = O(n^2). But who knows how many columns your matrix will have.
It all depends on what bounds m has.
Have a look at the definition of O(n).
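A tiny sketch of the point being made: visiting every entry of an m × n matrix is literally m·n units of work, so whether that is bounded by n² depends entirely on how m relates to n:

```python
# Count one unit of work per matrix entry visited.
def visit_count(m, n):
    ops = 0
    for i in range(m):        # rows
        for j in range(n):    # columns
            ops += 1
    return ops

print(visit_count(3, 5))            # 15, i.e. m * n
print(visit_count(5, 5) <= 5 * 5)   # True: when m == n, mn == n^2
```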

T(n) = 4T(n/3) + lg n

How do I solve this recurrence relation? T(n) = 4T(n/3) + lg n
I know that Master Theorem Case 1 applies, but I don't understand why. The way I approach this so far is the following.
a = 4, b = 3, f(n) = lg n.
Is lg n log base 10 or log base 2? I know it doesn't really matter which base it is, but I still don't understand why. I could calculate (log_10 n) / (log_2 n) or something, and for some reason it doesn't matter, but why? ... but let's go on.
n^(log_3 4) ≈ n^1.26, but what is lg n in terms of n^something?
Another example so maybe you understand me.
If I had f(n) = square root of n instead of lg n, it would be f(n) = n^0.5.
Then n^0.5 <= n^(log_3 4 - e) for some e > 0: for e = 0.26, Case 1 asks whether f(n) is an element of O(n^(log_3 4 - e)) = O(n^1). Is n^0.5 an element of O(n^1)? Yes, because it is smaller? So this leads to T(n) = O(n^(log_b a)) = O(n^(log_3 4)).
If this is correct, how do I follow the same reasoning for f(n) = lg n?
I hope you understood my question; I cannot format all the n^(log_b a) stuff properly.
No. The growth rate of a logarithmic function is less than that of any polynomial function with exponent greater than 0; even something like x^0.0000001 will eventually grow faster than log x. (This is also why the base of the logarithm doesn't matter: log_10 n and log_2 n differ only by the constant factor log_2 10, which Big O ignores.) Since lg n = O(n^(log_3 4 - e)) for, say, e = 0.26, Case 1 applies.
So in this case it's O(n^(log_3 4)).
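A numeric sanity check of my own: evaluating the recurrence at n = 3^k and dividing by n^(log_3 4), the ratio settles toward a constant, which is exactly what Case 1 predicts:

```python
import math

def T(n):
    """Evaluate T(n) = 4*T(n/3) + lg n with a small base case T(n<3) = 1."""
    if n < 3:
        return 1
    return 4 * T(n // 3) + math.log2(n)

exponent = math.log(4, 3)  # log_3 4, approximately 1.26
ratios = [T(3**k) / (3**k) ** exponent for k in (6, 8, 10)]
# The ratios flatten out as k grows, consistent with T(n) = Θ(n^(log_3 4)).
print(ratios)
```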