Time Complexity of nested loops including if statement - time-complexity

I'm unsure of the general time complexity of the following code.
Sum = 0
for i = 1 to N
    if i > 10
        for j = 1 to i do
            Sum = Sum + 1
Assuming i and j are incremented by 1.
I know that the first loop is O(n), but the inner loop only runs once i > 10. Would the general time complexity then be O(n^2)? Any help is greatly appreciated.

Consider the definition of Big O Notation.
________________________________________________________________
Let f: ℝ → ℝ and g: ℝ → ℝ.
Then, f(x) = O(g(x))
⇔
∃ k ∈ ℝ, ∃ M > 0 in ℝ such that ∀ x ≥ k, |f(x)| ≤ M ⋅ |g(x)|
________________________________________________________________
Which can be read less formally as:
________________________________________________________________
Let f and g be functions defined on a subset of the real numbers.
Then, f is O of g if, for big enough x's (this is what the k is for in the formal definition), there is a constant M (from the real numbers, of course) such that M times g(x) will always be greater than or equal to f(x) (really, you can just increase M and it will always be strictly greater, but I digress).
________________________________________________________________
(You may note that if a function is O(n), then it is also O(n²) and O(e^n), but of course we are usually interested in the "smallest" function g such that it is O(g). In fact, when someone says f is O of g then they almost always mean that g is the smallest such function.)
Let's translate this to your problem. Let f(N) be the amount of time your process takes to complete as a function of N. Now, pretend that addition takes one unit of time to complete (and checking the if statement and incrementing the for-loop take no time), then
f(1) = 0
f(2) = 0
...
f(10) = 0
f(11) = 11
f(12) = 23
f(13) = 36
f(14) = 50
We want to find a function g(N) such that for big enough values of N, f(N) ≤ M ⋅ g(N). We can satisfy this by g(N) = N² and M can just be 1 (maybe it could be smaller, but we don't really care). In this case, big enough means greater than 10 (of course, f is still less than M ⋅ g for N < 11).
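As a quick sanity check, here is a minimal Python sketch (my own illustration, not part of the original question) that counts the additions the pseudocode performs, reproduces the values of f above, and confirms f(N) ≤ 1 ⋅ N²:
def f(N):
    # Count how many times "Sum = Sum + 1" executes for a given N.
    count = 0
    for i in range(1, N + 1):
        if i > 10:
            for j in range(1, i + 1):
                count += 1
    return count

for N in (10, 11, 12, 13, 14, 100):
    print(N, f(N), f(N) <= N * N)   # f(N) stays below M * g(N) with M = 1, g(N) = N^2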
tl;dr: Yes, the general time complexity is O(n²) because Big O assumes that your N is going to infinity.

Let's assume your code is
Sum = 0
for i = 1 to N
    for j = 1 to i do
        Sum = Sum + 1
There are 1 + 2 + ... + N = N(N+1)/2 sum operations in total, which is on the order of N^2. Your code with if i > 10 only skips the first 10 outer iterations, i.e. 1 + 2 + ... + 10 = 55 sum operations. As a result, for big enough N we have
N(N+1)/2 - 55
operations. That is
O(N^2) - O(1) = O(N^2)
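A short Python sketch (my own addition) makes the constant difference explicit:
def count_ops(N, with_if):
    # Count executions of "Sum = Sum + 1" for the pseudocode above.
    ops = 0
    for i in range(1, N + 1):
        if not with_if or i > 10:
            ops += i                      # the inner loop runs exactly i times
    return ops

for N in (20, 100, 1000):
    full = count_ops(N, with_if=False)    # N * (N + 1) // 2
    gated = count_ops(N, with_if=True)    # the same minus 1 + 2 + ... + 10 = 55
    print(N, full, gated, full - gated)   # the difference is always the constant 55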

Related

Proving or Refuting Time Complexity

I have an exam soon and I haven't been at university for a long time, because I was in the hospital.
Prove or refute the following statements:
1. log(n) = O(√n)
2. 3^(n-1) = O(2^n)
3. f(n) + g(n) = O(f(g(n)))
4. 2^(n+1) = O(2^n)
Could someone please help me and explain these to me?
(1) is true because log(n) grows asymptotically slower than any polynomial, including sqrt(n) = n^(1/2). To prove this we can observe that both log(n) and sqrt(n) are strictly increasing functions for n > 0 and then focus on a sequence where both evaluate easily, e.g., 2^(2k). Now we see log(2^(2k)) = 2k (taking logs base 2), but sqrt(2^(2k)) = 2^k. For k = 2, 2k = 2^k, and for k > 2, 2k < 2^k. This glosses over some details but the idea is sound. You can finish this by arguing that between 2^(2k) and 2^(2(k+1)) both functions have values greater than one for k >= 2 and thus any crossings can be eliminated by multiplying sqrt(n) by some constant.
(2) it is not true that 3^(n-1) is O(2^n). Suppose this were true. Then there exists an n0 and c such that for n > n0, 3^(n-1) <= c*2^n. First, eliminate the -1 by writing 3^(n-1) = (1/3)*3^n; so (1/3)*3^n <= c*2^n. Next, divide through by 2^n: (1/3)*(3/2)^n <= c. Multiply by 3: (3/2)^n <= 3c. Finally, take the log of both sides with base 3/2: n <= log_{3/2}(3c). The RHS is a constant expression and n is a variable, so this cannot hold for arbitrarily large n as required. This is a contradiction, so our supposition was wrong; that is, 3^(n-1) is not O(2^n).
(3) this is not true. f(n) = 1 and g(n) = n is an easy counterexample. In this case, f(n) + g(n) = 1 + n, but f(g(n)) = f(n) = 1, so O(f(g(n))) = O(1), and 1 + n is certainly not O(1).
(4) this is true. Rewrite 2^(n+1) as 2*2^n and it becomes obvious that this is true for n >= 1 by choosing c > 2.
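If it helps to see these numerically, here is a small Python sketch (my own addition, not from the answer) that prints the ratio of the left-hand side to the right-hand side for each statement; a ratio that stays bounded is consistent with f = O(g), while one that blows up shows f is not O(g):
import math

pairs = {
    "(1) log2(n) / sqrt(n)":      lambda n: math.log2(n) / math.sqrt(n),
    "(2) 3^(n-1) / 2^n":          lambda n: 3 ** (n - 1) / 2 ** n,
    "(3) (f(n)+g(n)) / f(g(n))":  lambda n: (1 + n) / 1,    # with f(n) = 1, g(n) = n
    "(4) 2^(n+1) / 2^n":          lambda n: 2 ** (n + 1) / 2 ** n,
}

for label, ratio in pairs.items():
    print(label, [ratio(n) for n in (10, 100, 1000)])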

BIG(O) time complexity

What is the time complexity for the below code:
1)
function(values, xlist, ylist):
    sum = 0
    n = 0
    for r from 0 to xlist:
        for c from 0 to ylist:
            sum += values[r][c]
            n += 1
    return sum / n
2)
function PrintCharacters():
    characters = {"a","b","c","d"}
    foreach character in characters
        print(character)
As I understand it, the 1st code has O(xlist*ylist) complexity and the 2nd code has O(n).
Is this right?
Big O notation describes the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.
For example, when analyzing some algorithm, one might find that the time (or the number of steps) it takes to complete a problem of size n is given by
T(n) = 4 n^2 - 2 n + 2
If we ignore constants (which makes sense because those depend on the particular hardware the program is run on) and slower-growing terms, we could say "T(n) grows on the order of n^2" and write: T(n) = O(n^2)
For the formal definition, suppose f(x) and g(x) are two functions defined on some subset of the real numbers. We write
f(x) = O(g(x))
(or f(x) = O(g(x)) for x -> infinity to be more precise) if and only if there exist constants N and C such that
|f(x)| <= C|g(x)| for all x>N
Intuitively, this means that f does not grow faster than g
If a is some real number, we write
f(x) = O(g(x)) for x->a
if and only if there exist constants d > 0 and C such that
|f(x)| <= C|g(x)| for all x with |x-a| < d
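Going back to the T(n) example above, here is a quick Python check (my own illustration) that the definition is satisfied with g(n) = n^2, C = 5 and N = 1:
def T(n):
    return 4 * n**2 - 2 * n + 2

for n in range(2, 10_000):            # all n > N = 1
    assert abs(T(n)) <= 5 * n**2      # |T(n)| <= C * |g(n)| with C = 5
print("T(n) = O(n^2) check passed")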
So for your second function it would be O(n), since |f(n)| <= C|g(n)| holds with g(n) = n.
Reference from http://web.mit.edu/16.070/www/lecture/big_o.pdf
for r from 0 to xlist:              # runs xlist times (n times if xlist ≈ n)
    for c from 0 to ylist:          # runs ylist times for every r (n times if ylist ≈ n)
        sum += values[r][c]
        n += 1

function PrintCharacters():
    characters = {"a","b","c","d"}
    foreach character in characters # runs once per character; with n characters that is n iterations, so O(n)
        print(character)
Big O notation describes what happens when the input gets very big: here the outer loop runs n times and, for each of those iterations, the inner loop runs n times. Assume n = 100; then the body runs n^2 = 10000 times in total. So the first function is O(xlist*ylist) (which is O(n^2) when xlist ≈ ylist ≈ n) and the second is O(n).
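For what it's worth, here is a runnable Python transcription of both snippets (my own; the function names are mine) with the complexity noted as comments:
def average_of_values(values, xlist, ylist):
    # O(xlist * ylist): the loop body runs once per (r, c) pair
    total = 0
    n = 0
    for r in range(xlist):
        for c in range(ylist):
            total += values[r][c]
            n += 1
    return total / n

def print_characters():
    # O(n): one iteration per character in the collection
    characters = {"a", "b", "c", "d"}
    for character in characters:
        print(character)

print(average_of_values([[1, 2, 3], [4, 5, 6]], 2, 3))   # (1+2+3+4+5+6) / 6 = 3.5
print_characters()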

Big O time complexity of n^1.001

Why is the growth of n^1.001 greater than n log n in Big O notation?
The n^0.001 doesn't seem significant...
For any exponent x greater than 1, n^x is eventually greater than n * log(n). In the case of x = 1.001, the n in question is unbelievably large. Even if you raise x to 1.01, n^x doesn't get bigger than n * log(n) until beyond n = 1E+128 (but before you reach 1E+256).
So, for problems where n is less than astronomical, n^1.001 will be less than n * log(n), but you will eventually reach a point where it will be greater.
In case someone is interested, here is a formal proof:
For the sake of simplicity, let's assume we are using logarithms in base e.
Let a > 1 be any exponent (e.g., a = 1.001). Then a-1 > 0. Now consider the function
f(x) = x^(a-1)/log(x)
Using L'Hôpital's rule it is not hard to see that this function is unbounded. Moreover, computing the derivative of f(x), one can also see that the function is increasing for x > exp(1/(a-1)).
Therefore, there must exist an integer N such that f(n) > 1 for all n > N. In other words
n^(a-1)/log(n) > 1
or
n^(a-1) > log(n)
so
n^a > n log(n).
This shows that O(n^a) >= O(n log(n)).
But wait a minute. We wanted >, not >=, right? Fortunately this is easy to see. For instance, in the case a = 1.001, we have
O(n^1.001) > O(n^1.0001) >= O(n log(n))
and we are done.
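To get a feel for where the crossover actually happens, here is a small Python sketch (my own addition) that works in log space so nothing overflows: with L = ln(x), the function from the proof satisfies ln f(x) = (a-1)*L - ln(L), and x^(a-1) > ln(x) exactly when that quantity is positive:
import math

a = 1.001   # the exponent from the question

for L in (10, 100, 1_000, 10_000, 100_000):   # L = ln(x), so x = e**L is enormous
    log_f = (a - 1) * L - math.log(L)          # ln of x^(a-1) / ln(x)
    sign = ">" if log_f > 0 else "<"
    print(f"ln(x) = {L:>6}: x^(a-1) {sign} ln(x)")
The sign flips somewhere around ln(x) ≈ 9100 (a number x with roughly 4000 decimal digits), which is why the n^0.001 factor looks insignificant at every practical input size even though it wins asymptotically.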

Is O(mn) in O(n^2)?

Simple question. Working with an m x n matrix and I'm doing some O(mn) operations. My question is if O(mn) is in O(n^2). Looking at the Wikipedia on big O I would think so but I've always been pretty bad at complexity bounds so I was hoping someone could clarify.
O(mn) for an m x n matrix means that you're doing constant work for each value of the matrix.
O(n^2) means that, for each column, you're doing work that is O(# columns). Note this runtime increases trivially with # of rows.
So, in the end, it's a matter of whether m is greater than n. If m >> n, the O(n^2) algorithm is faster; if m << n, the O(mn) one is faster.
m * n is O(n^2) if m is O(n).
I assume that for a matrix you will probably have m = O(n), where m is the number of columns and n the number of rows, so m * n = O(n^2). But who knows how many columns your matrix will have.
It all depends on what bounds m has.
Have a look at the definition of O(n).
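A one-line derivation of that point, written out under the assumption m = O(n) that the answer makes:
\[
  m \le C\,n \ (n \ge N) \;\Longrightarrow\; m\,n \le C\,n \cdot n = C\,n^{2} \ (n \ge N) \;\Longrightarrow\; m\,n = O(n^{2}),
\]
\[
  \text{whereas without such a bound it fails: } m = n^{2} \text{ gives } m\,n = n^{3} \neq O(n^{2}).
\]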

Order of growth

for
f = n(log(n))^5
g = n^1.01
is
f = O(g)
f = Theta(g)
f = Omega(g)?
I tried dividing both by n and I got
f = log(n)^5
g = n^0.01
But I am still clueless as to which one grows faster. Can someone help me with this and explain the reasoning behind the answer? I really want to know how (without a calculator) one can determine which one grows faster.
Probably easiest to compare their logarithmic profiles:
If (for some constants C1, C2 and exponents a > 0, k > 0)
f < C1 n log(n)^a
g < C2 n^(1+k)
Then (for large enough n)
log(f) < log(n) + a log(log(n)) + log(C1)
log(g) < log(n) + k log(n) + log(C2)
Both are dominated by log(n) growth, so the question is which residual is bigger. The log(n) residual grows faster than log(log(n)), regardless of how small k or how large a is, so g would grow faster than f.
So in terms of big-O notation: g grows faster than f, so f can (asymptotically) be bounded from above by a function like g:
f(n) < C3 g(n)
So f = O(g). Similarly, g can be bounded from below by f, so g = Omega(f). But f cannot be bounded from below by a function like g, since g will eventually outgrow it. So f != Omega(g) and f != Theta(g).
But aaa makes a very good point: g does not begin to dominate over f until n becomes obscenely large.
I don't have a lot of experience with algorithm scaling, so corrections are welcome.
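To see just how large "obscenely large" is, here is a small Python sketch (my own addition) that compares the two functions in log space, using ln(f) = ln(n) + 5*ln(ln(n)) and ln(g) = 1.01*ln(n) so that nothing overflows:
import math

def log_f(L):                 # L = ln(n); ln(n * ln(n)^5) = L + 5*ln(L)
    return L + 5 * math.log(L)

def log_g(L):                 # ln(n^1.01) = 1.01 * L
    return 1.01 * L

for L in (10, 100, 1_000, 4_000, 5_000, 10_000):
    print(f"ln(n) = {L:>6}: " + ("f > g" if log_f(L) > log_g(L) else "f < g"))
The crossover sits near ln(n) ≈ 4170, i.e. n ≈ 10^1810, which agrees with the Mathematica solution quoted in another answer below.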
I would break this up into several easy, reusable lemmas:
Lemma 1: For a positive constant k, f = O(g) if and only if f = O(k g).
Proof: Suppose f = O(g). Then there exist constants c and N such that |f(n)| < c |g(n)| for n > N.
Thus |f(n)| < (c/k) (k |g(n)| ) for n > N and constant (c/k), so f = O (k g). The converse is trivially similar.
Lemma 2: If h(x) = x^k is a power function with constant exponent k > 0, and f and g are positive for sufficiently large n, then f = O(g) if and only if h(f) = O( h(g) ).
Proof: Suppose f = O(g). Then there exist constants c and N such that |f(n)| < c |g(n)| for n > N. Since f and g are positive for n > M, f(n) < c g(n) for n > max(N, M). Raising both sides to the k-th power (which preserves order for positive numbers), h(f(n)) = f(n)^k < c^k g(n)^k = c^k h(g(n)) for n > max(N, M), and thus h(f) = O(h(g)).
The converse follows by the same argument applied with exponent 1/k; the key fact being that the k-th root is also monotonically increasing, so a^k < b^k implies a < b for positive a and b.
Lemma 3: If h is an invertible monotonically increasing function, then f = O(g) if and only if f(h) = O(g(h)).
Proof: Suppose f = O(g). Then there exist constants c, N such that |f(n)| < c |g(n)| for n > N. Thus |f(h(n))| < c |g(h(n))| for h(n) > N. Since h(n) is invertible and monotonically increasing, h(n) > N whenever n > h^-1(N). Thus h^-1(N) is the new constant we need, and f(h(n)) = O(g(h(n))).
The converse follows similarly, using g's inverse.
Lemma 4: If h(n) is nonzero for n > M, f = O(g) if and only if f(n)h(n) = O(g(n)h(n)).
Proof: If f = O(g), then for constants c, N, |f(n)| < c |g(n)| for n > N. Since |h(n)| is positive for n > M, |f(n)h(n)| < c |g(n)h(n)| for n > max(N, M) and so f(n)h(n) = O(g(n)h(n)).
The converse follows similarly by using 1/h(n).
Lemma 5a: log n = O(n).
Proof: Let f = log n, g = n. Then f' = 1/n and g' = 1, so for n > 1, g increases more quickly than f. Moreover g(1) = 1 > 0 = f(1), so |f(n)| < |g(n)| for n > 1 and f = O(g).
Lemma 5b: n != O(log n).
Proof: Suppose otherwise for contradiction, and let f = n and g = log n. Then for some constants c, N, |n| < c |log n| for n > N.
Let d = max(2, 2c, sqrt(N+1) ). By the calculation in lemma 5a, since d > 2 > 1, log d < d. Thus
|f(2d^2)| = 2d^2 > 2d(log d) >= d log d + d log 2 = d (log 2d) > 2c log 2d > c log (2d^2) = c g(2d^2) = c |g(2d^2)| for 2d^2 > N, a contradiction. Thus f != O(g).
So now we can assemble the answer to the question you originally asked.
Step 1:
log n = O(n^a)
n^a != O(log n)
For any positive constant a.
Proof: log n = O(n) by Lemma 5a. Thus log n = 1/a log n^a = O(1/a n^a) = O(n^a) by Lemmas 3 (for h(n) = n^a), 4, and 1. The second fact follows similarly by using Lemma 5b.
Step 2:
log^5 n = O(n^0.01)
n^0.01 != O(log^5 n)
Proof: log n = O(n^0.002) by step 1. Then by Lemma 2 (with h(n) = n^5), log^5 n = O( (n^0.002)^5 ) = O(n^0.01). The second fact follows similarly.
Final answer:
n log^5 n = O(n^1.01)
n^1.01 != O(n log^5 n)
In other words,
f = O(g)
f != Theta(g)
f != Omega(g)
Proof: Apply Lemma 4 (using h(n) = n) to step 2.
With practice these rules become "obvious" and second nature, and unless your test requires that you prove your answer, you'll find yourself whipping through these kinds of big-O problems.
how about checking their intersections?
Solve[Log[n] == n^(0.01/5), n]
{{n -> 2.72374}, {n -> 8.70811861815*10^1809}}
I cheated with Mathematica
you can also reason with derivatives,
In[71]:= D[Log[n], n]
Out[71]= 1/n

In[72]:= D[n^(0.01/5), n]
Out[72]= 0.002/n^0.998
Consider what happens as n gets really large: the derivative of the first tends to zero, while the latter function doesn't lose its derivative (its exponent is greater than 0).
This tells you which is asymptotically more complex.
However, in the practical region, the first function is going to grow faster.
This is not 100% mathematically kosher without proving something about logs, but here it goes:
f = log(n)^5
g = n^0.01
We take logs of both:
log(f) = log(log(n)^5) = 5*log(log(n)) = O(log(log(n)))
log(g) = log(n^0.01) = 0.01*log(n) = O(log(n))
From this we see that the first one grows asymptotically slower, because it has a double log in it and logs grow slowly. A non-formal argument for why this reasoning by taking logs is valid is this: log(n) tells you roughly how many digits there are in the number n. So if the number of digits of g is growing asymptotically faster than the number of digits of f, then surely the actual number g is growing faster than the number f!